Ingredient-oriented multi-degradation learning for image restoration

J Zhang, J Huang, M Yao, Z Yang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Learning to leverage the relationship among diverse image restoration tasks is quite
beneficial for unraveling the intrinsic ingredients behind the degradation. Recent years have …

Reasonable effectiveness of random weighting: A litmus test for multi-task learning

B Lin, F Ye, Y Zhang, IW Tsang - arXiv preprint arXiv:2111.10603, 2021 - arxiv.org
Multi-Task Learning (MTL) has achieved success in various fields. However, how to balance
different tasks to achieve good performance is a key problem. To achieve the task balancing …

Auto-Lambda: Disentangling dynamic task relationships

S Liu, S James, AJ Davison, E Johns - arXiv preprint arXiv:2202.03091, 2022 - arxiv.org
Understanding the structure of multiple related tasks allows for multi-task learning to improve
the generalisation ability of one or all of them. However, it usually requires training each …

MTMamba: Enhancing multi-task dense scene understanding by Mamba-based decoders

B Lin, W Jiang, P Chen, Y Zhang, S Liu… - European Conference on …, 2025 - Springer
Multi-task dense scene understanding, which learns a model for multiple dense prediction
tasks, has a wide range of application scenarios. Modeling long-range dependency and …

Three-way trade-off in multi-objective learning: Optimization, generalization and conflict-avoidance

L Chen, H Fernando, Y Ying… - Advances in Neural …, 2024 - proceedings.neurips.cc
Multi-objective learning (MOL) often arises in emerging machine learning problems when
multiple learning criteria or tasks need to be addressed. Recent works have developed …

Effective structured prompting by meta-learning and representative verbalizer

W Jiang, Y Zhang, J Kwok - International Conference on …, 2023 - proceedings.mlr.press
Prompt tuning for pre-trained masked language models (MLM) has shown promising
performance in natural language processing tasks with few labeled examples. It tunes a …

Direction-oriented multi-objective learning: Simple and provable stochastic algorithms

P Xiao, H Ban, K Ji - Advances in Neural Information …, 2024 - proceedings.neurips.cc
Multi-objective optimization (MOO) has become an influential framework in many machine
learning problems with multiple objectives such as learning with multiple criteria and multi …

Mitigating gradient bias in multi-objective learning: A provably convergent approach

H Fernando, H Shen, M Liu, S Chaudhury… - 2023 - par.nsf.gov
Machine learning problems with multiple objectives appear either i) in learning with multiple
criteria where learning has to make a trade-off between multiple performance metrics such …

Improvable gap balancing for multi-task learning

Y Dai, N Fei, Z Lu - Uncertainty in Artificial Intelligence, 2023 - proceedings.mlr.press
In multi-task learning (MTL), gradient balancing has recently attracted more research interest
than loss balancing since it often leads to better performance. However, loss balancing is …

Online constrained meta-learning: Provable guarantees for generalization

S Xu, M Zhu - Advances in Neural Information Processing …, 2023 - proceedings.neurips.cc
Meta-learning has attracted attention due to its strong ability to learn experiences from
known tasks, which can speed up and enhance the learning process for new tasks …