
Awesome-Multi-Objective-Deep-Learning


⭐ This repository hosts a curated collection of literature on gradient-based multi-objective optimization algorithms in deep learning. Feel free to star and fork it. For further details, please refer to the following paper:

Gradient-Based Multi-Objective Deep Learning: Algorithms, Theories, Applications, and Beyond
Weiyu Chen1,*, Xiaoyuan Zhang2,*, Baijiong Lin3,*, Xi Lin2, Han Zhao4, Qingfu Zhang2, and James T. Kwok1
1HKUST, 2CityU, 3HKUST(GZ), 4UIUC, *Equal Contribution


If you find this repository useful, please cite our paper:

```
@article{chen2025modl,
  title={Gradient-Based Multi-Objective Deep Learning: Algorithms, Theories, Applications, and Beyond},
  author={Weiyu Chen and Xiaoyuan Zhang and Baijiong Lin and Xi Lin and Han Zhao and Qingfu Zhang and James T. Kwok},
  journal={arXiv preprint arXiv:2501.10945},
  year={2025}
}
```

Contents

- Related Survey
- Finding a Single Solution
  - Loss Balancing Methods
  - Gradient Balancing Methods (Gradient Weighting Methods, Gradient Manipulation Methods)
- Finding a Finite Set of Solutions
  - Preference Vector-based Methods
  - Preference Vector-free Methods
- Finding an Infinite Set of Solutions
  - Hypernetwork-based Methods
  - Preference-Conditioned Network-based Methods
  - Model Combination-based Methods
- Resources
  - Software

Related Survey

  1. Multi-task learning with deep neural networks: A survey [Paper]
    Michael Crawshaw
    arXiv:2009.09796, 2020.

  2. A Survey on Multi-Task Learning [Paper]
    Yu Zhang and Qiang Yang
    IEEE Transactions on Knowledge and Data Engineering (TKDE), 2021.

  3. A Review on Evolutionary Multi-Task Optimization: Trends and Challenges [Paper]
    Tingyang Wei, Shibin Wang, Jinghui Zhong, Dong Liu, and Jun Zhang
    IEEE Transactions on Evolutionary Computation (TEVC), 2021.

  4. Multi-Task Learning for Dense Prediction Tasks: A Survey [Paper]
    Simon Vandenhende, Stamatios Georgoulis, Wouter Van Gansbeke, Marc Proesmans, Dengxin Dai, and Luc Van Gool
    IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021.

  5. Twenty years of continuous multiobjective optimization in the twenty-first century [Paper]
    Gabriele Eichfelder
    EURO Journal on Computational Optimization, 2021.

  6. Advances and Challenges of Multi-task Learning Method in Recommender System: A Survey [Paper]
    Mingzhu Zhang, Ruiping Yin, Zhen Yang, Yipeng Wang, and Kan Li
    arXiv:2305.13843, 2023.

  7. Unleashing the Power of Multi-Task Learning: A Comprehensive Survey Spanning Traditional, Deep, and Pretrained Foundation Model Eras [Paper]
    Jun Yu, Yutong Dai, Xiaokang Liu, Jin Huang, Yishan Shen, Ke Zhang, Rong Zhou, Eashan Adhikarla, Wenxuan Ye, Yixin Liu, Zhaoming Kong, Kai Zhang, Yilong Yin, Vinod Namboodiri, Brian D. Davison, Jason H. Moore, and Yong Chen
    arXiv:2404.18961, 2024.

  8. Multi-Task Learning in Natural Language Processing: An Overview [Paper]
    Shijie Chen, Yu Zhang, and Qiang Yang
    ACM Computing Surveys (CSUR), 2024.

  9. Multi-objective Deep Learning: Taxonomy and Survey of the State of the Art [Paper]
    Sebastian Peitz and Sedjro Salomon Hotegni
    arXiv:2412.01566, 2024.

Finding a Single Solution

Loss Balancing Methods

  1. Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics [Paper]
    Alex Kendall, Yarin Gal, and Roberto Cipolla
    CVPR, 2018.

  2. End-To-End Multi-Task Learning With Attention [Paper]
    Shikun Liu, Edward Johns, and Andrew J. Davison
    CVPR, 2019.

  3. Multi-Objective Meta Learning [Paper] [Code]
    Feiyang Ye, Baijiong Lin, Zhixiong Yue, Pengxin Guo, Qiao Xiao, and Yu Zhang
    NeurIPS, 2021; Journal version in AIJ, 2024.

  4. Towards Impartial Multi-task Learning [Paper]
    Liyang Liu, Yi Li, Zhanghui Kuang, Jing-Hao Xue, Yimin Chen, Wenming Yang, Qingmin Liao, and Wayne Zhang
    ICLR, 2021.

  5. Reasonable Effectiveness of Random Weighting: A Litmus Test for Multi-Task Learning [Paper]
    Baijiong Lin, Feiyang Ye, Yu Zhang, and Ivor W. Tsang
    Transactions on Machine Learning Research (TMLR), 2022.

  6. Auto-Lambda: Disentangling Dynamic Task Relationships [Paper] [Code]
    Shikun Liu, Stephen James, Andrew Davison, and Edward Johns
    Transactions on Machine Learning Research (TMLR), 2022.

  7. Dual-Balancing for Multi-Task Learning [Paper]
    Baijiong Lin, Weisen Jiang, Feiyang Ye, Yu Zhang, Pengguang Chen, Ying-Cong Chen, Shu Liu, and James T. Kwok
    arXiv:2308.12029, 2023.

  8. A First-Order Multi-Gradient Algorithm for Multi-Objective Bi-Level Optimization [Paper] [Code]
    Feiyang Ye, Baijiong Lin, Xiaofeng Cao, Yu Zhang, and Ivor W. Tsang
    ECAI, 2024.

  9. Smooth Tchebycheff Scalarization for Multi-Objective Optimization [Paper]
    Xi Lin, Xiaoyuan Zhang, Zhiyuan Yang, Fei Liu, Zhenkun Wang, and Qingfu Zhang
    ICML, 2024.
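
Most of these methods differ mainly in how the per-task loss weights are chosen. As a concrete instance, below is a minimal sketch of the homoscedastic-uncertainty weighting of entry 1, using the commonly seen simplified form L = sum_i exp(-s_i) * L_i + s_i with learnable s_i; the two-task setup and all names are illustrative, not the paper's exact formulation.

```python
import torch

# Learnable log-variances s_i = log(sigma_i^2), one per task (illustrative).
log_vars = torch.zeros(2, requires_grad=True)

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses as sum_i exp(-s_i) * L_i + s_i.

    The exp(-s_i) factor down-weights high-uncertainty tasks; the +s_i
    regularizer keeps the weights from collapsing to zero.
    """
    total = torch.zeros(())
    for loss, s in zip(task_losses, log_vars):
        total = total + torch.exp(-s) * loss + s
    return total

# The combined loss is differentiable w.r.t. log_vars, so the task
# weights are learned jointly with the model parameters.
losses = [torch.tensor(1.3), torch.tensor(0.4)]
uncertainty_weighted_loss(losses, log_vars).backward()
```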

Gradient Balancing Methods

Gradient Weighting Methods

  1. GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks [Paper]
    Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich
    ICML, 2018.

  2. Multi-Task Learning as Multi-Objective Optimization [Paper] [Code]
    Ozan Sener and Vladlen Koltun
    NeurIPS, 2018.

  3. Towards Impartial Multi-task Learning [Paper]
    Liyang Liu, Yi Li, Zhanghui Kuang, Jing-Hao Xue, Yimin Chen, Wenming Yang, Qingmin Liao, and Wayne Zhang
    ICLR, 2021.

  4. Conflict-Averse Gradient Descent for Multi-task Learning [Paper] [Code]
    Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, and Qiang Liu
    NeurIPS, 2021.

  5. Multi-Task Learning as a Bargaining Game [Paper] [Code]
    Aviv Navon, Aviv Shamsian, Idan Achituve, Haggai Maron, Kenji Kawaguchi, Gal Chechik, and Ethan Fetaya
    ICML, 2022.

  6. Mitigating Gradient Bias in Multi-objective Learning: A Provably Convergent Approach [Paper]
    Heshan Devaka Fernando, Han Shen, Miao Liu, Subhajit Chaudhury, Keerthiram Murugesan, and Tianyi Chen
    ICLR, 2023.

  7. Independent Component Alignment for Multi-Task Learning [Paper] [Code]
    Dmitry Senushkin, Nikolay Patakin, Arseny Kuznetsov, and Anton Konushin
    CVPR, 2023.

  8. FAMO: Fast Adaptive Multitask Optimization [Paper]
    Bo Liu, Yihao Feng, Peter Stone, and Qiang Liu
    NeurIPS, 2023.

  9. Direction-oriented Multi-objective Learning: Simple and Provable Stochastic Algorithms [Paper] [Code]
    Peiyao Xiao, Hao Ban, and Kaiyi Ji
    NeurIPS, 2023.

  10. Dual-Balancing for Multi-Task Learning [Paper]
    Baijiong Lin, Weisen Jiang, Feiyang Ye, Yu Zhang, Pengguang Chen, Ying-Cong Chen, Shu Liu, and James T. Kwok
    arXiv:2308.12029, 2023.

  11. Fair Resource Allocation in Multi-Task Learning [Paper] [Code]
    Hao Ban and Kaiyi Ji
    ICML, 2024.

  12. Jacobian Descent for Multi-Objective Optimization [Paper] [Code]
    Pierre Quinton and Valérian Rey
    arXiv:2406.16232, 2024.
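
For intuition, here is a sketch of the two-task special case underlying entry 2 (MGDA): find the minimum-norm point in the convex hull of the task gradients and use it as a common descent direction. The closed form below assumes the per-task gradients are flattened into vectors; the paper's general multi-task Frank-Wolfe solver is not shown.

```python
import torch

def mgda_two_task(g1: torch.Tensor, g2: torch.Tensor) -> torch.Tensor:
    """Minimum-norm element of {a*g1 + (1-a)*g2 : a in [0, 1]}.

    Stepping along its negation decreases both objectives whenever
    the returned vector is nonzero.
    """
    diff = g1 - g2
    denom = torch.dot(diff, diff)
    if denom == 0:  # identical gradients: any convex combination works
        return g1
    alpha = torch.clamp(torch.dot(g2, g2 - g1) / denom, 0.0, 1.0)
    return alpha * g1 + (1.0 - alpha) * g2

# Orthogonal unit gradients -> equal weighting of both tasks.
print(mgda_two_task(torch.tensor([1.0, 0.0]), torch.tensor([0.0, 1.0])))
# tensor([0.5000, 0.5000])
```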

Gradient Manipulation Methods

  1. Just Pick a Sign: Optimizing Deep Multitask Models with Gradient Sign Dropout [Paper] [Code]
    Zhao Chen, Jiquan Ngiam, Yanping Huang, Thang Luong, Henrik Kretzschmar, Yuning Chai, and Dragomir Anguelov
    NeurIPS, 2020.

  2. Gradient Surgery for Multi-Task Learning [Paper] [Code]
    Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn
    NeurIPS, 2020.

  3. Gradient Vaccine: Investigating and Improving Multi-task Optimization in Massively Multilingual Models [Paper]
    Zirui Wang, Yulia Tsvetkov, Orhan Firat, and Yuan Cao
    ICLR, 2021.
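
These methods edit the gradients themselves rather than reweighting them. A minimal sketch of the projection step from entry 2 (gradient surgery / PCGrad), assuming flattened gradient vectors: when two task gradients conflict (negative inner product), remove from one the component along the other. The full method applies this pairwise over tasks in random order.

```python
import torch

def project_conflicting(g_i: torch.Tensor, g_j: torch.Tensor) -> torch.Tensor:
    """PCGrad-style step: if g_i conflicts with g_j (g_i . g_j < 0),
    project g_i onto the normal plane of g_j; otherwise leave it as is."""
    dot = torch.dot(g_i, g_j)
    if dot < 0:
        g_i = g_i - (dot / torch.dot(g_j, g_j)) * g_j
    return g_i
```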

Finding a Finite Set of Solutions

Preference Vector-based Methods

  1. Pareto Multi-Task Learning [Paper] [Code]
    Xi Lin, Hui-Ling Zhen, Zhenhua Li, Qingfu Zhang, and Sam Kwong
    NeurIPS, 2019.

  2. Multi-Task Learning with User Preferences: Gradient Descent with Controlled Ascent in Pareto Optimization [Paper]
    Debabrata Mahapatra and Vaibhav Rajan
    ICML, 2020.

  3. Exact Pareto Optimal Search for Multi-Task Learning and Multi-Criteria Decision-Making [Paper]
    Debabrata Mahapatra and Vaibhav Rajan
    arXiv:2108.00597, 2021.

  4. A Multi-objective / Multi-task Learning Framework Induced by Pareto Stationarity [Paper]
    Michinari Momma, Chaosheng Dong, and Jia Liu
    ICML, 2022.

  5. Multi-Objective Deep Learning with Adaptive Reference Vectors [Paper]
    Weiyu Chen and James T. Kwok
    NeurIPS, 2022.

  6. PMGDA: A Preference-based Multiple Gradient Descent Algorithm [Paper]
    Xiaoyuan Zhang, Xi Lin, and Qingfu Zhang
    arXiv:2402.09492, 2024.

  7. FERERO: A Flexible Framework for Preference-Guided Multi-Objective Learning [Paper] [Code]
    Lisha Chen, A F M Saif, Yanning Shen, and Tianyi Chen
    NeurIPS, 2024.

  8. Gliding over the Pareto Front with Uniform Designs [Paper]
    Xiaoyuan Zhang, Genghui Li, Xi Lin, Yichi Zhang, Yifan Chen, and Qingfu Zhang
    NeurIPS, 2024.
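
What these methods share is a mechanism for turning a user-supplied preference vector into a subproblem whose solution is a preference-controlled point on the Pareto front. For reference, here is a sketch of the classical weighted Tchebycheff scalarization and its smooth variant (cf. entry 9 under Loss Balancing Methods); `ideal` is an estimate of the ideal objective vector z*, and all names are illustrative.

```python
import torch

def tchebycheff(losses, pref, ideal):
    """Weighted Tchebycheff scalarization: max_i pref_i * (L_i - z_i*)."""
    return torch.max(pref * (losses - ideal))

def smooth_tchebycheff(losses, pref, ideal, mu=0.1):
    """Smooth approximation via log-sum-exp; recovers the max as mu -> 0."""
    return mu * torch.logsumexp(pref * (losses - ideal) / mu, dim=0)
```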

Preference Vector-free Methods

  1. Multi-objective Optimization by Uncrowded Hypervolume Gradient Ascent [Paper] [Code]
    Timo M. Deist, Stefanus C. Maree, Tanja Alderliesten, and Peter A.N. Bosman
    PPSN, 2020.

  2. Multi-Objective Learning to Predict Pareto Fronts Using Hypervolume Maximization [Paper]
    Timo M. Deist, Monika Grewal, Frank J.W.M. Dankers, Tanja Alderliesten, and Peter A.N. Bosman
    arXiv:2102.04523, 2021.

  3. Profiling Pareto Front With Multi-Objective Stein Variational Gradient Descent [Paper] [Code]
    Xingchao Liu, Xin Tong, and Qiang Liu
    NeurIPS, 2021.

  4. Efficient Algorithms for Sum-Of-Minimum Optimization [Paper]
    Lisang Ding, Ziang Chen, Xinshang Wang, and Wotao Yin
    ICML, 2024.

  5. Few for Many: Tchebycheff Set Scalarization for Many-Objective Optimization [Paper]
    Xi Lin, Yilu Liu, Xiaoyuan Zhang, Fei Liu, Zhenkun Wang, and Qingfu Zhang
    ICLR, 2025.

  6. Many-Objective Multi-Solution Transport [Paper]
    Ziyue Li, Tian Li, Virginia Smith, Jeff Bilmes, and Tianyi Zhou
    arXiv:2403.04099, 2024.
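
Rather than fixing preferences in advance, several of these methods (entries 1-2) directly maximize the hypervolume of the solution set. For reference, a sketch of the two-objective hypervolume under the minimization convention, assuming the points are mutually non-dominated and all dominate the reference point:

```python
def hypervolume_2d(points, ref):
    """Area dominated by mutually non-dominated 2-objective points
    (minimization), relative to reference point `ref`."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points):  # ascending f1 => descending f2
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Two points each dominate a 0.75 rectangle overlapping in a 0.25 square.
assert hypervolume_2d([(0.5, 1.5), (1.5, 0.5)], ref=(2.0, 2.0)) == 1.25
```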

Finding an Infinite Set of Solutions

Hypernetwork-based Methods

  1. Controllable Pareto Multi-Task Learning [Paper]
    Xi Lin, Zhiyuan Yang, Qingfu Zhang, and Sam Kwong
    arXiv:2010.06313, 2020.

  2. Learning the Pareto Front with Hypernetworks [Paper] [Code]
    Aviv Navon, Aviv Shamsian, Ethan Fetaya, and Gal Chechik
    ICLR, 2021.

  3. Learning a Neural Pareto Manifold Extractor with Constraints [Paper]
    Soumyajit Gupta, Gurpreet Singh, Raghu Bollapragada, and Matthew Lease
    UAI, 2022.

  4. Improving Pareto Front Learning via Multi-Sample Hypernetworks [Paper]
    Long P. Hoang, Dung D. Le, Tran Anh Tuan, and Tran Ngoc Thang
    AAAI, 2023.

  5. A Hyper-Transformer model for Controllable Pareto Front Learning with Split Feasibility Constraints [Paper]
    Tran Anh Tuan, Nguyen Viet Dung, and Tran Ngoc Thang
    arXiv:2402.05955, 2024.
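
The common idea in this family is a hypernetwork that maps a preference vector to the weights of the target model, so a single network represents the entire Pareto front. Below is a minimal PyTorch sketch in which the target model is just one linear layer (real implementations generate full networks, typically chunk by chunk); all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrefHypernet(nn.Module):
    """Maps a preference vector to the weights of a linear target model."""

    def __init__(self, n_objectives, in_dim, out_dim, hidden=64):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.net = nn.Sequential(
            nn.Linear(n_objectives, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim * in_dim + out_dim),
        )

    def forward(self, pref, x):
        theta = self.net(pref)               # generated target parameters
        split = self.out_dim * self.in_dim
        W = theta[:split].view(self.out_dim, self.in_dim)
        b = theta[split:]
        return F.linear(x, W, b)

# Training samples a preference per step (e.g. from a Dirichlet) and
# minimizes the correspondingly scalarized multi-objective loss.
hn = PrefHypernet(n_objectives=2, in_dim=8, out_dim=3)
pref = torch.distributions.Dirichlet(torch.ones(2)).sample()
out = hn(pref, torch.randn(4, 8))            # (4, 3) predictions
```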

Preference-Conditioned Network-based Methods

  1. You Only Train Once: Loss-Conditional Training of Deep Networks [Paper]
    Alexey Dosovitskiy and Josip Djolonga
    ICLR, 2020.

  2. Scalable Pareto Front Approximation for Deep Multi-Objective Learning [Paper]
    Michael Ruchte and Josif Grabocka
    ICDM, 2021.

  3. Controllable Dynamic Multi-Task Architectures [Paper]
    Dripta S. Raychaudhuri, Yumin Suh, Samuel Schulter, Xiang Yu, Masoud Faraki, Amit K. Roy-Chowdhury, and Manmohan Chandraker
    CVPR, 2022.
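
Here the preference vector is fed to a single network as an extra input rather than used to generate weights; entry 2, for example, concatenates it with the input features. A minimal sketch under that concatenation scheme (names illustrative):

```python
import torch
import torch.nn as nn

class PrefConditionedNet(nn.Module):
    """One network serves all preferences: the preference vector is
    concatenated with the features before the forward pass."""

    def __init__(self, in_dim, n_objectives, hidden=64, out_dim=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim + n_objectives, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x, pref):
        pref = pref.expand(x.shape[0], -1)   # broadcast over the batch
        return self.body(torch.cat([x, pref], dim=-1))
```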

Model Combination-based Methods

  1. Pareto Manifold Learning: Tackling Multiple Tasks via Ensembles of Single-Task Models [Paper] [Code]
    Nikolaos Dimitriadis, Pascal Frossard, and Francois Fleuret
    ICML, 2023.

  2. Efficient Pareto Manifold Learning with Low-Rank Structure [Paper]
    Weiyu Chen and James T. Kwok
    ICML, 2024.

  3. Panacea: Pareto Alignment via Preference Adaptation for LLMs [Paper]
    Yifan Zhong, Chengdong Ma, Xiaoyuan Zhang, Ziran Yang, Haojun Chen, Qingfu Zhang, Siyuan Qi, and Yaodong Yang
    NeurIPS, 2024.

  4. You Only Merge Once: Learning the Pareto Set of Preference-Aware Model Merging [Paper]
    Weiyu Chen and James T. Kwok
    arXiv:2408.12105, 2024.

  5. Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [Paper]
    Anke Tang, Li Shen, Yong Luo, Shiwei Liu, Han Hu, and Bo Du
    arXiv:2406.09770, 2024.

  6. Pareto Low-Rank Adapters: Efficient Multi-Task Learning with Preferences [Paper]
    Nikolaos Dimitriadis, Pascal Frossard, and Francois Fleuret
    arXiv:2407.08056, 2024.
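
At inference time, many of these methods reduce to combining the parameters of separately trained (or low-rank adapted) per-task models with preference-dependent coefficients. The simplest instance is a linear combination of state dicts, sketched below; the listed papers learn richer and better-behaved parameterizations.

```python
import torch

def merge_state_dicts(state_dicts, pref):
    """Preference-weighted combination: theta(pref) = sum_i pref_i * theta_i.

    Assumes architecturally identical models; non-float buffers (e.g.
    BatchNorm counters) may need special handling in practice.
    """
    return {
        key: sum(w * sd[key] for w, sd in zip(pref, state_dicts))
        for key in state_dicts[0]
    }

# Usage: model.load_state_dict(merge_state_dicts([sd_a, sd_b], [0.3, 0.7]))
```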

Resources

Software

  1. LibMTL: A PyTorch Library for Multi-Task Learning [Paper] [Code]
    Baijiong Lin and Yu Zhang
    Journal of Machine Learning Research (JMLR), 2023.

  2. LibMOON: A Gradient-based MultiObjective OptimizatioN Library in PyTorch [Paper] [Code]
    Xiaoyuan Zhang, Liang Zhao, Yingying Yu, Xi Lin, Zhenkun Wang, Han Zhao, and Qingfu Zhang
    NeurIPS Datasets and Benchmarks Track, 2024.
