Maximum roaming multi-task learning

Pascal, Lucas; Michiardi, Pietro; Bost, Xavier; Huet, Benoit; Zuluaga, Maria A.
AAAI 2021, 35th AAAI Conference on Artificial Intelligence, 2-9 February 2021, Virtual Conference

Multi-task learning has gained popularity due to the advantages it provides with respect to resource usage and performance. Nonetheless, the joint optimization of parameters with respect to multiple tasks remains an active research topic. Sub-partitioning the parameters between the different tasks, whether the partitions are disjoint or overlapping, has proven to be an efficient way to relax the optimization constraints over the shared weights. However, one drawback of this approach is that it can weaken the inductive bias generally set up by the joint task optimization. In this work, we present a novel way to partition the parameter space without weakening the inductive bias. Specifically, we propose Maximum Roaming, a method inspired by dropout that randomly varies the parameter partitioning while forcing the parameters to visit as many tasks as possible at a regulated frequency, so that the network fully adapts to each update. We study the properties of our method through experiments on a variety of visual multi-task datasets. Experimental results suggest that the regularization brought by roaming has a greater impact on performance than the usual partitioning optimization strategies. The overall method is flexible, easily applicable, provides superior regularization, and consistently achieves improved performance compared to recent multi-task learning formulations.


Type: Conference
City: Palo Alto
Date: 2021-02-02
Department: Data Science
Eurecom Ref: 6295
Copyright: © AAAI. Personal use of this material is permitted. The definitive version of this paper was published in AAAI 2021, 35th AAAI Conference on Artificial Intelligence, 2-9 February 2021, Virtual Conference and is available at:

PERMALINK: https://www.eurecom.fr/publication/6295