Optimization of deep multi-task networks

Pascal, Lucas

Multi-task learning (MTL) is a learning paradigm involving the joint optimization of parameters with respect to multiple tasks. By learning several related tasks, a learner receives more complete and complementary information about the input domain from which the tasks are drawn. This allows the learner to gain a better understanding of the domain by building a more accurate set of assumptions about it. In practice, however, the broader use of MTL is hindered by the lack of consistent performance gains observed in deep multi-task networks. It is often the case that deep MTL networks suffer from performance degradation caused by task interference. This thesis addresses the problem of task interference in multi-task learning, in order to improve the generalization capabilities of deep neural networks.
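The joint optimization described above can be illustrated with a minimal sketch of hard parameter sharing, a common MTL setup (this is an illustrative example, not the method developed in the thesis): a shared backbone produces a representation consumed by task-specific heads, and the joint objective is a weighted sum of per-task losses. All function and parameter names here (`shared_backbone`, `task_head`, `joint_loss`, `task_weights`) are hypothetical.

```python
# Illustrative sketch of hard parameter sharing in MTL (not the thesis's
# method): one shared parameter feeds several task heads, and the joint
# objective sums the per-task losses with scalar weights.

def shared_backbone(x, w_shared):
    # Shared representation; a single scalar feature for simplicity.
    return w_shared * x

def task_head(h, w_task):
    # Task-specific head applied to the shared representation.
    return w_task * h

def joint_loss(x, targets, w_shared, w_tasks, task_weights):
    # Weighted sum of squared-error losses, one per task.
    h = shared_backbone(x, w_shared)
    losses = [(task_head(h, w_t) - y) ** 2 for w_t, y in zip(w_tasks, targets)]
    return sum(lam * l for lam, l in zip(task_weights, losses))

# Two tasks sharing one backbone: gradients of both losses flow into
# w_shared, which is where task interference can arise.
loss = joint_loss(x=1.0, targets=[2.0, 3.0],
                  w_shared=1.0, w_tasks=[1.0, 1.0],
                  task_weights=[0.5, 0.5])
```

In this toy setting both task losses backpropagate into the same shared parameter `w_shared`; conflicting gradient directions between tasks at that shared parameter are one common way to formalize the task interference discussed in the abstract.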


Data Science
© EURECOM. Personal use of this material is permitted. The definitive version of this paper was published in and is available at:

PERMALINK : https://www.eurecom.fr/publication/6640