Towards Disentangling Information Paths with Coded ResNeXt
NeurIPS 2022, 36th Conference on Neural Information Processing Systems, 28 November-9 December 2022, New Orleans, USA (Hybrid Conference)
Abstract:
The conventional, widely used treatment of deep learning models as black boxes provides limited or no insight into the mechanisms that drive neural network decisions. Significant research effort has been devoted to building interpretable models to address this issue. Most of this work either focuses on the high-level features associated with the last layers or attempts to interpret the output of a single layer. In this paper, we take a novel approach to enhancing the transparency of the whole network. We propose a neural network architecture for classification in which the information relevant to each class flows through specific paths. These paths are designed before training, leveraging coding theory and without depending on semantic similarities between classes. A key property is that each path can be used as an autonomous single-purpose model. This enables us to obtain, without any additional training and for any class, a lightweight binary classifier that has at least
Type:
Conference
City:
New Orleans
Date:
2022-11-28
Department:
Communication Systems
Eurecom Ref:
7141
Copyright:
© NIST. Personal use of this material is permitted. The definitive version of this paper was published in NeurIPS 2022, 36th Conference on Neural Information Processing Systems, 28 November-9 December 2022, New Orleans, USA (Hybrid Conference) and is available at:
See also:
PERMALINK: https://www.eurecom.fr/publication/7141