The Role of the Information Bottleneck in Representation Learning

Abstract: A grand challenge in representation learning is the development of computational algorithms that learn the different explanatory factors of variation behind high-dimensional data. Encoder models are usually trained to optimize performance on training data, when the real objective is to generalize well to other (unseen) data. Although numerical evidence suggests that noise injection at the level of representations might improve the generalization ability of the resulting encoders, an information-theoretic justification of this principle remains elusive. In this work, we derive an upper bound on the so-called generalization gap corresponding to the cross-entropy loss and show that, when this bound times a suitable multiplier and the empirical risk are minimized jointly, the problem is equivalent to optimizing the Information Bottleneck objective with respect to the empirical data distribution. We specialize our general conclusions to analyze the dropout regularization method in deep neural networks, explaining how this regularizer helps to decrease the generalization gap.
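The noise-injection mechanism the abstract refers to is dropout applied to a learned representation; in the standard formulation the Information Bottleneck objective it connects to is min over encoders of I(X;Z) - beta * I(Z;Y). Below is a minimal NumPy sketch of (inverted) dropout at the representation level — an illustration of the general technique, not the authors' exact formulation; the function name and rates are hypothetical.

```python
import numpy as np

def dropout(representation, rate=0.5, rng=None, training=True):
    """Inverted dropout: zero each unit with probability `rate` during
    training and rescale the survivors by 1/(1 - rate) so the expected
    activation is unchanged. At test time the representation passes
    through untouched."""
    if not training or rate == 0.0:
        return representation
    rng = rng or np.random.default_rng()
    mask = rng.random(representation.shape) >= rate  # keep with prob. 1 - rate
    return representation * mask / (1.0 - rate)

# Toy usage: stochastic noise injected into an otherwise deterministic
# representation z, as in dropout-regularized training.
rng = np.random.default_rng(0)
z = np.ones(8)
z_noisy = dropout(z, rate=0.25, rng=rng)
```

The randomness of the mask makes the encoder output a noisy function of the input, which is the ingredient the paper's bound ties to a reduced generalization gap.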
Document type: Conference paper
IEEE International Symposium on Information Theory (ISIT 2018), June 2018, Vail, United States. DOI: 10.1109/isit.2018.8437679

https://hal-centralesupelec.archives-ouvertes.fr/hal-01756003
Contributor: Pablo Piantanida
Submitted on: Saturday, March 31, 2018 - 10:47:36
Last modified on: Tuesday, November 20, 2018 - 16:40:34

Identifiers

Citation

Matias Vera, Pablo Piantanida, Leonardo Rey Vega. The Role of the Information Bottleneck in Representation Learning. IEEE International Symposium on Information Theory (ISIT 2018), Jun 2018, Vail, United States. DOI: 10.1109/isit.2018.8437679. hal-01756003.


Metrics

Record views

222