Conference paper, Year: 2018

The Role of the Information Bottleneck in Representation Learning

Abstract

A grand challenge in representation learning is the development of computational algorithms that learn the different explanatory factors of variation behind high-dimensional data. Encoder models are usually trained to optimize performance on training data, although the real objective is to generalize well to other (unseen) data. While numerical evidence suggests that noise injection at the level of representations might improve the generalization ability of the resulting encoders, an information-theoretic justification of this principle remains elusive. In this work, we derive an upper bound on the so-called generalization gap corresponding to the cross-entropy loss, and show that jointly minimizing this bound, scaled by a suitable multiplier, together with the empirical risk is equivalent to optimizing the Information Bottleneck objective with respect to the empirical data distribution. We specialize our general conclusions to analyze the dropout regularization method in deep neural networks, explaining how this regularizer helps to decrease the generalization gap.
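For context, the Information Bottleneck objective that the abstract refers to can be stated in its standard form; the notation below (representation U, multiplier beta) is the conventional one and may differ from the paper's own symbols.

    % Information Bottleneck Lagrangian in standard notation, under the
    % Markov chain Y - X - U; beta > 0 trades the compression term I(X;U)
    % against the predictive relevance term I(Y;U).
    \min_{p(u \mid x)} \ \mathcal{L}_{\mathrm{IB}} \;=\; I(X;U) \;-\; \beta\, I(Y;U)

On the dropout analysis, the following is a minimal sketch only (the architecture, dimensions, and dropout rate are hypothetical placeholders, not the paper's experimental setup): it illustrates dropout as multiplicative noise injected at the representation layer, while training minimizes the empirical cross-entropy risk, i.e. the loss whose generalization gap the paper bounds.

    # Minimal PyTorch sketch: dropout as noise injection on the
    # representation U, trained with the empirical cross-entropy risk.
    import torch
    import torch.nn as nn

    class DropoutEncoder(nn.Module):
        def __init__(self, in_dim=784, rep_dim=128, n_classes=10, p=0.5):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(in_dim, rep_dim), nn.ReLU())
            self.noise = nn.Dropout(p)       # noise injection on the representation
            self.decoder = nn.Linear(rep_dim, n_classes)

        def forward(self, x):
            u = self.noise(self.encoder(x))  # noisy representation U
            return self.decoder(u)           # class logits

    model = DropoutEncoder()
    loss_fn = nn.CrossEntropyLoss()          # empirical risk (cross-entropy)
    x = torch.randn(32, 784)                 # dummy batch of inputs
    y = torch.randint(0, 10, (32,))          # dummy labels
    loss = loss_fn(model(x), y)
    loss.backward()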
No file deposited

Dates and versions

hal-01756003, version 1 (31-03-2018)

Identifiers

Cite

Matias Vera, Pablo Piantanida, Leonardo Rey Vega. The Role of the Information Bottleneck in Representation Learning. IEEE International Symposium on Information Theory (ISIT 2018), Jun 2018, Vail, United States. ⟨10.1109/isit.2018.8437679⟩. ⟨hal-01756003⟩