Information Bottleneck and Representation Learning - CentraleSupélec
Book Chapter, 2021

Information Bottleneck and Representation Learning

Abstract

A grand challenge in representation learning is the development of computational algorithms that learn the different explanatory factors of variation behind high-dimensional data. Representation models (usually referred to as encoders) are often trained to optimize performance on the training data, when the real objective is to generalize well to other, unseen data. The first part of this chapter provides an overview of and introduction to fundamental concepts in statistical learning theory and the Information Bottleneck principle. It serves as the mathematical basis for the technical results of the second part, in which an upper bound on the generalization gap corresponding to the cross-entropy risk is derived. This bound acts as a penalty term: when it is scaled by a suitable multiplier and minimized jointly with the empirical cross-entropy risk, the problem becomes equivalent to optimizing the Information Bottleneck objective with respect to the empirical data distribution. This result provides an interesting connection between mutual information and generalization, and helps to explain why injecting noise during training can improve the generalization ability of encoder models and enforce invariances in the resulting representations.
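A schematic rendering of the stated equivalence may help fix ideas. The notation below (X the data, Y the target, Z the representation induced by an encoder p_{Z|X}, hats for quantities computed under the empirical distribution) follows the standard Information Bottleneck convention and is an assumption here, not the chapter's exact formulation:

\min_{p_{Z\mid X}} \; \widehat{\mathcal{L}}_{\mathrm{CE}}\!\left(p_{Z\mid X}\right) + \lambda\, \widehat{I}(X;Z)
\quad \Longleftrightarrow \quad
\min_{p_{Z\mid X}} \; \widehat{I}(X;Z) - \beta\, \widehat{I}(Z;Y)

Here \widehat{I}(X;Z) plays the role of the penalty term bounding the generalization gap, \lambda is the multiplier, and the trade-off parameter \beta on the right-hand side is determined by \lambda.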
Main file: book.pdf (1.21 MB). Origin: files produced by the author(s).

Dates and versions

hal-01742456 , version 1 (19-01-2022)
hal-01742456 , version 2 (22-06-2023)

Identifiers

HAL Id: hal-01742456
DOI: ⟨10.1017/9781108616799.012⟩

Cite

Pablo Piantanida, Leonardo Rey Vega. Information Bottleneck and Representation Learning. In: Information-Theoretic Methods in Data Science, Cambridge University Press, 2021, pp. 330-358. ⟨10.1017/9781108616799.012⟩. ⟨hal-01742456v2⟩