Book section

Information Bottleneck and Representation Learning

Abstract: A grand challenge in representation learning is the development of computational algorithms that learn the different explanatory factors of variation behind high-dimensional data. Representation models (usually referred to as encoders) are often optimized for performance on the training data, whereas the real objective is to generalize well to other (unseen) data. The first part of this chapter provides an overview of and introduction to fundamental concepts in statistical learning theory and the Information Bottleneck principle. It serves as the mathematical basis for the technical results of the second part, in which an upper bound on the generalization gap associated with the cross-entropy risk is derived. When this penalty term, scaled by a suitable multiplier, is minimized jointly with the empirical cross-entropy risk, the problem is equivalent to optimizing the Information Bottleneck objective with respect to the empirical data distribution. This result provides an interesting connection between mutual information and generalization, and helps to explain why noise injection during the training phase can improve the generalization ability of encoder models and enforce invariances in the resulting representations.
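To make the connection concrete, the following is a minimal, illustrative sketch (in PyTorch) of training with an Information-Bottleneck-style objective: the empirical cross-entropy risk is penalized by a variational upper bound on the mutual information I(X; Z), weighted by a multiplier beta, and noise is injected into the representation through the reparameterization trick. The architecture, the variational bound, and the value of beta are assumptions chosen for illustration, not the chapter's own construction.

```python
# Minimal sketch (illustrative only, not the chapter's implementation):
# empirical cross-entropy risk + beta * variational upper bound on I(X; Z).
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticEncoder(nn.Module):
    """Encoder that injects Gaussian noise into the representation Z (reparameterization)."""
    def __init__(self, in_dim=784, z_dim=32, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)              # mean of q(z|x)
        self.logvar = nn.Linear(256, z_dim)          # log-variance of q(z|x)
        self.classifier = nn.Linear(z_dim, n_classes)  # decoder p(y|z)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # noise injection
        return self.classifier(z), mu, logvar

def ib_loss(logits, y, mu, logvar, beta=1e-3):
    """Cross-entropy risk plus beta times a variational upper bound on I(X; Z)."""
    ce = F.cross_entropy(logits, y)
    # KL( q(z|x) || N(0, I) ), averaged over the batch, upper-bounds I(X; Z).
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()
    return ce + beta * kl

# Example usage on random data (illustrative only).
model = StochasticEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
logits, mu, logvar = model(x)
loss = ib_loss(logits, y, mu, logvar)
opt.zero_grad(); loss.backward(); opt.step()
```

In this sketch, a larger beta tightens the bottleneck on the representation (more compression of X into Z) at the expense of the empirical fit, mirroring the trade-off expressed by the Information Bottleneck objective.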

https://hal-centralesupelec.archives-ouvertes.fr/hal-01742456
Contributor: Pablo Piantanida
Submitted on: Tuesday, September 21, 2021 - 10:55:45 PM
Last modification on: Tuesday, September 28, 2021 - 3:34:08 AM


Citation

Pablo Piantanida, Leonardo Rey Vega. Information Bottleneck and Representation Learning. In: Information-Theoretic Methods in Data Science, Cambridge University Press, pp. 330-358, 2021. ⟨10.1017/9781108616799.012⟩. ⟨hal-01742456⟩
