The Role of Information Complexity and Randomization in Representation Learning

Abstract: A grand challenge in representation learning is to learn the different explanatory factors of variation behind high-dimensional data. Encoder models are often trained to optimize performance on the training data, when the real objective is to generalize well to unseen data. Although there is ample numerical evidence that injecting noise at the representation level during training can improve the generalization ability of encoders, an information-theoretic understanding of this principle remains elusive. This paper presents a sample-dependent bound on the generalization gap of the cross-entropy loss that scales with the information complexity (IC) of the representations, i.e., the mutual information between the inputs and their representations. The IC is investigated empirically for standard multi-layer neural networks trained with SGD on the MNIST and CIFAR-10 datasets; the generalization gap and the IC appear to be directly correlated, suggesting that SGD implicitly selects encoders that minimize the IC. We then specialize the IC to study the role of Dropout in the generalization capacity of deep encoders, which is shown to be directly related to the encoder capacity, a measure of the distinguishability of samples from their representations. Our results support some recent regularization methods.
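The information complexity named in the abstract, the mutual information I(X; Z) between inputs and their representations, can be computed in closed form for a toy discrete encoder. The following sketch is purely illustrative (the distributions, the noise model, and all names are assumptions, not taken from the paper); it shows how mixing the encoder with uniform noise at the representation level lowers the IC:

```python
import numpy as np

def mutual_information(p_x, p_z_given_x):
    """Exact I(X;Z) in nats for a discrete stochastic encoder p(z|x).

    p_x         : shape (K,)   prior over inputs
    p_z_given_x : shape (K, M) row-stochastic encoder matrix
    """
    p_xz = p_x[:, None] * p_z_given_x   # joint p(x, z)
    p_z = p_xz.sum(axis=0)              # marginal p(z)
    mask = p_xz > 0                     # skip zero-probability cells
    ratio = p_z_given_x[mask] / np.broadcast_to(p_z, p_xz.shape)[mask]
    return float((p_xz[mask] * np.log(ratio)).sum())

K = 4
p_x = np.full(K, 1.0 / K)

# Deterministic encoder: each input gets its own code, so IC = log K.
det_encoder = np.eye(K)

# Noise injection: mix each row with the uniform distribution over codes.
eps = 0.5
noisy_encoder = (1 - eps) * det_encoder + eps / K

ic_det = mutual_information(p_x, det_encoder)      # equals log(4) ~ 1.386 nats
ic_noisy = mutual_information(p_x, noisy_encoder)  # strictly smaller
```

Here the noisy encoder keeps samples less distinguishable from their representations, which is the discrete analogue of the Dropout-style randomization the abstract relates to encoder capacity.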
Complete metadata
Contributor: Pablo Piantanida <>
Submitted on: Saturday, March 24, 2018 - 23:42:59
Last modified on: Wednesday, August 1, 2018 - 15:02:03

  • HAL Id : hal-01742442, version 1
  • arXiv: 1802.05355


Matias Vera, Pablo Piantanida, Leonardo Rey Vega. The Role of Information Complexity and Randomization in Representation Learning. 35 pages, 3 figures. Submitted for publication, 2018. 〈hal-01742442〉
