
A Unifying Mutual Information View of Metric Learning: Cross-Entropy vs. Pairwise Losses

Abstract: Statistical methods protecting sensitive information or the identity of the data owner have become critical to ensuring the privacy of individuals as well as of organizations. This paper investigates anonymization methods based on representation learning and deep neural networks, motivated by novel information-theoretical bounds. We introduce a novel training objective for simultaneously training a predictor over target variables of interest (the regular labels) while preventing an intermediate representation from being predictive of the private labels. The architecture is based on three sub-networks: one going from input to representation, one from representation to predicted regular labels, and one from representation to predicted private labels. The training procedure aims at learning representations that preserve the relevant part of the information (about regular labels) while discarding information about the private labels, which correspond to the identity of a person. We demonstrate the success of this approach on two distinct classification-versus-anonymization tasks (handwritten digits and sentiment analysis).
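As a rough illustration of the three-sub-network setup described in the abstract (not the authors' code), the sketch below wires an encoder to two classification heads and combines their cross-entropy losses into a competing objective. All layer sizes, the trade-off weight `lam`, and the single-linear-layer sub-networks are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # numerically stable softmax over the class axis
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # mean negative log-likelihood of the true labels
    n = len(labels)
    return -np.log(probs[np.arange(n), labels] + 1e-12).mean()

# Three sub-networks, each reduced to one linear map for the sketch:
# input -> representation, representation -> regular labels,
# representation -> private labels.
d_in, d_rep, n_reg, n_priv, n = 8, 4, 3, 5, 16
W_enc  = rng.normal(size=(d_in, d_rep))
W_reg  = rng.normal(size=(d_rep, n_reg))
W_priv = rng.normal(size=(d_rep, n_priv))

x      = rng.normal(size=(n, d_in))
y_reg  = rng.integers(0, n_reg, size=n)    # regular (task) labels
y_priv = rng.integers(0, n_priv, size=n)   # private (identity) labels

z = x @ W_enc                              # shared representation
loss_reg  = cross_entropy(softmax(z @ W_reg),  y_reg)
loss_priv = cross_entropy(softmax(z @ W_priv), y_priv)

lam = 1.0  # assumed trade-off weight between the two objectives
# Encoder objective: predict the regular labels well while making the
# representation uninformative about the private labels (so the private
# head's loss is subtracted rather than added).
encoder_loss = loss_reg - lam * loss_priv
```

In a full adversarial training loop, the private head would be trained to minimize `loss_priv` while the encoder is updated against `encoder_loss`, pushing the representation toward retaining task information and shedding identity information.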
Contributor: Pablo Piantanida
Submitted on: Tuesday, September 21, 2021 - 11:26:09 PM
Last modification on: Wednesday, November 3, 2021 - 7:17:51 AM
Long-term archiving on: Wednesday, December 22, 2021 - 7:29:13 PM


Malik Boudiaf, Jérôme Rony, Imtiaz Masud Ziko, Ismail Ben Ayed, Eric Granger, et al.. A Unifying Mutual Information View of Metric Learning: Cross-Entropy vs. Pairwise Losses. 16th European Conference on Computer Vision (ECCV), Sep 2021, Glasgow (virtual), United Kingdom. pp.548-564, ⟨10.1007/978-3-030-58539-6_33⟩. ⟨hal-03351131⟩


