Conference papers

Defending Adversarial Examples via DNN Bottleneck Reinforcement

Abstract: This paper presents a DNN bottleneck reinforcement scheme to alleviate the vulnerability of deep neural networks (DNNs) to adversarial attacks. A typical DNN classifier encodes the input image into a compressed latent representation that is more suitable for inference. This information bottleneck trades off image-specific structure against class-specific information. By reinforcing the former while maintaining the latter, the scheme removes any redundant information, adversarial or not, from the latent representation. Hence, this paper proposes to jointly train an auto-encoder (AE) that shares its encoder weights with the visual classifier. To reinforce the information bottleneck, we introduce a multi-scale low-pass objective and multi-scale high-frequency communication for better frequency steering in the network. Unlike existing approaches, our scheme is the first reforming defense per se: it keeps the classifier structure untouched, appends no pre-processing head, and is trained on clean images only. Extensive experiments on MNIST, CIFAR-10 and ImageNet demonstrate the strong defense of our method against various adversarial attacks.
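
As a rough illustration of the joint training described in the abstract, the sketch below pairs a classifier and an auto-encoder on a shared encoder and combines cross-entropy with a reconstruction loss on clean images. Everything here (the PyTorch framing, the layer sizes, the plain MSE reconstruction term, and the weight lam) is a hypothetical stand-in, not the authors' implementation; in particular, the paper's multi-scale low-pass objective and high-frequency communication would take the place of the single MSE term.

```python
# Hypothetical sketch, not the authors' code: a classifier and an
# auto-encoder trained jointly on clean images with a shared encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BottleneckNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Shared encoder: the information bottleneck.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Classification head keeps class-specific information.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes),
        )
        # Decoder reinforces image-specific structure in the same latent code.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)              # compressed latent representation
        return self.classifier(z), self.decoder(z)

def joint_loss(logits, recon, x, y, lam: float = 1.0):
    # Cross-entropy preserves class information; the reconstruction term
    # (a plain MSE stand-in for the paper's multi-scale low-pass objective)
    # reinforces the bottleneck. lam balances the two terms.
    return F.cross_entropy(logits, y) + lam * F.mse_loss(recon, x)

# One training step on a batch of clean images (x, y):
#   logits, recon = model(x)
#   loss = joint_loss(logits, recon, x, y)
#   loss.backward(); optimizer.step()
```

At test time, presumably only the classifier branch is used; the decoder exists to shape the shared latent representation during training, which is why the classifier structure itself stays untouched.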

https://hal.archives-ouvertes.fr/hal-02912189
Contributor: Teddy Furon
Submitted on: Friday, November 20, 2020 - 5:24:55 PM
Last modification on: Wednesday, November 25, 2020 - 3:07:54 AM

File

ACMMM_adversarial-3.pdf
Publisher files allowed on an open archive

Identifiers

HAL Id: hal-02912189
DOI: 10.1145/3394171.3413825

Citation

Wenqing Liu, Miaojing Shi, Teddy Furon, Li Li. Defending Adversarial Examples via DNN Bottleneck Reinforcement. ACM Multimedia Conference 2020, Oct 2020, Seattle, United States. pp.1930-1938, ⟨10.1145/3394171.3413825⟩. ⟨hal-02912189⟩

Metrics

Record views: 49
File downloads: 27