Y. Bakhti, S. A. Fezza, W. Hamidouche, and O. Deforges, DDSA: A Defense Against Adversarial Attacks Using Deep Denoising Sparse Autoencoder, IEEE Access, vol.7, pp.160397-160407, 2019.
URL : https://hal.archives-ouvertes.fr/hal-02349625

J. Buckman, A. Roy, C. Raffel, and I. Goodfellow, Thermometer encoding: One hot way to resist adversarial examples, ICLR, 2018.

N. Carlini and D. Wagner, Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security - AISec '17, pp.3-14, 2017.

N. Carlini and D. Wagner, Towards Evaluating the Robustness of Neural Networks, 2017 IEEE Symposium on Security and Privacy (SP), 2017.

G. K. Dziugaite, Z. Ghahramani, and D. M. Roy, A study of the effect of JPG compression on adversarial images, 2016.

I. J. Goodfellow, J. Shlens, and C. Szegedy, Explaining and harnessing adversarial examples, ICLR, 2015.

A. Matyasko, Towards deep neural network architectures robust to adversarial examples.

K. He, X. Zhang, S. Ren, and J. Sun, Deep Residual Learning for Image Recognition, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Y. Hua, S. Ge, X. Gao, X. Jin, and D. Zeng, Defending Against Adversarial Examples via Soft Decision Trees Embedding, Proceedings of the 27th ACM International Conference on Multimedia, 2019.

X. Jia, X. Wei, X. Cao, and H. Foroosh, ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

L. Jiang, X. Ma, S. Chen, J. Bailey, and Y. Jiang, Black-box Adversarial Attacks on Video Recognition Models, Proceedings of the 27th ACM International Conference on Multimedia, 2019.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, Communications of the ACM, vol.60, issue.6, pp.84-90, 2017.

A. Kurakin, I. Goodfellow, and S. Bengio, Adversarial machine learning at scale, ICLR, 2017.

X. Li and F. Li, Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics, 2017 IEEE International Conference on Computer Vision (ICCV), 2017.

F. Liao, M. Liang, Y. Dong, T. Pang, X. Hu et al., Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018.

Z. Liu, Q. Liu, T. Liu, N. Xu, X. Lin et al., Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

J. Lu, T. Issaranon, and D. Forsyth, SafetyNet: Detecting and Rejecting Adversarial Examples Robustly, 2017 IEEE International Conference on Computer Vision (ICCV), 2017.

C. Ma, C. Zhao, H. Shi, L. Chen, J. Yong et al., MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks, Proceedings of the 27th ACM International Conference on Multimedia, 2019.

S. Ma, Y. Liu, G. Tao, W. Lee, and X. Zhang, NIC: Detecting Adversarial Samples with Neural Network Invariant Checking, Proceedings 2019 Network and Distributed System Security Symposium, 2019.

A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, Towards Deep Learning Models Resistant to Adversarial Attacks, ICLR, 2018.

D. Meng and H. Chen, MagNet: A Two-Pronged Defense against Adversarial Examples, Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01176128

A. Nayebi and S. Ganguli, Biologically inspired protection of deep networks from adversarial attacks, 2017.

N. Papernot, P. Mcdaniel, X. Wu, S. Jha, and A. Swami, Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2016 IEEE Symposium on Security and Privacy (SP), 2016.

O. Ronneberger, P. Fischer, and T. Brox, U-Net: Convolutional Networks for Biomedical Image Segmentation, Lecture Notes in Computer Science, pp.234-241, 2015.

J. Rony, L. G. Hafemann, L. S. Oliveira, I. Ben Ayed, R. Sabourin et al., Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

A. M. Saxe, Y. Bansal, J. Dapello, M. Advani, A. Kolchinsky et al., On the Information Bottleneck Theory of Deep Learning, International Conference on Learning Representations, 2018.

Y. Song, T. Kim, S. Nowozin, S. Ermon, and N. Kushman, PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples, ICLR, 2018.

C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan et al., Intriguing properties of neural networks, ICLR, 2014.

O. Taran, S. Rezaeifar, T. Holotyak, and S. Voloshynovskiy, Defending Against Adversarial Attacks by Randomized Diversification, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

W. Xu, D. Evans, and Y. Qi, Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks, Proceedings 2018 Network and Distributed System Security Symposium, 2018.

P. Zhao, S. Liu, Y. Wang, and X. Lin, An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks, 2018 ACM Multimedia Conference on Multimedia Conference - MM '18, 2018.