Conference papers

An Adversarial Attacker for Neural Networks in Regression Problems

Abstract: Adversarial attacks against neural networks, and defenses against them, have been investigated mostly in classification scenarios. Adversarial attacks in regression settings, however, remain understudied, even though regression models play a critical role in many safety-critical applications. In this work, we present an adversarial attacker for regression tasks derived from the algebraic properties of the Jacobian of the network. We show that our attacker successfully fools the neural network, and we measure its effectiveness in reducing estimation performance. The proposed white-box attacker is intended to support engineers in designing safety-critical regression machine learning models. We report results on several open-source and real industrial tabular datasets; in particular, the proposed attacker outperforms attackers based on random input perturbations. Our analysis relies on quantifying the fooling error as well as various error metrics. A noteworthy feature of our attacker is that it can optimally attack a subset of inputs, which is helpful for analysing the sensitivity of specific inputs.
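The abstract does not spell out the attack, but the general idea behind Jacobian-based attacks on regression networks can be sketched as follows: to first order, a perturbation aligned with the top right singular vector of the Jacobian J = ∂f/∂x produces the largest change in the output for a fixed perturbation norm. The toy network, function names, and budget `eps` below are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression "network": f(x) = W2 @ tanh(W1 @ x), mapping R^5 -> R^3.
W1 = rng.standard_normal((8, 5))
W2 = rng.standard_normal((3, 8))

def f(x):
    return W2 @ np.tanh(W1 @ x)

def jacobian(x):
    # Analytic Jacobian of the toy network via the chain rule:
    # d tanh(u)/du = 1 - tanh(u)^2, applied elementwise.
    h = np.tanh(W1 @ x)
    return W2 @ np.diag(1.0 - h**2) @ W1  # shape (3, 5)

def jacobian_attack(x, eps=0.1):
    # The top right singular vector of J is the input direction to which
    # the output is most sensitive; scale it to the budget eps.
    _, _, Vt = np.linalg.svd(jacobian(x))
    return x + eps * Vt[0]

x = rng.standard_normal(5)
x_adv = jacobian_attack(x)

# Baseline: a random perturbation with the same norm budget.
d = rng.standard_normal(5)
x_rand = x + 0.1 * d / np.linalg.norm(d)

fool_adv = np.linalg.norm(f(x_adv) - f(x))
fool_rand = np.linalg.norm(f(x_rand) - f(x))
print(fool_adv, fool_rand)
```

For a small budget, the Jacobian-aligned direction typically induces a larger output deviation (the "fooling error") than a random direction of the same norm, mirroring the paper's comparison against random-perturbation attackers.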
Contributor: Kavya Gupta
Submitted on: Saturday, October 22, 2022 - 3:06:54 PM
Last modification on: Monday, November 7, 2022 - 11:36:39 AM




  • HAL Id: hal-03527640, version 2


Kavya Gupta, Beatrice Pesquet-Popescu, Fateh Kaakai, Jean-Christophe Pesquet, Fragkiskos D. Malliaros. An Adversarial Attacker for Neural Networks in Regression Problems. IJCAI Workshop on Artificial Intelligence Safety (AI Safety), Aug 2021, Montreal/Virtual, Canada. ⟨hal-03527640v2⟩


