Author(s):
Shuai Zhou, Dayong Ye, Chi Liu, Tianqing Zhu, Philip S. Yu, Wanlei Zhou
Journal
ACM Computing Surveys (Association for Computing Machinery)
Abstract

The outstanding performance of deep neural networks has promoted deep learning applications in a broad set of domains. However, the potential risks caused by adversarial samples have hindered the large-scale deployment of deep learning. In these scenarios, adversarial perturbations, imperceptible to human eyes, significantly decrease the model’s final performance. Many papers have been published on adversarial attacks and their countermeasures in the realm of deep learning. Most focus on evasion attacks, where the adversarial examples are found at test time, as opposed to poisoning attacks where poisoned data is inserted into the training data. Further, it is difficult to evaluate the real threat of adversarial attacks or the robustness of a deep learning model, as there are no standard evaluation methods. Hence, with this article, we review the literature to date. Additionally, we attempt to offer the first analysis framework for a systematic understanding of adversarial attacks. The framework is built from the perspective of cybersecurity to provide a lifecycle for adversarial attacks and defenses.
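The evasion setting described in the abstract can be made concrete with a small, hypothetical example. The sketch below is not taken from the surveyed article; it shows the classic fast gradient sign method (FGSM), a single gradient-sign step bounded by an L-infinity budget, applied to a toy PyTorch classifier. The model, the random input, and the epsilon value are stand-ins chosen purely for illustration.

```python
# Illustrative sketch (not the paper's method): an FGSM-style evasion attack
# against a toy classifier, assuming PyTorch is available.
import torch
import torch.nn as nn

# Toy stand-in for a trained model; any differentiable classifier would do.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb x in the direction that increases the loss,
    bounded by epsilon in the L-infinity norm."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Random "image" and label purely for demonstration.
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])
x_adv = fgsm_attack(model, x, y)
print("max perturbation:", (x_adv - x).abs().max().item())
```

The perturbation stays within the epsilon budget, so the adversarial input remains visually close to the original while shifting the model's prediction, which is the imperceptibility property the abstract refers to.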

Concluding remarks
Despite the impressive performance of DNNs on everyday tasks, the security problems of deep learning have raised widespread concern about how vulnerable these models are to adversarial samples. A vast body of attacks and defensive mechanisms has been proposed to generate adversarial samples and to accurately evaluate model security and robustness. The survey concludes with a discussion of promising directions for future work to improve existing adversarial attacks and defenses.

Reference details

DOI
10.1145/3547330
Resource type
Journal Article
Year of Publication
2022
ISSN Number
0360-0300
Publication Area
Cybersecurity and defense
Date Published
2022-12-23

How to cite this reference:

Zhou, S., Ye, D., Liu, C., Zhu, T., Yu, P. S., & Zhou, W. (2022). Adversarial Attacks and Defenses in Deep Learning: From a Perspective of Cybersecurity. ACM Computing Surveys. https://doi.org/10.1145/3547330