TY - JOUR
AU - Shuai Zhou
AU - Dayong Ye
AU - Chi Liu
AU - Tianqing Zhu
AU - Philip S. Yu
AU - Wanlei Zhou
AB - The outstanding performance of deep neural networks has promoted deep learning applications in a broad set of domains. However, the potential risks caused by adversarial samples have hindered the large-scale deployment of deep learning. In these scenarios, adversarial perturbations, imperceptible to human eyes, significantly decrease the model’s final performance. Many papers have been published on adversarial attacks and their countermeasures in the realm of deep learning. Most focus on evasion attacks, where the adversarial examples are found at test time, as opposed to poisoning attacks, where poisoned data is inserted into the training data. Further, it is difficult to evaluate the real threat of adversarial attacks or the robustness of a deep learning model, as there are no standard evaluation methods. Hence, with this article, we review the literature to date. Additionally, we attempt to offer the first analysis framework for a systematic understanding of adversarial attacks. The framework is built from the perspective of cybersecurity to provide a lifecycle for adversarial attacks and defenses.
DA - 2022-12-23
DO - 10.1145/3547330
N1 - Despite the impressive performance of DNNs on everyday tasks, the security problems of deep learning techniques have given rise to extensive concerns about how vulnerable these models are to adversarial samples. A vast body of attacks and defensive mechanisms has been proposed to find adversarial samples and to evaluate model security and robustness accurately. The survey concludes with a discussion of promising directions for future work on improving existing adversarial attacks and defenses.
PB - Association for Computing Machinery (ACM)
PY - 2022
T2 - ACM Computing Surveys
TI - Adversarial Attacks and Defenses in Deep Learning: From a Perspective of Cybersecurity
UR - https://dl.acm.org/doi/full/10.1145/3547330
SN - 0360-0300
ER -