Shuai Zhou, Dayong Ye, Chi Liu, Tianqing Zhu, Philip S. Yu, and Wanlei Zhou. 2022. "Adversarial Attacks and Defenses in Deep Learning: From a Perspective of Cybersecurity." ACM Computing Surveys. Published 2022-12-23. ISSN 0360-0300. https://dl.acm.org/doi/full/10.1145/3547330

Abstract: The outstanding performance of deep neural networks has promoted deep learning applications in a broad set of domains. However, the potential risks posed by adversarial samples have hindered the large-scale deployment of deep learning. In these scenarios, adversarial perturbations, imperceptible to human eyes, significantly decrease the model's final performance. Many papers have been published on adversarial attacks and their countermeasures in the realm of deep learning. Most focus on evasion attacks, where adversarial examples are crafted at test time, as opposed to poisoning attacks, where poisoned data is inserted into the training set. Further, it is difficult to evaluate the real threat of adversarial attacks or the robustness of a deep learning model, as there are no standard evaluation methods. Hence, with this article, we review the literature to date. Additionally, we attempt to offer the first analysis framework for a systematic understanding of adversarial attacks. The framework is built from the perspective of cybersecurity and covers the full lifecycle of adversarial attacks and defenses.
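To make the evasion-attack setting described in the abstract concrete, the sketch below shows the classic fast gradient sign method (FGSM), one well-known way to craft the imperceptible test-time perturbations the abstract refers to. This example is not taken from the surveyed paper; the PyTorch classifier, the input/label tensors, and the epsilon budget are illustrative assumptions.

# Minimal FGSM evasion-attack sketch (illustrative; not from the surveyed paper).
# Assumes a trained PyTorch classifier `model`, an input batch `x` scaled
# to [0, 1], true labels `y`, and a small perturbation budget `epsilon`.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    # Track gradients with respect to the input, not the model weights.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient ascent step on the loss, then clamp to the
    # valid pixel range so the perturbed input remains a feasible image.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

With a small budget (e.g., epsilon = 0.03 for inputs in [0, 1]), the perturbed input typically looks unchanged to a human yet can flip the model's prediction, which is exactly the test-time threat the abstract describes.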