
Recent Advances in Computer Science and Communications


ISSN (Print): 2666-2558
ISSN (Online): 2666-2566

Review Article

Threat of Adversarial Attacks within Deep Learning: Survey

Author(s): Ata-us-Samad and Roshni Singh*

Volume 16, Issue 7, 2023

Published on: 29 December, 2022

Article ID: e251122211280 Pages: 10

DOI: 10.2174/2666255816666221125155715



In today’s era, deep learning has become central to the recent ascent of artificial intelligence and its models. Yet many artificial intelligence models lack robustness to adversarially crafted inputs: in the adversarial setting, a DNN can be induced to misclassify inputs that appear unmodified to a human observer, which raises serious security concerns. DNNs solve complex problems accurately and are widely employed in vision research, where deep neural models are trained for many tasks, including security-critical applications. We revisit the contributions of computer vision to adversarial attacks on deep learning and discuss the corresponding defenses. Many authors have contributed new ideas in this area, which has evolved significantly since the first-generation methods. To ensure the correctness and authenticity of the surveyed research, the focus is on peer-reviewed articles published in prestigious venues of computer vision and deep learning. Apart from the literature review, this paper defines some standard technical terms for non-experts in the field. It reviews adversarial attack methods and techniques, together with their defenses within deep learning, and outlines the future scope of the area. Lastly, the survey provides a viewpoint on research in this computer vision area.
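To make the adversarial setting concrete for non-experts, the following is a minimal, self-contained sketch of a gradient-sign perturbation (in the spirit of FGSM, one of the first-generation attack methods surveyed in this area). The linear classifier, its weights, and the perturbation budget are illustrative assumptions, not taken from the article itself.

```python
import numpy as np

# Toy linear classifier: p(y=1|x) = sigmoid(w·x + b).
# Weights and bias are illustrative, not from the surveyed paper.
w = np.array([2.0, -3.0])
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# A clean input confidently classified as class 1 (p > 0.5).
x = np.array([1.0, 0.2])

# FGSM-style step: move x in the direction that increases the loss
# for the true label y = 1. For cross-entropy with y = 1, the input
# gradient is d(loss)/dx = (p - 1) * w, so we step along its sign.
y = 1.0
grad = (predict(x) - y) * w
eps = 0.7  # perturbation budget (large here so the toy example flips)
x_adv = x + eps * np.sign(grad)

print(predict(x), predict(x_adv))  # prediction flips across 0.5
```

The same mechanism, applied with a small per-pixel budget to an image classifier, yields the quasi-imperceptible perturbations that the surveyed attacks exploit.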

Keywords: DNN (Deep Neural Network), adversarial, perturbation, CNN (Convolutional Neural Network), quasi-imperceptible, deep learning.

© 2023 Bentham Science Publishers