
Recent Advances in Computer Science and Communications


ISSN (Print): 2666-2558
ISSN (Online): 2666-2566

Review Article

Threat of Adversarial Attacks within Deep Learning: Survey

Author(s): Ata-us-Samad and Roshni Singh*

Volume 16, Issue 7, 2023

Published on: 29 December, 2022

Article ID: e251122211280 Pages: 10

DOI: 10.2174/2666255816666221125155715



Deep learning has become central to the recent rise of artificial intelligence. Yet many deep models lack robustness against adversarially crafted inputs, which raises a serious security concern: in the adversarial setting, a deep neural network (DNN) can be made to misclassify inputs that differ only imperceptibly from correctly classified ones. Because DNNs solve complex problems accurately, they are widely used in computer vision research, including many tasks with critical security implications. This survey revisits the contributions of computer vision to adversarial attacks on deep learning and discusses the corresponding defenses. The area has evolved significantly since the first-generation methods, with many authors contributing new ideas. To ensure the correctness and authenticity of the material surveyed, we focus on peer-reviewed articles published in prestigious venues of computer vision and deep learning. Beyond the literature review, this paper defines standard technical terms for non-experts in the field. It reviews adversarial attack methods and techniques, together with their defenses within deep learning and the scope for future work. Lastly, the survey offers a viewpoint on research in this area of computer vision.

Keywords: DNN (Deep Neural Network), adversarial, perturbation, CNN (Convolutional Neural Network), quasi-imperceptible, deep learning.
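To make the core idea concrete for non-experts, the following is an illustrative sketch (not taken from the article) of the Fast Gradient Sign Method, one of the first-generation attacks the survey covers: a quasi-imperceptible perturbation in the direction of the loss gradient flips a classifier's prediction. The toy logistic-regression "network", its weights, and the inputs below are all hypothetical, chosen only for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Craft an adversarial example x' = x + epsilon * sign(grad_x loss)."""
    p = sigmoid(np.dot(w, x) + b)         # model's predicted probability
    grad_x = (p - y_true) * w             # gradient of cross-entropy loss w.r.t. x
    return x + epsilon * np.sign(grad_x)  # perturb each feature by +/- epsilon

# Toy classifier that correctly assigns x to class 1.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([2.0, -1.0, 1.0])

clean_pred = bool(sigmoid(np.dot(w, x) + b) > 0.5)   # correct: class 1

# A large epsilon is used so the flip is visible in this toy setting;
# on real images epsilon is kept small enough to be imperceptible.
x_adv = fgsm_perturb(x, w, b, y_true=1.0, epsilon=2.0)
adv_pred = bool(sigmoid(np.dot(w, x_adv) + b) > 0.5)  # flipped: class 0

print(clean_pred, adv_pred)
```

The same one-step recipe applies to deep networks: the gradient of the loss with respect to the input is computed by backpropagation instead of the closed form used here.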


© 2024 Bentham Science Publishers