Adversarial system
Computer science
Adversarial machine learning
Adversary
Computer security
Artificial intelligence
Vulnerability (computing)
Popularity
Machine learning
Robustness
Deep learning
Internet privacy
Social psychology
Psychology
Gene
Chemistry
Biochemistry
Authors
Muhammad Irfan, Sheraz Ali, Irfan Yaqoob, Numan Zafar
Source
Journal: International Conference on Artificial Intelligence
Date: 2021-04-05
Identifiers
DOI:10.1109/icai52203.2021.9445247
Abstract
Attackers choose their targets strategically and deliberately, based on the vulnerabilities they have identified. Organizations and individuals mostly try to protect themselves against a single occurrence or type of attack, yet they must acknowledge that an attacker can easily shift focus to newly uncovered vulnerabilities. Even if several attacks are successfully tackled, risks remain, and the need to face threats will persist for the foreseeable future. Machine learning algorithms have earned great popularity in artificial intelligence (AI) in the modern era. Large organizations like Google, Facebook, and Microsoft use large volumes of user data to train machine learning models, which they then use for targeted social advertising. For example, WhatsApp recently introduced a new privacy policy under which it shares user data with Facebook; that data may be used for advertising in the future, and in this way an individual's privacy might be breached. Because of the high probability of attacks and the leakage of sensitive data in deep learning over distributed computation, adversarial examples have demonstrated the vulnerability of machine learning techniques in terms of robustness, and adversaries have exploited this vulnerability to target machine learning systems. Although adversarial attacks on real-world applications did not occur until recently, it is difficult to inject an artificial adversary into a hosted model without compromising its reliability. Recently, a few attacks have occurred against facial recognition and road-sign classification systems. Finally, we examine the difference between theoretical methodologies for generating adversarial examples and practical schemes for attacking real-world applications. To direct future studies toward real defences against adversarial examples in real-world applications and realistic attacks, we integrate the threat model with adversarial examples and give an overview with future directions.
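The adversarial examples the abstract refers to are commonly illustrated with the Fast Gradient Sign Method (FGSM), which perturbs an input in the direction that increases the model's loss. The sketch below is not from the paper: the logistic-regression model, weights, and epsilon are illustrative assumptions chosen so that a correctly classified point flips class after a small signed perturbation.

```python
import numpy as np

def sigmoid(z):
    # Logistic function: maps a score to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """Perturb input x to raise the cross-entropy loss for true label y.

    For a logistic model p = sigmoid(w.x), the gradient of the
    cross-entropy loss with respect to x is (p - y) * w; FGSM steps
    by eps in the sign of that gradient.
    """
    p = sigmoid(np.dot(w, x))          # model's predicted P(class 1)
    grad_x = (p - y) * w               # d(loss)/dx for cross-entropy
    return x + eps * np.sign(grad_x)   # loss-increasing signed step

# Toy example (hypothetical weights/input): the clean point is
# classified as class 1, but a small adversarial step flips it.
w = np.array([2.0, -1.0])
x = np.array([1.0, 0.5])
y = 1
print(sigmoid(np.dot(w, x)) > 0.5)      # True: correctly classified

x_adv = fgsm_perturb(x, y, w, eps=0.9)
print(sigmoid(np.dot(w, x_adv)) > 0.5)  # False: misclassified
```

Against deep networks the same idea uses automatic differentiation to obtain the input gradient, but the principle is identical: a perturbation that is small per coordinate can still cross the model's decision boundary.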