Localizing Adversarial Attacks To Produce More Imperceptible Noise

Authors

P. Reddy and A. S. Gujral

DOI:

https://doi.org/10.32473/flairs.38.1.139004

Abstract

Adversarial attacks in machine learning traditionally focus on global perturbations to input data, yet the potential of localized adversarial noise remains underexplored. This study systematically evaluates localized adversarial attacks across widely used methods, including FGSM, PGD, and C&W, to quantify their effectiveness, imperceptibility, and computational efficiency. By introducing a binary mask that constrains noise to specific regions, localized attacks achieve significantly lower mean pixel perturbations, a higher Peak Signal-to-Noise Ratio (PSNR), and improved Structural Similarity Index (SSIM) scores compared to global attacks. However, these benefits come at the cost of increased computational effort and a modest reduction in Attack Success Rate (ASR). Our results show that iterative methods, such as PGD and C&W, are more robust to localization constraints than single-step methods like FGSM, maintaining higher ASR and imperceptibility metrics. This work provides a comprehensive analysis of localized adversarial attacks, offering practical insights for advancing attack strategies and designing robust defensive systems.
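
The sketch below is a rough illustration of the binary-mask idea described in the abstract, not the authors' code: a single-step FGSM perturbation is computed globally and then zeroed outside a {0,1} mask. The names `model`, `x`, `y`, `mask`, and the epsilon value are assumed placeholders for a PyTorch classifier, an input batch in [0, 1], integer labels, and the region to be attacked.

```python
# Minimal sketch of a mask-constrained (localized) FGSM attack in PyTorch.
# Assumes: `model` is a classifier, `x` an input batch in [0, 1], `y` labels,
# and `mask` a binary tensor broadcastable to x (1 = attacked region).
import torch
import torch.nn.functional as F

def localized_fgsm(model, x, y, mask, epsilon=8 / 255):
    """One-step FGSM whose perturbation is restricted to the masked region."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Global FGSM noise, then confined to the mask before being applied.
    noise = epsilon * x.grad.sign() * mask
    return (x + noise).clamp(0.0, 1.0).detach()

# Illustrative usage: attack only a 10x10 patch of a 3x32x32 image batch.
# mask = torch.zeros(1, 1, 32, 32); mask[..., :10, :10] = 1.0
# x_adv = localized_fgsm(model, x, y, mask)
```

The same masking step carries over to iterative attacks such as PGD by multiplying the per-step update by the mask, which is consistent with the abstract's observation that iterative methods tolerate the localization constraint better than single-step FGSM.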

Published

14-05-2025

How to Cite

Reddy, P., & Gujral, A. S. (2025). Localizing Adversarial Attacks To Produce More Imperceptible Noise. The International FLAIRS Conference Proceedings, 38(1). https://doi.org/10.32473/flairs.38.1.139004

Section

Special Track: Navigating AI: Security, Privacy, Ethics, and Regulation