Adversarial robustness evaluation of multiple-source remote sensing image recognition based on deep neural networks
2023, Vol. 27, No. 8, Pages 1951-1963
Print publication date: 2023-08-07
DOI: 10.11834/jrs.20210597
Sun H, Xu Y J, Chen J, Lei L, Ji K F and Kuang G Y. 2023. Adversarial robustness evaluation of multiple-source remote sensing image recognition based on deep neural networks. National Remote Sensing Bulletin, 27(8): 1951-1963
Deep-neural-network-based multiple-source remote sensing image recognition systems have been widely deployed in many military scenarios, such as aerospace intelligence reconnaissance, autonomous environmental cognition for unmanned combat platforms, and multimode automatic target recognition. Deep learning models rely on the assumption that the training and testing data come from the same distribution; however, these models perform poorly under common corruptions or adversarial attacks. Owing to the theoretical incompleteness of deep learning, the strong engineering reuse of deep network architecture designs, and the susceptibility of multiple-source imaging systems to various kinds of interference in complex electromagnetic environments, the adversarial robustness of deep-neural-network-based recognition models has received little attention in the remote sensing community, which increases the risk in many security-sensitive applications.
This article evaluates the adversarial robustness of deep-neural-network-based recognition models for multiple-source remote sensing images. First, we discuss the incompleteness of deep learning theory and reveal the resulting security risks: the independent and identically distributed assumption is often violated, and system performance cannot be guaranteed under adversarial scenarios. The whole processing chain of a deep-neural-network-based image recognition system is then analyzed for vulnerabilities. Second, we introduce several representative algorithms for adversarial example generation under both white-box and black-box settings, together with a gradient-propagation-based visualization method for analyzing adversarial attacks.
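As a concrete illustration of the white-box, gradient-based attacks surveyed here, the following is a minimal FGSM-style sketch in PyTorch; the `model`, `images`, and `labels` objects are assumptions for illustration, not artifacts of this paper:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """One-step fast gradient sign attack (illustrative sketch)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Perturb every pixel by epsilon in the direction that increases the loss,
    # then clip back to the valid image range.
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()
```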
We perform a detailed evaluation of nine deep neural network architectures on two publicly available remote sensing image datasets; both optical and SAR remote sensing images are used in our experiments. For each model and each testing image, we generate seven types of adversarial perturbation, ranging from gradient-based optimization to unsupervised feature distortion. In all cases, we observe a significant reduction in average classification accuracy between the original clean data and their adversarial counterparts.
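A minimal sketch of the clean-versus-adversarial accuracy comparison described above, assuming a PyTorch data loader and an attack function such as the `fgsm_attack` sketch; these names are hypothetical, not from the paper:

```python
import torch

@torch.no_grad()
def batch_correct(model, images, labels):
    """Number of correctly classified samples in one batch."""
    return (model(images).argmax(dim=1) == labels).sum().item()

def evaluate_robustness(model, loader, attack):
    """Average accuracy on clean images and on their adversarial versions."""
    clean_correct, adv_correct, total = 0, 0, 0
    model.eval()
    for images, labels in loader:
        adv_images = attack(model, images, labels)  # e.g., the fgsm_attack sketch
        clean_correct += batch_correct(model, images, labels)
        adv_correct += batch_correct(model, adv_images, labels)
        total += labels.numel()
    return clean_correct / total, adv_correct / total
```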
Apart from average adversarial recognition accuracy, feature attribution techniques are also adopted to analyze the feature diffusion effect of adversarial attacks, which contributes to our understanding of the vulnerability of deep learning models.
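As one example of such gradient-based feature attribution, here is a minimal Grad-CAM-style sketch under the same PyTorch assumptions; `target_layer` would typically be the last convolutional layer of the recognition model:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx):
    """Grad-CAM sketch: weight feature maps by pooled gradients of the class score."""
    feats, grads = {}, {}
    fwd = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    bwd = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    score = model(image.unsqueeze(0))[0, class_idx]
    model.zero_grad()
    score.backward()
    fwd.remove()
    bwd.remove()
    # Channel weights: spatially averaged gradients; map: ReLU of the weighted sum.
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
    return (cam / (cam.max() + 1e-8)).squeeze().detach()
```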
Experimental results demonstrate that all nine deep neural networks suffer substantial losses in classification accuracy when the testing images are adversarial examples. The analysis of multi-layer feature activation differences between adversarial and clean samples provides a reference for designing adversarial example detection algorithms. Understanding such adversarial phenomena improves our insight into the inner workings of deep learning models, and additional efforts are needed to enhance their adversarial robustness.
Keywords: multiple-source remote sensing images, target recognition, deep neural networks, adversarial attack, feature visualization, adversarial robustness evaluation