Journal Articles
16 articles found
1. Adversarial Defense Technology for Small Infrared Targets
Authors: Tongan Yu, Yali Xue, Yiming He, Shan Cui, Jun Hong. Computers, Materials & Continua (SCIE, EI), 2024, No. 10, pp. 1235-1250 (16 pages)
With the rapid development of deep learning-based detection algorithms, deep learning is widely used in the field of infrared small target detection. However, well-designed adversarial samples that fool human visual perception can directly cause a serious decline in the detection quality of the recognition model. In this paper, an adversarial defense technology for small infrared targets is proposed to improve model robustness. Adversarial samples with strong transferability not only improve the generalization of the defense technique but also save training cost. Therefore, this study adopts the concept of maximizing multidimensional feature distortion, applying noise to clean samples to serve as subsequent training samples. On this basis, this study proposes an inverse perturbation elimination method based on Generative Adversarial Networks (GAN) to realize the adversarial defense, and designs a generator and a discriminator for infrared small targets so that the two compete with each other to continuously improve model performance and uncover the commonalities and differences between adversarial samples and original samples. Experimental verification shows that, compared with commonly used defense algorithms, our defense algorithm not only copes with multiple attacks but also performs well on different recognition models, making it a plug-and-play, efficient adversarial defense technique.
Keywords: adversarial defense, adversarial robustness, small infrared targets, transferable perturbation, GAN
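The abstract above describes a generator/discriminator pair in which the generator learns to remove (invert) adversarial perturbations from infrared images while the discriminator compares restored and clean samples. The paper's architecture is not reproduced here; the following is only a minimal, hypothetical PyTorch-style sketch of that general training scheme, with placeholder layer sizes and an assumed L1 reconstruction weight.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Restorer(nn.Module):
    """Generator: predicts an inverse perturbation for a single-channel infrared image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # add the predicted inverse perturbation back onto the input

class Critic(nn.Module):
    """Discriminator: scores whether an image looks like a clean (unperturbed) sample."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, opt_g, opt_d, adv, clean):
    bce = nn.BCEWithLogitsLoss()
    # Discriminator step: clean samples are "real", restored adversarial samples are "fake".
    with torch.no_grad():
        restored = gen(adv)
    real_logit, fake_logit = disc(clean), disc(restored)
    d_loss = bce(real_logit, torch.ones_like(real_logit)) + \
             bce(fake_logit, torch.zeros_like(fake_logit))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: fool the discriminator while staying close to the clean target.
    restored = gen(adv)
    fake_logit = disc(restored)
    g_loss = bce(fake_logit, torch.ones_like(fake_logit)) + 10.0 * F.l1_loss(restored, clean)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

gen, disc = Restorer(), Critic()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
adv = torch.rand(8, 1, 64, 64)    # stand-in for adversarially perturbed infrared patches
clean = torch.rand(8, 1, 64, 64)  # stand-in for the corresponding clean patches
print(train_step(gen, disc, opt_g, opt_d, adv, clean))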
2. Combining Innovative CVTNet and Regularization Loss for Robust Adversarial Defense
Authors: Wei-Dong Wang, Zhi Li, Li Zhang. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2024, No. 5, pp. 1078-1093 (16 pages)
Deep neural networks (DNNs) are vulnerable to elaborately crafted and imperceptible adversarial perturbations. With the continuous development of adversarial attack methods, existing defense algorithms can no longer defend against them proficiently. Meanwhile, numerous studies have shown that the vision transformer (ViT) has stronger robustness and generalization performance than the convolutional neural network (CNN) in various domains. Moreover, because the standard denoiser is subject to the error amplification effect, the prediction network cannot correctly classify all reconstruction examples. This paper first proposes a defense network (CVTNet) that combines CNNs and ViTs and is appended in front of the prediction network. CVTNet can effectively eliminate adversarial perturbations and maintain high robustness. Furthermore, this paper proposes a regularization loss (L_CPL), which optimizes CVTNet by computing different losses for the correct prediction set (CPS) and the wrong prediction set (WPS) of the reconstruction examples, respectively. Evaluation results on several standard benchmark datasets show that CVTNet achieves better robustness than other advanced methods. Compared with state-of-the-art algorithms, the proposed CVTNet defense improves the average accuracy on pixel-constrained attack examples generated on the CIFAR-10 dataset by 24.25% and on spatially-constrained attack examples by 14.06%. Moreover, CVTNet shows excellent generalizability in cross-model protection.
Keywords: deep learning, adversarial defense, vision transformer, image reconstruction
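The abstract does not give the form of L_CPL; the sketch below only illustrates the bookkeeping it implies: reconstructed examples are split into the correct prediction set (CPS) and wrong prediction set (WPS) according to a downstream classifier, and the two sets receive differently weighted loss terms. The MSE terms and the weights are placeholders, not the paper's definition.

import torch
import torch.nn.functional as F

def cps_wps_loss(classifier, recon, clean, labels, w_cps=1.0, w_wps=2.0):
    """Toy split of reconstructed examples into CPS/WPS with differently weighted losses."""
    with torch.no_grad():
        pred = classifier(recon).argmax(dim=1)
    cps = pred == labels   # correct prediction set: reconstruction already classified correctly
    wps = ~cps             # wrong prediction set: reconstruction still misclassified
    loss = recon.new_zeros(())
    if cps.any():
        loss = loss + w_cps * F.mse_loss(recon[cps], clean[cps])
    if wps.any():          # penalise the misclassified reconstructions more strongly
        loss = loss + w_wps * F.mse_loss(recon[wps], clean[wps])
    return loss

classifier = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
recon = torch.rand(16, 3, 32, 32, requires_grad=True)  # stand-in for reconstructed examples
clean = torch.rand(16, 3, 32, 32)
labels = torch.randint(0, 10, (16,))
print(cps_wps_loss(classifier, recon, clean, labels))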
3. Adversarial attacks and defenses for digital communication signals identification
Authors: Qiao Tian, Sicheng Zhang, Shiwen Mao, Yun Lin. Digital Communications and Networks (SCIE, CSCD), 2024, No. 3, pp. 756-764 (9 pages)
As modern communication technology advances apace, digital communication signals identification plays an important role in cognitive radio networks and in communication monitoring and management systems. AI has become a promising solution to this problem due to its powerful modeling capability, which has become a consensus in academia and industry. However, because of the data dependence and inexplicability of AI models and the openness of electromagnetic space, the physical-layer digital communication signals identification model is threatened by adversarial attacks. Adversarial examples pose a common threat to AI models, where well-designed, slight perturbations added to input data can cause wrong results. Therefore, the security of AI models for digital communication signals identification is the premise of their efficient and credible application. In this paper, we first launch adversarial attacks on the end-to-end AI model for automatic modulation classification, and then we explain and present three defense mechanisms based on the adversarial principle. Next, we present more detailed adversarial indicators to evaluate attack and defense behavior. Finally, a demonstration verification system is developed to show that the adversarial attack is a real threat to the digital communication signals identification model, which should receive more attention in future research.
Keywords: digital communication signals identification, AI model, adversarial attacks, adversarial defenses, adversarial indicators
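The abstract does not name the specific attack algorithms launched against the automatic modulation classification model; as a generic illustration of an adversarial attack in this setting, the following sketch applies the standard FGSM one-step perturbation to a batch of I/Q signals. The toy classifier, the sequence length, and the epsilon value are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: a one-step L-infinity perturbation of the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Toy stand-in for an end-to-end modulation classifier over I/Q sequences of length 128.
model = nn.Sequential(nn.Conv1d(2, 16, 7, padding=3), nn.ReLU(),
                      nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 11))
signals = torch.randn(32, 2, 128)           # batch of I/Q sequences
labels = torch.randint(0, 11, (32,))         # 11 modulation classes, as in common AMC datasets
adv_signals = fgsm(model, signals, labels, eps=0.01)
print((adv_signals - signals).abs().max())   # perturbation bounded by eps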
4. Chained Dual-Generative Adversarial Network: A Generalized Defense Against Adversarial Attacks (cited by 1)
Authors: Amitoj Bir Singh, Lalit Kumar Awasthi, Urvashi, Mohammad Shorfuzzaman, Abdulmajeed Alsufyani, Mueen Uddin. Computers, Materials & Continua (SCIE, EI), 2023, No. 2, pp. 2541-2555 (15 pages)
Neural networks play a significant role in the field of image classification. When an input image is modified by adversarial attacks, the changes are imperceptible to the human eye, but they still lead to misclassification of the images. Researchers have demonstrated these attacks making production self-driving cars misclassify Stop road signs as 45 Miles Per Hour (MPH) road signs, and a turtle being misclassified as an AK47. Three primary types of defense approaches exist which can safeguard against such attacks, i.e., Gradient Masking, Robust Optimization, and Adversarial Example Detection. Very few approaches use Generative Adversarial Networks (GAN) for defense against adversarial attacks. In this paper, we create a new approach to defend against adversarial attacks, dubbed Chained Dual-Generative Adversarial Network (CD-GAN), which tackles the defense against adversarial attacks by minimizing the perturbations of the adversarial image using iterative oversampling and undersampling with GANs. CD-GAN is created using two GANs, i.e., CDGAN's Sub-Resolution GAN and CDGAN's Super-Resolution GAN. The first, CDGAN's Sub-Resolution GAN, takes the original-resolution input image and oversamples it to generate a lower-resolution neutralized image. The second, CDGAN's Super-Resolution GAN, takes the output of the CDGAN's Sub-Resolution GAN and undersamples it to generate the higher-resolution image, which removes any remaining perturbations. The Chained Dual GAN is formed by chaining these two GANs together. Both GANs are trained independently. CDGAN's Sub-Resolution GAN is trained using higher-resolution adversarial images as inputs and lower-resolution neutralized images as output examples; hence, this GAN downscales the image while removing adversarial attack noise. CDGAN's Super-Resolution GAN is trained using lower-resolution adversarial images as inputs and higher-resolution neutralized images as outputs; because of this, it acts as an upscaling GAN while removing the adversarial attack noise. Furthermore, CD-GAN has a modular design such that it can be prefixed to any existing classifier without any retraining or extra effort, and can defend any classifier model against adversarial attack. In this way, it is a generalized defense against adversarial attacks, capable of defending any classifier model against any attack. This enables the user to directly integrate CD-GAN with an existing production-deployed classifier smoothly. CD-GAN iteratively removes the adversarial noise using a multi-step, modular approach. It performs comparably to the state of the art with a mean accuracy of 33.67 while using minimal compute resources in training.
Keywords: adversarial attacks, GAN-based adversarial defense, image classification models, adversarial defense
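As described above, CD-GAN chains a Sub-Resolution generator and a Super-Resolution generator in front of an unmodified classifier and can run the pair for several passes. The sketch below shows only that wiring with toy placeholder networks; the actual generator architectures and training procedure are those of the paper and are not reproduced here.

import torch
import torch.nn as nn

class DownPurifier(nn.Module):
    """Toy stand-in for CDGAN's Sub-Resolution GAN generator: halves the resolution."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, 3, stride=2, padding=1)
    def forward(self, x):
        return torch.sigmoid(self.conv(x))

class UpPurifier(nn.Module):
    """Toy stand-in for CDGAN's Super-Resolution GAN generator: restores the resolution."""
    def __init__(self):
        super().__init__()
        self.conv = nn.ConvTranspose2d(3, 3, 4, stride=2, padding=1)
    def forward(self, x):
        return torch.sigmoid(self.conv(x))

class CDGANWrapper(nn.Module):
    """Prefixes the chained purifiers to any frozen classifier, as a pre-processing defense."""
    def __init__(self, sub_gen, super_gen, classifier, passes=2):
        super().__init__()
        self.sub, self.sup, self.clf, self.passes = sub_gen, super_gen, classifier, passes
    def forward(self, x):
        for _ in range(self.passes):     # iterative down/up purification passes
            x = self.sup(self.sub(x))
        return self.clf(x)

classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
model = CDGANWrapper(DownPurifier(), UpPurifier(), classifier)
print(model(torch.rand(4, 3, 32, 32)).shape)   # torch.Size([4, 10])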
5. Adversarial Attacks and Defenses in Deep Learning (cited by 20)
Authors: Kui Ren, Tianhang Zheng, Zhan Qin, Xue Liu. Engineering (SCIE, EI), 2020, No. 3, pp. 346-360 (15 pages)
With the rapid developments of artificial intelligence (AI) and deep learning (DL) techniques, it is critical to ensure the security and robustness of the deployed algorithms. Recently, the security vulnerability of DL algorithms to adversarial samples has been widely recognized. The fabricated samples can lead to various misbehaviors of the DL models while being perceived as benign by humans. Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality. Hence, adversarial attack and defense techniques have attracted increasing attention from both machine learning and security communities and have become a hot research topic in recent years. In this paper, we first introduce the theoretical foundations, algorithms, and applications of adversarial attack techniques. We then describe a few research efforts on the defense techniques, which cover the broad frontier in the field. Several open problems and challenges are subsequently discussed, which we hope will provoke further research efforts in this critical area.
Keywords: machine learning, deep neural network, adversarial example, adversarial attack, adversarial defense
6. An Overview of Adversarial Attacks and Defenses
Authors: Kai Chen, Jinwei Wang, Jiawei Zhang. Journal of Information Hiding and Privacy Protection, 2022, No. 1, pp. 15-24 (10 pages)
In recent years, machine learning has become more and more popular, especially with the continuous development of deep learning technology, which has brought great revolutions to many fields. In tasks such as image classification, natural language processing, information hiding, and multimedia synthesis, the performance of deep learning has far exceeded that of traditional algorithms. However, researchers have found that although deep learning can train an accurate model on a large amount of data to complete various tasks, the model is vulnerable to examples that are modified artificially. This technique is called an adversarial attack, and the examples are called adversarial examples. The existence of adversarial attacks poses a great threat to the security of neural networks. Based on a brief introduction to the concept and causes of adversarial examples, this paper analyzes the main ideas of adversarial attacks and studies the representative classical adversarial attack methods as well as detection and defense methods.
Keywords: deep learning, adversarial example, adversarial attacks, adversarial defenses
7. Deep Image Restoration Model: A Defense Method Against Adversarial Attacks
Authors: Kazim Ali, Adnan N. Quershi, Ahmad Alauddin Bin Arifin, Muhammad Shahid Bhatti, Abid Sohail, Rohail Hassan. Computers, Materials & Continua (SCIE, EI), 2022, No. 5, pp. 2209-2224 (16 pages)
These days, deep learning and computer vision are fast-growing fields in the modern world of information technology. Deep learning algorithms and computer vision have achieved great success in different applications such as image classification, speech recognition, self-driving vehicles, disease diagnostics, and many more. Despite this success, it has been found that these learning algorithms face severe threats from adversarial attacks. Adversarial examples are inputs, such as images in the computer vision field, which are intentionally and slightly changed or perturbed. These changes are humanly imperceptible, yet they are misclassified by a model with high probability and severely affect its performance or predictions. In this scenario, we present a deep image restoration model that restores adversarial examples so that the target model classifies them correctly again. We show that our defense method against adversarial attacks based on a deep image restoration model is simple and state-of-the-art by providing strong experimental evidence. We have used the MNIST and CIFAR10 datasets for experiments and analysis of our defense method. In the end, we compare our method to other state-of-the-art defense methods and show that our results are better than those of rival methods.
Keywords: computer vision, deep learning, convolutional neural networks, adversarial examples, adversarial attacks, adversarial defenses
8. VANET Jamming and Adversarial Attack Defense for Autonomous Vehicle Safety
Authors: Haeri Kim, Jong-Moon Chung. Computers, Materials & Continua (SCIE, EI), 2022, No. 5, pp. 3589-3605 (17 pages)
The development of Vehicular Ad-hoc Network (VANET) technology is helping Intelligent Transportation System (ITS) services become a reality. Vehicles can use VANETs to communicate safety messages on the road (while driving), report their location, and share road-condition information in real time. However, intentional and unintentional (e.g., packet/frame collision) wireless signal jamming can occur, which degrades the quality of communication over the channel and prevents the reception of safety messages, thereby posing a safety hazard to the vehicle's passengers. In this paper, VANET jamming detection applying Support Vector Machine (SVM) machine learning technology is used to classify jamming and non-jamming situations. The analysis is based on two cases, normal traffic and heavy traffic conditions, where the results show that the probability of packet dropping increases when many vehicles use the wireless channel simultaneously. When using SVM classification, the most appropriate feature set for determining a jamming situation shows an accuracy of 98% or higher. Furthermore, more advanced jamming attacks need to be considered in preparation for more reliable and safer autonomous ITS services. Such research can use vehicular communication transmission and reception data based on selected published datasets. In this paper, an additional adversarial defense algorithm using the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) method is proposed, which assumes that evolutionary attacks by the jammer will attempt to confuse the trained classifier. The simulation results show that applying DBSCAN can improve accuracy by eliminating outliers before conducting classification testing.
Keywords: vehicle safety, VANET, jamming, SVM, adversarial defense
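The pipeline sketched in the abstract, SVM classification of jamming versus non-jamming features followed by DBSCAN-based outlier removal, can be illustrated with scikit-learn. The feature values below are synthetic stand-ins; the paper's actual feature set and the published VANET datasets it refers to are not reproduced here, and the DBSCAN parameters are assumptions.

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-interval features (e.g., packet delivery ratio, RSSI, noise floor).
normal = rng.normal([0.9, -60.0, -95.0], [0.05, 3.0, 2.0], size=(300, 3))   # non-jamming
jammed = rng.normal([0.4, -55.0, -70.0], [0.10, 4.0, 3.0], size=(300, 3))   # jamming
X = StandardScaler().fit_transform(np.vstack([normal, jammed]))
y = np.array([0] * 300 + [1] * 300)

# DBSCAN pass: drop points labelled -1 (outliers) before fitting the SVM, mirroring the idea
# of filtering samples an evolving jammer injects to confuse the trained classifier.
cluster_labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(X)
keep = cluster_labels != -1
svm = SVC(kernel="rbf").fit(X[keep], y[keep])
print(f"kept {keep.sum()}/{len(X)} samples, training accuracy {svm.score(X[keep], y[keep]):.3f}")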
9. Exploratory Research on Defense against Natural Adversarial Examples in Image Classification
Authors: Yaoxuan Zhu, Hua Yang, Bin Zhu. Computers, Materials & Continua, 2025, No. 2, pp. 1947-1968 (22 pages)
The emergence of adversarial examples has revealed inadequacies in the robustness of image classification models based on Convolutional Neural Networks (CNNs). In recent years in particular, the discovery of natural adversarial examples has posed significant challenges, as traditional defense methods against adversarial attacks have proven largely ineffective against them. This paper explores defenses against natural adversarial examples from three perspectives: the adversarial examples, the model architecture, and the dataset. First, it employs Class Activation Mapping (CAM) to visualize how models classify natural adversarial examples, identifying several typical attack patterns. Next, various common CNN models are analyzed to evaluate their susceptibility to these attacks, revealing that different architectures exhibit varying defensive capabilities; the study finds that as the depth of a network increases, its defense against natural adversarial examples strengthens. Finally, the impact of dataset class distribution on the defense capability of models is examined, focusing on two aspects: the number of classes in the training set and the number of predicted classes. Results indicate that reducing the number of training classes enhances the model's defense against natural adversarial examples. Additionally, under a fixed number of training classes, some CNN models show an optimal range of predicted classes for achieving the best defense performance against these adversarial examples.
Keywords: image classification, convolutional neural network, natural adversarial example, data set, defense against adversarial examples
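Class Activation Mapping, which the study uses to visualize how models respond to natural adversarial examples, requires a network ending in global average pooling followed by a linear layer. The sketch below computes a standard CAM on a toy network of that shape; it is not the paper's model or its exact visualization pipeline.

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Toy CNN with the GAP + linear head structure that CAM assumes."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(32, num_classes)
    def forward(self, x):
        fmaps = self.features(x)                   # (B, 32, H, W)
        logits = self.fc(fmaps.mean(dim=(2, 3)))   # global average pooling, then linear head
        return logits, fmaps

def class_activation_map(model, x, class_idx):
    """CAM for one class: weight the final conv feature maps by that class's FC weights."""
    with torch.no_grad():
        _, fmaps = model(x)
        weights = model.fc.weight[class_idx]                   # (32,)
        cam = torch.relu(torch.einsum("c,bchw->bhw", weights, fmaps))
        return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)  # normalised to [0, 1]

model = TinyCNN()
image = torch.rand(1, 3, 64, 64)   # stand-in for a natural adversarial example
print(class_activation_map(model, image, class_idx=3).shape)   # torch.Size([1, 64, 64])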
10. Towards Interpretable Defense Against Adversarial Attacks via Causal Inference (cited by 1)
Authors: Min Ren, Yun-Long Wang, Zhao-Feng He. Machine Intelligence Research (EI, CSCD), 2022, No. 3, pp. 209-226 (18 pages)
Deep learning-based models are vulnerable to adversarial attacks. Defense against adversarial attacks is essential for sensitive and safety-critical scenarios. However, deep learning methods still lack effective and efficient defense mechanisms against adversarial attacks. Most of the existing methods are just stopgaps for specific adversarial samples. The main obstacle is that how adversarial samples fool the deep learning models is still unclear. The underlying working mechanism of adversarial samples has not been well explored, and it is the bottleneck of adversarial attack defense. In this paper, we build a causal model to interpret the generation and performance of adversarial samples. The self-attention/transformer is adopted as a powerful tool in this causal model. Compared to existing methods, causality enables us to analyze adversarial samples more naturally and intrinsically. Based on this causal model, the working mechanism of adversarial samples is revealed, and instructive analysis is provided. Then, we propose simple and effective adversarial sample detection and recognition methods according to the revealed working mechanism. The causal insights enable us to detect and recognize adversarial samples without any extra model or training. Extensive experiments are conducted to demonstrate the effectiveness of the proposed methods. Our methods outperform the state-of-the-art defense methods under various adversarial attacks.
Keywords: adversarial sample, adversarial defense, causal inference, interpretable machine learning, transformers
11. Towards Defense Against Adversarial Attacks on Graph Neural Networks via Calibrated Co-Training (cited by 1)
Authors: Xu-Gang Wu, Hui-Jun Wu, Xu Zhou, Xiang Zhao, Kai Lu. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2022, No. 5, pp. 1161-1175 (15 pages)
Graph neural networks (GNNs) have achieved significant success in graph representation learning. Nevertheless, the recent work indicates that current GNNs are vulnerable to adversarial perturbations, in particular structural perturbations. This, therefore, narrows the application of GNN models in real-world scenarios. Such vulnerability can be attributed to the model's excessive reliance on incomplete data views (e.g., graph convolutional networks (GCNs) heavily rely on graph structures to make predictions). By integrating the information from multiple perspectives, this problem can be effectively addressed, and typical views of graphs include the node feature view and the graph structure view. In this paper, we propose C²oG, which combines these two typical views to train sub-models and fuses their knowledge through co-training. Due to the orthogonality of the views, sub-models in the feature view tend to be robust against the perturbations targeted at sub-models in the structure view. C²oG allows sub-models to correct one another mutually and thus enhance the robustness of their ensembles. In our evaluations, C²oG significantly improves the robustness of graph models against adversarial attacks without sacrificing their performance on clean datasets.
Keywords: adversarial defense, graph neural network, multi-view, co-training
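C²oG's core idea, two sub-models trained on the node-feature view and the graph-structure view that correct each other through co-training, can be sketched with a toy co-training loop. The stand-in models below are logistic regressions over synthetic view matrices rather than the paper's GNNs, the calibration step is omitted, and all loop parameters are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(view_a, view_b, labels, labeled, rounds=3, add_per_round=10):
    """Toy co-training: each view's model pseudo-labels confident nodes for the other view."""
    views, masks = [view_a, view_b], [labeled.copy(), labeled.copy()]
    pseudo = [labels.copy(), labels.copy()]
    models = [LogisticRegression(max_iter=1000), LogisticRegression(max_iter=1000)]
    for _ in range(rounds):
        for i in (0, 1):
            j = 1 - i
            models[i].fit(views[i][masks[i]], pseudo[i][masks[i]])
            proba = models[i].predict_proba(views[i])
            conf = proba.max(axis=1)
            pred = models[i].classes_[proba.argmax(axis=1)]
            conf[masks[j]] = -1.0                    # only pseudo-label nodes still unlabeled for view j
            top = np.argsort(conf)[-add_per_round:]  # most confident predictions
            pseudo[j][top], masks[j][top] = pred[top], True
    return models

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, n)
feat_view = labels[:, None] + rng.normal(0, 1.0, (n, 8))    # node-feature view (synthetic)
struct_view = labels[:, None] + rng.normal(0, 1.0, (n, 8))  # structure view, e.g. aggregated neighbours
labeled = np.zeros(n, dtype=bool)
labeled[:20] = True                                         # only 20 labelled nodes to start
model_feat, model_struct = co_train(feat_view, struct_view, labels, labeled)
print(model_feat.score(feat_view, labels), model_struct.score(struct_view, labels))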
12. A comprehensive survey of robust deep learning in computer vision
Authors: Jia Liu, Yaochu Jin. Journal of Automation and Intelligence, 2023, No. 4, pp. 175-195 (21 pages)
Deep learning has made remarkable progress in various tasks. Despite the excellent performance, deep learning models remain non-robust, especially to well-designed adversarial examples, which limits their deployment in security-critical applications. Therefore, how to improve the robustness of deep learning has attracted increasing attention from researchers. This paper investigates the progress on threats to deep learning and the techniques that can enhance model robustness in computer vision. Unlike previous relevant survey papers summarizing adversarial attacks and defense technologies, this paper also provides an overview of the general robustness of deep learning. Besides, this survey elaborates on current robustness evaluation approaches, which require further exploration. This paper also reviews the recent literature on making deep learning models resistant to adversarial examples from an architectural perspective, which was rarely mentioned in previous surveys. Finally, interesting directions for future research are listed based on the reviewed literature. This survey is hoped to serve as a basis for future research in this topical field.
Keywords: robustness, deep learning, computer vision, survey, adversarial attack, adversarial defenses
13. Effective and Robust Detection of Adversarial Examples via Benford-Fourier Coefficients
Authors: Cheng-Cheng Ma, Bao-Yuan Wu, Yan-Bo Fan, Yong Zhang, Zhi-Feng Li. Machine Intelligence Research (EI, CSCD), 2023, No. 5, pp. 666-682 (17 pages)
Adversarial examples have become well known as a serious threat to deep neural networks (DNNs). In this work, we study the detection of adversarial examples based on the assumption that the output and internal responses of one DNN model for both adversarial and benign examples follow the generalized Gaussian distribution (GGD), but with different parameters (i.e., shape factor, mean, and variance). GGD is a general distribution family that covers many popular distributions (e.g., Laplacian, Gaussian, or uniform); therefore, it is more likely to approximate the intrinsic distributions of internal responses than any specific distribution. Besides, since the shape factor is more robust across different databases than the other two parameters, we propose to construct discriminative features for adversarial detection from the shape factor, employing the magnitude of Benford-Fourier (MBF) coefficients, which can be easily estimated from the responses. Finally, a support vector machine is trained as an adversarial detector leveraging the MBF features. Extensive experiments on image classification demonstrate that the proposed detector is much more effective and robust in detecting adversarial examples of different crafting methods and sources compared to state-of-the-art adversarial detection methods.
Keywords: adversarial defense, adversarial detection, generalized Gaussian distribution, Benford-Fourier coefficients, image classification
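The detector described above builds features from the GGD shape factor of internal responses (via magnitudes of Benford-Fourier coefficients) and feeds them to an SVM. The sketch below keeps that pipeline shape but substitutes a classic moment-matching estimate of the shape factor for the MBF coefficients, which are not reproduced here, and uses synthetic response vectors instead of real network activations.

import numpy as np
from scipy.special import gamma
from sklearn.svm import SVC

def ggd_shape_factor(x, betas=np.linspace(0.2, 5.0, 500)):
    """Moment-matching estimate of the generalized Gaussian shape factor of a response vector,
    using (E|x|)^2 / E[x^2] = Gamma(2/b)^2 / (Gamma(1/b) * Gamma(3/b))."""
    x = x - x.mean()
    ratio = (np.abs(x).mean() ** 2) / (x ** 2).mean()
    curve = gamma(2 / betas) ** 2 / (gamma(1 / betas) * gamma(3 / betas))
    return betas[np.argmin(np.abs(curve - ratio))]

rng = np.random.default_rng(0)

def shape_features(sampler, n_examples=200, n_layers=4):
    """One shape-factor feature per synthetic 'layer response' of each example."""
    return np.array([[ggd_shape_factor(sampler(512)) for _ in range(n_layers)]
                     for _ in range(n_examples)])

benign = shape_features(lambda n: rng.laplace(0.0, 1.0, n))       # heavier-tailed responses (shape near 1)
adversarial = shape_features(lambda n: rng.normal(0.0, 1.0, n))   # toy "adversarial" responses (shape near 2)
X = np.vstack([benign, adversarial])
y = np.array([0] * len(benign) + [1] * len(adversarial))
detector = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", detector.score(X, y))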
14. An end-to-end convolutional network for joint detecting and denoising adversarial perturbations in vehicle classification
Authors: Peng Liu, Huiyuan Fu, Huadong Ma. Computational Visual Media (EI, CSCD), 2021, No. 2, pp. 217-227 (11 pages)
Deep convolutional neural networks (DCNNs) have been widely deployed in real-world scenarios. However, DCNNs are easily tricked by adversarial examples, which present challenges for critical applications, such as vehicle classification. To address this problem, we propose a novel end-to-end convolutional network for joint detection and removal of adversarial perturbations by denoising (DDAP). It gets rid of adversarial perturbations using the DDAP denoiser based on adversarial examples discovered by the DDAP detector. The proposed method can be regarded as a pre-processing step: it does not require modifying the structure of the vehicle classification model and hardly affects the classification results on clean images. We consider four kinds of adversarial attack (FGSM, BIM, DeepFool, PGD) to verify DDAP's capabilities when trained on BIT-Vehicle and other public datasets. It provides better defense than other state-of-the-art defensive methods.
Keywords: adversarial defense, adversarial detection, vehicle classification, deep learning
15. Flipover outperforms dropout in deep learning
Authors: Yuxuan Liang, Chuang Niu, Pingkun Yan, Ge Wang. Visual Computing for Industry, Biomedicine, and Art, 2024, No. 1, pp. 364-372 (9 pages)
Flipover, an enhanced dropout technique, is introduced to improve the robustness of artificial neural networks. In contrast to dropout, which involves randomly removing certain neurons and their connections, flipover randomly selects neurons and reverts their outputs using a negative multiplier during training. This approach offers stronger regularization than conventional dropout, refining model performance by (1) mitigating overfitting, matching or even exceeding the efficacy of dropout; (2) amplifying robustness to noise; and (3) enhancing resilience against adversarial attacks. Extensive experiments across various neural networks affirm the effectiveness of flipover in deep learning.
Keywords: model robustness, regularization, flipover, dropout, adversarial defense
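The abstract defines flipover operationally: instead of zeroing randomly selected neurons as dropout does, their outputs are multiplied by a negative factor during training. A minimal PyTorch sketch of that behaviour follows; the flip probability, the value of the negative multiplier, and the absence of any rescaling are assumptions rather than the paper's exact formulation.

import torch
import torch.nn as nn

class Flipover(nn.Module):
    """Minimal sketch of the flipover idea: during training, a random subset of activations
    is multiplied by a negative factor instead of being zeroed out as in dropout."""
    def __init__(self, p=0.1, multiplier=-1.0):
        super().__init__()
        self.p, self.multiplier = p, multiplier

    def forward(self, x):
        if not self.training or self.p == 0.0:
            return x                                  # identity at inference time, like dropout
        mask = torch.rand_like(x) < self.p            # neurons selected for flipping
        return torch.where(mask, self.multiplier * x, x)

# Drop-in usage, e.g. replacing nn.Dropout in a small MLP.
net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), Flipover(p=0.1), nn.Linear(256, 10))
net.train()
print(net(torch.rand(32, 784)).shape)   # torch.Size([32, 10])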
16. A Survey of Adversarial Examples in Computer Vision: Attack, Defense, and Beyond
Authors: XU Keyizhi, LU Yajuan, WANG Zhongyuan, LIANG Chao. Wuhan University Journal of Natural Sciences, 2025, No. 1, pp. 1-20 (20 pages)
Recent years have witnessed the ever-increasing performance of Deep Neural Networks (DNNs) in computer vision tasks. However, researchers have identified a potential vulnerability: carefully crafted adversarial examples can easily mislead DNNs into incorrect behavior via the injection of imperceptible modification to the input data. In this survey, we focus on (1) adversarial attack algorithms to generate adversarial examples, (2) adversarial defense techniques to secure DNNs against adversarial examples, and (3) important problems in the realm of adversarial examples beyond attack and defense, including the theoretical explanations, trade-off issues, and benign attacks in adversarial examples. Additionally, we draw a brief comparison between recently published surveys on adversarial examples, and identify future directions for the research of adversarial examples, such as the generalization of methods and the understanding of transferability, that might be solutions to the open problems in this field.
Keywords: computer vision, adversarial examples, adversarial attack, adversarial defense