Funding: Supported in part by the National Natural Science Foundation of China under Grant 62073164 and the Shanghai Aerospace Science and Technology Innovation Foundation under Grant SAST2022-013.
Abstract: With the rapid development of deep learning-based detection algorithms, deep learning has been widely applied to infrared small target detection. However, well-designed adversarial samples can fool human visual perception while directly causing a serious decline in the detection quality of the recognition model. In this paper, an adversarial defense technique for infrared small targets is proposed to improve model robustness. Adversarial samples with strong transferability can not only improve the generalization of the defense technique but also save training cost. Therefore, this study adopts the concept of maximizing multidimensional feature distortion, applying noise to clean samples to serve as subsequent training samples. On this basis, this study proposes an inverse perturbation elimination method based on Generative Adversarial Networks (GAN) to realize the adversarial defense, and designs a generator and discriminator for infrared small targets, aiming to make the two compete with each other to continuously improve the model's performance and to identify the commonalities and differences between adversarial and original samples. Experimental verification shows that, compared with commonly used defense algorithms, our defense algorithm not only copes with multiple attacks but also performs well on different recognition models, making it a plug-and-play and efficient adversarial defense technique.
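As a rough illustration of the GAN-based inverse perturbation elimination described above, the following PyTorch-style sketch pairs a residual generator that maps adversarial infrared images back toward clean ones with a discriminator that separates restored images from clean ones. All architectures, loss weights, and names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps an adversarial infrared image toward its clean counterpart."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )
    def forward(self, x):
        # Predict the perturbation and subtract it (residual formulation).
        return x - self.net(x)

class Discriminator(nn.Module):
    """Scores whether an image looks like a clean sample."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

def train_step(G, D, opt_g, opt_d, x_clean, x_adv):
    bce = nn.BCEWithLogitsLoss()
    # Discriminator: clean samples are "real", restored samples are "fake".
    x_rec = G(x_adv).detach()
    d_loss = bce(D(x_clean), torch.ones_like(D(x_clean))) + \
             bce(D(x_rec), torch.zeros_like(D(x_rec)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool D while staying close to the clean image.
    x_rec = G(x_adv)
    g_loss = bce(D(x_rec), torch.ones_like(D(x_rec))) + \
             10.0 * nn.functional.mse_loss(x_rec, x_clean)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```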
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62062023.
Abstract: Deep neural networks (DNNs) are vulnerable to elaborately crafted and imperceptible adversarial perturbations. With the continuous development of adversarial attack methods, existing defense algorithms can no longer defend against them proficiently. Meanwhile, numerous studies have shown that the vision transformer (ViT) has stronger robustness and generalization performance than the convolutional neural network (CNN) in various domains. Moreover, because a standard denoiser is subject to the error amplification effect, the prediction network cannot correctly classify all reconstructed examples. This paper first proposes a defense network (CVTNet) that combines CNNs and ViTs and is prepended to the prediction network. CVTNet can effectively eliminate adversarial perturbations and maintain high robustness. Furthermore, this paper proposes a regularization loss (L_CPL) that optimizes CVTNet by computing different losses for the correct prediction set (CPS) and the wrong prediction set (WPS) of the reconstructed examples, respectively. Evaluation results on several standard benchmark datasets show that CVTNet achieves better robustness than other advanced methods. Compared with state-of-the-art algorithms, the proposed CVTNet defense improves the average accuracy on pixel-constrained attack examples generated on the CIFAR-10 dataset by 24.25% and on spatially constrained attack examples by 14.06%. Moreover, CVTNet shows excellent generalizability in cross-model protection.
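The abstract does not give the exact form of L_CPL; the sketch below only illustrates the general idea of weighting reconstruction losses differently for the correct prediction set (CPS) and the wrong prediction set (WPS). The weights and the MSE term are assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def cpl_style_loss(x_rec, x_clean, y_true, classifier,
                   w_correct=1.0, w_wrong=2.0):
    """Illustrative split loss: reconstructions the frozen classifier already
    gets right (CPS) and those it gets wrong (WPS) are penalized with
    different weights. The true L_CPL in the paper may differ."""
    with torch.no_grad():
        pred = classifier(x_rec).argmax(dim=1)
    correct = pred.eq(y_true)                       # CPS mask
    wrong = ~correct                                # WPS mask
    per_sample = F.mse_loss(x_rec, x_clean, reduction="none")
    per_sample = per_sample.flatten(1).mean(dim=1)  # one value per image
    loss = 0.0
    if correct.any():
        loss = loss + w_correct * per_sample[correct].mean()
    if wrong.any():
        loss = loss + w_wrong * per_sample[wrong].mean()
    return loss
```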
Funding: Supported by the National Natural Science Foundation of China (61771154) and the Fundamental Research Funds for the Central Universities (3072022CF0601), and supported by the Key Laboratory of Advanced Marine Communication and Information Technology, Ministry of Industry and Information Technology, Harbin Engineering University, Harbin, China.
Abstract: As modern communication technology advances apace, digital communication signal identification plays an important role in cognitive radio networks and in communication monitoring and management systems. AI has become a promising solution to this problem due to its powerful modeling capability, which has become a consensus in academia and industry. However, because of the data dependence and inexplicability of AI models and the openness of electromagnetic space, physical-layer digital communication signal identification models are threatened by adversarial attacks. Adversarial examples pose a common threat to AI models: well-designed, slight perturbations added to input data can cause wrong results. Therefore, the security of AI models for digital communication signal identification is the premise of their efficient and credible application. In this paper, we first launch adversarial attacks on an end-to-end AI model for automatic modulation classification, and then we explain and present three defense mechanisms based on the adversarial principle. Next, we present more detailed adversarial indicators to evaluate attack and defense behavior. Finally, a demonstration and verification system is developed to show that the adversarial attack is a real threat to digital communication signal identification models, which should receive more attention in future research.
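For concreteness, a minimal FGSM sketch against a generic I/Q-input modulation classifier is shown below; the model, epsilon, and data layout are assumptions rather than the paper's exact attack setup.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.01):
    """Fast Gradient Sign Method on raw I/Q samples.

    x: (batch, 2, N) tensor of I/Q sequences, y: integer class labels.
    A generic single-step attack often used against automatic modulation
    classification models; the paper's exact settings are not given here."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.detach()

# Usage (hypothetical AMC model with 11 modulation classes):
# model = MyAMCNet(num_classes=11).eval()
# x_adv = fgsm_attack(model, iq_batch, labels, epsilon=0.01)
# print((model(x_adv).argmax(1) == labels).float().mean())  # accuracy under attack
```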
Funding: Taif University, Taif, Saudi Arabia, through Taif University Researchers Supporting Project Number (TURSP-2020/115).
Abstract: Neural networks play a significant role in the field of image classification. When an input image is modified by adversarial attacks, the changes are imperceptible to the human eye, but they still lead to misclassification of the images. Researchers have demonstrated such attacks making production self-driving cars misclassify Stop road signs as 45 Miles Per Hour (MPH) road signs, and a turtle being misclassified as an AK47. Three primary types of defense approaches exist that can safeguard against such attacks, i.e., Gradient Masking, Robust Optimization, and Adversarial Example Detection. Very few approaches use Generative Adversarial Networks (GAN) for defense against adversarial attacks. In this paper, we create a new approach to defend against adversarial attacks, dubbed Chained Dual-Generative Adversarial Network (CD-GAN), which tackles the defense by minimizing the perturbations of the adversarial image through iterative downsampling and upsampling with GANs. CD-GAN is built from two GANs: CDGAN's Sub-Resolution GAN and CDGAN's Super-Resolution GAN. The first, the Sub-Resolution GAN, takes the original-resolution input image and downsamples it to generate a lower-resolution neutralized image. The second, the Super-Resolution GAN, takes the output of the Sub-Resolution GAN and upsamples it to generate a higher-resolution image that removes any remaining perturbations. The Chained Dual GAN is formed by chaining these two GANs together. Both GANs are trained independently. The Sub-Resolution GAN is trained with higher-resolution adversarial images as inputs and lower-resolution neutralized images as output examples; hence, it downscales the image while removing adversarial attack noise. The Super-Resolution GAN is trained with lower-resolution adversarial images as inputs and higher-resolution neutralized images as outputs; because of this, it acts as an upscaling GAN while removing the adversarial attack noise. Furthermore, CD-GAN has a modular design such that it can be prefixed to any existing classifier without any retraining or extra effort and can defend any classifier model against adversarial attack. In this way, it is a generalized defense against adversarial attacks, capable of defending any classifier model against any attack, which enables the user to integrate CD-GAN directly and smoothly with an existing production-deployed classifier. CD-GAN iteratively removes adversarial noise using a multi-step, modular approach. It performs comparably to the state of the art with a mean accuracy of 33.67 while using minimal compute resources in training.
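A minimal sketch of the chained, classifier-agnostic preprocessing idea follows, assuming the two generators have already been trained; the architectures, the number of cleaning rounds, and the resizing step are illustrative choices, not the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChainedPurifier(nn.Module):
    """Illustrative CD-GAN-style preprocessing wrapper: a downscaling
    generator followed by an upscaling generator, prefixed to a frozen
    classifier. Both generators are assumed to be trained already; the
    module interfaces here are placeholders, not the paper's networks."""
    def __init__(self, sub_res_gen: nn.Module, super_res_gen: nn.Module,
                 classifier: nn.Module, rounds: int = 2):
        super().__init__()
        self.sub_res_gen = sub_res_gen      # high-res adversarial -> low-res neutralized
        self.super_res_gen = super_res_gen  # low-res -> high-res neutralized
        self.classifier = classifier
        self.rounds = rounds                # iterative multi-step cleaning

    @torch.no_grad()
    def forward(self, x_adv):
        x = x_adv
        for _ in range(self.rounds):
            low = self.sub_res_gen(x)       # remove noise while downscaling
            x = self.super_res_gen(low)     # restore resolution, remove residue
            # Keep spatial size consistent with the classifier's input.
            x = F.interpolate(x, size=x_adv.shape[-2:], mode="bilinear",
                              align_corners=False)
        return self.classifier(x)
```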
Funding: Ant Financial, Zhejiang University Financial Technology Research Center.
Abstract: With the rapid development of artificial intelligence (AI) and deep learning (DL) techniques, it is critical to ensure the security and robustness of the deployed algorithms. Recently, the security vulnerability of DL algorithms to adversarial samples has been widely recognized. The fabricated samples can lead to various misbehaviors of DL models while being perceived as benign by humans. Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality. Hence, adversarial attack and defense techniques have attracted increasing attention from both the machine learning and security communities and have become a hot research topic in recent years. In this paper, we first introduce the theoretical foundations, algorithms, and applications of adversarial attack techniques. We then describe a few research efforts on defense techniques, which cover the broad frontier of the field. Several open problems and challenges are subsequently discussed, which we hope will provoke further research efforts in this critical area.
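As a concrete example of the attack algorithms such surveys cover, here is a standard PGD (projected gradient descent) sketch; it is a generic textbook formulation and not a method specific to this paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=10):
    """Projected Gradient Descent, a standard iterative L-infinity attack,
    shown only as a concrete illustration of the 'fabricated samples'
    discussed above. Assumes images scaled to [0, 1]."""
    x_adv = x + torch.empty_like(x).uniform_(-epsilon, epsilon)  # random start
    x_adv = x_adv.clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # Project back into the epsilon-ball around x, then the pixel range.
            x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```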
Abstract: In recent years, machine learning has become more and more popular, and in particular the continuous development of deep learning technology has brought great revolutions to many fields. In tasks such as image classification, natural language processing, information hiding, and multimedia synthesis, the performance of deep learning has far exceeded that of traditional algorithms. However, researchers have found that although deep learning can train an accurate model on a large amount of data to complete various tasks, the model is vulnerable to artificially modified examples. Such techniques are called adversarial attacks, and the modified examples are called adversarial examples. The existence of adversarial attacks poses a great threat to the security of neural networks. After a brief introduction to the concept and causes of adversarial examples, this paper analyzes the main ideas of adversarial attacks and studies representative classical adversarial attack methods as well as detection and defense methods.
Abstract: These days, deep learning and computer vision are fast-growing fields in the modern world of information technology. Deep learning algorithms and computer vision have achieved great success in different applications such as image classification, speech recognition, self-driving vehicles, disease diagnostics, and many more. Despite this success, these learning algorithms face severe threats from adversarial attacks. Adversarial examples are inputs, such as images in the computer vision field, that are intentionally and slightly changed or perturbed. These changes are imperceptible to humans but are misclassified by a model with high probability, severely affecting its performance and predictions. In this scenario, we present a deep image restoration model that restores adversarial examples so that the target model classifies them correctly again. We show that our defense method against adversarial attacks, based on a deep image restoration model, is simple and state-of-the-art by providing strong experimental evidence. We used the MNIST and CIFAR-10 datasets for experiments and analysis of our defense method. Finally, we compare our method with other state-of-the-art defense methods and show that our results are better than those of rival methods.
Abstract: The development of Vehicular Ad-hoc Network (VANET) technology is helping Intelligent Transportation System (ITS) services become a reality. Vehicles can use VANETs to communicate safety messages on the road (while driving) and can report their location and share road-condition information in real time. However, intentional and unintentional (e.g., packet/frame collision) wireless signal jamming can occur, which degrades the quality of communication over the channel, prevents the reception of safety messages, and thereby poses a safety hazard to the vehicle's passengers. In this paper, VANET jamming detection applying Support Vector Machine (SVM) machine learning technology is used to classify jamming and non-jamming situations. The analysis is based on two cases, normal traffic and heavy traffic conditions, where the results show that the probability of packet dropping increases when many vehicles use the wireless channel simultaneously. When using SVM classification, the most appropriate feature set for determining a jamming situation shows an accuracy of 98% or higher. Furthermore, more advanced jamming attacks need to be considered in preparation for more reliable and safer autonomous ITS services. Such research can use vehicular communication transmission and reception data based on selected published datasets. In this paper, an additional adversarial defense algorithm using the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) method is proposed, which assumes that evolutionary attacks by the jammer will attempt to confuse the trained classifier. The simulation results show that applying DBSCAN can improve accuracy by eliminating outliers before conducting classification testing.
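A minimal scikit-learn sketch of the described pipeline, DBSCAN outlier removal followed by SVM classification, is given below; the upstream feature extraction and the eps, min_samples, and SVM hyperparameters are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def filter_then_classify(X_train, y_train, X_test, y_test,
                         eps=0.8, min_samples=5):
    """Illustrative pipeline in the spirit of the abstract: drop DBSCAN
    outliers (label -1) from the training features, then fit an SVM
    jamming/non-jamming classifier. Feature extraction (e.g., packet drop
    rate, signal statistics) is assumed to have happened upstream."""
    y_train = np.asarray(y_train)
    scaler = StandardScaler().fit(X_train)
    X_tr, X_te = scaler.transform(X_train), scaler.transform(X_test)

    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X_tr)
    keep = labels != -1                     # -1 marks noise/outlier points
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(X_tr[keep], y_train[keep])
    return clf.score(X_te, y_test)          # accuracy after outlier removal
```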
Abstract: The emergence of adversarial examples has revealed the inadequacies in the robustness of image classification models based on Convolutional Neural Networks (CNNs). Particularly in recent years, the discovery of natural adversarial examples has posed significant challenges, as traditional defense methods against adversarial attacks have proven largely ineffective against them. This paper explores defenses against natural adversarial examples from three perspectives: the adversarial examples themselves, the model architecture, and the dataset. First, it employs Class Activation Mapping (CAM) to visualize how models classify natural adversarial examples, identifying several typical attack patterns. Next, various common CNN models are analyzed to evaluate their susceptibility to these attacks, revealing that different architectures exhibit varying defensive capabilities; the study finds that as the depth of a network increases, its defense against natural adversarial examples strengthens. Finally, the impact of dataset class distribution on the defense capability of models is examined, focusing on two aspects: the number of classes in the training set and the number of predicted classes. The study investigates how these factors influence a model's ability to defend against natural adversarial examples. Results indicate that reducing the number of training classes enhances the model's defense against natural adversarial examples. Additionally, under a fixed number of training classes, some CNN models show an optimal range of predicted classes for achieving the best defense performance against these adversarial examples.
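For reference, a generic Class Activation Mapping (CAM) sketch for a ResNet-style backbone is shown below; it illustrates the visualization technique named in the abstract rather than the paper's exact setup, and the choice of ResNet-18 is an assumption.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def class_activation_map(model, x, class_idx=None):
    """Classic CAM for an architecture that ends in global average pooling
    followed by a single fully connected layer (e.g., ResNet-18)."""
    model.eval()
    # Feature maps from the last convolutional stage: (1, C, H, W).
    backbone = torch.nn.Sequential(*list(model.children())[:-2])
    with torch.no_grad():
        feats = backbone(x)                             # (1, C, H, W)
        logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    weights = model.fc.weight[class_idx]                # (C,)
    cam = torch.einsum("c,chw->hw", weights, feats[0])  # weighted map sum
    cam = F.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam, class_idx

# Usage (assumes a normalized 224x224 image tensor `img` with a batch dim):
# model = resnet18(weights="IMAGENET1K_V1")
# cam, cls = class_activation_map(model, img)
```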
Funding: Supported by the National Key Research and Development Program of China (No. 2020AAA0140002), the Natural Science Foundation of China (Nos. U1836217, 62076240, 62006225, 61906199, 62071468, 62176025, and U21B200389), and the CAAI-Huawei MindSpore Open Fund.
Abstract: Deep learning-based models are vulnerable to adversarial attacks. Defense against adversarial attacks is essential for sensitive and safety-critical scenarios. However, deep learning methods still lack effective and efficient defense mechanisms against adversarial attacks. Most existing methods are just stopgaps for specific adversarial samples. The main obstacle is that how adversarial samples fool deep learning models is still unclear; the underlying working mechanism of adversarial samples has not been well explored, and this is the bottleneck of adversarial attack defense. In this paper, we build a causal model to interpret the generation and performance of adversarial samples. The self-attention/transformer is adopted as a powerful tool in this causal model. Compared with existing methods, causality enables us to analyze adversarial samples more naturally and intrinsically. Based on this causal model, the working mechanism of adversarial samples is revealed and instructive analysis is provided. We then propose simple and effective adversarial sample detection and recognition methods according to the revealed working mechanism. The causal insights enable us to detect and recognize adversarial samples without any extra model or training. Extensive experiments are conducted to demonstrate the effectiveness of the proposed methods. Our methods outperform state-of-the-art defense methods under various adversarial attacks.
Funding: This work was partially supported by the National University of Defense Technology Foundation under Grant Nos. ZK20-09 and ZK21-17, and the Natural Science Foundation of Hunan Province of China under Grant No. 2021JJ40692.
Abstract: Graph neural networks (GNNs) have achieved significant success in graph representation learning. Nevertheless, recent work indicates that current GNNs are vulnerable to adversarial perturbations, in particular structural perturbations. This, therefore, narrows the application of GNN models in real-world scenarios. Such vulnerability can be attributed to the model's excessive reliance on incomplete data views (e.g., graph convolutional networks (GCNs) heavily rely on graph structures to make predictions). By integrating information from multiple perspectives, this problem can be effectively addressed; typical views of graphs include the node feature view and the graph structure view. In this paper, we propose C^2oG, which combines these two typical views to train sub-models and fuses their knowledge through co-training. Due to the orthogonality of the views, sub-models in the feature view tend to be robust against perturbations targeted at sub-models in the structure view. C^2oG allows sub-models to correct one another mutually and thus enhances the robustness of their ensembles. In our evaluations, C^2oG significantly improves the robustness of graph models against adversarial attacks without sacrificing their performance on clean datasets.
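A simplified sketch of the two-view idea follows: a feature-view MLP and a structure-view GCN whose predictions are fused. The co-training (pseudo-label exchange) step of C^2oG is omitted, and the layer sizes and the dense normalized adjacency are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureViewModel(nn.Module):
    """Structure-free sub-model: an MLP over node features only."""
    def __init__(self, in_dim, hid, num_classes):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(),
                                 nn.Linear(hid, num_classes))
    def forward(self, x, adj_norm=None):
        return self.mlp(x)

class StructureViewModel(nn.Module):
    """Structure-based sub-model: a two-layer GCN. `adj_norm` is assumed to
    be the dense symmetrically normalized adjacency D^-1/2 (A+I) D^-1/2."""
    def __init__(self, in_dim, hid, num_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid)
        self.w2 = nn.Linear(hid, num_classes)
    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)

def fused_prediction(models, x, adj_norm):
    """Average the two views' class probabilities; structure attacks that
    fool the GCN view leave the feature view untouched, and vice versa."""
    probs = [F.softmax(m(x, adj_norm), dim=1) for m in models]
    return torch.stack(probs).mean(dim=0)
```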
Abstract: Deep learning has made remarkable progress in various tasks. Despite the excellent performance, deep learning models remain insufficiently robust, especially to well-designed adversarial examples, which limits their deployment in security-critical applications. Therefore, how to improve the robustness of deep learning has attracted increasing attention from researchers. This paper investigates the progress on the threats to deep learning and the techniques that can enhance model robustness in computer vision. Unlike previous survey papers summarizing adversarial attacks and defense technologies, this paper also provides an overview of the general robustness of deep learning. Besides, this survey elaborates on current robustness evaluation approaches, which require further exploration. It also reviews the recent literature on making deep learning models resistant to adversarial examples from an architectural perspective, which was rarely mentioned in previous surveys. Finally, interesting directions for future research are listed based on the reviewed literature. We hope this survey will serve as a basis for future research in this topical field.
Funding: Supported by the Natural Science Foundation of China (No. 62076213), the Shenzhen Science and Technology Program, China (No. RCYX20210609103057050), the university development fund of The Chinese University of Hong Kong, Shenzhen, China (No. 01001810), and the Guangdong Provincial Key Laboratory of Big Data Computing, The Chinese University of Hong Kong, Shenzhen, China.
Abstract: The adversarial example has been well known as a serious threat to deep neural networks (DNNs). In this work, we study the detection of adversarial examples based on the assumption that the output and internal responses of a DNN model for both adversarial and benign examples follow the generalized Gaussian distribution (GGD) but with different parameters (i.e., shape factor, mean, and variance). GGD is a general distribution family that covers many popular distributions (e.g., Laplacian, Gaussian, or uniform); therefore, it is more likely to approximate the intrinsic distributions of internal responses than any specific distribution. Besides, since the shape factor is more robust across different databases than the other two parameters, we propose to construct discriminative features via the shape factor for adversarial detection, employing the magnitude of Benford-Fourier (MBF) coefficients, which can be easily estimated from responses. Finally, a support vector machine is trained as an adversarial detector leveraging the MBF features. Extensive experiments on image classification demonstrate that the proposed detector is much more effective and robust in detecting adversarial examples of different crafting methods and sources compared with state-of-the-art adversarial detection methods.
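The sketch below illustrates only the overall detection pipeline: estimate a per-layer GGD shape factor from internal responses and feed the resulting feature vector to an SVM. It substitutes scipy's maximum-likelihood generalized-normal fit for the paper's MBF-based estimator, purely as a stand-in.

```python
import numpy as np
from scipy.stats import gennorm
from sklearn.svm import SVC

def shape_factor_features(response_list):
    """For each internal response (flattened activations of one layer),
    estimate the GGD shape parameter beta via maximum likelihood. This
    stand-in uses scipy's generalized normal fit instead of the paper's
    Benford-Fourier (MBF) estimator, purely for illustration."""
    feats = []
    for r in response_list:
        beta, loc, scale = gennorm.fit(np.ravel(r))
        feats.append(beta)
    return np.array(feats)

# Hypothetical usage: `benign_resps` / `adv_resps` are lists of per-layer
# activation arrays collected from the target DNN for each input image.
# X = np.stack([shape_factor_features(r) for r in benign_resps + adv_resps])
# y = np.array([0] * len(benign_resps) + [1] * len(adv_resps))
# detector = SVC(kernel="rbf", gamma="scale").fit(X, y)
```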
Funding: Supported in part by the National Natural Science Foundation of China (61872047, 61720106007), the National Key R&D Program of China (2017YFB1003000), the Beijing Nova Program (Z201100006820124), the Beijing Natural Science Foundation (L191004), and the 111 Project (B18008).
Abstract: Deep convolutional neural networks (DCNNs) have been widely deployed in real-world scenarios. However, DCNNs are easily tricked by adversarial examples, which presents challenges for critical applications such as vehicle classification. To address this problem, we propose a novel end-to-end convolutional network for joint detection and removal of adversarial perturbations by denoising (DDAP). It removes adversarial perturbations using the DDAP denoiser on adversarial examples discovered by the DDAP detector. The proposed method can be regarded as a pre-processing step: it does not require modifying the structure of the vehicle classification model and hardly affects the classification results on clean images. We consider four kinds of adversarial attacks (FGSM, BIM, DeepFool, PGD) to verify DDAP's capabilities when trained on BIT-Vehicle and other public datasets. It provides better defense than other state-of-the-art defensive methods.
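A rough sketch of a detect-then-denoise pre-processing wrapper in the spirit of DDAP follows; the detector, denoiser, and threshold are placeholders and not the paper's joint architecture.

```python
import torch
import torch.nn as nn

class DetectThenDenoise(nn.Module):
    """Sketch of a DDAP-style pre-processing stage: a binary detector flags
    likely-adversarial inputs, and only those are passed through a denoiser
    before the unchanged vehicle classifier. Detector/denoiser modules and
    the 0.5 threshold are assumptions, not the paper's design."""
    def __init__(self, detector: nn.Module, denoiser: nn.Module,
                 classifier: nn.Module, threshold: float = 0.5):
        super().__init__()
        self.detector, self.denoiser = detector, denoiser
        self.classifier, self.threshold = classifier, threshold

    @torch.no_grad()
    def forward(self, x):
        p_adv = torch.sigmoid(self.detector(x)).view(-1)      # (batch,)
        is_adv = (p_adv > self.threshold).view(-1, 1, 1, 1).float()
        x_denoised = self.denoiser(x)
        # Clean images pass through untouched; flagged ones are denoised.
        x_in = is_adv * x_denoised + (1.0 - is_adv) * x
        return self.classifier(x_in)
```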
Funding: Supported in part by the National Institutes of Health, Nos. R01CA237267, R01HL151561, R01EB031102, and R01EB032716.
Abstract: Flipover, an enhanced dropout technique, is introduced to improve the robustness of artificial neural networks. In contrast to dropout, which involves randomly removing certain neurons and their connections, flipover randomly selects neurons and reverts their outputs using a negative multiplier during training. This approach offers stronger regularization than conventional dropout, refining model performance by (1) mitigating overfitting, matching or even exceeding the efficacy of dropout; (2) amplifying robustness to noise; and (3) enhancing resilience against adversarial attacks. Extensive experiments across various neural networks affirm the effectiveness of flipover in deep learning.
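Since the mechanism is stated explicitly (randomly selected neurons have their outputs reverted by a negative multiplier during training), a minimal drop-in layer sketch is given below; the flip probability and multiplier value are assumed hyperparameters, not values prescribed by the paper.

```python
import torch
import torch.nn as nn

class Flipover(nn.Module):
    """Sketch of a flipover-style layer: during training, a random subset of
    activations is multiplied by a negative factor instead of being zeroed
    out as in dropout."""
    def __init__(self, p: float = 0.1, multiplier: float = -1.0):
        super().__init__()
        self.p = p                      # probability of flipping a unit
        self.multiplier = multiplier    # negative multiplier applied to it

    def forward(self, x):
        if not self.training or self.p == 0.0:
            return x                    # identity at inference time
        flip_mask = (torch.rand_like(x) < self.p).float()
        # Flipped units are scaled by the negative multiplier, others kept.
        return x * (1.0 - flip_mask) + x * flip_mask * self.multiplier

# Usage: drop it in wherever dropout would normally sit.
# layer = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), Flipover(p=0.1))
```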
Funding: Supported by the National Natural Science Foundation of China (U1903214, 62372339, 62371350, 61876135), the Ministry of Education Industry-University Cooperative Education Project (202102246004, 220800006041043, 202002142012), and the Fundamental Research Funds for the Central Universities (2042023kf1033).
Abstract: Recent years have witnessed the ever-increasing performance of Deep Neural Networks (DNNs) in computer vision tasks. However, researchers have identified a potential vulnerability: carefully crafted adversarial examples can easily mislead DNNs into incorrect behavior via the injection of imperceptible modifications to the input data. In this survey, we focus on (1) adversarial attack algorithms that generate adversarial examples, (2) adversarial defense techniques that secure DNNs against adversarial examples, and (3) important problems in the realm of adversarial examples beyond attack and defense, including theoretical explanations, trade-off issues, and benign attacks in adversarial examples. Additionally, we draw a brief comparison between recently published surveys on adversarial examples and identify future directions for the research of adversarial examples, such as the generalization of methods and the understanding of transferability, that might be solutions to the open problems in this field.