Abstract: With the prevalence of machine learning in malware defense, hackers have tried to attack machine learning models to evade detection. Because it is generally difficult to explore the internal details of malware detection models, hackers can adopt fuzzing attacks that manipulate the features of malware to resemble benign programs while preserving the malware's functionality. In this paper, attack and defense methods for malware detection models based on machine learning algorithms are studied. Firstly, we design a fuzzing attack that randomly modifies features to evade detection; this attack can effectively degrade the accuracy of machine learning models built on a single type of feature. Then an adversarial malware detection model, MaliFuzz, is proposed to defend against the fuzzing attack. Unlike ordinary single-feature detection models, it combines features from static and dynamic analysis to improve its defensive ability. The experimental results show that the adversarial malware detection model with combined features can withstand the attack. The methods designed in this paper are of great significance for improving the security of malware detection models and have good application prospects.
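As a rough, hypothetical sketch of the fuzzing idea above (not the paper's MaliFuzz implementation), the following Python snippet randomly flips bits of a binary feature vector until an assumed single-feature classifier labels the sample benign; the classifier interface, feature encoding, label convention, and mutation budget are all illustrative assumptions.

```python
import numpy as np

def fuzz_evade(x, model, n_trials=1000, n_mutations=5, rng=None):
    """Randomly flip binary features of a malware sample until the classifier
    predicts 'benign' (assumed label 0) or the trial budget is exhausted."""
    rng = rng if rng is not None else np.random.default_rng(0)
    for _ in range(n_trials):
        candidate = x.copy()
        # flip a few randomly chosen feature bits (e.g. imported APIs, opcodes)
        idx = rng.choice(len(x), size=n_mutations, replace=False)
        candidate[idx] = 1 - candidate[idx]
        if model.predict(candidate.reshape(1, -1))[0] == 0:
            return candidate      # evading variant found
    return None                   # no evading variant within the budget
```

Note that the paper additionally constrains the modifications so the malware's functionality is preserved; this sketch omits that constraint for brevity.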
Abstract: Intrusion detection systems play an important role in defending networks against security breaches. End-to-end machine learning-based intrusion detection systems are used to achieve high detection accuracy. However, under adversarial attacks, which cause misclassification by introducing imperceptible perturbations on input samples, the performance of machine learning-based intrusion detection systems is greatly affected. Although such problems have been widely discussed in the image processing domain, very few studies have investigated network intrusion detection systems and proposed corresponding defences. In this paper, we attempt to fill this gap by applying adversarial attacks to standard intrusion detection datasets and then using the adversarial samples to train various machine learning algorithms (adversarial training) to test their defence performance. This is achieved by first creating adversarial samples based on the Jacobian-based Saliency Map Attack (JSMA) and the Fast Gradient Sign Method (FGSM) using the NSLKDD, UNSW-NB15 and CICIDS17 datasets. The study then trains and tests models on JSMA- and FGSM-based adversarial examples in the seen setting (where the model has been trained on adversarial samples) and the unseen setting (where the model is unaware of adversarial packets). The experiments include multiple machine learning classifiers to evaluate their performance against adversarial attacks. The performance metrics include accuracy, F1-score and the area under the receiver operating characteristic curve (AUC).
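As background for the attacks and the adversarial training named above, a minimal PyTorch sketch of FGSM crafting and one mixed clean/adversarial training step is given below; the model, epsilon, and loss weighting are placeholders and do not reflect the paper's experimental setup.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.05):
    """Craft FGSM adversarial samples: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.05):
    """One training step on a mix of clean and FGSM-perturbed traffic records."""
    x_adv = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the "seen" setting the classifier is trained with steps like the one above; in the "unseen" setting it is trained on clean data only and then evaluated on the crafted samples.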
Abstract: Classification models for multivariate time series have drawn the interest of many researchers, with the objective of developing accurate and efficient models. However, limited research has been conducted on generating adversarial samples for multivariate time series classification models. Adversarial samples could become a security concern in systems with complex sets of sensors. This study proposes extending the existing gradient adversarial transformation network (GATN) in combination with adversarial autoencoders to attack multivariate time series classification models. The proposed model attacks classification models by utilizing a distilled model to imitate the output of the multivariate time series classification model. In addition, the adversarial generator function is replaced with a variational autoencoder to enhance the adversarial samples. The developed methodology is tested on two multivariate time series classification models: 1-nearest neighbor dynamic time warping (1-NN DTW) and a fully convolutional network (FCN). This study utilizes 30 multivariate time series benchmarks provided by the University of East Anglia (UEA) and the University of California, Riverside (UCR). The use of adversarial autoencoders shows an increase in the fraction of successful adversaries generated on multivariate time series. To the best of our knowledge, this is the first study to explore adversarial attacks on multivariate time series. Additionally, we recommend future research utilizing the latent space generated by the variational autoencoders.
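A highly simplified sketch of the attack structure described above, with a perturbation generator trained against a distilled student of the target classifier, is shown below; the layer sizes, perturbation bound, and loss weighting are illustrative assumptions, and the paper's actual extension replaces this generator with a variational autoencoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    """Maps a multivariate time series (batch, channels, length) to a bounded
    additive perturbation; the VAE in the paper plays this role."""
    def __init__(self, channels, length, hidden=64):
        super().__init__()
        self.channels, self.length = channels, length
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels * length, hidden), nn.ReLU(),
            nn.Linear(hidden, channels * length),
        )

    def forward(self, x):
        delta = self.net(x).view(-1, self.channels, self.length)
        return 0.1 * torch.tanh(delta)   # keep the perturbation small

def attack_loss(student_logits, target_class, delta, beta=1.0):
    """Push the distilled (student) model toward the target class while
    penalizing large perturbations, in the spirit of GATN."""
    target = torch.full((student_logits.size(0),), target_class,
                        dtype=torch.long, device=student_logits.device)
    return F.cross_entropy(student_logits, target) + beta * delta.pow(2).mean()
```

Training alternates between fitting the student to imitate the black-box classifier's outputs and optimizing the generator against the student with a loss of this form.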
Funding: Supported by the National Natural Science Foundation of China under Grant No. 61966011.
Abstract: As malicious samples are widely found in many application fields of machine learning, suitable countermeasures have been investigated in the field of adversarial machine learning. Due to the importance and popularity of Support Vector Machines (SVMs), in this paper we first describe the evasion attack against SVM classification and then propose a defense strategy. The evasion attack exploits the classification surface of the SVM to iteratively find the minimal perturbations that mislead the nonlinear classifier. Specifically, we propose a vulnerability function to measure the vulnerability of SVM classifiers. Utilizing this vulnerability function, we put forward an effective defense strategy, based on kernel optimization of SVMs with a Gaussian kernel, against the evasion attack. Our defense method is verified to be very effective on benchmark datasets, and the SVM classifier becomes more robust after applying our kernel optimization scheme.
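For intuition, the following Python sketch performs a simple gradient-descent evasion against a fitted scikit-learn SVC with an RBF (Gaussian) kernel, using the analytic gradient of the decision function f(x) = Σ_i α_i y_i K(x_i, x) + b; the step size, iteration budget, sign convention for the malicious class, and the explicitly supplied gamma are illustrative assumptions rather than the attack formulation used in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def rbf_decision_gradient(clf: SVC, x, gamma):
    """Gradient of the RBF-SVM decision function
        f(x) = sum_i dual_coef_i * exp(-gamma * ||x - sv_i||^2) + b
    with respect to x, for a fitted binary SVC."""
    sv = clf.support_vectors_            # (n_sv, n_features)
    dual = clf.dual_coef_.ravel()        # alpha_i * y_i
    diff = x - sv                        # (n_sv, n_features)
    k = np.exp(-gamma * np.sum(diff ** 2, axis=1))
    return (dual * k) @ (-2.0 * gamma * diff)

def evade(clf: SVC, x_mal, gamma, step=0.05, max_iter=200):
    """Iteratively move a malicious point along the negative gradient until the
    decision value crosses the boundary (assumes the malicious class lies on
    the positive side of clf.decision_function)."""
    x = x_mal.astype(float).copy()
    for _ in range(max_iter):
        if clf.decision_function(x.reshape(1, -1))[0] <= 0:
            return x
        x = x - step * rbf_decision_gradient(clf, x, gamma)
    return x
```

The defense studied in the paper then tunes the Gaussian kernel parameters so that such gradient-guided perturbations require larger changes before crossing the decision boundary.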