Journal Articles: 4 articles found
1. MaliFuzz: Adversarial Malware Detection Model for Defending Against Fuzzing Attack
Authors: Xianwei Gao, Chun Shan, Changzhen Hu. Journal of Beijing Institute of Technology (EI, CAS), 2024, Issue 5, pp. 436-449 (14 pages).
With the prevalence of machine learning in malware defense, hackers have tried to attack machine learning models to evade detection. Because it is generally difficult to explore the internals of a malware detection model, hackers can adopt a fuzzing attack that manipulates the features of the malware to appear closer to benign programs while retaining its functionality. In this paper, attack and defense methods for malware detection models based on machine learning algorithms are studied. First, we design a fuzzing attack that randomly modifies features to evade detection; this attack can effectively degrade the accuracy of a machine learning model built on a single feature type. We then propose MaliFuzz, an adversarial malware detection model that defends against the fuzzing attack. Unlike an ordinary single-feature detection model, it combines features from static and dynamic analysis to improve its defensive ability. The experimental results show that the adversarial malware detection model with combined features can withstand the attack. The methods designed in this paper are significant for improving the security of malware detection models and have good application prospects.
Keywords: adversarial machine learning; fuzzing attack; malware detection
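As an illustration of the feature-space fuzzing attack this abstract describes, here is a minimal sketch assuming a scikit-learn-style classifier in which label 0 means benign. The iteration budget, mutation rate, and perturbation scale are illustrative assumptions, and the paper's functionality-preserving constraints are not reproduced here.

```python
# A minimal sketch of a feature-space fuzzing attack: randomly mutate a
# malware sample's feature vector until the classifier flips to "benign".
# `model` is any object with a scikit-learn-style predict(); label 0 = benign.
import numpy as np

def fuzzing_attack(model, x, n_iters=1000, mutation_rate=0.05, rng=None):
    """Randomly perturb the 1-D float feature vector `x` to evade `model`."""
    rng = rng or np.random.default_rng(0)
    current = x.astype(float).copy()
    for _ in range(n_iters):
        candidate = current.copy()
        # Mutate a random subset of features by small random amounts.
        mask = rng.random(candidate.shape) < mutation_rate
        candidate[mask] += rng.normal(0.0, 0.1, size=mask.sum())
        if model.predict(candidate.reshape(1, -1))[0] == 0:
            return candidate  # evasion succeeded
        current = candidate   # random walk: keep mutating from here
    return None               # attack failed within the iteration budget
```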
2. Adversarial Training Against Adversarial Attacks for Machine Learning-Based Intrusion Detection Systems
Authors: Muhammad Shahzad Haroon, Husnain Mansoor Ali. Computers, Materials & Continua (SCIE, EI), 2022, Issue 11, pp. 3513-3527 (15 pages).
Intrusion detection systems play an important role in defending networks from security breaches. End-to-end machine learning-based intrusion detection systems are used to achieve high detection accuracy. However, under adversarial attacks, which cause misclassification by introducing imperceptible perturbations into input samples, the performance of machine learning-based intrusion detection systems is greatly degraded. Although such problems have been widely discussed in the image processing domain, very few studies have investigated network intrusion detection systems and proposed corresponding defenses. In this paper, we attempt to fill this gap by applying adversarial attacks to standard intrusion detection datasets and then using the adversarial samples to train various machine learning algorithms (adversarial training) and test their defensive performance. This is achieved by first creating adversarial samples with the Jacobian-based Saliency Map Attack (JSMA) and the Fast Gradient Sign Method (FGSM) on the NSL-KDD, UNSW-NB15, and CICIDS17 datasets. The study then trains and tests against JSMA- and FGSM-based adversarial examples in both seen (the model has been trained on adversarial samples) and unseen (the model is unaware of adversarial packets) attack settings. The experiments include multiple machine learning classifiers to evaluate their performance against adversarial attacks. The performance metrics include accuracy, F1-score, and area under the receiver operating characteristic curve (AUC).
Keywords: intrusion detection system; adversarial attacks; adversarial training; adversarial machine learning
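A hedged sketch of the adversarial-training loop this abstract describes, using FGSM (one of the two attacks studied); the PyTorch model, epsilon, and the equal clean/adversarial loss mix are illustrative assumptions, not the paper's exact setup.

```python
# FGSM adversarial training sketch: generate x' = x + eps * sign(grad_x loss),
# then train on both clean and adversarial samples. Feature-range clamping
# (needed for realistic network traffic features) is omitted for brevity.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, loss_fn, epsilon=0.1):
    """Generate FGSM adversarial examples from a batch (x, y)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, loss_fn, x, y, epsilon=0.1):
    """One training step on a mix of clean and FGSM-perturbed samples."""
    x_adv = fgsm_perturb(model, x, y, loss_fn, epsilon)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

A model trained this way corresponds to the paper's "seen" setting; evaluating a model never exposed to adversarial samples corresponds to the "unseen" setting.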
3. Generating Adversarial Samples on Multivariate Time Series using Variational Autoencoders (cited by 8)
Authors: Samuel Harford, Fazle Karim, Houshang Darabi. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, Issue 9, pp. 1523-1538 (16 pages).
Classification models for multivariate time series have drawn the interest of many researchers, with the objective of developing accurate and efficient models. However, limited research has been conducted on generating adversarial samples for multivariate time series classification models. Adversarial samples could become a security concern in systems with complex sets of sensors. This study proposes extending the existing gradient adversarial transformation network (GATN), in combination with adversarial autoencoders, to attack multivariate time series classification models. The proposed model attacks classification models by utilizing a distilled model to imitate the output of the target multivariate time series classification model. In addition, the adversarial generator function is replaced with a variational autoencoder to enhance the adversarial samples. The developed methodology is tested on two multivariate time series classification models: 1-nearest neighbor dynamic time warping (1-NN DTW) and a fully convolutional network (FCN). This study utilizes 30 multivariate time series benchmarks provided by the University of East Anglia (UEA) and the University of California, Riverside (UCR). The use of adversarial autoencoders shows an increase in the fraction of successful adversaries generated on multivariate time series. To the best of our knowledge, this is the first study to explore adversarial attacks on multivariate time series. Additionally, we recommend future research utilizing the latent space generated by the variational autoencoders.
Keywords: adversarial machine learning; deep learning; multivariate time series; perturbation methods
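A rough sketch of the core idea of replacing the adversarial generator with a variational autoencoder: the VAE decodes an additive perturbation that is trained to mislead a distilled surrogate of the target classifier while staying small. The architecture, latent size, and loss weights below are illustrative assumptions, not the authors' implementation.

```python
# VAE-as-generator sketch for multivariate time series attacks: the decoder
# outputs a perturbation delta; training pushes the distilled (student) model
# toward a target class while regularizing delta and the latent distribution.
import torch
import torch.nn as nn

class PerturbationVAE(nn.Module):
    """Encodes a (batch, channels, timesteps) series; decodes an additive delta."""
    def __init__(self, in_dim, latent_dim=16):  # in_dim = channels * timesteps
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z).view_as(x), mu, logvar

def attack_loss(student_logits, target_class, delta, mu, logvar, beta=0.01):
    """Fool the distilled model toward `target_class`; keep delta and KL small."""
    fool = nn.functional.cross_entropy(student_logits, target_class)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return fool + beta * (delta.pow(2).mean() + kl)
```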
4. Kernel-based adversarial attacks and defenses on support vector classification (cited by 1)
Authors: Wanman Li, Xiaozhang Liu, Anli Yan, Jie Yang. Digital Communications and Networks (SCIE, CSCD), 2022, Issue 4, pp. 492-497 (6 pages).
While malicious samples are widely found in many application fields of machine learning, suitable countermeasures have been investigated in the field of adversarial machine learning. Given the importance and popularity of Support Vector Machines (SVMs), in this paper we first describe an evasion attack against SVM classification and then propose a defense strategy. The evasion attack exploits the classification surface of the SVM to iteratively find the minimal perturbations that mislead the nonlinear classifier. In particular, we propose a vulnerability function to measure the vulnerability of SVM classifiers. Using this vulnerability function, we put forward an effective defense strategy against the evasion attack, based on kernel optimization of SVMs with the Gaussian kernel. Our defense method is verified to be very effective on the benchmark datasets, and the SVM classifier becomes more robust after applying our kernel optimization scheme.
Keywords: adversarial machine learning; support vector machines; evasion attack; vulnerability function; kernel optimization
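A minimal sketch of a gradient-based evasion attack on a Gaussian (RBF) kernel SVM in the spirit of this abstract: follow the negative gradient of the decision function until the sample crosses the boundary. The step size and stopping rule are illustrative assumptions, and the paper's vulnerability function and kernel-optimization defense are not reproduced here.

```python
# For an RBF-kernel SVM, f(x) = sum_i a_i * exp(-gamma * ||x - sv_i||^2) + b,
# so grad f(x) = -2 * gamma * sum_i a_i * k_i * (x - sv_i). We descend this
# gradient to push a malicious sample (f(x) > 0) across the boundary.
import numpy as np
from sklearn.svm import SVC

def rbf_decision_gradient(clf: SVC, x):
    """Gradient of the fitted RBF decision function at point x (shape (d,))."""
    sv, a = clf.support_vectors_, clf.dual_coef_[0]      # a_i = alpha_i * y_i
    gamma = clf._gamma                                   # fitted gamma (private attr)
    diff = x - sv                                        # (n_sv, d)
    k = np.exp(-gamma * (diff ** 2).sum(axis=1))         # kernel values k_i
    return (-2.0 * gamma * a * k) @ diff                 # (d,)

def evasion_attack(clf: SVC, x, step=0.05, max_iter=200):
    """Perturb x (predicted malicious) until the decision function goes negative."""
    x_adv = x.astype(float).copy()
    for _ in range(max_iter):
        if clf.decision_function(x_adv.reshape(1, -1))[0] < 0:
            return x_adv                                 # boundary crossed
        g = rbf_decision_gradient(clf, x_adv)
        x_adv -= step * g / (np.linalg.norm(g) + 1e-12)  # normalized descent step
    return x_adv
```

The defense side of the paper then tunes the Gaussian kernel so that such minimal-perturbation paths become longer, i.e., the classifier's surface is less steeply exploitable near malicious samples.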