The design of an antenna requires a careful selection of its parameters to retain the desired performance. However, this task is time-consuming when traditional approaches are employed, which represents a significant challenge. On the other hand, machine learning presents an effective solution to this challenge through a set of regression models that can robustly assist antenna designers in finding the best set of design parameters to achieve the intended performance. In this paper, we propose a novel approach for accurately predicting the bandwidth of a metamaterial antenna. The proposed approach is based on employing the recently emerged guided whale optimization algorithm using adaptive particle swarm optimization to optimize the parameters of a long short-term memory (LSTM) deep network. This optimized network is used to retrieve the metamaterial bandwidth given a set of features. In addition, the superiority of the proposed approach is examined through a comparison with the traditional multilayer perceptron (MLP), K-nearest neighbors (KNN), and the basic LSTM in terms of several evaluation criteria, such as root mean square error (RMSE), mean absolute error (MAE), and mean bias error (MBE). Experimental results show that the proposed approach achieved an RMSE of 0.003018, an MAE of 0.001871, and an MBE of 0.000205. These values are better than those of the other competing models.
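The three error criteria above have standard definitions; a minimal NumPy sketch (the bandwidth arrays are hypothetical placeholders):

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean square error: penalizes large deviations quadratically.
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the errors.
    return np.mean(np.abs(y_true - y_pred))

def mbe(y_true, y_pred):
    # Mean bias error: signed average, reveals systematic over/under-prediction.
    return np.mean(y_pred - y_true)

# Hypothetical example: measured vs. predicted bandwidth values.
y_true = np.array([2.10, 2.35, 2.50, 2.72])
y_pred = np.array([2.12, 2.33, 2.51, 2.70])
print(rmse(y_true, y_pred), mae(y_true, y_pred), mbe(y_true, y_pred))
```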
The electrocardiogram (ECG) signal is a measure of the heart's electrical activity. Recently, ECG detection and classification have benefited from the use of computer-aided systems by cardiologists. The goal of this paper is to improve the accuracy of ECG classification by combining the Dipper Throated Optimization (DTO) and Differential Evolution Algorithm (DEA) into a unified algorithm that optimizes the hyperparameters of a neural network (NN) for boosting the ECG classification accuracy. In addition, we propose a new feature selection method for selecting the significant features that can improve the overall performance. To prove the superiority of the proposed approach, several experiments were conducted to compare the results achieved by the proposed approach and other competing approaches. Moreover, statistical analysis is performed to study the significance and stability of the proposed approach using Wilcoxon and ANOVA tests. Experimental results confirmed the superiority and effectiveness of the proposed approach. The classification accuracy achieved by the proposed approach is 99.98%.
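Metaheuristic hyperparameter tuning of this kind generally wraps model training in a fitness function and lets the optimizer propose candidate settings. A minimal sketch (random search stands in for the DTO/DEA hybrid, whose update rules are not given in the abstract; the search space and model are illustrative assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

def fitness(params):
    # params: (hidden units, learning rate); fitness = mean CV accuracy.
    units, lr = int(params[0]), float(params[1])
    clf = MLPClassifier(hidden_layer_sizes=(units,), learning_rate_init=lr,
                        max_iter=300, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

rng = np.random.default_rng(0)
best, best_fit = None, -np.inf
for _ in range(15):  # each iteration plays the role of one candidate solution
    candidate = np.array([rng.integers(8, 128), 10 ** rng.uniform(-4, -1)])
    f = fitness(candidate)
    if f > best_fit:
        best, best_fit = candidate, f
print(best, best_fit)
```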
Digital signal processing of electroencephalography (EEG) data is now widely utilized in various applications, including motor imagery classification, seizure detection and prediction, emotion classification, mental task classification, drug impact identification, and sleep state classification. With the increasing number of recorded EEG channels, it has become clear that effective channel selection algorithms are required for various applications. This paper suggests a feature selection algorithm, the Guided Whale Optimization Algorithm (Guided WOA) based on the Stochastic Fractal Search (SFS) technique, to evaluate the chosen subset of channels. It can be used to select the optimum EEG channels for Brain-Computer Interfaces (BCIs), to identify essential and irrelevant characteristics in a dataset, and to eliminate unnecessary complexity. This enables the (SFS-Guided WOA) algorithm to choose the most appropriate EEG channels while assisting machine learning classification in its tasks and training the classifier with the dataset. The (SFS-Guided WOA) algorithm is superior in performance metrics, and statistical tests such as ANOVA and Wilcoxon rank-sum are used to demonstrate this.
The dipper throated optimization (DTO) algorithm is a novel and very efficient metaheuristic inspired by the dipper throated bird, which has a unique hunting technique based on rapid bowing movements. To show the efficiency of the proposed algorithm, DTO is tested and compared to Particle Swarm Optimization (PSO), the Whale Optimization Algorithm (WOA), the Grey Wolf Optimizer (GWO), and the Genetic Algorithm (GA) on seven unimodal benchmark functions. Then, ANOVA and Wilcoxon rank-sum tests are performed to confirm the effectiveness of DTO compared to the other optimization techniques. Additionally, to demonstrate the proposed algorithm's suitability for solving complex real-world problems, DTO is applied to the feature selection problem. The strategy of using DTO for feature selection is evaluated using commonly used datasets from the University of California at Irvine (UCI) repository. The findings indicate that DTO outperforms all other algorithms in addressing feature selection issues, demonstrating the proposed algorithm's capability to solve complex real-world situations.
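Comparisons of this kind typically collect the best fitness from repeated independent runs of each optimizer and test whether the difference is significant. A hedged SciPy sketch (the fitness samples are synthetic placeholders, and the sphere function stands in for the unimodal benchmarks):

```python
import numpy as np
from scipy.stats import f_oneway, ranksums

def sphere(x):
    # Classic unimodal benchmark: f(x) = sum(x_i^2), minimum 0 at the origin.
    return np.sum(x ** 2)

print("sphere at origin:", sphere(np.zeros(5)))

rng = np.random.default_rng(1)
# Placeholder: best fitness over 30 independent runs of two optimizers.
dto_runs = rng.normal(1e-4, 5e-5, 30)
pso_runs = rng.normal(5e-4, 2e-4, 30)

stat, p_w = ranksums(dto_runs, pso_runs)    # Wilcoxon rank-sum test
f_stat, p_a = f_oneway(dto_runs, pso_runs)  # one-way ANOVA
print(f"rank-sum p={p_w:.3g}, ANOVA p={p_a:.3g}")
```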
Selecting the most relevant subset of features from a dataset is a vital step in data mining and machine learning. A dataset with n features has 2^n possible feature subsets, making it challenging to select the optimum collection of features using typical methods. As a result, a new metaheuristics-based feature selection method based on the dipper-throated and grey-wolf optimization (DTO-GW) algorithms has been developed in this research. Instability can result when the selection of features is subject to metaheuristics, which can lead to a wide range of results. Thus, we adopted hybrid optimization in our method, which allowed us to balance the exploration and exploitation tasks more equitably. We propose utilizing the binary DTO-GW search approach we previously devised for selecting the optimal subset of attributes. In the proposed method, the number of features selected is minimized, while classification accuracy is increased. To test the proposed method's performance against eleven other state-of-the-art approaches, eight datasets from the UCI repository were used. The competing approaches include binary grey wolf search (bGWO), binary hybrid grey wolf and particle swarm optimization (bGWO-PSO), bPSO, binary stochastic fractal search (bSFS), binary whale optimization algorithm (bWOA), binary modified grey wolf optimization (bMGWO), binary multiverse optimization (bMVO), binary bowerbird optimization (bSBO), binary hysteresis optimization (bHy), and binary hysteresis optimization (bHWO). The suggested method is superior and successful in handling the problem of feature selection, according to the results of the experiments.
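Binary feature-selection wrappers of this kind usually score a candidate 0/1 mask by a weighted sum of classification error and the fraction of selected features. A minimal sketch (the weights alpha and beta and the KNN evaluator are common choices in this literature, not values taken from the paper):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

def fs_fitness(mask, alpha=0.99, beta=0.01):
    # mask: binary vector, 1 = feature kept. Lower fitness is better.
    if mask.sum() == 0:
        return 1.0  # an empty subset is the worst possible solution
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask == 1], y, cv=3).mean()
    return alpha * (1 - acc) + beta * mask.sum() / mask.size

rng = np.random.default_rng(0)
mask = (rng.random(X.shape[1]) > 0.5).astype(int)  # one candidate solution
print(fs_fitness(mask))
```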
Increasing the coverage and capacity of cellular networks by deploying additional base stations is one of the fundamental objectives of fifth-generation (5G) networks. However, it leads to performance degradation and huge spectral consumption due to the massive densification of connected devices and simultaneous access demand. To meet these access conditions and improve Quality of Service, resource allocation (RA) should be carefully optimized. Traditionally, RA problems are nonconvex optimizations, which are solved using heuristic methods such as the genetic algorithm, particle swarm optimization, and simulated annealing. However, the application of these approaches remains computationally expensive and unattractive for dense cellular networks. Therefore, artificial intelligence algorithms are used to improve traditional RA mechanisms. Deep learning is a promising tool for addressing resource management problems in wireless communication. In this study, we investigate a double deep Q-network-based RA framework that maximizes energy efficiency (EE) and total network throughput in unmanned aerial vehicle (UAV)-assisted terrestrial networks. Specifically, the system is studied under interference constraints, and the optimization problem is formulated as a mixed integer nonlinear program. Within this framework, we evaluated the effect of the height and the number of UAVs on EE and throughput. Then, in accordance with the experimental results, we compare the proposed algorithm with several artificial intelligence methods. Simulation results indicate that the proposed approach can increase EE with a considerable throughput.
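The defining step of a double deep Q-network is that action selection uses the online network while action evaluation uses the target network, which reduces the overestimation bias of plain DQN. A minimal NumPy sketch of the target computation (the Q tables stand in for network outputs; all values are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, n_actions, gamma = 4, 3, 0.99

q_online_next = rng.random((batch, n_actions))  # Q_online(s', .)
q_target_next = rng.random((batch, n_actions))  # Q_target(s', .)
rewards = rng.random(batch)
done = np.array([0, 0, 1, 0])  # 1 marks a terminal transition

# Double DQN: pick the greedy action with the online net ...
a_star = np.argmax(q_online_next, axis=1)
# ... but evaluate it with the target net.
targets = rewards + gamma * (1 - done) * q_target_next[np.arange(batch), a_star]
print(targets)
```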
Traffic prediction for wireless networks has attracted many researchers and practitioners during the past decades. However, wireless traffic frequently exhibits strong nonlinearities and complicated patterns, which makes it challenging to predict accurately. Many of the existing approaches for predicting wireless network traffic are unable to produce accurate predictions because they lack the ability to describe the dynamic spatial-temporal correlations of wireless network traffic data. In this paper, we propose a novel meta-heuristic optimization approach based on the fitness grey wolf and dipper throated optimization algorithms for boosting the prediction accuracy of traffic volume. The proposed algorithm is employed to optimize the hyperparameters of a long short-term memory (LSTM) network, an efficient time-series modeling approach that is widely used in sequence prediction tasks. To prove the superiority of the proposed algorithm, four other optimization algorithms were employed to optimize the LSTM, and the results were compared. The evaluation results confirmed the effectiveness of the proposed approach in predicting the traffic of wireless networks accurately. In addition, a statistical analysis is performed to emphasize the stability of the proposed approach.
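A hyperparameter vector for an LSTM forecaster typically encodes settings such as the number of units and the learning rate; the optimizer scores each vector by training a small model and measuring validation error. A hedged Keras sketch (the search dimensions and data shapes are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np
import tensorflow as tf

def lstm_fitness(params, X_train, y_train, X_val, y_val):
    # params = (LSTM units, learning rate); returns validation MSE (lower = better).
    units, lr = int(params[0]), float(params[1])
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(units, input_shape=X_train.shape[1:]),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr), loss="mse")
    model.fit(X_train, y_train, epochs=5, batch_size=32, verbose=0)
    return model.evaluate(X_val, y_val, verbose=0)

# Placeholder data: 200 windows of 10 time steps with 1 feature each.
X = np.random.rand(200, 10, 1).astype("float32")
y = np.random.rand(200, 1).astype("float32")
print(lstm_fitness((64, 1e-3), X[:160], y[:160], X[160:], y[160:]))
```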
Wind power is one of the sustainable ways to generate renewable energy. In recent years, some countries have set renewable energy targets to meet future energy needs, with the primary goal of reducing emissions and promoting sustainable growth, primarily through the use of wind and solar power. To predict wind power generation, several deep and machine learning models are constructed in this article as base models. These regression models are a deep neural network (DNN), a k-nearest neighbor (KNN) regressor, long short-term memory (LSTM), an averaging model, a random forest (RF) regressor, a bagging regressor, and a gradient boosting (GB) regressor. In addition, data cleaning and preprocessing were performed on the data. The dataset used in this study includes 4 features and 50530 instances. To accurately predict the wind power values, we propose in this paper a new optimization technique based on stochastic fractal search and particle swarm optimization (SFS-PSO) to optimize the parameters of the LSTM network. Five evaluation criteria were utilized to estimate the efficiency of the regression models, namely, mean absolute error (MAE), Nash-Sutcliffe efficiency (NSE), mean square error (MSE), coefficient of determination (R2), and root mean squared error (RMSE). The experimental results illustrated that the proposed optimization of LSTM using the SFS-PSO model achieved the best results, with R2 equal to 99.99% in predicting the wind power values.
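Of the five criteria, NSE is the least standard in ML toolkits; it compares model error to the variance of the observations, with 1 indicating a perfect fit. A small sketch (the arrays are hypothetical):

```python
import numpy as np
from sklearn.metrics import r2_score

def nse(observed, simulated):
    # Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    return 1 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = np.array([3.1, 2.8, 3.5, 4.0, 3.7])  # placeholder wind power values
sim = np.array([3.0, 2.9, 3.4, 4.1, 3.6])
print(f"NSE={nse(obs, sim):.3f}, R2={r2_score(obs, sim):.3f}")
```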
The ability to detect and localize the human eye is critical for use in security applications and human identification and verification systems. This is because eye recognition algorithms face multiple challenges, such as multi-pose variations, ocular parts, and illumination. Moreover, modern security applications often fail to detect facial expressions from eye images. In this paper, the Speeded-Up Robust Features (SURF) algorithm was utilized to localize the face images of the enrolled subjects. We focused on detecting the eye and pupil parts based on SURF, the Hough Circle Transform (HCT), and Local Binary Patterns (LBP). Afterward, Deep Belief Neural Networks (DBNN) were used to classify the input features resulting from the SURF algorithm. We further determined the correctly and wrongly classified subjects using a confusion matrix with two class labels to identify people whose eye images are correctly detected. We applied the Stochastic Gradient Descent (SGD) optimizer to address the overfitting problem, and the hyperparameters were fine-tuned based on the applied DBNN. The proposed system, based on SURF, LBP, and the DBNN classifier, achieved an accuracy of 95.54% for the ORL dataset, 94.07% for BioID, and 96.20% for the CASIA-V5 dataset. The proposed approach is more reliable and more advanced when compared with state-of-the-art algorithms.
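The Hough Circle Transform step maps naturally onto OpenCV's cv2.HoughCircles. A hedged sketch of pupil localization (the parameter values and image path are illustrative guesses that usually need tuning per dataset; SURF itself requires an opencv-contrib build, so it is omitted here):

```python
import cv2
import numpy as np

img = cv2.imread("eye.png")  # hypothetical input image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)  # smoothing reduces spurious circle detections

# Detect circular shapes (candidate pupils); parameters are dataset-dependent.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=100, param2=30, minRadius=5, maxRadius=60)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)  # mark each detected pupil
cv2.imwrite("eye_detected.png", img)
```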
The accurate prediction of energy consumption plays an effective role in decision making and risk management for individuals and governments. Meanwhile, accurate prediction can be realized using recent advances in machine learning and predictive models. This research proposes a novel approach for energy consumption forecasting based on a new optimization algorithm and a new forecasting model consisting of a set of long short-term memory (LSTM) units. The proposed optimization algorithm is used to optimize the parameters of the LSTM-based model to boost its forecasting accuracy. This optimization algorithm is based on the recently emerged dipper-throated optimization (DTO) and stochastic fractal search (SFS) algorithms and is referred to as dynamic DTOSFS. To prove the effectiveness and superiority of the proposed approach, five standard benchmark algorithms, namely, stochastic fractal search (SFS), dipper throated optimization (DTO), the whale optimization algorithm (WOA), particle swarm optimization (PSO), and grey wolf optimization (GWO), are used to optimize the parameters of the LSTM-based model, and the results are compared with those of the proposed approach. Experimental results show that the proposed DDTOSFS+LSTM can accurately forecast energy consumption with a root mean square error (RMSE) of 0.00013, which is the best among the recorded results of the other methods. In addition, statistical experiments are conducted to prove the statistical difference of the proposed model. The results of these tests confirmed the expected outcomes.
Managing physical objects in the network's periphery is made possible by the Internet of Things (IoT), revolutionizing human life. Open attacks and unauthorized access are possible with these IoT devices, which exchange data to enable remote access. These attacks are often detected using intrusion detection methodologies, although these systems' effectiveness and accuracy are subpar. This paper proposes a new voting classifier composed of an ensemble of machine learning models trained and optimized using metaheuristic optimization. The employed metaheuristic optimizer is a new version of the whale optimization algorithm (WOA), which is guided by the dipper throated optimizer (DTO) to improve the exploration process of the traditional WOA optimizer. The proposed voting classifier categorizes the network intrusions robustly and efficiently. To assess the proposed approach, a dataset created from IoT devices is employed to record the efficiency of the proposed algorithm for binary attack categorization. The dataset records are balanced using locality-sensitive hashing (LSH) and the Synthetic Minority Oversampling Technique (SMOTE). The evaluation of the achieved results is performed in terms of statistical analysis and visual plots to prove the proposed approach's effectiveness, stability, and significance. The achieved results confirmed the superiority of the proposed algorithm for the task of network intrusion detection.
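The SMOTE balancing step has a standard implementation in the imbalanced-learn package; a minimal sketch on synthetic data (the LSH guidance described in the paper is not part of this off-the-shelf call):

```python
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Synthetic imbalanced data: 9 normal records for every attack record.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))

# SMOTE synthesizes minority samples by interpolating between neighbors.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
print("after: ", Counter(y_bal))
```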
The Internet of Things (IoT) is a modern approach that enables connection with a wide variety of devices remotely. Due to the resource constraints and open nature of IoT nodes, the routing protocol for low power and lossy networks (RPL) may be vulnerable to several routing attacks. That is why a network intrusion detection system (NIDS) is needed to guard against routing assaults on RPL-based IoT networks. The imbalance between the false and valid attacks in the training set degrades the performance of the machine learning employed to detect network attacks. Therefore, we propose in this paper a novel approach to balance the dataset classes based on metaheuristic optimization applied to locality-sensitive hashing and the synthetic minority oversampling technique (LSH-SMOTE). The proposed optimization approach is based on a new hybrid between the grey wolf and dipper throated optimization algorithms. To prove the effectiveness of the proposed approach, a set of experiments were conducted to evaluate the performance of the NIDS for three cases, namely, detection without dataset balancing, detection with SMOTE balancing, and detection with the proposed optimized LSH-SMOTE balancing. Experimental results showed that the proposed approach outperforms the other approaches and could boost the detection accuracy. In addition, a statistical analysis is performed to study the significance and stability of the proposed approach. The conducted experiments include seven different types of attack cases in the RPL-NIDS17 dataset. Based on the proposed approach, the achieved accuracy is 98.1%, sensitivity is 97.8%, and specificity is 98.8%.
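Accuracy, sensitivity, and specificity all derive from the binary confusion matrix; a small sketch with scikit-learn (the label vectors are placeholders):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # 1 = attack, 0 = normal
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)  # true positive rate: attacks caught
specificity = tn / (tn + fp)  # true negative rate: normal traffic passed
print(accuracy, sensitivity, specificity)
```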
Applications of the internet-of-things (IoT) are increasingly being used in many facets of our daily life, which results in an enormous volume of data. Cloud computing and fog computing, two of the most common technologies used in IoT applications, have led to major security concerns. Cyberattacks are on the rise as a result of the usage of these technologies, since present security measures are insufficient. Several artificial intelligence (AI) based security solutions, such as intrusion detection systems (IDS), have been proposed in recent years. Intelligent technologies that require data preprocessing and machine learning algorithm-performance augmentation require the use of feature selection (FS) techniques to increase classification accuracy by minimizing the number of features selected. On the other hand, metaheuristic optimization algorithms have been widely used for feature selection in recent decades. In this paper, we propose a hybrid optimization algorithm for feature selection in IDS. The proposed algorithm is based on the grey wolf (GW) and dipper throated optimization (DTO) algorithms and is referred to as GWDTO. The proposed algorithm achieves a better balance between the exploration and exploitation steps of the optimization process and thus could achieve better performance. On the employed IoT-IDS dataset, the performance of the proposed GWDTO algorithm was assessed using a set of evaluation metrics and compared to other optimization approaches in the literature to validate its superiority. In addition, a statistical analysis is performed to assess the stability and effectiveness of the proposed approach. Experimental results confirmed the superiority of the proposed approach in boosting the classification accuracy of intrusion detection in IoT-based networks.
One of the most common kinds of cancer is breast cancer. Early detection may help lower its overall mortality rates. In this paper, we propose a novel approach for detecting and classifying breast cancer regions in thermal images. The proposed approach starts with preprocessing the input images and segmenting the significant regions of interest. In addition, to properly train the machine learning models, data augmentation is applied to increase the number of segmented regions using various scaling ratios. On the other hand, to extract the relevant features from the breast cancer cases, a set of deep neural networks (VGGNet, ResNet-50, AlexNet, and GoogLeNet) are employed. The resulting set of features is processed using the binary dipper throated algorithm to select the most effective features that can realize high classification accuracy. The selected features are used to train a neural network to finally classify the thermal images of breast cancer. To achieve accurate classification, the parameters of the employed neural network are optimized using the continuous dipper throated optimization algorithm. Experimental results show the effectiveness of the proposed approach in classifying breast cancer cases when compared to other recent approaches in the literature. Moreover, several experiments were conducted to compare the performance of the proposed approach with the other approaches, and the results of these experiments emphasized the superiority of the proposed approach.
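Using a pretrained CNN purely as a feature extractor means cutting off the classification head and treating the pooled activations as a feature vector. A hedged Keras sketch with ResNet-50, one of the four networks named above (the input shape and data are placeholders):

```python
import numpy as np
import tensorflow as tf

# Pretrained ResNet-50 without its classifier; global average pooling
# turns the final feature maps into a 2048-dimensional vector per image.
extractor = tf.keras.applications.ResNet50(weights="imagenet",
                                           include_top=False, pooling="avg")

# Placeholder batch of 8 preprocessed thermal images resized to 224x224 RGB.
images = np.random.rand(8, 224, 224, 3).astype("float32")
images = tf.keras.applications.resnet50.preprocess_input(images * 255.0)

features = extractor.predict(images, verbose=0)
print(features.shape)  # (8, 2048): inputs to feature selection / classifier
```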
Rainfall plays a significant role in managing the water level in a reservoir. Unpredictable amounts of rainfall due to climate change can cause a reservoir to either overflow or dry up. Many individuals, especially those in the agricultural sector, rely on rain forecasts. Forecasting rainfall is challenging because of the changing nature of the weather. The area of Jimma in southwest Oromia, Ethiopia is the subject of this research, which aims to develop a rainfall forecasting model. To estimate Jimma's daily rainfall, we propose a novel approach based on optimizing the parameters of long short-term memory (LSTM) using the Al-Biruni earth radius (BER) optimization algorithm to boost the forecasting accuracy. Nash-Sutcliffe efficiency (NSE), mean square error (MSE), root MSE (RMSE), mean absolute error (MAE), and R2 were all used in the conducted experiments to assess the proposed approach, with final scores of (0.61), (430.81), (19.12), and (11.09), respectively. Moreover, we compared the proposed model to current machine-learning regression models, such as non-optimized LSTM, bidirectional LSTM (BiLSTM), gated recurrent unit (GRU), and convolutional LSTM (ConvLSTM). It was found that the proposed approach achieved the lowest RMSE of (19.12). In addition, the experimental results show that the proposed model has an R2 value outperforming the other models, which confirms the superiority of the proposed approach. On the other hand, a statistical analysis is performed to measure the significance and stability of the proposed approach, and the recorded results proved the expected performance.
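Feeding daily rainfall to an LSTM requires turning the series into fixed-length input windows paired with next-day targets. A minimal sketch (the window length is an illustrative choice):

```python
import numpy as np

def make_windows(series, window=7):
    # Each sample is `window` consecutive days; the target is the next day.
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    # LSTM input shape: (samples, time steps, features)
    return np.array(X)[..., np.newaxis], np.array(y)

rainfall = np.random.rand(365)  # placeholder for one year of daily rainfall
X, y = make_windows(rainfall)
print(X.shape, y.shape)  # (358, 7, 1) (358,)
```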
As coronavirus disease (COVID-19) is still an ongoing global outbreak, countries around the world continue to take precautions and measures to control the spread of the pandemic. Because of the excessive number of infected patients and the resulting shortage of testing kits in hospitals, a rapid, reliable, and automatic detection of COVID-19 is in extreme need to curb the number of infections. By analyzing COVID-19 chest X-ray images, a novel metaheuristic approach is proposed based on hybrid dipper throated and particle swarm optimizers. The lung region was segmented from the original chest X-ray images and augmented using various transformation operations. Furthermore, the augmented images were fed into the VGG19 deep network for feature extraction. On the other hand, a feature selection method is proposed to select the most significant features that can boost the classification results. Finally, the selected features were input into an optimized neural network for detection. The neural network is optimized using the proposed hybrid optimizer. The experimental results showed that the proposed method achieved 99.88% accuracy, outperforming the existing COVID-19 detection models. In addition, a deep statistical analysis is performed to study the performance and stability of the proposed optimizer. The results confirm the effectiveness and superiority of the proposed approach.
Arrhythmia has been classified using a variety of methods. Because of the dynamic nature of electrocardiogram (ECG) data, traditional handcrafted approaches are difficult to execute, making machine learning (ML) solutions more appealing. Patients with cardiac arrhythmias can benefit from competent monitoring to save their lives. Cardiac arrhythmia classification and prediction have greatly improved in recent years. Arrhythmias are a category of conditions in which the heart's electrical activity is abnormally rapid or sluggish. Every year, they are one of the main causes of mortality for both men and women worldwide. For the classification of arrhythmias, this work proposes a novel technique based on optimized feature selection and an optimized K-nearest neighbors (KNN) classifier. The proposed method makes use of the UCI repository, which has a 279-attribute high-dimensional cardiac arrhythmia dataset. The proposed approach is based on dividing cardiac arrhythmia patients into 16 groups based on the electrocardiography dataset's features. The purpose is to design an efficient intelligent system employing the dipper throated optimization method to categorize cardiac arrhythmia patients. This method of comprehensive arrhythmia classification outperforms earlier methods presented in the literature. The classification accuracy achieved using the proposed approach is 99.8%.
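The KNN stage itself is standard; what the paper optimizes is the feature subset and the classifier settings (such as k). A minimal baseline sketch with scikit-learn (synthetic stand-in data, since loading the UCI arrhythmia file is dataset-specific):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 279-attribute arrhythmia data.
X, y = make_classification(n_samples=450, n_features=279, n_informative=30,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Feature scaling matters for KNN because it is distance-based.
scaler = StandardScaler().fit(X_tr)
knn = KNeighborsClassifier(n_neighbors=5)  # k would be tuned by the optimizer
knn.fit(scaler.transform(X_tr), y_tr)
print(knn.score(scaler.transform(X_te), y_te))
```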
Supply chain 4.0 refers to the fourth industrial revolution's supply chain management systems, which integrate the supply chain's manufacturing operations, information technology, and telecommunication processes. Although supply chain 4.0 aims to improve supply chains' production systems and profitability, it is subject to different operational and disruptive risks. Operational risks are a big challenge in the cycle of supply chain 4.0 for controlling the demand and supply operations to produce and deliver products across IT systems. This paper proposes a voting classifier to identify the operational risks in supply chain 4.0 based on a Sine Cosine Dynamic Group (SCDG) algorithm. The exploration and exploitation mechanisms of the basic Sine Cosine Algorithm (SCA) are adjusted and controlled by two groups of agents that can be changed dynamically during the iterations. External and internal features were collected and analyzed from different data sources of service level agreements and transaction data from various KSA firms to validate the proposed algorithm's efficiency. A balanced accuracy of 0.989 and a Mean Square Error (MSE) of 0.0476 were achieved compared with other optimization-based classifier techniques. One-way analysis of variance (ANOVA) and Wilcoxon rank-sum tests were performed to show the superiority of the proposed SCDG algorithm. Thus, the experimental results indicate the effectiveness of the proposed SCDG algorithm-based voting classifier.
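A voting classifier aggregates the predictions of several base models; in soft voting, class probabilities are averaged. A minimal scikit-learn sketch (the base models and data are illustrative, not the paper's tuned ensemble):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=15, random_state=0)

# Soft voting averages the predicted class probabilities of the base models.
voter = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier())],
    voting="soft")
print(cross_val_score(voter, X, y, cv=5).mean())
```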
Machine learning (ML) has taken the world by storm with its prevalent applications in automating ordinary tasks and extracting useful insights throughout scientific research and design workflows. ML is a massive area within artificial intelligence (AI) that focuses on obtaining valuable information from data, explaining why ML has often been related to statistics and data science. An advanced meta-heuristic optimization algorithm is proposed in this work for the optimization problem of antenna architecture design. The algorithm is designed as a hybrid between the Sine Cosine Algorithm (SCA) and the Grey Wolf Optimizer (GWO) to train a neural network-based Multilayer Perceptron (MLP). The proposed optimization algorithm is a practical, versatile, and trustworthy platform for recognizing the design parameters of an endorsement double T-shaped monopole antenna in an optimal way. The proposed algorithm is also evaluated through a comparative and statistical analysis using different curves in addition to ANOVA and t-tests, which assess the stability and validity of the predicted results and verify the procedure's accuracy.
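Training an MLP with a metaheuristic means treating the flattened weight vector as the search space and the training loss as the fitness. A minimal NumPy sketch of that encoding (the tiny architecture and random-perturbation search stand in for the SCA-GWO hybrid, whose update equations are not given here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 6  # tiny illustrative MLP: 4 inputs, 6 hidden units, 1 output
X = rng.random((100, n_in))
y = (X.sum(axis=1) > 2).astype(float)  # synthetic binary target

def unpack(w):
    # Split a flat vector into the MLP's weight matrices and biases.
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid]; i += n_hid
    b2 = w[i]
    return W1, b1, W2, b2

def fitness(w):
    # Fitness = mean squared training error of the decoded network.
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return np.mean((out - y) ** 2)

dim = n_in * n_hid + n_hid + n_hid + 1
best = rng.normal(size=dim)
for _ in range(200):  # perturbation search in place of the SCA-GWO updates
    cand = best + rng.normal(scale=0.1, size=dim)
    if fitness(cand) < fitness(best):
        best = cand
print(fitness(best))
```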
A metamaterial antenna is a subclass of antennas that makes use of metamaterials to improve performance. Metamaterial antennas can overcome the bandwidth constraint associated with tiny antennas. Machine learning is receiving a lot of interest for optimizing solutions in a variety of areas; machine learning methods are already a significant component of ongoing research and are anticipated to play a critical role in today's technology. The accuracy of the forecast is mostly determined by the model used. The purpose of this article is to provide an optimal ensemble model for predicting the bandwidth and gain of a metamaterial antenna. Support Vector Machines (SVM), Random Forest, K-Neighbors Regressor, and Decision Tree Regressor were utilized as the basic models. The Adaptive Dynamic Polar Rose Guided Whale Optimization method, named AD-PRS-Guided WOA, was used to pick the optimal features from the datasets. The suggested model is compared to models based on five variables and to the average ensemble model. The findings indicate that the presented model using Random Forest achieves a Root Mean Squared Error (RMSE) of 0.0102 for bandwidth and an RMSE of 0.0891 for gain, which is superior to the other models and can accurately predict antenna bandwidth and gain.
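Predicting bandwidth and gain together is a multi-output regression problem, which RandomForestRegressor supports natively in scikit-learn. A minimal sketch (the design-parameter features and targets are synthetic placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((300, 6))                 # placeholder design parameters
Y = np.column_stack([X @ rng.random(6),  # column 0: "bandwidth"
                     X @ rng.random(6)]) # column 1: "gain"
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, Y_tr)
pred = rf.predict(X_te)
for name, col in (("bandwidth", 0), ("gain", 1)):
    print(name, "RMSE:", mean_squared_error(Y_te[:, col], pred[:, col]) ** 0.5)
```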