Open networks and heterogeneous services in the Internet of Vehicles (IoV) can lead to security and privacy challenges. One key requirement for such systems is the preservation of user privacy, ensuring a seamless experience in driving, navigation, and communication. These privacy needs are influenced by various factors, such as data collected at different intervals, trip durations, and user interactions. To address this, the paper proposes a Support Vector Machine (SVM) model designed to process large amounts of aggregated data and recommend privacy-preserving measures. The model analyzes data based on user demands and interactions with service providers or neighboring infrastructure. It aims to minimize privacy risks while ensuring service continuity and sustainability. The SVM model helps validate the system's reliability by creating a hyperplane that distinguishes between maximum and minimum privacy recommendations. The results demonstrate the effectiveness of the proposed SVM model in enhancing both privacy and service performance.
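The hyperplane-based recommendation described above can be sketched with a linear SVM. This is an illustrative toy, not the paper's model: the feature columns (collection interval, trip duration, provider interactions) and the labeling rule are invented stand-ins for the aggregated IoV data.

```python
# Hypothetical sketch: a linear SVM separating "maximum" (1) vs "minimum" (0)
# privacy recommendations from aggregated interaction features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# columns: data-collection interval, trip duration, provider interactions
X = rng.normal(size=(200, 3))
# toy rule: long trips with frequent interactions need maximum privacy
y = (X[:, 1] + X[:, 2] > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy of the separating hyperplane
```

The fitted hyperplane plays the role of the decision boundary between the two recommendation levels.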
In engineering practice, it is often necessary to determine functional relationships between dependent and independent variables. These relationships can be highly nonlinear, and classical regression approaches cannot always provide sufficiently reliable solutions. Nevertheless, Machine Learning (ML) techniques, which offer advanced regression tools to address complicated engineering issues, have been developed and widely explored. This study investigates selected ML techniques to evaluate their suitability for application to the hot deformation behavior of metallic materials. The ML-based regression methods of Artificial Neural Networks (ANNs), Support Vector Machine (SVM), Decision Tree Regression (DTR), and Gaussian Process Regression (GPR) are applied to mathematically describe hot flow stress curve datasets acquired experimentally for a medium-carbon steel. Although the GPR method has not been used for such a regression task before, the results showed that its performance is the most favorable and practically unrivaled; neither the ANN method nor the other studied ML techniques provide such precise results for the solved regression analysis.
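A minimal sketch of the GPR approach, assuming a synthetic flow-stress-shaped curve in place of the experimental medium-carbon-steel dataset, which is not reproduced here:

```python
# Gaussian Process Regression fitting a flow-stress-like curve
# (work hardening followed by softening); data is synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

strain = np.linspace(0.05, 0.9, 40).reshape(-1, 1)
# toy flow stress curve in MPa-like units
stress = 120 * np.sqrt(strain.ravel()) * np.exp(-0.6 * strain.ravel())

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(1e-4), normalize_y=True)
gpr.fit(strain, stress)
pred, std = gpr.predict(strain, return_std=True)
print(np.abs(pred - stress).max())  # near-interpolating fit on the smooth curve
```

A practical advantage of GPR here is the `std` output: it quantifies predictive uncertainty along the strain axis, which point-estimate regressors such as ANNs do not provide out of the box.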
Determination of Shear Bond Strength (SBS) at the interlayer of double-layer asphalt concrete is crucial in flexible pavement structures. The study used three Machine Learning (ML) models, K-Nearest Neighbors (KNN), Extra Trees (ET), and Light Gradient Boosting Machine (LGBM), to predict SBS based on easily determinable input parameters. The Grid Search technique was employed for hyper-parameter tuning of the ML models, and cross-validation and learning curve analysis were used for training them. The models were built on a database of 240 experimental results and three input variables: temperature, normal pressure, and tack coat rate. Model validation was performed using three statistical criteria: the coefficient of determination (R2), the Root Mean Square Error (RMSE), and the Mean Absolute Error (MAE). Additionally, SHAP (Shapley Additive Explanations) analysis was used to validate the importance of the input variables in the prediction of SBS. Results show that these models accurately predict SBS, with LGBM providing outstanding performance. SHAP analysis for LGBM indicates that temperature is the most influential factor on SBS. Consequently, the proposed ML models can quickly and accurately predict SBS between two layers of asphalt concrete, serving practical applications in flexible pavement structure design.
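The Grid Search plus cross-validation step can be sketched as below for the KNN model (LGBM would follow the same pattern through its sklearn wrapper). The synthetic (temperature, pressure, tack-coat rate) → SBS data and the parameter grid are invented for illustration; they are not the paper's 240 experimental results.

```python
# Hyper-parameter tuning of a KNN regressor with Grid Search and 5-fold CV.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
# columns: temperature [°C], normal pressure [MPa], tack coat rate [kg/m2]
X = rng.uniform([10, 0.1, 0.2], [60, 0.7, 1.0], size=(240, 3))
y = 2.5 - 0.03 * X[:, 0] + 1.2 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(0, 0.05, 240)

grid = GridSearchCV(KNeighborsRegressor(),
                    {"n_neighbors": [3, 5, 7], "weights": ["uniform", "distance"]},
                    cv=5, scoring="r2")
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```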
In order to study the characteristics of pure fly ash-based geopolymer concrete (PFGC) conveniently, we used a machine learning method that can quantify the perception of characteristics to predict its compressive strength. In this study, 505 groups of data were collected, and a new database of the compressive strength of PFGC was constructed. To establish an accurate prediction model of compressive strength, five different types of machine learning networks were used for comparative analysis. All five machine learning models showed good compressive strength prediction performance on PFGC. Among them, the R2, MSE, RMSE, and MAE of the decision tree model (DT) are 0.99, 1.58, 1.25, and 0.25, respectively, while those of the random forest model (RF) are 0.97, 5.17, 2.27, and 1.38, respectively. The two models have high prediction accuracy and outstanding generalization ability. To enhance the interpretability of model decision-making, we used importance ranking to obtain the perception of the machine learning models of 13 variables. These 13 variables include the chemical composition of fly ash (SiO2/Al2O3, Si/Al), the ratio of alkaline liquid to the binder, curing temperature, curing duration inside the oven, fly ash dosage, fine aggregate dosage, coarse aggregate dosage, extra water dosage, and sodium hydroxide dosage. Curing temperature, specimen age, and curing duration inside the oven have the greatest influence on the prediction results, indicating that curing conditions have a more prominent influence on the compressive strength of PFGC than on that of ordinary Portland cement concrete. The importance of the curing conditions of PFGC even exceeds that of the concrete mix proportion, due to the low reactivity of pure fly ash.
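The four reported metrics (R2, MSE, RMSE, MAE) can be computed as below for a decision-tree regressor. The compressive-strength data is synthetic, standing in for the 505-sample PFGC database, and the feature columns are placeholders for the mix and curing variables.

```python
# Decision-tree regression with the R2 / MSE / RMSE / MAE evaluation metrics.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(2)
X = rng.uniform(size=(300, 4))          # stand-ins for mix/curing variables
y = 30 + 25 * X[:, 0] + 10 * X[:, 1] ** 2 + rng.normal(0, 1.0, 300)  # MPa-like

model = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X, y)
pred = model.predict(X)
mse = mean_squared_error(y, pred)
print(r2_score(y, pred), mse, np.sqrt(mse), mean_absolute_error(y, pred))
```

Note that RMSE is simply the square root of MSE, so only one of the two needs to be computed from scratch.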
The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst design platform. High-throughput screening of catalytic performance is feasible since a large MOF structure database is available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO2 cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, which leads to a high accuracy of up to 97% when the 75% quantile of the training set is used as the classification criterion. The feature contributions were further evaluated with SHAP and PDP analysis to provide a degree of physical understanding. Using the model, 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated at 100 °C and 1 bar within one day, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top experimental performance among reported MOFs, in good agreement with the prediction.
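The quantile-based labeling rule can be sketched as follows: structures whose activity exceeds the 75% quantile of the training set are labeled "efficient", and a classifier is trained on the descriptors. The five descriptor columns and the activity model are invented stand-ins, not the paper's mechanism-derived descriptors.

```python
# Top-quartile labeling followed by classification, as in the screening setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 5))               # stand-in MOF descriptors
activity = X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 500)
y = (activity > np.quantile(activity, 0.75)).astype(int)  # 75%-quantile label

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.score(X, y))
```

Framing screening as classification against a quantile threshold sidesteps the harder task of predicting absolute catalytic activity.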
Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws/diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Most extracted image features are irrelevant and lead to increased computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method to select the most relevant image features. This process trains the learning paradigm using similarity- and correlation-based features over different textural intensities and pixel distributions. The similarity between pixels over the various distribution patterns with high indexes is recommended for disease diagnosis. Later, the correlation based on intensity and distribution is analyzed to improve the feature selection congruency. The more congruent pixels are therefore sorted in descending order of selection, which identifies better regions than the distribution. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection. Therefore, the probability of feature selection, regardless of the textures and medical image patterns, is improved. This process enhances the performance of ML applications for different medical image processing tasks. The proposed method improves the accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models for the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
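A minimal correlation-based feature-ranking sketch in the spirit of the selection described above: score each candidate feature by the magnitude of its correlation with the target and keep the top-k in descending order. This is an illustrative simplification with invented data, not the paper's congruent selection method.

```python
# Rank features by |correlation with target| and keep the two best.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 10))                  # stand-in pixel/texture features
y = 2 * X[:, 2] - X[:, 7] + rng.normal(0, 0.5, 400)

scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
top_k = np.argsort(scores)[::-1][:2]            # descending order of congruency
print(sorted(top_k.tolist()))  # → [2, 7], the two informative features
```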
Diabetic retinopathy (DR) remains a leading cause of vision impairment and blindness among individuals with diabetes, necessitating innovative approaches to screening and management. This editorial explores the transformative potential of artificial intelligence (AI) and machine learning (ML) in revolutionizing DR care. AI and ML technologies have demonstrated remarkable advancements in enhancing the accuracy, efficiency, and accessibility of DR screening, helping to overcome barriers to early detection. These technologies leverage vast datasets to identify patterns and predict disease progression with unprecedented precision, enabling clinicians to make more informed decisions. Furthermore, AI-driven solutions hold promise in personalizing management strategies for DR, incorporating predictive analytics to tailor interventions and optimize treatment pathways. By automating routine tasks, AI can reduce the burden on healthcare providers, allowing for a more focused allocation of resources towards complex patient care. This review aims to evaluate the current advancements and applications of AI and ML in DR screening, and to discuss the potential of these technologies in developing personalized management strategies, ultimately aiming to improve patient outcomes and reduce the global burden of DR. The integration of AI and ML in DR care represents a paradigm shift, offering a glimpse into the future of ophthalmic healthcare.
Every second, a large volume of useful data is created on social media about various kinds of online purchases and in other forms of reviews. In particular, purchased-product review data is growing enormously in different database repositories every day. Most review data are useful to new customers for their further purchases, as well as to existing companies for viewing customer feedback about various products. Data Mining and Machine Learning techniques are commonly used to analyse such data in order to visualise and understand the potential of items purchased online. Customers convey the quality of products through their sentiments about the items purchased from different online companies. In this research work, the sentiments of headphone review data, collected from online repositories, are analysed. For this analysis, Machine Learning techniques such as Support Vector Machines, Naive Bayes, Decision Trees, and Random Forest algorithms, as well as a hybrid method, are applied to assess quality via the customers' sentiments. The accuracy and performance of the chosen algorithms are also analysed based on three types of sentiment: positive, negative, and neutral.
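A toy version of such a review-sentiment pipeline, using TF-IDF features and one of the listed classifiers (Naive Bayes) over the three sentiment classes. The six mini-reviews are invented; the paper's headphone dataset is not reproduced here.

```python
# TF-IDF + Multinomial Naive Bayes over positive/negative/neutral reviews.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = ["great sound quality", "terrible bass and broke fast",
           "okay for the price", "love these headphones", "awful fit",
           "average build, average sound"]
labels = ["positive", "negative", "neutral", "positive", "negative", "neutral"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(reviews, labels)
print(clf.predict(["great headphones, love the sound"]))  # classify a new review
```

In practice each of the listed algorithms would be dropped into the same pipeline and compared on held-out accuracy per sentiment class.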
Machine learning (ML) is a type of artificial intelligence that assists computers in the acquisition of knowledge through data analysis, thus creating machines that can complete tasks otherwise requiring human intelligence. Among its various applications, it has proven groundbreaking in healthcare as well, both in clinical practice and research. In this editorial, we succinctly introduce ML applications and present a study featured in the latest issue of the World Journal of Clinical Cases. The authors of this study conducted an analysis using both multiple linear regression (MLR) and ML methods to investigate the significant factors that may impact the estimated glomerular filtration rate in healthy women with and without non-alcoholic fatty liver disease (NAFLD). Their results implicated age as the most important determining factor in both groups, followed by lactic dehydrogenase, uric acid, forced expiratory volume in one second, and albumin. In addition, for the NAFLD- group, the 5th and 6th most important impact factors were thyroid-stimulating hormone and systolic blood pressure, as compared to plasma calcium and body fat for the NAFLD+ group. However, the study's distinctive contribution lies in its adoption of ML methodologies, showcasing their superiority over traditional statistical approaches (herein MLR), thereby highlighting the potential of ML to represent an invaluable advanced adjunct tool in clinical practice and research.
Critical to the safe, efficient, and reliable operation of an autonomous maritime vessel is its ability to perceive the external environment through onboard sensors. For this research, data was collected from a LiDAR sensor installed on a 16-foot catamaran unmanned vessel. This sensor generated point clouds of the surrounding maritime environment, which were then labeled by hand for training a machine learning (ML) model to perform a semantic segmentation task on LiDAR scans. In particular, the developed semantic segmentation classifies each point-cloud point as belonging to a certain buoy type. This paper describes the developed Unity Game Engine (Unity) simulation to emulate the maritime environment perceived by LiDAR, with the goal of generating large (automatically labeled) simulation datasets and improving ML model performance, since hand-labeled real-life LiDAR scan data may be scarce. The Unity simulation data combined with labeled real-life point cloud data was used for a PointNet-based neural network model, the architecture of which is presented in this paper. Fitting the PointNet-based model on the simulation data, followed by fine-tuning on the combined dataset, allowed for accurate semantic segmentation of point clouds on the real-world data. The ML model performance on several combinations of simulation and real-life data is explored. The resulting Intersection over Union (IoU) metric scores are quite high, ranging between 0.78 and 0.89, when validated on simulation and real-life data. The confusion-matrix entries indicate an accurate semantic segmentation of the buoy types.
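The per-class Intersection over Union score used for validation can be computed as below. The label arrays are toy stand-ins for real per-point LiDAR annotations.

```python
# Per-class IoU: intersection over union of predicted vs. true point labels.
import numpy as np

def class_iou(y_true, y_pred, cls):
    t, p = (y_true == cls), (y_pred == cls)
    union = np.logical_or(t, p).sum()
    return np.logical_and(t, p).sum() / union if union else float("nan")

y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2])   # buoy-type label per point
y_pred = np.array([0, 1, 1, 1, 1, 2, 2, 0])

print([round(float(class_iou(y_true, y_pred, c)), 3) for c in (0, 1, 2)])
# → [0.333, 0.75, 0.667]
```

Averaging these per-class values gives the mean IoU commonly reported for segmentation models.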
Fabric dyeing is a critical production process in the clothing industry and heavily relies on batch processing machines (BPM). In this study, the parallel BPM scheduling problem with machine eligibility in fabric dyeing is considered, and an adaptive cooperated shuffled frog-leaping algorithm (ACSFLA) is proposed to minimize makespan and total tardiness simultaneously. ACSFLA determines the number of searches for each memeplex based on its quality, with more searches in high-quality memeplexes. An adaptive cooperated and diversified search mechanism is applied, dynamically adjusting the search strategy for each memeplex based on its dominance relationships and quality. During the cooperated search, ACSFLA uses a segmented and dynamic targeted search approach, while in non-cooperated scenarios, the search focuses on local search around superior solutions to improve efficiency. Furthermore, ACSFLA employs adaptive population division and partial population shuffling strategies. Through these strategies, memeplexes with low evolutionary potential are selected for reconstruction in the next generation, while those with high evolutionary potential are retained to continue their evolution. To evaluate the performance of ACSFLA, comparative experiments were conducted using ACSFLA, SFLA, ASFLA, MOABC, and NSGA-CC on 90 instances. The computational results reveal that ACSFLA outperforms the other algorithms in 78 of the 90 test cases, highlighting its advantages in solving the parallel BPM scheduling problem with machine eligibility.
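For readers unfamiliar with the shuffled frog-leaping family, the basic memeplex-partition step of standard SFLA (which ACSFLA builds on and adapts) can be sketched as follows: frogs sorted by fitness are dealt round-robin into memeplexes, so each memeplex spans the whole quality range. Population size, memeplex count, and fitness values here are toy stand-ins.

```python
# Standard SFLA partition: sort by fitness, deal round-robin into memeplexes.
import numpy as np

rng = np.random.default_rng(10)
fitness = rng.uniform(size=12)                 # lower = better (e.g. makespan)
n_memeplexes = 3

order = np.argsort(fitness)                    # best frog first
memeplexes = [order[i::n_memeplexes] for i in range(n_memeplexes)]
# memeplexes get fitness ranks {0,3,6,9}, {1,4,7,10}, {2,5,8,11} respectively
print([len(m) for m in memeplexes])  # → [4, 4, 4]
```

ACSFLA then departs from this baseline by searching high-quality memeplexes more often and rebuilding low-potential ones.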
BACKGROUND: Machine learning (ML), a major branch of artificial intelligence, has not only demonstrated the potential to significantly improve numerous sectors of healthcare but has also made significant contributions to the field of solid organ transplantation. ML provides revolutionary opportunities in areas such as donor-recipient matching, post-transplant monitoring, and patient care by automatically analyzing large amounts of data, identifying patterns, and forecasting outcomes. AIM: To conduct a comprehensive bibliometric analysis of publications on the use of ML in transplantation to understand current research trends and their implications. METHODS: On July 18, a thorough search strategy was used with the Web of Science database, utilizing ML- and transplantation-related keywords. With the aid of the VOSviewer application, the identified articles were subjected to bibliometric variable analysis in order to determine publication counts, citation counts, contributing countries, and institutions, among other factors. RESULTS: Of the 529 articles initially identified, 427 were deemed relevant for bibliometric analysis. A surge in publications was observed over the last four years, especially after 2018, signifying growing interest in this area. With 209 publications, the United States emerged as the top contributor. Notably, the "Journal of Heart and Lung Transplantation" and the "American Journal of Transplantation" emerged as the leading journals, publishing the highest number of relevant articles. Frequent keyword searches revealed that patient survival, mortality, outcomes, allocation, and risk assessment were significant themes of focus. CONCLUSION: The growing body of pertinent publications highlights ML's growing presence in the field of solid organ transplantation. This bibliometric analysis underscores the growing importance of ML in transplant research and its exciting potential to change medical practices and enhance patient outcomes. Encouraging collaboration between significant contributors can potentially fast-track advancements in this interdisciplinary domain.
The purpose of this research paper is to explore how early Machine Learning models have shown bias in results where no bias should be seen. A prime example is an ML model that favors male applicants over female applicants. While the model is supposed to take into consideration other aspects of the data, it tends to have a bias and skew the results one way or another. Therefore, in this paper, we explore how this bias comes about and how it can be fixed. In this research, I have taken different case studies of real-world examples of these biases being shown. For example, an Amazon hiring application that favored male applicants and a loan application that favored Western applicants are both studies that I reference in this paper, exploring each situation itself. In order to find out where the bias comes from, I have constructed a machine learning model that uses a dataset found on Kaggle, and I analyze the results of said ML model. The results the research has yielded clarify the reason for said bias in artificial intelligence models: the way a model is trained influences the way the results play out. If a model is trained with far more male applicant data than female applicant data, it will favor male applicants. Therefore, when presented with new data, such models are likely to accept male applications over female ones despite equivalent qualifications. Later in the paper, I dive deeper into the way AI applications work and how they find biases and trends in order to classify things correctly. However, there is a fine line between classification and bias, and making sure that bias is rightfully corrected and tested is important in machine learning today.
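The training-data effect described above can be reproduced in a few lines with invented data: when the historical labels already favor one group at equal qualification, a classifier fitted to them learns the group attribute as a predictor. This toy is illustrative only; it is not the paper's Kaggle model.

```python
# A classifier trained on historically biased labels scores equally
# qualified applicants differently by group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 1000
g = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B
q = rng.normal(size=n)                    # qualification score
# historical bias: group B was accepted more often at the same qualification
y = (q + 0.8 * g + rng.normal(0, 0.3, n) > 0.4).astype(int)

clf = LogisticRegression().fit(np.column_stack([q, g]), y)
pa, pb = clf.predict_proba([[0.5, 0], [0.5, 1]])[:, 1]
print(pa < pb)  # equally qualified, yet group B scores higher
```

Dropping the group column is not by itself a fix, since other features can act as proxies for it; this is why the paper stresses testing for bias explicitly.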
Solar cells made from perovskites have experienced rapid development as examples of third-generation solar cells in recent years. The traditional trial-and-error method is inefficient, and the search space is incredibly large. This makes developing advanced perovskite materials, as well as achieving high conversion efficiencies and stability of perovskite solar cells (PSCs), a challenging task. A growing number of data-driven machine learning (ML) applications are being developed in the materials science field, due to the availability of large databases and increased computing power. There are many advantages associated with the use of machine learning to predict the properties of potential perovskite materials, as well as to provide additional knowledge on how these materials work, in order to fast-track their progress. Thus, the purpose of this paper is to develop a conceptual model for improving the efficiency of a perovskite solar cell using machine learning techniques. This study relies on the application of design science as its research method. The developed model consists of six phases: data collection and preprocessing, feature selection and engineering, model training and evaluation, performance assessment, optimization and fine-tuning, and deployment and application. As a result of this model, there is a great deal of promise in advancing the field of perovskite solar cells, as well as providing a basis for developing more efficient and cost-effective solar energy technologies in the future.
The rapid growth of machine learning (ML) across fields has intensified the challenge of selecting the right algorithm for specific tasks, known as the Algorithm Selection Problem (ASP). Traditional trial-and-error methods have become impractical due to their resource demands. Automated Machine Learning (AutoML) systems automate this process, but often neglect the group structures and sparsity in meta-features, leading to inefficiencies in algorithm recommendations for classification tasks. This paper proposes a meta-learning approach using Multivariate Sparse Group Lasso (MSGL) to address these limitations. Our method models both within-group and across-group sparsity among meta-features to manage high-dimensional data and reduce multicollinearity across eight meta-feature groups. The Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) with adaptive restart efficiently solves the non-smooth optimization problem. Empirical validation on 145 classification datasets with 17 classification algorithms shows that our meta-learning method outperforms four state-of-the-art approaches, achieving 77.18% classification accuracy, 86.07% recommendation accuracy, and 88.83% normalized discounted cumulative gain.
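As a concrete illustration of the optimizer involved, the sketch below runs plain FISTA on an ordinary lasso problem (soft-thresholding proximal step plus the momentum update). The paper's objective is the more elaborate multivariate sparse *group* lasso with adaptive restart, which only changes the proximal operator and adds a restart test; the data here is synthetic.

```python
# FISTA for lasso: min_x 0.5||Ax - b||^2 + lam * ||x||_1
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(80, 20))
x_true = np.zeros(20); x_true[[1, 5]] = [2.0, -3.0]       # sparse ground truth
b = A @ x_true + rng.normal(0, 0.05, 80)

lam, L = 1.0, np.linalg.norm(A, 2) ** 2                    # Lipschitz constant
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, y, t = np.zeros(20), np.zeros(20), 1.0
for _ in range(300):                                       # FISTA iterations
    x_new = soft(y - A.T @ (A @ y - b) / L, lam / L)       # proximal gradient step
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    y = x_new + (t - 1) / t_new * (x_new - x)              # momentum extrapolation
    x, t = x_new, t_new

print(np.flatnonzero(np.abs(x) > 0.1))  # recovers the sparse support [1, 5]
```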
The first cases of COVID-19 in Bangladesh were reported in March 2020, and case numbers increased rapidly day by day. The Bangladesh government took many steps to reduce the outbreak of COVID-19, such as mask requirements and restrictions on gatherings, local movement, and international travel. The data was collected from the World Health Organization. In this research, different variables have been used for analysis, for instance: new cases, new deaths, masks, schools, business, gatherings, domestic movement, international travel, new tests, positive rate, tests per case, new vaccinations smoothed, new vaccines, total vaccinations, and stringency index. Machine learning algorithms were used to predict and build the model, such as linear regression, K-nearest neighbours, decision trees, random forests, and support vector machines. Accuracy and Mean Square Error (MSE) were used to test the model. Hyperparameter tuning was also applied to find the optimal parameter values. After computing the analysis, the results showed that the linear regression algorithm performs best overall among the listed algorithms, with the highest testing accuracy and the lowest RMSE before and after hyper-tuning. The highest accuracy and lowest MSE were used to select the best model; for this dataset, linear regression achieved the highest accuracy (0.98 and 0.97) and the lowest MSE (4.79 and 4.04), respectively.
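The evaluation loop described above can be sketched as: fit linear regression on a train split, then report R2 (the "accuracy" score of sklearn regressors) and MSE on the test split. The epidemic-style data below is simulated, not the WHO dataset, and the feature columns are placeholders.

```python
# Train/test evaluation of linear regression with R2 and MSE.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(6)
X = rng.uniform(size=(365, 4))          # stand-ins: tests, positivity, policy...
y = 50 + 300 * X[:, 0] + 120 * X[:, 1] + rng.normal(0, 5, 365)  # daily new cases

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print(model.score(X_te, y_te), mean_squared_error(y_te, model.predict(X_te)))
```

Hyperparameter search (e.g. over the tree or SVM models the study compares) would wrap this same split in a tuner such as `GridSearchCV`.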
Accurately estimating the State of Health (SOH) and Remaining Useful Life (RUL) of lithium-ion batteries (LIBs) is crucial for the continuous and stable operation of battery management systems. However, due to the complex internal chemical systems of LIBs and the nonlinear degradation of their performance, direct measurement of SOH and RUL is challenging. To address these issues, the Twin Support Vector Machine (TWSVM) method is proposed to predict SOH and RUL. Initially, the constant-current charging time of the lithium battery is extracted as a health indicator (HI) and decomposed using Variational Mode Decomposition (VMD), and feature correlations are computed using Random Forest (RF) feature importance to maximize the extraction of critical factors influencing battery performance degradation. Furthermore, to enhance the global search capability of the Convolution Optimization Algorithm (COA), improvements are made using Good Point Set theory and the Differential Evolution method. The Improved Convolution Optimization Algorithm (ICOA) is employed to optimize the TWSVM parameters for constructing the SOH and RUL prediction models. Finally, the proposed models are validated using the NASA and CALCE lithium-ion battery datasets. Experimental results demonstrate that the proposed models achieve an RMSE not exceeding 0.007 and a MAPE not exceeding 0.0082 for SOH and RUL prediction, with a relative error in RUL prediction within the range of [-1.8%, 2%]. Compared to other models, the proposed model not only exhibits superior fitting capability but also demonstrates robust performance.
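The feature-screening step can be sketched as below: rank candidate health-indicator components by random-forest importance so that the most degradation-relevant ones are kept. The synthetic "capacity fade" target and six candidate components are invented stand-ins for real VMD-decomposed battery features.

```python
# Rank candidate HI components by random-forest feature importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 6))                    # candidate HI components
y = 1.0 - 0.3 * X[:, 0] - 0.1 * X[:, 3] + rng.normal(0, 0.02, 300)  # capacity

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
print(ranking[:2])  # the two informative components (0 and 3) should rank first
```

The retained components would then feed the TWSVM predictor, whose hyperparameters the paper tunes with the ICOA optimizer.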
To guarantee safe and efficient tunneling of a tunnel boring machine (TBM), rapid and accurate judgment of the rock mass condition is essential. Based on fuzzy C-means clustering, this paper proposes a grouped machine learning method for predicting rock mass parameters. An elaborate data set on field rock mass is collected, which also matches field TBM tunneling. Meanwhile, target stratum samples are divided into several clusters by fuzzy C-means clustering, and multiple submodels are trained on the samples in different clusters, with pretreated TBM tunneling data as input and rock mass parameter data as output. Each testing sample or newly encountered tunneling condition can be predicted by the multiple submodels, weighted by the membership degree of the sample to each cluster. The proposed method has been realized with 100 training samples and verified with 30 testing samples collected from the C1 part of the Pearl Delta water resources allocation project. The average percentage error of uniaxial compressive strength and joint frequency (Jf) for the 30 testing samples predicted by the pure back propagation (BP) neural network is 13.62% and 12.38%, while that predicted by the BP neural network combined with fuzzy C-means is 7.66% and 6.40%, respectively. In addition, by combining fuzzy C-means clustering, the prediction accuracies of support vector regression and random forest are also improved to different degrees, which demonstrates that fuzzy C-means clustering is helpful for improving the prediction accuracy of machine learning and thus has good applicability. Accordingly, the proposed method is valuable for predicting rock mass parameters during TBM tunneling.
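The membership-weighted prediction described above can be sketched in a few lines: a new sample's output is the fuzzy C-means membership-weighted average of the per-cluster submodels. The two cluster centers and the constant submodels below are toy stand-ins for trained cluster centers and BP/SVR/RF submodels.

```python
# Fuzzy C-means membership weighting of per-cluster submodel predictions.
import numpy as np

def memberships(x, centers, m=2.0):
    """Fuzzy C-means membership of x in each cluster (standard formula)."""
    d = np.linalg.norm(centers - x, axis=1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

centers = np.array([[0.0, 0.0], [5.0, 5.0]])     # two stratum clusters (toy)
submodels = [lambda x: 10.0, lambda x: 40.0]     # per-cluster predictors (toy)

x = np.array([1.0, 1.0])                          # new sample, nearer cluster 0
u = memberships(x, centers)
pred = sum(ui * f(x) for ui, f in zip(u, submodels))
print(round(pred, 2))  # dominated by cluster 0's submodel
```

Because memberships are soft, samples near a cluster boundary blend several submodels instead of switching abruptly between them.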
The significance of precise energy usage forecasts has been highlighted by the increasing need for sustainability and energy efficiency across a range of industries. In order to improve the precision and openness of energy consumption projections, this study investigates the combination of machine learning (ML) methods with Shapley Additive Explanations (SHAP) values. The study evaluates three distinct models: a Linear Regressor, a Support Vector Regressor, and a Decision Tree Regressor, the last of which was extended to a Random Forest Regressor. These models were deployed together with SHAP-based Explainable Artificial Intelligence techniques to improve trust in the AI. The findings suggest that our developed models are superior to the conventional models discussed in prior studies, with Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) values close to ideal. In detail, the Random Forest Regressor achieves an MAE of 0.001, whereas the SVR gives an MAE of 0.21 and an RMSE of 0.24. Such outcomes reflect the possibility of using the advanced AI models together with Explainable AI both for more accurate prediction of energy consumption and for explaining the models' decision-making procedures. In addition to increasing prediction accuracy, this strategy gives stakeholders comprehensible insights, which facilitates improved decision-making and fosters confidence in AI-powered energy solutions. The outcomes show how well ML and SHAP work together to enhance prediction performance and guarantee transparency in energy usage projections.
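SHAP itself requires the third-party `shap` package; as a library-light stand-in, the sketch below uses permutation importance, a related model-agnostic attribution, on a random-forest regressor over energy-style features. The data and feature meanings are invented for illustration.

```python
# Model-agnostic attribution via permutation importance (SHAP analogue).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(8)
X = rng.normal(size=(400, 5))                 # e.g. temperature, occupancy, ...
y = 3 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 400)  # consumption

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=5, random_state=0)
print(np.argmax(imp.importances_mean))  # feature 0 dominates the prediction
```

With `shap` installed, `shap.TreeExplainer(rf)` would additionally give signed, per-sample attributions rather than a single global ranking.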
“Flying Ad Hoc Networks(FANETs)”,which use“Unmanned Aerial Vehicles(UAVs)”,are developing as a critical mechanism for numerous applications,such as military operations and civilian services.The dynamic nature of F...“Flying Ad Hoc Networks(FANETs)”,which use“Unmanned Aerial Vehicles(UAVs)”,are developing as a critical mechanism for numerous applications,such as military operations and civilian services.The dynamic nature of FANETs,with high mobility,quick node migration,and frequent topology changes,presents substantial hurdles for routing protocol development.Over the preceding few years,researchers have found that machine learning gives productive solutions in routing while preserving the nature of FANET,which is topology change and high mobility.This paper reviews current research on routing protocols and Machine Learning(ML)approaches applied to FANETs,emphasizing developments between 2021 and 2023.The research uses the PRISMA approach to sift through the literature,filtering results from the SCOPUS database to find 82 relevant publications.The research study uses machine learning-based routing algorithms to beat the issues of high mobility,dynamic topologies,and intermittent connection in FANETs.When compared with conventional routing,it gives an energy-efficient and fast decision-making solution in a real-time environment,with greater fault tolerance capabilities.These protocols aim to increase routing efficiency,flexibility,and network stability using ML’s predictive and adaptive capabilities.This comprehensive review seeks to integrate existing information,offer novel integration approaches,and recommend future research topics for improving routing efficiency and flexibility in FANETs.Moreover,the study highlights emerging trends in ML integration,discusses challenges faced during the review,and discusses overcoming these hurdles in future research.展开更多
基金Supported by the Deanship of Graduate Studies and Scientific Research at the University of Bisha through the Promising Program under grant number UB-Promising-33-1445.
文摘Open networks and heterogeneous services in the Internet of Vehicles (IoV) can lead to security and privacy challenges. One key requirement for such systems is the preservation of user privacy, ensuring a seamless experience in driving, navigation, and communication. These privacy needs are influenced by various factors, such as data collected at different intervals, trip durations, and user interactions. To address this, the paper proposes a Support Vector Machine (SVM) model designed to process large amounts of aggregated data and recommend privacy-preserving measures. The model analyzes data based on user demands and interactions with service providers or neighboring infrastructure. It aims to minimize privacy risks while ensuring service continuity and sustainability. The SVM model helps validate the system's reliability by creating a hyperplane that distinguishes between maximum and minimum privacy recommendations. The results demonstrate the effectiveness of the proposed SVM model in enhancing both privacy and service performance.
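The hyperplane decision described in this abstract can be sketched in a few lines. The weight vector, bias, and feature encoding below are illustrative assumptions for a trained linear SVM, not parameters reported in the paper.

```python
# Minimal sketch of an SVM-style hyperplane decision for privacy recommendations.
# The weights, bias, and feature values are hypothetical, chosen for illustration.

def recommend_privacy(features, weights, bias):
    """Return 'maximum' if the sample falls on the positive side of the
    hyperplane w.x + b = 0, else 'minimum'."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "maximum" if score >= 0 else "minimum"

# Hypothetical features: [data-collection frequency, trip duration, interaction count]
weights = [0.8, 0.3, 0.5]   # stand-ins for weights an SVM would learn
bias = -1.0

high_risk = [1.5, 0.9, 1.2]   # frequent collection, long trips -> stricter privacy
low_risk = [0.2, 0.1, 0.3]

print(recommend_privacy(high_risk, weights, bias))  # maximum
print(recommend_privacy(low_risk, weights, bias))   # minimum
```

In an actual SVM the weights and bias come from margin maximization over labeled training data; the decision rule itself is exactly this signed distance test.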
基金Supported by the SP2024/089 Project by the Faculty of Materials Science and Technology, VŠB-Technical University of Ostrava.
文摘In engineering practice, it is often necessary to determine functional relationships between dependent and independent variables. These relationships can be highly nonlinear, and classical regression approaches cannot always provide sufficiently reliable solutions. Nevertheless, Machine Learning (ML) techniques, which offer advanced regression tools to address complicated engineering issues, have been developed and widely explored. This study investigates selected ML techniques to evaluate their suitability for modeling the hot deformation behavior of metallic materials. The ML-based regression methods of Artificial Neural Networks (ANNs), Support Vector Machine (SVM), Decision Tree Regression (DTR), and Gaussian Process Regression (GPR) are applied to mathematically describe hot flow stress curve datasets acquired experimentally for a medium-carbon steel. Although the GPR method has not been used for such a regression task before, the results show that its performance is the most favorable and practically unrivaled; neither the ANN method nor the other studied ML techniques provide such precise regression results.
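The GPR posterior-mean prediction this abstract relies on can be sketched with an RBF kernel and two training points, small enough to solve the kernel system by hand. The strain and stress values are invented for illustration, not the study's measured flow-stress data.

```python
import math

# Minimal sketch of Gaussian Process Regression posterior-mean prediction with
# an RBF kernel and two training observations. All numbers are hypothetical.

def rbf(a, b, length=1.0):
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def gp_mean(x_star, xs, ys, noise=1e-6):
    # Build the 2x2 kernel matrix K + noise*I and solve K a = y explicitly.
    k11 = rbf(xs[0], xs[0]) + noise
    k12 = rbf(xs[0], xs[1])
    k22 = rbf(xs[1], xs[1]) + noise
    det = k11 * k22 - k12 * k12
    a0 = (k22 * ys[0] - k12 * ys[1]) / det
    a1 = (k11 * ys[1] - k12 * ys[0]) / det
    # Posterior mean is a kernel-weighted combination of the solved coefficients.
    return rbf(x_star, xs[0]) * a0 + rbf(x_star, xs[1]) * a1

xs = [0.0, 2.0]        # hypothetical normalized strain values
ys = [100.0, 140.0]    # hypothetical flow stresses (MPa)

print(gp_mean(0.0, xs, ys))   # reproduces the first observation almost exactly
print(gp_mean(1.0, xs, ys))   # smooth interpolation between the two observations
```

With more training points the same structure holds; the 2x2 solve is simply replaced by a Cholesky factorization of the full kernel matrix.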
基金Funded by the University of Transport Technology under grant number DTTD2022-12.
文摘Determination of Shear Bond Strength (SBS) at the interlayer of double-layer asphalt concrete is crucial in flexible pavement structures. The study used three Machine Learning (ML) models, including K-Nearest Neighbors (KNN), Extra Trees (ET), and Light Gradient Boosting Machine (LGBM), to predict SBS based on easily determinable input parameters. The Grid Search technique was employed for hyper-parameter tuning of the ML models, and cross-validation and learning curve analysis were used for training the models. The models were built on a database of 240 experimental results and three input variables: temperature, normal pressure, and tack coat rate. Model validation was performed using three statistical criteria: the coefficient of determination (R2), the Root Mean Square Error (RMSE), and the Mean Absolute Error (MAE). Additionally, SHAP (Shapley Additive exPlanations) analysis was used to validate the importance of the input variables in the prediction of the SBS. Results show that these models accurately predict SBS, with LGBM providing outstanding performance. SHAP analysis for LGBM indicates that temperature is the most influential factor on SBS. Consequently, the proposed ML models can quickly and accurately predict SBS between two layers of asphalt concrete, serving practical applications in flexible pavement structure design.
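The three validation criteria named in this abstract (R2, RMSE, MAE) are simple to compute directly. The SBS values below are illustrative placeholders, not the study's 240 experimental results.

```python
import math

# Sketch of the three statistical validation criteria used in the study,
# computed on hypothetical measured vs. predicted SBS values (MPa).

def r2(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1.2, 0.8, 1.5, 1.0]   # hypothetical measured SBS
y_pred = [1.1, 0.9, 1.4, 1.0]   # hypothetical model predictions

print(round(r2(y_true, y_pred), 3))    # 0.888
print(round(rmse(y_true, y_pred), 3))  # 0.087
print(round(mae(y_true, y_pred), 3))   # 0.075
```

R2 close to 1 together with small RMSE/MAE is exactly the pattern the study reports for the LGBM model.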
基金Funded by the Natural Science Foundation of China (No. 52109168).
文摘In order to study the characteristics of pure fly ash-based geopolymer concrete (PFGC) conveniently, we used a machine learning method that can quantify the perception of characteristics to predict its compressive strength. In this study, 505 groups of data were collected, and a new database of compressive strength of PFGC was constructed. In order to establish an accurate prediction model of compressive strength, five different types of machine learning networks were used for comparative analysis. All five machine learning models showed good compressive strength prediction performance on PFGC. Among them, the R2, MSE, RMSE and MAE of the decision tree model (DT) are 0.99, 1.58, 1.25, and 0.25, respectively, while those of the random forest model (RF) are 0.97, 5.17, 2.27 and 1.38, respectively. The two models have high prediction accuracy and outstanding generalization ability. In order to enhance the interpretability of model decision-making, we used importance ranking to obtain the perception of the machine learning models of 13 variables. These 13 variables include the chemical composition of fly ash (SiO₂/Al₂O₃, Si/Al), the ratio of alkaline liquid to binder, curing temperature, curing duration inside the oven, fly ash dosage, fine aggregate dosage, coarse aggregate dosage, extra water dosage and sodium hydroxide dosage. Curing temperature, specimen age and curing duration inside the oven have the greatest influence on the prediction results, indicating that curing conditions have a more prominent influence on the compressive strength of PFGC than on ordinary Portland cement concrete. The importance of the curing conditions of PFGC even exceeds that of the concrete mix proportion, due to the low reactivity of pure fly ash.
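The importance-ranking idea this abstract uses can be illustrated with the variance-reduction criterion behind tree-based models: rank each input by how much its best single split reduces the variance of the target. The feature names and sample values are invented, not the study's 505-group database.

```python
# Sketch of tree-style importance ranking: score each variable by the variance
# reduction of a single split on it. All data points here are hypothetical.

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def split_gain(feature, target, threshold):
    left = [t for f, t in zip(feature, target) if f <= threshold]
    right = [t for f, t in zip(feature, target) if f > threshold]
    if not left or not right:
        return 0.0
    n = len(target)
    weighted = (len(left) * variance(left) + len(right) * variance(right)) / n
    return variance(target) - weighted

strength = [20, 25, 40, 45]        # hypothetical compressive strengths (MPa)
curing_temp = [40, 45, 80, 85]     # strongly aligned with strength
fine_agg = [600, 700, 650, 620]    # weakly aligned with strength

gains = {
    "curing temperature": split_gain(curing_temp, strength, 60),
    "fine aggregate dosage": split_gain(fine_agg, strength, 640),
}
ranking = sorted(gains, key=gains.get, reverse=True)
print(ranking[0])  # curing temperature dominates, mirroring the study's finding
```

Real DT/RF importances aggregate such gains over every split in every tree, but the per-split quantity is this one.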
基金Financial support from the National Key Research and Development Program of China (2021YFB3501501), the National Natural Science Foundation of China (Nos. 22225803, 22038001, 22108007 and 22278011), the Beijing Natural Science Foundation (No. Z230023), and the Beijing Science and Technology Commission (No. Z211100004321001).
文摘The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst design platform. High-throughput screening of catalytic performance is feasible since a large MOF structure database is available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO₂ cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, which leads to a high accuracy of up to 97% when the 75% quantile of the training set is used as the classification criterion. The feature contribution was further evaluated with SHAP and PDP analysis to provide a degree of physical understanding. 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated at 100 °C and 1 bar within one day using the model, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top performance experimentally among reported MOFs, in good agreement with the prediction.
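The 75%-quantile classification criterion mentioned in this abstract is easy to make concrete: a candidate is labeled promising when its predicted performance exceeds the 75% quantile of the training set. The yield numbers below are invented for illustration.

```python
# Sketch of the quantile-threshold classification criterion described above.
# Training yields are hypothetical, not the study's computed MOF data.

def quantile(values, q):
    """Linear-interpolation quantile, as in common numerical libraries."""
    s = sorted(values)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

training_yields = [10, 20, 30, 40, 50, 60, 70, 80]   # hypothetical yields (%)
threshold = quantile(training_yields, 0.75)

def is_promising(predicted_yield):
    return predicted_yield > threshold

print(threshold)            # 62.5
print(is_promising(70.0))   # True
print(is_promising(35.0))   # False
```

Turning a regression target into a binary promising/not-promising label this way is what lets the screening model report a single classification accuracy.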
基金Supported by the Deanship of Scientific Research at King Khalid University through the Large Group Research Project under grant number RGP2/421/45; funding from Prince Sattam bin Abdulaziz University, project number PSAU/2024/R/1446; the Researchers Supporting Project Number UM-DSR-IG-2023-07, Almaarefa University, Riyadh, Saudi Arabia; and the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2021R1F1A1055408).
文摘Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, eye, etc., to identify specific flaws/diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Most extracted image features are irrelevant and lead to increased computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method that selects the most relevant image features. This process trains the learning paradigm using similarity- and correlation-based features over different textural intensities and pixel distributions. Similarity between pixels over the various distribution patterns with high indexes is recommended for disease diagnosis. The correlation based on intensity and distribution is then analyzed to improve the feature selection congruency. The most congruent pixels are sorted in descending order of selection, which identifies regions better than the distribution alone. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection, improving the probability of feature selection regardless of the textures and medical image patterns. This process enhances the performance of ML applications for different medical image processing tasks. The proposed method improves the accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models on the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
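The correlation-based ranking step in this abstract can be sketched as scoring each candidate feature by the magnitude of its Pearson correlation with the target and sorting in descending order. The feature vectors and labels are illustrative, not the article's medical image data.

```python
import math

# Sketch of correlation-based feature ranking in the spirit of the congruent
# feature selection described above. All values are hypothetical.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

target = [0, 0, 1, 1]                   # hypothetical region labels
features = {
    "intensity": [0.1, 0.2, 0.8, 0.9],  # strongly correlated with the labels
    "noise": [0.5, 0.1, 0.4, 0.2],      # essentially uncorrelated
}

ranked = sorted(features, key=lambda f: abs(pearson(features[f], target)),
                reverse=True)
print(ranked)  # intensity ranks first
```

Sorting by |r| rather than r keeps strongly anti-correlated features near the top as well, which matters when a feature is informative in either direction.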
文摘Diabetic retinopathy (DR) remains a leading cause of vision impairment and blindness among individuals with diabetes, necessitating innovative approaches to screening and management. This editorial explores the transformative potential of artificial intelligence (AI) and machine learning (ML) in revolutionizing DR care. AI and ML technologies have demonstrated remarkable advancements in enhancing the accuracy, efficiency, and accessibility of DR screening, helping to overcome barriers to early detection. These technologies leverage vast datasets to identify patterns and predict disease progression with unprecedented precision, enabling clinicians to make more informed decisions. Furthermore, AI-driven solutions hold promise in personalizing management strategies for DR, incorporating predictive analytics to tailor interventions and optimize treatment pathways. By automating routine tasks, AI can reduce the burden on healthcare providers, allowing for a more focused allocation of resources towards complex patient care. This review aims to evaluate the current advancements and applications of AI and ML in DR screening, and to discuss the potential of these technologies in developing personalized management strategies, ultimately aiming to improve patient outcomes and reduce the global burden of DR. The integration of AI and ML in DR care represents a paradigm shift, offering a glimpse into the future of ophthalmic healthcare.
文摘Every second, a large volume of useful data is created on social media about various kinds of online purchases and in other forms of reviews. In particular, purchased-product review data grows enormously in different database repositories every day. Most review data are useful to new customers for their further purchases, as well as to existing companies to view customer feedback about various products. Data Mining and Machine Learning techniques are commonly used to analyse such data in order to visualise and understand the potential use of items purchased online. Customers convey the quality of products through their sentiments about the items purchased from different online companies. In this research work, sentiments in headphone review data collected from online repositories are analysed. For the analysis of the headphone review data, Machine Learning techniques such as Support Vector Machines, Naive Bayes, Decision Trees and Random Forest algorithms, together with a hybrid method, are applied to assess quality via customers' sentiments. The accuracy and performance of the chosen algorithms are also analysed based on three types of sentiment: positive, negative and neutral.
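The positive/negative/neutral labeling this abstract evaluates can be illustrated with the simplest possible baseline: a lexicon-based scorer. The word lists are invented for illustration; the study's actual classifiers (SVM, Naive Bayes, etc.) learn such associations from labeled reviews instead.

```python
# Toy lexicon-based sentiment baseline for headphone reviews.
# The word lists are hypothetical, not the study's trained models.

POSITIVE = {"great", "clear", "comfortable", "excellent"}
NEGATIVE = {"broken", "muffled", "poor", "terrible"}

def sentiment(review):
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("Great bass and excellent comfort"))  # positive
print(sentiment("muffled and poor build"))            # negative
print(sentiment("it works"))                          # neutral
```

A baseline like this is also a useful sanity check when comparing the accuracy of the learned models across the three sentiment classes.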
文摘Machine learning (ML) is a type of artificial intelligence that assists computers in the acquisition of knowledge through data analysis, thus creating machines that can complete tasks otherwise requiring human intelligence. Among its various applications, it has proven groundbreaking in healthcare as well, both in clinical practice and research. In this editorial, we succinctly introduce ML applications and present a study featured in the latest issue of the World Journal of Clinical Cases. The authors of this study conducted an analysis using both multiple linear regression (MLR) and ML methods to investigate the significant factors that may impact the estimated glomerular filtration rate in healthy women with and without non-alcoholic fatty liver disease (NAFLD). Their results implicated age as the most important determining factor in both groups, followed by lactic dehydrogenase, uric acid, forced expiratory volume in one second, and albumin. In addition, for the NAFLD− group, the 5th and 6th most important impact factors were thyroid-stimulating hormone and systolic blood pressure, as compared to plasma calcium and body fat for the NAFLD+ group. However, the study's distinctive contribution lies in its adoption of ML methodologies, showcasing their superiority over traditional statistical approaches (herein MLR), thereby highlighting the potential of ML to represent an invaluable advanced adjunct tool in clinical practice and research.
文摘Critical to the safe, efficient, and reliable operation of an autonomous maritime vessel is its ability to perceive the external environment through onboard sensors. For this research, data was collected from a LiDAR sensor installed on a 16-foot catamaran unmanned vessel. This sensor generated point clouds of the surrounding maritime environment, which were then labeled by hand for training a machine learning (ML) model to perform a semantic segmentation task on LiDAR scans. In particular, the developed semantic segmentation classifies each point-cloud point as belonging to a certain buoy type. This paper describes the developed Unity Game Engine (Unity) simulation to emulate the maritime environment perceived by LiDAR with the goal of generating large (automatically labeled) simulation datasets and improving the ML model performance since hand-labeled real-life LiDAR scan data may be scarce. The Unity simulation data combined with labeled real-life point cloud data was used for a PointNet-based neural network model, the architecture of which is presented in this paper. Fitting the PointNet-based model on the simulation data followed by fine-tuning the combined dataset allowed for accurate semantic segmentation of point clouds on the real-world data. The ML model performance on several combinations of simulation and real-life data is explored. The resulting Intersection over Union (IoU) metric scores are quite high, ranging between 0.78 and 0.89, when validated on simulation and real-life data. The confusion matrix-entry values indicate an accurate semantic segmentation of the buoy types.
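The per-class Intersection over Union (IoU) scores this abstract reports can be computed directly from predicted and ground-truth point labels. The buoy classes and label sequences below are illustrative, not the project's actual LiDAR dataset.

```python
# Sketch of per-class IoU for semantic segmentation of labeled points.
# Labels are hypothetical buoy classes chosen for illustration.

def iou(pred, truth, cls):
    inter = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, truth) if p == cls or t == cls)
    return inter / union if union else 0.0

truth = ["red", "red", "green", "green", "red"]
pred  = ["red", "green", "green", "green", "red"]

print(round(iou(pred, truth, "red"), 2))    # 2 of 3 in the union -> 0.67
print(round(iou(pred, truth, "green"), 2))  # 2 of 3 in the union -> 0.67
```

Unlike plain accuracy, IoU penalizes both missed points and false positives per class, which is why it is the standard metric for segmentation quality.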
文摘Fabric dyeing is a critical production process in the clothing industry and heavily relies on batch processing machines (BPM). In this study, the parallel BPM scheduling problem with machine eligibility in fabric dyeing is considered, and an adaptive cooperated shuffled frog-leaping algorithm (ACSFLA) is proposed to minimize makespan and total tardiness simultaneously. ACSFLA determines the search times for each memeplex based on its quality, with more searches in high-quality memeplexes. An adaptive cooperated and diversified search mechanism is applied, dynamically adjusting search strategies for each memeplex based on their dominance relationships and quality. During the cooperated search, ACSFLA uses a segmented and dynamic targeted search approach, while in non-cooperated scenarios, the search focuses on local search around superior solutions to improve efficiency. Furthermore, ACSFLA employs adaptive population division and partial population shuffling strategies. Through these strategies, memeplexes with low evolutionary potential are selected for reconstruction in the next generation, while those with high evolutionary potential are retained to continue their evolution. To evaluate the performance of ACSFLA, comparative experiments were conducted using ACSFLA, SFLA, ASFLA, MOABC, and NSGA-CC on 90 instances. The computational results reveal that ACSFLA outperforms the other algorithms in 78 of the 90 test cases, highlighting its advantages in solving the parallel BPM scheduling problem with machine eligibility.
文摘BACKGROUND Machine learning (ML), a major branch of artificial intelligence, has not only demonstrated the potential to significantly improve numerous sectors of healthcare but has also made significant contributions to the field of solid organ transplantation. ML provides revolutionary opportunities in areas such as donor-recipient matching, post-transplant monitoring, and patient care by automatically analyzing large amounts of data, identifying patterns, and forecasting outcomes. AIM To conduct a comprehensive bibliometric analysis of publications on the use of ML in transplantation to understand current research trends and their implications. METHODS On July 18, a thorough search strategy was used with the Web of Science database. ML- and transplantation-related keywords were utilized. With the aid of the VOSviewer application, the identified articles were subjected to bibliometric variable analysis in order to determine publication counts, citation counts, contributing countries, and institutions, among other factors. RESULTS Of the 529 articles that were first identified, 427 were deemed relevant for bibliometric analysis. A surge in publications was observed over the last four years, especially after 2018, signifying growing interest in this area. With 209 publications, the United States emerged as the top contributor. Notably, the "Journal of Heart and Lung Transplantation" and the "American Journal of Transplantation" emerged as the leading journals, publishing the highest number of relevant articles. Frequent keyword searches revealed that patient survival, mortality, outcomes, allocation, and risk assessment were significant themes of focus. CONCLUSION The growing body of pertinent publications highlights ML's growing presence in the field of solid organ transplantation. This bibliometric analysis highlights the growing importance of ML in transplant research and its exciting potential to change medical practices and enhance patient outcomes. Encouraging collaboration between significant contributors can potentially fast-track advancements in this interdisciplinary domain.
文摘The purpose of this research paper is to explore how early Machine Learning models have shown bias in their results where no bias should be seen. A prime example is an ML model that favors male applicants over female applicants. While the model is supposed to take other aspects of the data into consideration, it tends to have a bias and skew the results one way or another. Therefore, in this paper, we will explore how this bias comes about and how it can be fixed. In this research, I have taken different case studies of real-world examples of these biases being shown. For example, an Amazon hiring application that favored male applicants and a loan application that favored western applicants are both cases that I will reference in this paper and explore in detail. In order to find out where the bias comes from, I have constructed a machine learning model that uses a dataset found on Kaggle, and I analyze the results of that ML model. The results that the research has yielded clarify the reason for this bias in artificial intelligence models: the way the model is trained influences the way the results play out. If the model is trained with a large amount of male applicant data over female applicant data, the model will favor male applicants. Therefore, when presented with new data, it is likely to accept male applications over female ones despite equivalent qualifications. Later in the paper, I dive deeper into the way AI applications work and how they find biases and trends in order to classify things correctly. However, there is a fine line between classification and bias, and making sure bias is rightfully corrected and tested is important in machine learning today.
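The mechanism this abstract describes, skewed training data producing skewed decisions, can be shown with a deliberately naive toy classifier that learns the majority outcome per group. The applicant records are invented for illustration and are not from the Kaggle dataset the paper uses.

```python
from collections import Counter

# Toy demonstration of dataset-induced bias: a classifier that memorizes the
# majority outcome per group reproduces the imbalance of its training data.

def train_majority(records):
    """records: list of (gender, accepted) pairs. Learns the majority
    outcome observed for each gender."""
    by_group = {}
    for gender, accepted in records:
        by_group.setdefault(gender, []).append(accepted)
    return {g: Counter(v).most_common(1)[0][0] for g, v in by_group.items()}

# Imbalanced, hypothetical training set: many accepted male applicants,
# few female samples overall.
training = ([("M", True)] * 8 + [("M", False)] * 2
            + [("F", False)] * 2 + [("F", True)] * 1)
model = train_majority(training)

# Two applicants with equivalent qualifications get different outcomes.
print(model["M"])  # True  -- accepted
print(model["F"])  # False -- rejected, purely an artifact of the skewed data
```

Real models are far more sophisticated, but when a protected attribute correlates with the label in the training set, they can encode the same shortcut; rebalancing or removing the attribute are among the standard mitigations the paper discusses.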
文摘Solar cells made from perovskites have experienced rapid development as examples of third-generation solar cells in recent years. The traditional trial-and-error method is inefficient, and the search space is incredibly large. This makes developing advanced perovskite materials, as well as high conversion efficiencies and stability of perovskite solar cells (PSCs), a challenging task. A growing number of data-driven machine learning (ML) applications are being developed in the materials science field, due to the availability of large databases and increased computing power. There are many advantages associated with the use of machine learning to predict the properties of potential perovskite materials, as well as provide additional knowledge on how these materials work to fast-track their progress. Thus, the purpose of this paper is to develop a conceptual model to improve the efficiency of a perovskite solar cell using machine learning techniques in order to improve its performance. This study relies on the application of design science as a method to conduct the research as part of the study. The developed model consists of six phases: Data collection and preprocessing, feature selection and engineering, model training and evaluation, performance assessment, optimization and fine-tuning, and deployment and application. As a result of this model, there is a great deal of promise in advancing the field of perovskite solar cells as well as providing a basis for developing more efficient and cost-effective solar energy technologies in the future.
文摘The rapid growth of machine learning (ML) across fields has intensified the challenge of selecting the right algorithm for specific tasks, known as the Algorithm Selection Problem (ASP). Traditional trial-and-error methods have become impractical due to their resource demands. Automated Machine Learning (AutoML) systems automate this process, but often neglect the group structures and sparsity in meta-features, leading to inefficiencies in algorithm recommendations for classification tasks. This paper proposes a meta-learning approach using Multivariate Sparse Group Lasso (MSGL) to address these limitations. Our method models both within-group and across-group sparsity among meta-features to manage high-dimensional data and reduce multicollinearity across eight meta-feature groups. The Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) with adaptive restart efficiently solves the non-smooth optimization problem. Empirical validation on 145 classification datasets with 17 classification algorithms shows that our meta-learning method outperforms four state-of-the-art approaches, achieving 77.18% classification accuracy, 86.07% recommendation accuracy and 88.83% normalized discounted cumulative gain.
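At the core of FISTA-style solvers for Lasso-type penalties such as MSGL is the soft-thresholding proximal operator, which shrinks coefficients toward zero and zeroes out the small ones. The coefficient vector and threshold below are illustrative, not the paper's meta-feature data.

```python
# Sketch of the soft-thresholding proximal step used inside FISTA for
# L1-regularized problems. The inputs are hypothetical coefficients.

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1: shrink each entry toward zero
    by lam, setting entries with |x| <= lam exactly to zero."""
    return [max(abs(x) - lam, 0.0) * (1 if x > 0 else -1) for x in v]

coeffs = [0.9, -0.3, 0.05, -1.2]
print(soft_threshold(coeffs, 0.2))  # small entries zeroed, large ones shrunk
```

Group-Lasso variants apply an analogous shrinkage to whole blocks of coefficients at once, which is what produces the within-group and across-group sparsity the paper exploits.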
文摘The first cases of COVID-19 in Bangladesh were reported in March 2020, and case counts rapidly increased daily. Many steps were taken by the Bangladesh government to reduce the outbreak of COVID-19, such as masks and restrictions on gatherings, local movement, and international movement. The data was collected from the World Health Organization. In this research, different variables have been used for analysis, for instance, new cases, new deaths, masks, schools, business, gatherings, domestic movement, international travel, new tests, positive rate, tests per case, new vaccinations smoothed, new vaccines, total vaccinations, and stringency index. Machine learning algorithms were used to build and evaluate predictive models, such as linear regression, K-nearest neighbours, decision trees, random forests, and support vector machines. Accuracy and Mean Square Error (MSE) were used to test the models, and hyperparameter tuning was applied to find the optimum parameter values. After the analysis, the results showed that the linear regression algorithm performs best overall among the listed algorithms, with the highest testing accuracy and the lowest error before and after hyper-tuning. The highest accuracy and lowest MSE were used to select the best model; on this data set, linear regression achieved the highest accuracies, 0.98 and 0.97, and the lowest MSEs, 4.79 and 4.04, respectively.
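The linear regression the study found best has a closed-form least-squares fit for a single predictor. The day/case numbers below are invented toy data, not the WHO dataset used in the paper.

```python
# Sketch of the closed-form least-squares fit behind simple linear regression.
# Days and case counts are hypothetical, perfectly linear toy data.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

days = [1, 2, 3, 4]
cases = [10, 14, 18, 22]

slope, intercept = fit_line(days, cases)
print(slope, intercept)       # 4.0 6.0
print(slope * 5 + intercept)  # forecast for day 5 -> 26.0
```

Multivariate versions replace this ratio with a matrix solve, but the least-squares principle (minimizing the MSE reported in the study) is the same.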
基金Funded by the Pyramid Talent Training Project of Beijing University of Civil Engineering and Architecture under Grant GJZJ20220802.
文摘Accurately estimating the State of Health (SOH) and Remaining Useful Life (RUL) of lithium-ion batteries (LIBs) is crucial for the continuous and stable operation of battery management systems. However, due to the complex internal chemical systems of LIBs and the nonlinear degradation of their performance, direct measurement of SOH and RUL is challenging. To address these issues, the Twin Support Vector Machine (TWSVM) method is proposed to predict SOH and RUL. Initially, the constant-current charging time of the lithium battery is extracted as a health indicator (HI) and decomposed using Variational Mode Decomposition (VMD), and feature correlations are computed using Random Forest (RF) feature importance to maximize the extraction of critical factors influencing battery performance degradation. Furthermore, to enhance the global search capability of the Convolution Optimization Algorithm (COA), improvements are made using Good Point Set theory and the Differential Evolution method. The Improved Convolution Optimization Algorithm (ICOA) is employed to optimize TWSVM parameters for constructing the SOH and RUL prediction models. Finally, the proposed models are validated using the NASA and CALCE lithium-ion battery datasets. Experimental results demonstrate that the proposed models achieve an RMSE not exceeding 0.007 and a MAPE not exceeding 0.0082 for SOH and RUL prediction, with a relative error in RUL prediction within the range of [−1.8%, 2%]. Compared to other models, the proposed model not only exhibits superior fitting capability but also demonstrates robust performance.
基金Natural Science Foundation of Shandong Province, Grant/Award Number: ZR202103010903; Doctoral Fund of Shandong Jianzhu University, Grant/Award Number: X21101Z.
文摘To guarantee safe and efficient tunneling of a tunnel boring machine (TBM), rapid and accurate judgment of the rock mass condition is essential. Based on fuzzy C-means clustering, this paper proposes a grouped machine learning method for predicting rock mass parameters. An elaborate data set on field rock mass is collected, which also matches field TBM tunneling. Meanwhile, target stratum samples are divided into several clusters by fuzzy C-means clustering, and multiple submodels are trained by samples in different clusters with the input of pretreated TBM tunneling data and the output of rock mass parameter data. Each testing sample or newly encountered tunneling condition can be predicted by the multiple submodels, weighted by the membership degree of the sample to each cluster. The proposed method has been realized on 100 training samples and verified on 30 testing samples collected from the C1 part of the Pearl Delta water resources allocation project. The average percentage errors of uniaxial compressive strength and joint frequency (Jf) of the 30 testing samples predicted by the pure back propagation (BP) neural network are 13.62% and 12.38%, while those predicted by the BP neural network combined with fuzzy C-means are 7.66% and 6.40%, respectively. In addition, by combining fuzzy C-means clustering, the prediction accuracies of support vector regression and random forest are also improved to different degrees, which demonstrates that fuzzy C-means clustering is helpful for improving the prediction accuracy of machine learning and thus has good applicability. Accordingly, the proposed method is valuable for predicting rock mass parameters during TBM tunneling.
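The membership-weighted combination step this abstract describes is a simple weighted sum: each cluster's submodel prediction is scaled by the sample's fuzzy C-means membership degree in that cluster. The membership values and submodel outputs below are illustrative, not the project's measured data.

```python
# Sketch of the grouped-prediction step: combine submodel outputs using the
# sample's fuzzy C-means membership degrees. All numbers are hypothetical.

def weighted_prediction(memberships, submodel_outputs):
    """memberships: degrees of the sample in each cluster (sum to 1).
    submodel_outputs: the rock mass parameter predicted by each cluster's
    submodel for this sample."""
    return sum(m * p for m, p in zip(memberships, submodel_outputs))

memberships = [0.7, 0.2, 0.1]           # sample mostly belongs to cluster 0
ucs_by_submodel = [80.0, 120.0, 60.0]   # hypothetical UCS predictions (MPa)

print(weighted_prediction(memberships, ucs_by_submodel))  # close to cluster 0's value
```

Because memberships are soft rather than hard assignments, a sample near a cluster boundary draws on several submodels at once, which is what smooths predictions across stratum transitions.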
文摘The significance of precise energy usage forecasts has been highlighted by the increasing need for sustainability and energy efficiency across a range of industries. In order to improve the precision and openness of energy consumption projections, this study investigates the combination of machine learning (ML) methods with Shapley additive explanations (SHAP) values. The study evaluates three distinct models: a Linear Regressor, a Support Vector Regressor, and a Decision Tree Regressor, the third of which was extended to a Random Forest Regressor. These models were deployed with plot-interpretable Explainable Artificial Intelligence techniques to improve trust in the AI. The findings suggest that our developed models are superior to the conventional models discussed in prior studies, with Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) values close to ideal. In detail, the Random Forest Regressor shows an MAE of 0.001 for predicting house prices, whereas the SVR gives an MAE of 0.21 and an RMSE of 0.24. Such outcomes reflect the possibility of combining advanced AI models with Explainable AI for more accurate prediction of energy consumption while also explaining the models' decision-making procedures. In addition to increasing prediction accuracy, this strategy gives stakeholders comprehensible insights, which facilitates improved decision-making and fosters confidence in AI-powered energy solutions. The outcomes show how well ML and SHAP work together to enhance prediction performance and guarantee transparency in energy usage projections.
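The Shapley-value attribution behind SHAP can be computed exactly for a tiny model: average each feature's marginal contribution over all orders in which features can be added. The two-feature "energy model" below is invented for illustration, not the study's trained regressors.

```python
from itertools import permutations

# Exact Shapley values for a toy two-feature additive energy-use model.
# The model coefficients and the instance are hypothetical.

def model(features):
    """Hypothetical energy-use model: base load plus per-feature effects."""
    base = 10.0
    return base + 2.0 * features.get("occupancy", 0) + 0.5 * features.get("temperature", 0)

def shapley(instance):
    names = list(instance)
    contrib = {n: 0.0 for n in names}
    orders = list(permutations(names))
    for order in orders:
        present = {}
        for name in order:
            before = model(present)
            present[name] = instance[name]
            contrib[name] += model(present) - before
    return {n: c / len(orders) for n, c in contrib.items()}

phi = shapley({"occupancy": 3, "temperature": 20})
print(phi)  # {'occupancy': 6.0, 'temperature': 10.0}; attributions sum to model - base
```

For additive models the attributions equal the individual feature effects exactly; libraries such as SHAP approximate this averaging efficiently for non-additive models like random forests.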
基金The data that support the findings of this study are openly available in the Scopus database at www.scopus.com (accessed on 07 January 2025).
文摘“Flying Ad Hoc Networks (FANETs)”, which use “Unmanned Aerial Vehicles (UAVs)”, are developing into a critical mechanism for numerous applications, such as military operations and civilian services. The dynamic nature of FANETs, with high mobility, quick node migration, and frequent topology changes, presents substantial hurdles for routing protocol development. Over the preceding few years, researchers have found that machine learning provides productive routing solutions while accommodating the defining characteristics of FANETs: topology change and high mobility. This paper reviews current research on routing protocols and Machine Learning (ML) approaches applied to FANETs, emphasizing developments between 2021 and 2023. The research uses the PRISMA approach to sift through the literature, filtering results from the SCOPUS database to find 82 relevant publications. The study examines machine learning-based routing algorithms that address the issues of high mobility, dynamic topologies, and intermittent connectivity in FANETs. Compared with conventional routing, they give energy-efficient and fast decision-making solutions in real-time environments, with greater fault tolerance capabilities. These protocols aim to increase routing efficiency, flexibility, and network stability using ML's predictive and adaptive capabilities. This comprehensive review seeks to integrate existing information, offer novel integration approaches, and recommend future research topics for improving routing efficiency and flexibility in FANETs. Moreover, the study highlights emerging trends in ML integration, discusses the challenges faced during the review, and discusses how to overcome these hurdles in future research.