Journal Articles
8,482 articles found
The “Ocean Stabilization Machine” May Represent a Primary Factor Underlying the Effect of “Global Warming on Climate Change”
Authors: Yanjun Mao, Jiqing Tan, Bomin Chen, Huiyi Fan. Atmospheric and Climate Sciences, 2019, Issue 1, pp. 135-145 (11 pages)
Contemporary references to global warming pertain to the dramatic increase in monthly global land surface temperature (GLST) anomalies since 1976. In this paper, we argue that recent global warming is primarily a result of natural causes; we have established three steps that support this viewpoint. The first is to identify periodic functions that perfectly match all of the monthly anomaly data for GLST; the second is to identify monthly sea surface temperature (SST) anomalies that are located within different ocean basin domains and highly correlated with the monthly GLST anomalies; and the third is to determine whether the dramatically increasing (or dramatically decreasing) K-line diagram signals that coincide with GLST anomalies occurred in El Niño years (or La Niña years). We have identified 15,295 periodic functions that perfectly fit the monthly GLST anomalies from 1880 to 2013 and show that the monthly SST anomalies in six domains in different oceans are highly correlated with the monthly GLST anomalies. In addition, most of the annual dramatically increasing GLST anomalies occur in El Niño years, and most of the annual dramatically decreasing GLST anomalies occur in La Niña years. These findings indicate that the “ocean stabilization machine” might represent a primary factor underlying the effect of “global warming on climate change”.
Keywords: global warming; monthly global land surface temperature (GLST) anomalies; monthly SST anomalies; ocean stabilization machine; K-line diagram signals
Machine Learning Techniques in Predicting Hot Deformation Behavior of Metallic Materials
Authors: Petr Opela, Josef Walek, Jaromír Kopecek. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, Issue 1, pp. 713-732 (20 pages)
In engineering practice, it is often necessary to determine functional relationships between dependent and independent variables. These relationships can be highly nonlinear, and classical regression approaches cannot always provide sufficiently reliable solutions. Nevertheless, Machine Learning (ML) techniques, which offer advanced regression tools to address complicated engineering issues, have been developed and widely explored. This study investigates selected ML techniques to evaluate their suitability for application to the hot deformation behavior of metallic materials. The ML-based regression methods of Artificial Neural Networks (ANNs), Support Vector Machine (SVM), Decision Tree Regression (DTR), and Gaussian Process Regression (GPR) are applied to mathematically describe hot flow stress curve datasets acquired experimentally for a medium-carbon steel. Although the GPR method has not been used for such a regression task before, the results showed that its performance is the most favorable and practically unrivaled; neither the ANN method nor the other studied ML techniques provide such precise results for the solved regression analysis.
Keywords: machine learning; Gaussian process regression; artificial neural networks; support vector machine; hot deformation behavior
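The Gaussian Process Regression method named in this abstract can be sketched minimally as the GP posterior mean under an RBF kernel. The strain/stress data, the square-root hardening law, and the kernel settings below are illustrative assumptions, not the paper's medium-carbon steel dataset or model.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.08):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gpr_posterior_mean(x_train, y_train, x_test, noise=1e-6, length_scale=0.08):
    """GP posterior mean: k_*^T (K + sigma^2 I)^{-1} y."""
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    k_star = rbf_kernel(x_test, x_train, length_scale)
    alpha = np.linalg.solve(K, y_train)
    return k_star @ alpha

# Toy "flow stress curve": stress as a smooth (hypothetical) function of strain
strain = np.linspace(0.1, 0.5, 5)
stress = 100.0 * np.sqrt(strain)
pred = gpr_posterior_mean(strain, stress, strain)  # near-exact at training points
```

In practice the kernel hyperparameters (length scale, noise level) would be tuned, for example by maximizing the marginal likelihood, rather than fixed as here.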
Prediction of Shear Bond Strength of Asphalt Concrete Pavement Using Machine Learning Models and Grid Search Optimization Technique
Authors: Quynh-Anh Thi Bui, Dam Duc Nguyen, Hiep Van Le, Indra Prakash, Binh Thai Pham. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, Issue 1, pp. 691-712 (22 pages)
Determination of Shear Bond Strength (SBS) at the interlayer of double-layer asphalt concrete is crucial in flexible pavement structures. The study used three Machine Learning (ML) models, including K-Nearest Neighbors (KNN), Extra Trees (ET), and Light Gradient Boosting Machine (LGBM), to predict SBS based on easily determinable input parameters. The Grid Search technique was employed for hyper-parameter tuning of the ML models, and cross-validation and learning curve analysis were used for training the models. The models were built on a database of 240 experimental results and three input variables: temperature, normal pressure, and tack coat rate. Model validation was performed using three statistical criteria: the coefficient of determination (R2), the Root Mean Square Error (RMSE), and the Mean Absolute Error (MAE). Additionally, SHAP (Shapley Additive exPlanations) analysis was used to validate the importance of the input variables in the prediction of the SBS. Results show that these models accurately predict SBS, with LGBM providing outstanding performance. SHAP analysis for LGBM indicates that temperature is the most influential factor on SBS. Consequently, the proposed ML models can quickly and accurately predict SBS between two layers of asphalt concrete, serving practical applications in flexible pavement structure design.
Keywords: shear bond; asphalt pavement; grid search; optimization; machine learning
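The grid-search tuning described above can be illustrated with a self-contained sketch: an exhaustive search over the k of a tiny k-NN regressor, scored by validation MAE. The toy data and the k-NN stand-in are assumptions for illustration; the study itself tunes KNN, ET, and LGBM hyperparameters with cross-validation.

```python
def knn_predict(train, x, k):
    """Average the targets of the k nearest training points (1-D inputs)."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def grid_search(train, val, k_grid):
    """Exhaustive search: pick the k with the lowest validation MAE."""
    best_k, best_mae = None, float("inf")
    for k in k_grid:
        mae = sum(abs(knn_predict(train, x, k) - y) for x, y in val) / len(val)
        if mae < best_mae:
            best_k, best_mae = k, mae
    return best_k, best_mae

train = [(x, 2.0 * x) for x in range(6)]   # hypothetical (input, SBS) pairs
val = [(0.4, 0.8), (2.6, 5.2), (4.4, 8.8)]
best_k, best_mae = grid_search(train, val, [1, 2, 3])
```

Real grid searches iterate over a Cartesian product of several hyperparameters and average the score across cross-validation folds instead of a single held-out split.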
Congruent Feature Selection Method to Improve the Efficacy of Machine Learning-Based Classification in Medical Image Processing
Authors: Mohd Anjum, Naoufel Kraiem, Hong Min, Ashit Kumar Dutta, Yousef Ibrahim Daradkeh. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, Issue 1, pp. 357-384 (28 pages)
Machine learning (ML) is increasingly applied for medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws/diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Most of the extracted image features are irrelevant and lead to an increase in computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method to select the most relevant image features. This process trains the learning paradigm using similarity and correlation-based features over different textural intensities and pixel distributions. The similarity between the pixels over the various distribution patterns with high indexes is recommended for disease diagnosis. Later, the correlation based on intensity and distribution is analyzed to improve the feature selection congruency. Therefore, the more congruent pixels are sorted in the descending order of the selection, which identifies better regions than the distribution. Now, the learning paradigm is trained using intensity and region-based similarity to maximize the chances of selection. Therefore, the probability of feature selection, regardless of the textures and medical image patterns, is improved. This process enhances the performance of ML applications for different medical image processing. The proposed method improves the accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models for the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
Keywords: computer vision; feature selection; machine learning; region detection; texture analysis; image classification; medical images
A Support Vector Machine (SVM) Model for Privacy Recommending Data Processing Model (PRDPM) in Internet of Vehicles
Authors: Ali Alqarni. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 389-406 (18 pages)
Open networks and heterogeneous services in the Internet of Vehicles (IoV) can lead to security and privacy challenges. One key requirement for such systems is the preservation of user privacy, ensuring a seamless experience in driving, navigation, and communication. These privacy needs are influenced by various factors, such as data collected at different intervals, trip durations, and user interactions. To address this, the paper proposes a Support Vector Machine (SVM) model designed to process large amounts of aggregated data and recommend privacy-preserving measures. The model analyzes data based on user demands and interactions with service providers or neighboring infrastructure. It aims to minimize privacy risks while ensuring service continuity and sustainability. The SVM model helps validate the system's reliability by creating a hyperplane that distinguishes between maximum and minimum privacy recommendations. The results demonstrate the effectiveness of the proposed SVM model in enhancing both privacy and service performance.
Keywords: support vector machine; big data; IoV; privacy-preserving
High-throughput screening of CO₂ cycloaddition MOF catalyst with an explainable machine learning model
Authors: Xuefeng Bai, Yi Li, Yabo Xie, Qiancheng Chen, Xin Zhang, Jian-Rong Li. Green Energy & Environment (SCIE, EI, CAS), 2025, Issue 1, pp. 132-138 (7 pages)
The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst design platform. High-throughput screening of catalytic performance is feasible since a large MOF structure database is available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO₂ cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, which leads to a high accuracy of up to 97% with the 75% quantile of the training set as the classification criterion. The feature contribution was further evaluated with SHAP and PDP analysis to provide a degree of physical understanding. A total of 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated at 100 °C and 1 bar within one day using the model, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top performance experimentally among reported MOFs, in good agreement with the prediction.
Keywords: metal-organic frameworks; high-throughput screening; machine learning; explainable model; CO₂ cycloaddition
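The classification criterion named in this abstract (the 75% quantile of the training set) reduces to labeling each structure by whether its target value exceeds the top-quartile cutoff. A minimal sketch, with hypothetical yield values standing in for the real catalytic targets:

```python
def quantile(values, q):
    """Linear-interpolation quantile (same convention as numpy's default)."""
    s = sorted(values)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    frac = pos - lo
    return s[lo] * (1 - frac) + s[hi] * frac

yields = [10, 22, 35, 41, 55, 63, 78, 90]   # hypothetical cycloaddition yields
threshold = quantile(yields, 0.75)           # top-quartile cutoff
labels = [1 if y > threshold else 0 for y in yields]  # 1 = "promising catalyst"
```

A classifier is then trained on the binary labels, which is what allows accuracy (here 97%) rather than a regression error to be reported.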
Interpretable Machine Learning Method for Compressive Strength Prediction and Analysis of Pure Fly Ash-based Geopolymer Concrete
Authors: SHI Yuqiong, LI Jingyi, ZHANG Yang, LI Li. Journal of Wuhan University of Technology (Materials Science) (SCIE, EI, CAS), 2025, Issue 1, pp. 65-78 (14 pages)
In order to study the characteristics of pure fly ash-based geopolymer concrete (PFGC) conveniently, we used a machine learning method that can quantify the perception of characteristics to predict its compressive strength. In this study, 505 groups of data were collected, and a new database of the compressive strength of PFGC was constructed. In order to establish an accurate prediction model of compressive strength, five different types of machine learning networks were used for comparative analysis. All five machine learning models showed good compressive strength prediction performance on PFGC. Among them, the R2, MSE, RMSE, and MAE of the decision tree model (DT) are 0.99, 1.58, 1.25, and 0.25, respectively, while those of the random forest model (RF) are 0.97, 5.17, 2.27, and 1.38, respectively. The two models have high prediction accuracy and outstanding generalization ability. In order to enhance the interpretability of model decision-making, we used importance ranking to obtain the perception of the machine learning models of 13 variables. These variables include the chemical composition of fly ash (SiO₂/Al₂O₃, Si/Al), the ratio of alkaline liquid to the binder, curing temperature, curing duration inside the oven, fly ash dosage, fine aggregate dosage, coarse aggregate dosage, extra water dosage, and sodium hydroxide dosage. Curing temperature, specimen age, and curing duration inside the oven have the greatest influence on the prediction results, indicating that curing conditions have a more prominent influence on the compressive strength of PFGC than on that of ordinary Portland cement concrete. The importance of the curing conditions of PFGC even exceeds that of the concrete mix proportion, due to the low reactivity of pure fly ash.
Keywords: machine learning; pure fly ash; geopolymer; compressive strength; feature perception
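Importance ranking of the kind used above can be sketched with a permutation-importance toy: the score for each input is the MSE increase after scrambling that column (reversed here instead of randomly shuffled, to keep the example deterministic). The two-feature mix-design data and the fixed linear "model" are illustrative assumptions, not the paper's DT/RF models.

```python
def mse(pred, y):
    return sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)

def predict(rows, w):
    """A pre-trained linear model standing in for any fitted regressor."""
    return [sum(wi * xi for wi, xi in zip(w, row)) for row in rows]

def permutation_importance(rows, y, w):
    """MSE increase when one feature column at a time is scrambled."""
    base = mse(predict(rows, w), y)
    scores = []
    for j in range(len(rows[0])):
        col = [row[j] for row in rows][::-1]       # deterministic "shuffle"
        permuted = [list(row) for row in rows]
        for i, v in enumerate(col):
            permuted[i][j] = v
        scores.append(mse(predict(permuted, w), y) - base)
    return scores

# toy mix-design table: [curing_temperature, fine_aggregate];
# strength depends only on the first column, so only it should score high
X = [[0, 5], [1, 1], [2, 4], [3, 2]]
y = [0.0, 3.0, 6.0, 9.0]
scores = permutation_importance(X, y, [3.0, 0.0])
```

A feature whose column can be scrambled without hurting the error contributes nothing, which is exactly the ranking logic behind "feature perception" plots.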
Revolutionizing diabetic retinopathy screening and management:The role of artificial intelligence and machine learning
Authors: Mona Mohamed Ibrahim Abdalla, Jaiprakash Mohanraj. World Journal of Clinical Cases (SCIE), 2025, Issue 5, pp. 1-12 (12 pages)
Diabetic retinopathy (DR) remains a leading cause of vision impairment and blindness among individuals with diabetes, necessitating innovative approaches to screening and management. This editorial explores the transformative potential of artificial intelligence (AI) and machine learning (ML) in revolutionizing DR care. AI and ML technologies have demonstrated remarkable advancements in enhancing the accuracy, efficiency, and accessibility of DR screening, helping to overcome barriers to early detection. These technologies leverage vast datasets to identify patterns and predict disease progression with unprecedented precision, enabling clinicians to make more informed decisions. Furthermore, AI-driven solutions hold promise in personalizing management strategies for DR, incorporating predictive analytics to tailor interventions and optimize treatment pathways. By automating routine tasks, AI can reduce the burden on healthcare providers, allowing for a more focused allocation of resources towards complex patient care. This review aims to evaluate the current advancements and applications of AI and ML in DR screening, and to discuss the potential of these technologies in developing personalized management strategies, ultimately aiming to improve patient outcomes and reduce the global burden of DR. The integration of AI and ML in DR care represents a paradigm shift, offering a glimpse into the future of ophthalmic healthcare.
Keywords: diabetic retinopathy; artificial intelligence; machine learning; screening; management; predictive analytics; personalized medicine
Machine learning applications in healthcare clinical practice and research
Authors: Nikolaos-Achilleas Arkoudis, Stavros P Papadakos. World Journal of Clinical Cases (SCIE), 2025, Issue 1, pp. 16-21 (6 pages)
Machine learning (ML) is a type of artificial intelligence that assists computers in the acquisition of knowledge through data analysis, thus creating machines that can complete tasks otherwise requiring human intelligence. Among its various applications, it has proven groundbreaking in healthcare as well, both in clinical practice and research. In this editorial, we succinctly introduce ML applications and present a study featured in the latest issue of the World Journal of Clinical Cases. The authors of this study conducted an analysis using both multiple linear regression (MLR) and ML methods to investigate the significant factors that may impact the estimated glomerular filtration rate in healthy women with and without non-alcoholic fatty liver disease (NAFLD). Their results implicated age as the most important determining factor in both groups, followed by lactic dehydrogenase, uric acid, forced expiratory volume in one second, and albumin. In addition, for the NAFLD(−) group, the 5th and 6th most important impact factors were thyroid-stimulating hormone and systolic blood pressure, as compared to plasma calcium and body fat for the NAFLD(+) group. However, the study's distinctive contribution lies in its adoption of ML methodologies, showcasing their superiority over traditional statistical approaches (herein MLR), thereby highlighting the potential of ML to represent an invaluable advanced adjunct tool in clinical practice and research.
Keywords: machine learning; artificial intelligence; clinical practice; research; glomerular filtration rate; non-alcoholic fatty liver disease; medicine
Joint Estimation of SOH and RUL for Lithium-Ion Batteries Based on Improved Twin Support Vector Machine
Authors: Liyao Yang, Hongyan Ma, Yingda Zhang, Wei He. Energy Engineering (EI), 2025, Issue 1, pp. 243-264 (22 pages)
Accurately estimating the State of Health (SOH) and Remaining Useful Life (RUL) of lithium-ion batteries (LIBs) is crucial for the continuous and stable operation of battery management systems. However, due to the complex internal chemical systems of LIBs and the nonlinear degradation of their performance, direct measurement of SOH and RUL is challenging. To address these issues, the Twin Support Vector Machine (TWSVM) method is proposed to predict SOH and RUL. Initially, the constant-current charging time of the lithium battery is extracted as a health indicator (HI) and decomposed using Variational Modal Decomposition (VMD), and feature correlations are computed using Random Forest (RF) feature importance to maximize the extraction of critical factors influencing battery performance degradation. Furthermore, to enhance the global search capability of the Convolution Optimization Algorithm (COA), improvements are made using Good Point Set theory and the Differential Evolution method. The Improved Convolution Optimization Algorithm (ICOA) is employed to optimize the TWSVM parameters for constructing the SOH and RUL prediction models. Finally, the proposed models are validated using the NASA and CALCE lithium-ion battery datasets. Experimental results demonstrate that the proposed models achieve an RMSE not exceeding 0.007 and a MAPE not exceeding 0.0082 for SOH and RUL prediction, with a relative error in RUL prediction within the range of [−1.8%, 2%]. Compared to other models, the proposed model not only exhibits superior fitting capability but also demonstrates robust performance.
Keywords: state of health; remaining useful life; variational modal decomposition; random forest; twin support vector machine; convolutional optimization algorithm
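Extracting the constant-current charging time as a health indicator, as described above, amounts to measuring the elapsed time until the terminal voltage first reaches the CC/CV switch-over cutoff. The charge curve and the 4.2 V cutoff below are hypothetical stand-ins for real cycling data:

```python
def cc_charging_time(times, voltages, v_cutoff):
    """Elapsed time until the terminal voltage first reaches the CC/CV
    switch-over cutoff; used as the health indicator (HI)."""
    for t, v in zip(times, voltages):
        if v >= v_cutoff:
            return t - times[0]
    return None  # cutoff never reached in this record

# hypothetical charge curve: voltage rising during constant-current charging
times = [0, 60, 120, 180, 240, 300]            # seconds
volts = [3.60, 3.80, 3.95, 4.05, 4.15, 4.22]   # V
hi = cc_charging_time(times, volts, v_cutoff=4.2)
```

As a cell ages, this constant-current phase typically shortens, which is what makes the per-cycle HI series informative enough to decompose (e.g., with VMD) and feed to a regressor.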
Machine learning in solid organ transplantation:Charting the evolving landscape
Authors: Badi Rawashdeh, Haneen Al-abdallat, Emre Arpali, Beje Thomas, Ty B Dunn, Matthew Cooper. World Journal of Transplantation, 2025, Issue 1, pp. 165-177 (13 pages)
BACKGROUND: Machine learning (ML), a major branch of artificial intelligence, has not only demonstrated the potential to significantly improve numerous sectors of healthcare but has also made significant contributions to the field of solid organ transplantation. ML provides revolutionary opportunities in areas such as donor-recipient matching, post-transplant monitoring, and patient care by automatically analyzing large amounts of data, identifying patterns, and forecasting outcomes. AIM: To conduct a comprehensive bibliometric analysis of publications on the use of ML in transplantation to understand current research trends and their implications. METHODS: On July 18, a thorough search strategy was used with the Web of Science database. ML- and transplantation-related keywords were utilized. With the aid of the VOSviewer application, the identified articles were subjected to bibliometric variable analysis in order to determine publication counts, citation counts, contributing countries, and institutions, among other factors. RESULTS: Of the 529 articles that were first identified, 427 were deemed relevant for bibliometric analysis. A surge in publications was observed over the last four years, especially after 2018, signifying growing interest in this area. With 209 publications, the United States emerged as the top contributor. Notably, the Journal of Heart and Lung Transplantation and the American Journal of Transplantation emerged as the leading journals, publishing the highest number of relevant articles. Frequent keyword searches revealed that patient survival, mortality, outcomes, allocation, and risk assessment were significant themes of focus. CONCLUSION: The growing body of pertinent publications highlights ML's growing presence in the field of solid organ transplantation. This bibliometric analysis underscores the increasing importance of ML in transplant research and its exciting potential to change medical practices and enhance patient outcomes. Encouraging collaboration between significant contributors can potentially fast-track advancements in this interdisciplinary domain.
Keywords: machine learning; artificial intelligence; solid organ transplantation; bibliometric analysis
Machine learning-based models for prediction of in-hospital mortality in patients with dengue shock syndrome
Authors: Luan Thanh Vo, Thien Vu, Thach Ngoc Pham, Tung Huu Trinh, Thanh Tat Nguyen. World Journal of Methodology, 2025, Issue 3, pp. 89-99 (11 pages)
BACKGROUND: Severe dengue in children with critical complications has been attributed to high mortality rates, varying from approximately 1% to over 20%. To date, there is a lack of data on machine-learning-based algorithms for predicting the risk of in-hospital mortality in children with dengue shock syndrome (DSS). AIM: To develop machine-learning models to estimate the risk of death in hospitalized children with DSS. METHODS: This single-center retrospective study was conducted at tertiary Children's Hospital No. 2 in Viet Nam, between 2013 and 2022. The primary outcome was the in-hospital mortality rate in children with DSS admitted to the pediatric intensive care unit (PICU). Nine significant features were predetermined for further analysis using machine learning models. An oversampling method was used to enhance the model performance. Supervised models, including logistic regression, Naïve Bayes, Random Forest (RF), K-nearest neighbors, Decision Tree, and Extreme Gradient Boosting (XGBoost), were employed to develop predictive models. Shapley Additive Explanations (SHAP) were used to determine the degree of contribution of the features. RESULTS: In total, 1278 PICU-admitted children with complete data were included in the analysis. The median patient age was 8.1 years (interquartile range: 5.4-10.7). Thirty-nine patients (3%) died. The RF and XGBoost models demonstrated the highest performance. The Shapley Additive Explanations model revealed that the most important predictive features included younger age, female patients, presence of underlying diseases, severe transaminitis, severe bleeding, low platelet counts requiring platelet transfusion, elevated levels of international normalized ratio, blood lactate and serum creatinine, a large volume of resuscitation fluid, and a high vasoactive inotropic score (>30). CONCLUSION: We developed robust machine learning-based models to estimate the risk of death in hospitalized children with DSS. The study findings are applicable to the design of management schemes to enhance survival outcomes of patients with DSS.
Keywords: dengue shock syndrome; dengue mortality; machine learning; supervised models; logistic regression; random forest; K-nearest neighbors; support vector machine; Extreme Gradient Boosting; Shapley additive explanations
Genomics, phenomics, and machine learning in transforming plant research: Advancements and challenges
Authors: Sheikh Mansoor, Ekanayaka M.B.M. Karunathilake, Thai Thanh Tuan, Yong Suk Chung. Horticultural Plant Journal, 2025, Issue 2, pp. 486-503 (18 pages)
Advances in gene editing and natural genetic variability present significant opportunities to generate novel alleles and select natural sources of genetic variation for horticulture crop improvement. The genetic improvement of crops to enhance their resilience to abiotic stresses and new pests due to climate change is essential for future food security. The field of genomics has made significant strides over the past few decades, enabling us to sequence and analyze entire genomes. However, understanding the complex relationship between genes and their expression in phenotypes (the observable characteristics of an organism) requires a deeper understanding of phenomics. Phenomics seeks to link genetic information with biological processes and environmental factors to better understand complex traits and diseases. Recent breakthroughs in this field include the development of advanced imaging technologies, artificial intelligence algorithms, and large-scale data analysis techniques. These tools have enabled us to explore the relationships between genotype, phenotype, and environment in unprecedented detail. This review explores the importance of understanding the complex relationship between genes and their expression in phenotypes, the integration of genomics with efficient high-throughput plant phenotyping, and the potential of machine learning approaches for genomic and phenomic trait discovery.
Keywords: plant phenomics; traits; horticulture; breeding; crop improvement; machine learning
Analysing Effectiveness of Sentiments in Social Media Data Using Machine Learning Techniques
Authors: Thambusamy Velmurugan, Mohandas Archana, Ajith Singh Nongmaithem. Journal of Computer and Communications, 2025, Issue 1, pp. 136-151 (16 pages)
Every second, a large volume of useful data is created in social media about various kinds of online purchases and other forms of reviews. In particular, purchased-product review data is growing enormously in different database repositories every day. Most of the review data are useful to new customers for their further purchases, as well as to existing companies for viewing customer feedback about various products. Data Mining and Machine Learning techniques are commonly used to analyse such data to visualise and understand the potential use of items purchased online. Customers convey the quality of products through their sentiments about the items purchased from different online companies. In this research work, sentiments in headphone review data collected from online repositories are analysed. For the analysis of the headphone review data, Machine Learning techniques such as Support Vector Machines, Naive Bayes, Decision Trees, and Random Forest algorithms, as well as a Hybrid method, are applied to assess quality via the customers' sentiments. The accuracy and performance of the algorithms are also analysed based on three types of sentiment: positive, negative, and neutral.
Keywords: support vector machine; random forest algorithm; naive Bayes algorithm; machine learning techniques; decision tree algorithm
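One of the techniques applied above, Naive Bayes, can be shown end-to-end on a toy headphone-review corpus: multinomial NB with add-one (Laplace) smoothing. The four labeled reviews are invented for illustration, not drawn from the study's dataset.

```python
import math
from collections import Counter

def train_nb(docs):
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""
    vocab = {w for text, _ in docs for w in text.split()}
    counts, totals, priors = {}, {}, Counter()
    for text, label in docs:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(text.split())
    for label in counts:
        totals[label] = sum(counts[label].values())
    return vocab, counts, totals, priors, len(docs)

def classify(model, text):
    """Pick the label maximizing log prior + sum of smoothed log likelihoods."""
    vocab, counts, totals, priors, n = model
    best, best_lp = None, -math.inf
    for label in counts:
        lp = math.log(priors[label] / n)
        for w in text.split():
            lp += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

reviews = [("great sound quality", "pos"), ("love the bass", "pos"),
           ("poor sound quality", "neg"), ("broke fast", "neg")]
model = train_nb(reviews)
label = classify(model, "great bass")
```

With real review data the same pipeline would add tokenization, stop-word removal, and a held-out split for measuring accuracy per sentiment class.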
Real-time monitoring of disc cutter wear in tunnel boring machines:A sound and vibration sensor-based approach with machine learning technique
Authors: Mohammad Amir Akhlaghi, Raheb Bagherpour, Seyed Hadi Hoseinie. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 3, pp. 1700-1722 (23 pages)
Large portions of the tunnel boring machine (TBM) construction cost are attributed to disc cutter consumption, and assessing the disc cutter's wear level can help determine the optimal time to replace the disc cutter. Therefore, the need to monitor disc cutter wear in real time has emerged as a technical challenge for TBMs. In this study, real-time disc cutter wear monitoring is developed based on sound and vibration sensors. For this purpose, a microphone and an accelerometer were used to record the sound and vibration signals of cutting three different types of rocks with varying abrasivity on a laboratory scale. The relationship between disc cutter wear and the sound and vibration signals was determined by comparing the measurements of disc cutter wear with the signal plots for each sample. The features extracted from the signals showed that the sound and vibration signals are affected by the progression of disc wear during the rock-cutting process. The signal features obtained from the rock-cutting operation were utilized to verify the machine learning techniques. The results showed that the multilayer perceptron (MLP), random subspace-based decision tree (RS-DT), DT, and random forest (RF) methods could predict the wear level of the disc cutter with accuracies of 0.89, 0.951, 0.951, and 0.927, respectively. Based on the accuracy of the models and the confusion matrix, it was found that the RS-DT model gives the best estimate for predicting the level of disc wear. This research has developed a method that can potentially determine when to replace a tool and assess disc wear in real time.
Keywords: TBM; disc cutter; wear; sound; vibration; machine learning; real-time wear estimation
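Wear-sensitive features of the kind extracted from the sound and vibration signals can be computed simply. The RMS, peak, and crest factor below are standard vibration features chosen for illustration (the paper's exact feature set is not listed in this abstract), and the square-wave "accelerometer trace" is a stand-in for real sensor data:

```python
import math

def rms(signal):
    """Root-mean-square level, a standard wear-sensitive vibration feature."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def peak(signal):
    """Largest absolute amplitude in the window."""
    return max(abs(s) for s in signal)

def crest_factor(signal):
    """Peak-to-RMS ratio; shifts as impacts smear into broadband noise."""
    return peak(signal) / rms(signal)

vibration = [0.0, 1.0, 0.0, -1.0] * 8   # toy accelerometer trace
features = (rms(vibration), peak(vibration), crest_factor(vibration))
```

A classifier such as RS-DT is then trained on one feature vector per cutting window, with the measured wear level as the class label.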
Machine Learning and Simulation Techniques for Detecting Buoy Types from LiDAR Data
Authors: Christopher Adolphi, Masha Sosonkina. Journal of Intelligent Learning Systems and Applications, 2025, Issue 1, pp. 8-24 (17 pages)
Critical to the safe, efficient, and reliable operation of an autonomous maritime vessel is its ability to perceive the external environment through onboard sensors. For this research, data was collected from a LiDAR sensor installed on a 16-foot catamaran unmanned vessel. This sensor generated point clouds of the surrounding maritime environment, which were then labeled by hand for training a machine learning (ML) model to perform a semantic segmentation task on LiDAR scans. In particular, the developed semantic segmentation classifies each point-cloud point as belonging to a certain buoy type. This paper describes the developed Unity Game Engine (Unity) simulation to emulate the maritime environment perceived by LiDAR, with the goal of generating large (automatically labeled) simulation datasets and improving the ML model performance, since hand-labeled real-life LiDAR scan data may be scarce. The Unity simulation data combined with labeled real-life point cloud data was used for a PointNet-based neural network model, the architecture of which is presented in this paper. Fitting the PointNet-based model on the simulation data, followed by fine-tuning on the combined dataset, allowed for accurate semantic segmentation of point clouds on the real-world data. The ML model performance on several combinations of simulation and real-life data is explored. The resulting Intersection over Union (IoU) metric scores are quite high, ranging between 0.78 and 0.89, when validated on simulation and real-life data. The confusion matrix entry values indicate an accurate semantic segmentation of the buoy types.
Keywords: Maritime Autonomy, LiDAR, Unity Simulation, Machine Learning, PointNet
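The abstract reports per-class Intersection over Union (IoU) scores between 0.78 and 0.89. As a minimal sketch (not the authors' code), per-class IoU over point-wise segmentation labels can be computed like this; the toy label arrays are hypothetical:

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Intersection-over-Union per class for point-wise labels."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        ious.append(inter / union if union else float("nan"))
    return ious

# Toy example: 6 LiDAR points, 2 buoy classes
gt   = np.array([0, 0, 1, 1, 1, 0])
pred = np.array([0, 1, 1, 1, 0, 0])
print(per_class_iou(pred, gt, 2))  # class 0: 2/4 = 0.5, class 1: 2/4 = 0.5
```

A mean over the per-class values (ignoring NaN for absent classes) gives the single mIoU figure usually quoted for segmentation models.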
A Machine Learning-Based Observational Constraint Correction Method for Seasonal Precipitation Prediction
17
Authors: Bofei ZHANG, Haipeng YU, Zeyong HU, Ping YUE, Zunye TANG, Hongyu LUO, Guantian WANG, Shanling CHENG. Advances in Atmospheric Sciences, 2025, No. 1, pp. 36-52 (17 pages)
Seasonal precipitation has always been a key focus of climate prediction. As a dynamic-statistical combined method, the existing observational constraint correction establishes a regression relationship between numerical model outputs and historical observations, which can partly predict seasonal precipitation. However, solving a nonlinear problem through linear regression introduces significant bias. This study implements a nonlinear optimization of an existing observational constraint correction model using the Light Gradient Boosting Machine (LightGBM) machine learning algorithm, based on output from the Beijing National Climate Center Climate System Model (BCC-CSM) and station observations, to improve the prediction of summer precipitation in China. The model was trained using a rolling approach, and LightGBM outperformed Linear Regression (LR), Extreme Gradient Boosting (XGBoost), and Categorical Boosting (CatBoost). Using parameter tuning to optimize the machine learning model and predicting future summer precipitation from eight different predictors in BCC-CSM, the mean Anomaly Correlation Coefficient (ACC) in the 2019–22 summer precipitation predictions was 0.17, and the mean Prediction Score (PS) reached 74. The PS score improved by 7.87% and 6.63% compared with BCC-CSM and the linear observational constraint approach, respectively. The observational constraint correction prediction strategy with LightGBM significantly and stably improved the prediction of summer precipitation in China compared to the previous linear observational constraint solution, providing a reference for flood control and drought relief during the flood season (summer) in China.
Keywords: observational constraint, LightGBM, seasonal prediction, summer precipitation, machine learning
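The Anomaly Correlation Coefficient (ACC) quoted above scores how well forecast anomalies track observed anomalies relative to climatology. A minimal sketch (with hypothetical station values, not the study's data):

```python
import numpy as np

def anomaly_correlation(forecast, observed, climatology):
    """ACC: Pearson-style correlation of forecast and observed anomalies
    computed relative to a climatological mean state."""
    fa = forecast - climatology
    oa = observed - climatology
    return np.sum(fa * oa) / np.sqrt(np.sum(fa**2) * np.sum(oa**2))

# Hypothetical summer precipitation at four stations (mm)
clim = np.array([100.0, 120.0, 90.0, 110.0])   # climatological mean
obs  = np.array([110.0, 115.0, 95.0, 100.0])   # observed
fcst = np.array([105.0, 118.0, 93.0, 104.0])   # corrected model forecast
print(round(anomaly_correlation(fcst, obs, clim), 3))  # ≈ 0.993 here
```

An ACC near 1 means the forecast gets the spatial pattern of wet and dry anomalies right, even if amplitudes differ; the study's reported mean ACC of 0.17 reflects how hard multi-season precipitation prediction is.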
Machine learning and deep learning to improve prevention of anastomotic leak after rectal cancer surgery
18
Authors: Francesco Celotto, Quoc R Bao, Giulia Capelli, Gaya Spolverato, Andrew A Gumbs. World Journal of Gastrointestinal Surgery, 2025, No. 1, pp. 25-31 (7 pages)
Anastomotic leakage (AL) is a significant complication following rectal cancer surgery, adversely affecting both quality of life and oncological outcomes. Recent advancements in artificial intelligence (AI), particularly machine learning and deep learning, offer promising avenues for predicting and preventing AL. These technologies can analyze extensive clinical datasets to identify preoperative and perioperative risk factors such as malnutrition, body composition, and radiological features. AI-based models have demonstrated superior predictive power compared to traditional statistical methods, potentially guiding clinical decision-making and improving patient outcomes. Additionally, AI can provide surgeons with intraoperative feedback on blood supply and anatomical dissection planes, minimizing the risk of intraoperative complications and reducing the likelihood of AL development.
Keywords: Anastomotic leak, Rectal cancer, Surgery, Machine learning, Deep learning
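The risk-prediction models the review describes are, at their simplest, classifiers mapping preoperative factors to leak probability. The sketch below is a generic, hypothetical illustration on fully synthetic data (the feature names and coefficients are invented for the example, not drawn from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fully synthetic "patients" (hypothetical features, NOT clinical data):
# albumin (g/dL), anastomosis height (cm from anal verge), smoker (0/1).
n = 400
X = np.column_stack([
    rng.normal(4.0, 0.5, n),     # albumin
    rng.uniform(2.0, 15.0, n),   # anastomosis height
    rng.integers(0, 2, n),       # smoker
]).astype(float)
# Invented ground truth: low albumin, low anastomosis, smoking raise risk
logit = -1.5 * (X[:, 0] - 4.0) - 0.2 * (X[:, 1] - 8.0) + 1.0 * X[:, 2] - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Plain logistic regression fitted by gradient descent
Xs = (X - X.mean(0)) / X.std(0)            # standardize features
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xs @ w + b)))
    g = p - y                              # gradient of log-loss
    w -= 0.1 * Xs.T @ g / n
    b -= 0.1 * g.mean()

pred = (1 / (1 + np.exp(-(Xs @ w + b)))) > 0.5
print("training accuracy:", (pred == y).mean())
```

Real AL models would add many more variables (body composition, radiological features) and stronger learners, but the pipeline shape (features in, calibrated risk out) is the same.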
Early Machine Learning Models: Biases Shown and How It Can Be Corrected
19
Authors: Nandini Guduru. Journal of Intelligent Learning Systems and Applications, 2025, No. 1, pp. 1-7 (7 pages)
The purpose of this research paper is to explore how early machine learning models have shown bias in their results where none should appear. A prime example is an ML model that favors male applicants over female applicants: while the model is supposed to weigh other aspects of the data, it skews the results one way or the other. This paper explores how such bias arises and how it can be fixed. In this research, I examine case studies of real-world examples of these biases, including an Amazon hiring application that favored male applicants and a loan application that favored Western applicants. To trace the source of the bias, I constructed a machine learning model on a dataset found on Kaggle and analyzed its results. The results clarify the reason for the bias: the way a model is trained shapes the way its results play out. If a model is trained on far more male-applicant data than female-applicant data, it will favor male applicants, and when given new data it is likely to accept male applications over otherwise equivalent female ones. Later in the paper, I examine more deeply how AI applications detect trends in order to classify inputs correctly. There is, however, a fine line between classification and bias, and ensuring bias is properly corrected and tested is important in machine learning today.
Keywords: Machine Learning, Artificial Intelligence, Bias, Generative AI, Training Data, Testing Data
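The mechanism the abstract describes, a model reproducing historical bias baked into its training labels, can be shown on a few lines of synthetic data. This is a hypothetical illustration (invented thresholds and a deliberately simple nearest-centroid classifier), not the paper's Kaggle experiment:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hiring data (hypothetical): qualification score plus a
# gender flag (0 = male, 1 = female). The historical labels are biased:
# male applicants were accepted at a lower score threshold.
n = 1000
gender = rng.integers(0, 2, n)
score = rng.uniform(0, 100, n)
accepted = np.where(gender == 0, score > 50, score > 70).astype(float)

# Nearest-centroid classifier trained on the biased labels
X = np.column_stack([score / 100.0, gender.astype(float)])
c_pos = X[accepted == 1].mean(0)   # centroid of accepted applicants
c_neg = X[accepted == 0].mean(0)   # centroid of rejected applicants

def predict(x):
    x = np.asarray(x, float)
    return float(np.linalg.norm(x - c_pos) < np.linalg.norm(x - c_neg))

# Two equally qualified applicants (score 65); only the gender flag differs.
male, female = [0.65, 0.0], [0.65, 1.0]
print(predict(male), predict(female))  # the model treats them differently
```

Because the gender flag correlates with the biased historical labels, the model learns it as a predictive feature and rejects the equally qualified female applicant, exactly the failure mode the paper studies. Mitigations include rebalancing the training data or dropping the protected attribute and its proxies.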
Evaluations of Machine Learning Algorithms Using Simulation Study
20
Authors: Nasrin Khatun. Open Journal of Statistics, 2025, No. 1, pp. 41-52 (12 pages)
The first cases of COVID-19 in Bangladesh were reported in March 2020, and case counts increased rapidly each day. The Bangladesh government took many steps to reduce the outbreak, such as restrictions on masks, gatherings, local movement, and international movement. The data was collected from the World Health Organization. In this research, different variables were used for analysis, for instance: new cases, new deaths, masks, schools, business, gatherings, domestic movement, international travel, new tests, positive rate, tests per case, new vaccinations smoothed, new vaccines, total vaccinations, and stringency index. Machine learning algorithms were used to build and evaluate predictive models: linear regression, K-nearest neighbours, decision trees, random forests, and support vector machines. Accuracy and Mean Square Error (MSE) were used to test the models, and hyperparameter tuning was applied to find optimal parameter values. The analysis showed that linear regression performed best overall among the listed algorithms, with the highest testing accuracy and the lowest error before and after hyper-tuning: accuracies of 0.98 and 0.97 and MSEs of 4.79 and 4.04, respectively.
Keywords: Linear Regression, K-Nearest Neighbours, Decision Tree, Random Forest, Support Vector Machine, Hyper-Tuning
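The core of such a comparison is fitting each algorithm on a training split and scoring MSE on a held-out test split. A minimal numpy-only sketch with two of the listed algorithms, on synthetic stand-in data (the feature meanings and coefficients are hypothetical, not the WHO dataset):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the COVID-19 indicators (hypothetical):
# three standardized predictors and a mostly linear target.
n = 300
X = rng.normal(size=(n, 3))                      # e.g. tests, stringency, movement
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

# Simple train/test split
X_tr, X_te, y_tr, y_te = X[:240], X[240:], y[:240], y[240:]

# Linear regression via least squares (intercept column prepended)
A = np.column_stack([np.ones(len(X_tr)), X_tr])
coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
lr_pred = np.column_stack([np.ones(len(X_te)), X_te]) @ coef

# k-nearest-neighbours regression, k = 5
def knn_predict(x, k=5):
    d = np.linalg.norm(X_tr - x, axis=1)
    return y_tr[np.argsort(d)[:k]].mean()

knn_pred = np.array([knn_predict(x) for x in X_te])

mse = lambda p: np.mean((p - y_te) ** 2)
print("LR MSE:", mse(lr_pred), "KNN MSE:", mse(knn_pred))
```

On this synthetic linear data, linear regression beats KNN, which echoes the paper's finding; the full study would add decision trees, random forests, and SVMs plus a hyperparameter grid over each.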