Journal Articles
18 articles found
1. Automated machine learning for rainfall-induced landslide hazard mapping in Luhe County of Guangdong Province, China
Authors: Tao Li, Chen-chen Xie, Chong Xu, Wen-wen Qi, Yuan-dong Huang, Lei Li. China Geology (CAS, CSCD), 2024(2): 315-329, 15 pages.
Landslide hazard mapping is essential for regional landslide hazard management. The main objective of this study is to construct a rainfall-induced landslide hazard map of Luhe County, China based on an automated machine learning framework (AutoGluon). A total of 2241 landslides were identified from satellite images before and after the rainfall event, and 10 impact factors, including elevation, slope, aspect, normalized difference vegetation index (NDVI), topographic wetness index (TWI), lithology, land cover, distance to roads, distance to rivers, and rainfall, were selected as indicators. The WeightedEnsemble model, an ensemble of 13 basic machine learning models weighted together, was used to output the landslide hazard assessment results. The results indicate that landslides mainly occurred in the central part of the study area, especially in Hetian and Shanghu. In total, 102.44 s were spent training all the models, and the ensemble model WeightedEnsemble achieved an Area Under the Curve (AUC) value of 92.36% on the test set. In addition, 14.95% of the study area was determined to be at very high hazard, with a landslide density of 12.02 per square kilometer. This study serves as a significant reference for the prevention and mitigation of geological hazards and land use planning in Luhe County.
Keywords: landslide hazard; heavy rainfall; hazard mapping; hazard assessment; automated machine learning; shallow landslide; visual interpretation; Luhe County; geological hazards survey engineering
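The workflow above can be reproduced at a high level with AutoGluon's tabular API. Below is a minimal sketch; the file paths, column names, and time limit are illustrative assumptions, not details from the paper.

```python
# Minimal AutoGluon workflow sketch (hypothetical data schema and paths).
from autogluon.tabular import TabularDataset, TabularPredictor

# Hypothetical CSVs: one row per grid cell, the 10 impact factors as columns,
# plus a binary "landslide" label from the visual interpretation step.
train = TabularDataset("luhe_train.csv")  # placeholder path
test = TabularDataset("luhe_test.csv")    # placeholder path

# AutoGluon trains a pool of base models and stacks them into a
# WeightedEnsemble automatically.
predictor = TabularPredictor(label="landslide", eval_metric="roc_auc").fit(
    train, time_limit=120  # seconds; the paper reports ~102 s of total training
)

# Per-cell landslide probabilities can then be binned into hazard classes.
proba = predictor.predict_proba(test)
print(predictor.leaderboard(test))
```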
2. Automated Machine Learning Algorithm Using Recurrent Neural Network to Perform Long-Term Time Series Forecasting
Authors: Ying Su, Morgan C. Wang, Shuai Liu. Computers, Materials & Continua (SCIE, EI), 2024(3): 3529-3549, 21 pages.
Long-term time series forecasting is a crucial research domain within automated machine learning (AutoML). At present, forecasting, whether rooted in machine learning or statistical learning, typically relies on expert input and substantial manual involvement. This manual effort spans model development, feature engineering, hyper-parameter tuning, and the intricate construction of time series models. The complexity of these tasks renders complete automation unfeasible, as they inherently demand human intervention at multiple junctures. To surmount these challenges, this article proposes leveraging Long Short-Term Memory (LSTM), a variant of Recurrent Neural Networks that harnesses memory cells and gating mechanisms to facilitate long-term time series prediction. The forecasting accuracy of specific neural network and traditional models can degrade significantly on long-term time-series tasks; our research demonstrates that the proposed approach outperforms the traditional Autoregressive Integrated Moving Average (ARIMA) method in forecasting long-term univariate time series. ARIMA is a high-quality and competitive model in time series prediction, yet it requires significant preprocessing effort. Using multiple accuracy metrics, we evaluated both ARIMA and the proposed method on simulated and real time-series data over both short and long horizons. Furthermore, our findings indicate its superiority over alternative network architectures, including Fully Connected Neural Networks, Convolutional Neural Networks, and Non-pooling Convolutional Neural Networks. Our AutoML approach enables non-professionals to attain highly accurate and effective time series forecasting, and can be widely applied to various domains, particularly business and finance.
Keywords: automated machine learning; autoregressive integrated moving average; neural networks; time series analysis
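As a rough illustration of the ARIMA-versus-LSTM comparison described in the abstract, here is a self-contained sketch on a simulated trend-plus-noise series; the window length, network size, and ARIMA order are arbitrary choices, not the authors' settings.

```python
# Sketch: compare ARIMA and a small LSTM on a simulated univariate series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300)) + 0.05 * np.arange(300)  # random walk + trend
train, test = y[:250], y[250:]

# ARIMA baseline: fit once, forecast the whole remaining horizon.
arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=len(test))

# LSTM: learn next-step prediction from sliding windows (TensorFlow/Keras assumed).
from tensorflow import keras

w = 20
X = np.stack([train[i:i + w] for i in range(len(train) - w)])[..., None]
t = train[w:]
model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(w, 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, t, epochs=20, verbose=0)

# Recursive multi-step forecast: feed each prediction back into the window.
hist = list(train[-w:])
lstm_fc = []
for _ in range(len(test)):
    nxt = float(model.predict(np.array(hist[-w:])[None, :, None], verbose=0)[0, 0])
    lstm_fc.append(nxt)
    hist.append(nxt)

print("ARIMA MAE:", np.mean(np.abs(arima_fc - test)))
print("LSTM  MAE:", np.mean(np.abs(np.array(lstm_fc) - test)))
```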
3. AID4I: An Intrusion Detection Framework for Industrial Internet of Things Using Automated Machine Learning [Cited: 1]
Authors: Anil Sezgin, Aytug Boyacı. Computers, Materials & Continua (SCIE, EI), 2023(8): 2121-2143, 23 pages.
By identifying and responding to any malicious behavior that could endanger the system, the Intrusion Detection System (IDS) is crucial for preserving the security of the Industrial Internet of Things (IIoT) network. The benefit of anomaly-based IDS is that it can recognize zero-day attacks, since it does not rely on a signature database to identify abnormal activity. To improve control over datasets and the process, this study proposes an automated machine learning (AutoML) technique to automate the machine learning processes for IDS. The proposed architecture, AID4I, applies AutoML methods to intrusion detection: through automation of preprocessing, feature selection, model selection, and hyperparameter tuning, the objective is to identify an appropriate machine learning model for intrusion detection. Experimental studies demonstrate that the AID4I framework successfully proposes a suitable model. Automating the machine learning processes in the IDS enhances its capacity to identify and stop threatening activities, helping ensure the integrity, security, and confidentiality of data transmitted across the IIoT network. In the preprocessing module, three distinct imputation methods handle missing data, ensuring the robustness of the intrusion detection system in the presence of incomplete information. The feature selection module adopts a hybrid approach that combines Shapley values and a genetic algorithm. The parameter optimization module encompasses 14 classification methods, allowing thorough exploration and optimization of the parameters associated with each algorithm. By carefully tuning these parameters, the framework enhances its adaptability and accuracy in identifying potential intrusions. Experimental results demonstrate that AID4I achieves high accuracy in detecting network intrusions, improving on traditional intrusion detection methods by up to 14.39% on public datasets while reducing the elapsed time for training and testing. AID4I thus offers a comprehensive intrusion-detection solution built on recent advances in automated machine learning.
Keywords: automated machine learning; intrusion detection system; industrial internet of things; parameter optimization
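The feature-selection module combines Shapley values with a genetic algorithm. The sketch below illustrates only the Shapley half, ranking features with the shap library (an assumed tool choice) on a synthetic stand-in for IIoT traffic data; the genetic-algorithm subset search and the real datasets are omitted.

```python
# Sketch: Shapley-value feature ranking, one half of the hybrid
# Shapley + genetic-algorithm selection described above.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for IIoT traffic features and attack labels.
X, y = make_classification(n_samples=500, n_features=12, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
sv = shap.TreeExplainer(clf).shap_values(X)
if isinstance(sv, list):   # older shap versions: one array per class
    sv = sv[1]
elif sv.ndim == 3:         # newer shap versions: (samples, features, classes)
    sv = sv[..., 1]

importance = np.abs(sv).mean(axis=0)  # mean |SHAP| per feature
ranked = np.argsort(importance)[::-1]
print("features ranked by Shapley importance:", ranked)
# A genetic algorithm would then search subsets of the top-ranked features.
```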
4. Automated Machine Learning for Epileptic Seizure Detection Based on EEG Signals
Authors: Jian Liu, Yipeng Du, Xiang Wang, Wuguang Yue, Jim Feng. Computers, Materials & Continua (SCIE, EI), 2022(10): 1995-2011, 17 pages.
Epilepsy is a common neurological disease that severely affects the daily life of patients. Automatic detection and diagnosis of epilepsy based on electroencephalogram (EEG) signals is of great significance in helping patients with epilepsy return to normal life. With the development of deep learning technology and the growth in EEG data, the performance of deep learning based automatic detection algorithms for epilepsy EEG has gradually surpassed traditional hand-crafted approaches. However, neural architecture design for epilepsy EEG analysis is time-consuming and laborious, and the designed structure is difficult to adapt to changing EEG collection environments, which limits the application of epilepsy EEG automatic detection systems. In this paper, we explore the possibility of Automated Machine Learning (AutoML) playing a role in epilepsy EEG detection. We apply the neural architecture search (NAS) algorithm in the AutoKeras platform to design the model for epilepsy EEG analysis, and we utilize feature interpretability methods to ensure the reliability of the searched model. The experimental results show that the model obtained through NAS outperforms the baseline model, improving classification accuracy, F1-score, and Cohen's kappa coefficient by 7.68%, 7.82%, and 9.60%, respectively. Furthermore, the NAS-based model is capable of extracting EEG features related to seizures for classification.
Keywords: deep learning; automated machine learning; EEG; seizure detection
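A minimal AutoKeras NAS run of the kind the abstract describes might look like the following; the input shape and labels are random placeholders rather than real EEG recordings (the 178-sample window length echoes a common public epilepsy dataset and is an assumption), and the trial budget is kept tiny for illustration.

```python
# Sketch: neural architecture search with AutoKeras on placeholder EEG windows.
import numpy as np
import autokeras as ak

# Placeholder data: 1000 EEG windows of 178 samples each, binary seizure label.
X = np.random.randn(1000, 178).astype("float32")
y = np.random.randint(0, 2, size=1000)

# AutoKeras searches over architectures and hyperparameters for max_trials models.
clf = ak.StructuredDataClassifier(max_trials=3, overwrite=True)
clf.fit(X, y, epochs=5)

best = clf.export_model()  # the best Keras model found by the search
best.summary()
```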
5. Enhancing Classification Algorithm Recommendation in Automated Machine Learning: A Meta-Learning Approach Using Multivariate Sparse Group Lasso
Authors: Irfan Khan, Xianchao Zhang, Ramesh Kumar Ayyasamy, Saadat M. Alhashmi, Azizur Rahim. Computer Modeling in Engineering & Sciences, 2025(2): 1611-1636, 26 pages.
The rapid growth of machine learning (ML) across fields has intensified the challenge of selecting the right algorithm for specific tasks, known as the Algorithm Selection Problem (ASP). Traditional trial-and-error methods have become impractical due to their resource demands. Automated Machine Learning (AutoML) systems automate this process, but often neglect the group structures and sparsity in meta-features, leading to inefficiencies in algorithm recommendations for classification tasks. This paper proposes a meta-learning approach using Multivariate Sparse Group Lasso (MSGL) to address these limitations. Our method models both within-group and across-group sparsity among meta-features to manage high-dimensional data and reduce multicollinearity across eight meta-feature groups. The Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) with adaptive restart efficiently solves the non-smooth optimization problem. Empirical validation on 145 classification datasets with 17 classification algorithms shows that our meta-learning method outperforms four state-of-the-art approaches, achieving 77.18% classification accuracy, 86.07% recommendation accuracy, and 88.83% normalized discounted cumulative gain.
Keywords: meta-learning; machine learning; automated machine learning; classification; meta-features
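FISTA minimizes a smooth loss plus a non-smooth penalty by alternating a gradient step, a proximal (shrinkage) step, and a momentum extrapolation. The sketch below shows FISTA for a plain lasso problem in NumPy as a simplified stand-in; the group-structured MSGL penalty and the adaptive-restart rule from the paper are not reproduced.

```python
# Sketch: FISTA for lasso  min_w 0.5*||Xw - y||^2 + lam*||w||_1,
# a simplified stand-in for the multivariate sparse group lasso solver.
import numpy as np

def fista_lasso(X, y, lam, iters=200):
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the gradient (sigma_max^2)
    w = np.zeros(d)
    z = np.zeros(d)
    t = 1.0
    for _ in range(iters):
        grad = X.T @ (X @ z - y)
        step = z - grad / L
        # Proximal step: soft-thresholding at lam/L.
        w_new = np.sign(step) * np.maximum(np.abs(step) - lam / L, 0.0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = w_new + ((t - 1) / t_new) * (w_new - w)  # momentum extrapolation
        w, t = w_new, t_new
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]          # sparse ground truth
y = X @ w_true + 0.1 * rng.normal(size=100)
print(np.round(fista_lasso(X, y, lam=5.0), 2))
```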
6. Groundwater contaminant source identification considering unknown boundary condition based on an automated machine learning surrogate
Authors: Yaning Xu, Wenxi Lu, Zidong Pan, Chengming Luo, Yukun Bai, Shuwei Qiu. Geoscience Frontiers (SCIE, CAS, CSCD), 2024(1): 402-416, 15 pages.
Groundwater contamination source identification (GCSI) is a prerequisite for contamination risk evaluation and efficient groundwater contamination remediation programs. In previous GCSI studies, the boundary condition is generally set as a known variable. However, in many practical cases, the boundary condition is complicated and cannot be estimated accurately in advance; treating it as known may seriously deviate from the actual situation and lead to distorted identification results. Moreover, the results of GCSI are affected by multiple factors, including contaminant source information, model parameters, and the boundary condition, so if the boundary condition is not estimated accurately, the other factors will also be estimated inaccurately. This study focuses on the unknown boundary condition and proposes to identify three types of unknown variables jointly: contaminant source information, model parameters, and the boundary condition. When the simulation-optimization (S-O) method is applied to GCSI, the huge computational load is usually reduced by building surrogate models. Building surrogate models, however, requires selecting the models and optimizing the hyperparameters to make them powerful, which can be a lengthy process. The automated machine learning (AutoML) method was therefore used to build the surrogate model; it automates model selection and hyperparameter optimization, largely reducing human operations and saving time. The accuracy of the AutoML surrogate model is compared with surrogate models based on eXtreme Gradient Boosting (XGBoost), random forest (RF), extra trees regressor (ETR), and elastic net (EN), which are automatically selected in AutoML engineering. The results show that the surrogate model constructed by the AutoML method has the best accuracy of the five methods. This study provides reliable and strong support for GCSI.
Keywords: groundwater contamination source; boundary condition; automated machine learning; surrogate model
7. AutoRhythmAI: A Hybrid Machine and Deep Learning Approach for Automated Diagnosis of Arrhythmias
Authors: S. Jayanthi, S. Prasanna Devi. Computers, Materials & Continua (SCIE, EI), 2024(2): 2137-2158, 22 pages.
In healthcare, the persistent challenge of arrhythmias, a leading cause of global mortality, has sparked extensive research into automating detection using machine learning (ML) algorithms. However, traditional ML and AutoML approaches have revealed limitations, notably regarding feature generalization and automation efficiency. This research gap has motivated the development of AutoRhythmAI, a solution that integrates machine and deep learning for the diagnosis of arrhythmias. Our approach encompasses two distinct pipelines tailored for binary-class and multi-class arrhythmia detection, effectively bridging the gap between data preprocessing and model selection. To validate our system, we rigorously tested AutoRhythmAI on a multimodal dataset, surpassing the accuracy achieved using a single dataset and underscoring the robustness of our methodology. In the first pipeline, we employ signal filtering and ML algorithms for preprocessing, followed by data balancing and a training split. The second pipeline is dedicated to feature extraction and classification using deep learning models. Notably, we introduce the 'RRI-convoluted transformer model' as a novel addition for binary-class arrhythmias. An ensemble-based approach then amalgamates all models, considering their respective weights, resulting in an optimal model pipeline. In our study, the VGGRes model achieved strong results in multi-class arrhythmia detection, with an accuracy of 97.39% and reported precision of 82.13%, recall of 31.91%, and F1-score of 82.61%. In the binary-class task, the proposed model achieved an outstanding accuracy of 96.60%. These results highlight the effectiveness of our approach in improving arrhythmia detection.
Keywords: automated machine learning; neural networks; deep learning; arrhythmias
8. AIPerf: Automated Machine Learning as an AI-HPC Benchmark [Cited: 1]
Authors: Zhixiang Ren, Yongheng Liu, Tianhui Shi, Lei Xie, Yue Zhou, Jidong Zhai, Youhui Zhang, Yunquan Zhang, Wenguang Chen. Big Data Mining and Analytics (EI), 2021(3): 208-220, 13 pages.
The plethora of complex Artificial Intelligence (AI) algorithms and available High-Performance Computing (HPC) power stimulates the expeditious development of AI components with heterogeneous designs. Consequently, the need for cross-stack performance benchmarking of AI-HPC systems has rapidly emerged. In particular, the de facto HPC benchmark, LINPACK, cannot reflect AI computing power and input/output performance without a representative workload, while current popular AI benchmarks, such as MLPerf, have a fixed problem size and therefore limited scalability. To address these issues, we propose an end-to-end benchmark suite utilizing automated machine learning, which not only represents real AI scenarios but is also auto-adaptively scalable to various scales of machines. We implement the algorithms in a highly parallel and flexible way to ensure efficiency and optimization potential on diverse systems with customizable configurations. We utilize Operations Per Second (OPS), measured in an analytical and systematic approach, as the major metric to quantify AI performance. We performed evaluations on various systems to ensure the benchmark's stability and scalability, from 4 nodes with 32 NVIDIA Tesla T4 GPUs (56.1 Tera-OPS measured) up to 512 nodes with 4096 Huawei Ascend 910 accelerators (194.53 Peta-OPS measured), and the results show near-linear weak scalability. With a flexible workload and a single metric, AIPerf can easily scale on and rank AI-HPC systems, providing a powerful benchmark suite for the coming supercomputing era.
Keywords: High-Performance Computing (HPC); Artificial Intelligence (AI); automated machine learning
9. Automated deep learning system for power line inspection image analysis and processing: architecture and design issues [Cited: 2]
Authors: Daoxing Li, Xiaohui Wang, Jie Zhang, Zhixiang Ji. Global Energy Interconnection (EI, CSCD), 2023(5): 614-633, 20 pages.
The continuous growth in the scale of unmanned aerial vehicle (UAV) applications in transmission line inspection has resulted in a corresponding increase in the demand for UAV inspection image processing. Owing to its excellent performance in computer vision, deep learning has been applied to UAV inspection image processing tasks such as power line identification and insulator defect detection. Despite their excellent performance, electric power UAV inspection image processing models based on deep learning face several problems, such as a small application scope, the need for constant retraining and optimization, and high R&D monetary and time costs, owing to the black-box and scene data-driven characteristics of deep learning. In this study, an automated deep learning system for electric power UAV inspection image analysis and processing is proposed as a solution to these problems. The system design rests on three critical design principles: generalizability, extensibility, and automation. Pre-trained models, fine-tuning (downstream task adaptation), and automated machine learning, which are closely related to these design principles, are reviewed. In addition, an automated deep learning system architecture for electric power UAV inspection image analysis and processing is presented. A prototype system was constructed, and experiments were conducted on two inspection tasks: insulator self-detonation and bird nest recognition. The models constructed using the prototype system achieved 91.36% and 86.13% mAP on insulator self-detonation and bird nest recognition, respectively, demonstrating that the system design concept is reasonable and the system architecture feasible.
Keywords: transmission line inspection; deep learning; automated machine learning; image analysis and processing
10. Application Research of Continuous Ambulatory Peritoneal Dialysis and Automated Peritoneal Dialysis Machine in Patients with End-Stage Renal Disease Undergoing Peritoneal Dialysis
Authors: Huihui Mu, Jinjin Wang. Journal of Clinical and Nursing Research, 2025(3): 9-14, 6 pages.
This study analyzed the therapeutic effects of continuous ambulatory peritoneal dialysis (CAPD) and automated peritoneal dialysis (APD) on patients with end-stage renal disease. Fifty patients admitted between January 2024 and December 2024 were randomly assigned to two groups, with the observation group receiving APD and the reference group receiving CAPD. Renal function indicators, nutritional indicators, mineral metabolism, urine volume, and ultrafiltration volume changes were compared between the two groups. After treatment, the observation group showed lower renal function indicators, higher nutritional indicators, and better mineral metabolism levels than the reference group (P<0.05). While there was no significant difference in urine volume between the two groups (P>0.05), the observation group demonstrated superior ultrafiltration volume (P<0.05). These findings suggest that APD offers better clinical outcomes than CAPD by improving renal function, nutritional status, mineral metabolism regulation, and ultrafiltration efficiency in patients with end-stage renal disease.
Keywords: continuous ambulatory peritoneal dialysis; automated peritoneal dialysis machine; end-stage renal disease; peritoneal dialysis
11. Interpretable machine learning analysis and automated modeling to simulate fluid-particle flows [Cited: 1]
Authors: Bo Ouyang, Litao Zhu, Zhenghong Luo. Particuology (SCIE, EI, CAS, CSCD), 2023(9): 42-52, 11 pages.
The present study extracts human-understandable insights from machine learning (ML)-based mesoscale closures in fluid-particle flows via several novel data-driven analysis approaches, i.e., the maximal information coefficient (MIC), interpretable ML, and automated ML. It was previously shown that the solid volume fraction has the greatest effect on the drag force. The present study aims to quantitatively investigate the influence of flow properties on the mesoscale drag correction (H_d). The MIC results show strong correlations between the features, i.e., the slip velocity (u*_sy) and the particle volume fraction (ε_s), and the label H_d. The interpretable ML analysis confirms this conclusion and quantifies the contributions of u*_sy, ε_s, and the gas pressure gradient to the model as 71.9%, 27.2%, and 0.9%, respectively. Automated ML, which removes the need to select the model structure and hyperparameters, is used for modeling and improves the prediction accuracy over our previous model (Zhu et al., 2020; Ouyang, Zhu, Su, & Luo, 2021).
Keywords: filtered two-fluid model; fluid-particle flow; mesoscale closure; interpretable machine learning; automated machine learning; maximal information coefficient
12. An AutoML based trajectory optimization method for long-distance spacecraft pursuit-evasion game
Authors: YANG Fuyunxiang, YANG Leping, ZHU Yanwei. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2023(3): 754-765, 12 pages.
Current successes in the artificial intelligence domain have revitalized interest in the spacecraft pursuit-evasion game, an interception problem with a non-cooperative maneuvering target. This paper presents an automated machine learning (AutoML) based method to generate optimal trajectories in long-distance scenarios. Compared with conventional deep neural network (DNN) methods, the proposed method dramatically reduces the reliance on manual intervention and machine learning expertise. Firstly, based on differential game theory and a costate normalization technique, the trajectory optimization problem is formulated under the assumption of continuous thrust. Secondly, an AutoML technique based on the sequential model-based optimization (SMBO) framework is introduced to automate DNN design in the deep learning process: if a recommended DNN architecture exists, the tree-structured Parzen estimator (TPE) is used; otherwise, efficient neural architecture search (NAS) with network morphism is used. A novel trajectory optimization method with high computational efficiency is thus achieved. Finally, numerical results demonstrate the feasibility and efficiency of the proposed method.
Keywords: pursuit-evasion; differential game; trajectory optimization; automated machine learning (AutoML)
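The SMBO loop with a tree-structured Parzen estimator can be sketched with the hyperopt library (an assumed tool choice, not necessarily the authors'); the objective below is a toy stand-in for training a DNN on trajectory data and returning its validation loss.

```python
# Sketch: SMBO with the tree-structured Parzen estimator (TPE) via hyperopt.
from hyperopt import fmin, tpe, hp, Trials

space = {
    "layers": hp.quniform("layers", 2, 8, 1),    # network depth
    "width": hp.quniform("width", 16, 256, 16),  # neurons per layer
    "lr": hp.loguniform("lr", -10, -2),          # learning rate
}

def objective(p):
    # Placeholder: in the paper this would train a DNN on optimal-trajectory
    # data and return its validation loss; here, a synthetic bowl-shaped score.
    return (p["layers"] - 4) ** 2 + (p["width"] - 128) ** 2 / 1e3 + p["lr"]

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=50, trials=trials)
print("best hyperparameters:", best)
```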
13. End-to-end data-driven modeling framework for automated and trustworthy short-term building energy load forecasting
Authors: Chaobo Zhang, Jie Lu, Jiahua Huang, Yang Zhao. Building Simulation (SCIE, EI, CSCD), 2024(8): 1419-1437, 19 pages.
Conventional automated machine learning (AutoML) technologies fall short in preprocessing low-quality raw data and adapting to varying indoor and outdoor environments, leading to reduced accuracy when forecasting short-term building energy loads. Moreover, their predictions are not transparent because of their black-box nature. Hence, the building field currently lacks an AutoML framework capable of data quality enhancement, environment self-adaptation, and model interpretation. To address this research gap, an improved AutoML-based end-to-end data-driven modeling framework is proposed. Bayesian optimization is applied by this framework to find an optimal data preprocessing process for quality improvement of raw data, bridging the gap where conventional AutoML technologies cannot automatically handle missing data and outliers. A sliding window-based model retraining strategy is utilized to achieve environment self-adaptation, contributing to the accuracy of AutoML technologies. Moreover, an approach based on local interpretable model-agnostic explanations is developed to interpret the predictions made by the improved framework, overcoming the poor interpretability of conventional AutoML technologies. The performance of the improved framework in forecasting one-hour-ahead cooling loads is evaluated using two years of operational data from a real building. The accuracy of the improved framework increases by 4.24%–8.79% compared with four conventional frameworks, for buildings with high-quality as well as low-quality operational data. Furthermore, the developed model interpretation approach is shown to effectively explain the predictions of the improved framework. The improved framework offers a novel perspective on creating accurate and reliable AutoML frameworks tailored to building energy load prediction and similar tasks.
Keywords: building energy load forecasting; end-to-end data-driven modeling; automated machine learning; Bayesian optimization; model retraining; model interpretation
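The sliding window-based retraining strategy can be pictured as refitting the forecaster on only the most recent operational data at a fixed cadence. A schematic sketch follows; the window length, retraining interval, features, and model choice are illustrative assumptions, not the paper's settings.

```python
# Sketch: sliding-window retraining for one-step-ahead load forecasting.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = 24 * 120                              # four months of hourly records
X = rng.normal(size=(hours, 5))               # stand-in weather/occupancy features
y = 3 * X[:, 0] + np.sin(np.arange(hours) / 24) + rng.normal(scale=0.2, size=hours)

window = 24 * 30        # retrain on the most recent 30 days only
retrain_every = 24 * 7  # refresh the model once per week
preds = []
model = None

for t in range(window, hours):
    if (t - window) % retrain_every == 0:
        # Refit on the trailing window so the model tracks the recent regime.
        model = GradientBoostingRegressor().fit(X[t - window:t], y[t - window:t])
    preds.append(model.predict(X[t:t + 1])[0])  # forecast for hour t

mae = np.mean(np.abs(np.array(preds) - y[window:]))
print(f"sliding-window MAE: {mae:.3f}")
```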
14. Quant 4.0: engineering quantitative investment with automated, explainable, and knowledge-driven artificial intelligence
Authors: Jian GUO, Saizhuo WANG, Lionel M. NI, Heung-Yeung SHUM. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2024(11): 1421-1445, 25 pages.
Quantitative investment (abbreviated as "quant" in this paper) is an interdisciplinary field combining financial engineering, computer science, mathematics, statistics, etc. Quant has become one of the mainstream investment methodologies over the past decades and has experienced three generations: quant 1.0, trading by mathematical modeling to discover mis-priced assets in markets; quant 2.0, shifting the quant research pipeline from small "strategy workshops" to large "alpha factories"; and quant 3.0, applying deep learning techniques to discover complex nonlinear pricing rules. Despite its advantage in prediction, deep learning relies on extremely large data volumes and labor-intensive tuning of "black-box" neural network models. To address these limitations, this paper introduces quant 4.0 and provides an engineering perspective for next-generation quant. Quant 4.0 has three key differentiating components. First, automated artificial intelligence (AI) changes the quant pipeline from traditional hand-crafted modeling to state-of-the-art automated modeling, employing the philosophy of "algorithm produces algorithm, model builds model, and eventually AI creates AI." Second, explainable AI develops new techniques to better understand and interpret investment decisions made by machine learning black boxes, and explains complicated and hidden risk exposures. Third, knowledge-driven AI supplements data-driven AI such as deep learning, incorporating prior knowledge into modeling to improve investment decisions, in particular for quantitative value investing. Putting these together, we discuss how to build a system that practices the quant 4.0 concept, and we discuss the application of large language models in quantitative finance. Finally, we propose 10 challenging research problems for quant technology and discuss potential solutions, research directions, and future trends.
Keywords: artificial general intelligence; artificial intelligence; automated machine learning; causality engineering; deep learning; feature engineering; investment engineering; knowledge graph; knowledge reasoning; knowledge representation; model compression; neural architecture search; quant 4.0; quantitative investment; risk graph; explainable artificial intelligence
15. Performance of evolutionary optimized machine learning for modeling total organic carbon in core samples of shale gas fields
Authors: Leonardo Goliatt, C.M. Saporetti, L.C. Oliveira, E. Pereira. Petroleum (EI, CSCD), 2024(1): 150-164, 15 pages.
The TOC content of rock samples is the best indicator of the organic matter in source rocks. It is traditionally calculated manually by specialists analyzing original rock samples, which requires time and resources because it relies on samples from many well intervals in the source rocks. Research has therefore been done to aid this effort: machine learning algorithms can estimate total organic carbon in place of well logs and stratigraphic studies. In light of these efforts, the current work presents a study on automating total organic carbon estimation using machine learning approaches improved by an evolutionary methodology to give the model flexibility and precision. Genetic algorithms, differential evolution, particle swarm optimization, grey wolf optimization, artificial bee colony, and evolution strategies were used to improve machine learning models for predicting TOC. The six metaheuristics were integrated into four machine learning methods: extreme learning machine, elastic net linear model, linear support vector regression, and multivariate adaptive regression splines. Core samples from the YuDong-Nan shale gas field, located in the Sichuan basin, were used to evaluate the hybrid strategy. The findings show that combining machine learning models with evolutionary algorithms in a hybrid fashion produces flexible models that accurately predict TOC. Independent of the metaheuristic used to guide model selection, optimized extreme learning machines attained the best performance scores according to six metrics. Such hybrid models can be used in exploratory geological research, particularly for unconventional oil and gas resources.
Keywords: total organic carbon; hybrid models; automated machine learning; evolutionary algorithms; geology
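As a loose illustration of coupling an evolutionary optimizer with one of the listed regressors, the sketch below uses SciPy's differential evolution to tune a linear support vector regressor's hyperparameters by cross-validated R^2 on synthetic data; it is not the paper's hybrid model, dataset, or metric suite.

```python
# Sketch: differential evolution tuning SVR hyperparameters on synthetic data.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=8, noise=5.0, random_state=0)

def neg_cv_score(params):
    log_c, log_eps = params
    model = SVR(kernel="linear", C=10 ** log_c, epsilon=10 ** log_eps)
    # Negative mean R^2 so that lower is better for the minimizer.
    return -cross_val_score(model, X, y, cv=3, scoring="r2").mean()

bounds = [(-2, 3), (-3, 0)]  # search ranges for log10(C) and log10(epsilon)
result = differential_evolution(neg_cv_score, bounds, maxiter=10, seed=0)
print("best log10(C), log10(epsilon):", result.x, "CV R^2:", -result.fun)
```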
16. Measurement and Compensation of Deviations of Real Tooth Surface of Spiral Bevel Gear [Cited: 8]
Authors: 王军, 王小椿, 姜虹, 冯文军. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2003(3): 182-186, 5 pages.
This paper presents a method for measuring the deviation of the real gear tooth surface from the theoretical one with a coordinate measurement machine, and for compensating its repeatable parts. By investigating the characteristics of the distortion of the gear tooth surface along the circle direction, the deviation is derived from the distortion, and a definition of deviation with geometrical invariability is proposed. The approach for determining the location and orientation of the gear with respect to the coordinate measurement machine, together with the measurement procedure, is then developed. The deviation is represented as a difference surface, and an algorithm for deriving the parameters of global form deviations from the discrete points is provided. Finally, the compensation approach is discussed.
Keywords: machine manufacture and automation; compensation; spiral bevel gear; 3-D measurement
17. Graph Neural Architecture Search: A Survey [Cited: 2]
Authors: Babatounde Moctard Oloulade, Jianliang Gao, Jiamin Chen, Tengfei Lyu, Raeed Al-Sabri. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2022(4): 692-708, 17 pages.
In academia and industry, graph neural networks (GNNs) have emerged as a powerful approach to graph data processing, ranging from node classification and link prediction to graph clustering tasks. GNN models are usually handcrafted. However, building handcrafted GNN models is difficult and requires expert experience, because GNN model components are complex and sensitive to variations. The complexity of GNN model components has brought significant challenges to the existing efficiencies of GNNs. Hence, many studies have focused on building automated machine learning frameworks to search for the best GNN models for targeted tasks. In this work, we provide a comprehensive review of automatic GNN model building frameworks to summarize the status of the field and facilitate future progress. We categorize the components of automatic GNN model building frameworks into three dimensions according to the challenges of building them. After reviewing the representative works for each dimension, we discuss promising future research directions in this rapidly growing field.
Keywords: graph neural network; neural architecture search; automated machine learning; geometric deep learning
18. Effective Model Compression via Stage-wise Pruning
Authors: Ming-Yang Zhang, Xin-Yi Yu, Lin-Lin Ou. Machine Intelligence Research (EI, CSCD), 2023(6): 937-951, 15 pages.
Automated machine learning (AutoML) pruning methods aim to search for a pruning strategy automatically so as to reduce the computational complexity of deep convolutional neural networks (deep CNNs). However, previous work found that the results of many AutoML pruning methods cannot even surpass those of uniform pruning. In this paper, the ineffectiveness of AutoML pruning is shown to be caused by unfull and unfair training of the supernet. A deep supernet suffers from unfull training because it contains too many candidates. To overcome this, a stage-wise pruning (SWP) method is proposed, which splits a deep supernet into several stage-wise supernets to reduce the number of candidates and utilizes in-place distillation to supervise the stage training. In addition, a wide supernet suffers from unfair training because the sampling probability of each channel is unequal. Therefore, the fullnet and the tinynet are sampled in each training iteration to ensure that every channel is sufficiently trained. Remarkably, the proxy performance of subnets trained with SWP is closer to the actual performance than in most previous AutoML pruning work. Furthermore, experiments show that SWP achieves state-of-the-art results on both CIFAR-10 and ImageNet under the mobile setting.
Keywords: automated machine learning (AutoML); channel pruning; model compression; distillation; convolutional neural networks (CNN)