Journal Articles
32,698 articles found
1. SEFormer: A Lightweight CNN-Transformer Based on Separable Multiscale Depthwise Convolution and Efficient Self-Attention for Rotating Machinery Fault Diagnosis (Cited by: 1)
Authors: Hongxing Wang, Xilai Ju, Hua Zhu, Huafeng Li. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 1417-1437, 21 pages
Traditional data-driven fault diagnosis methods depend on expert experience to manually extract effective fault features from signals, which has certain limitations. Conversely, deep learning techniques have gained prominence as a central focus of fault diagnosis research owing to their strong feature extraction ability and end-to-end diagnosis efficiency. Recently, by exploiting the respective advantages of convolutional neural networks (CNN) and Transformers in local and global feature extraction, research on combining the two has shown promise in the field of fault diagnosis. However, the cross-channel convolution mechanism in CNN and the self-attention calculations in Transformer contribute to excessive complexity in the cooperative model. This complexity results in high computational costs and limited industrial applicability. To tackle these challenges, this paper proposes a lightweight CNN-Transformer named SEFormer for rotating machinery fault diagnosis. First, a separable multiscale depthwise convolution block is designed to extract and integrate multiscale feature information from different channel dimensions of vibration signals. Then, an efficient self-attention block is developed to capture critical fine-grained features of the signal from a global perspective. Finally, experimental results on a planetary gearbox dataset and a motor roller bearing dataset show that the proposed framework balances robustness, generalization, and lightweight design better than recent state-of-the-art fault diagnosis models based on CNN and Transformer. This study presents a feasible strategy for developing a lightweight rotating machinery fault diagnosis framework aimed at economical deployment.
Keywords: CNN-Transformer; separable multiscale depthwise convolution; efficient self-attention; fault diagnosis
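The abstract above does not give the internal layout of the separable multiscale depthwise convolution block, so the following minimal PyTorch sketch is only one plausible form: depthwise 1-D convolutions at several assumed kernel sizes (3, 7, 15) whose outputs are fused by a pointwise convolution. Channel counts and input length are illustrative.

    import torch
    import torch.nn as nn

    class SeparableMultiscaleDWConv(nn.Module):
        """Sketch: depthwise convolutions at several assumed kernel sizes,
        concatenated and fused by a pointwise (1x1) convolution."""
        def __init__(self, channels, kernel_sizes=(3, 7, 15)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Conv1d(channels, channels, k, padding=k // 2, groups=channels)
                for k in kernel_sizes
            ])
            self.pointwise = nn.Conv1d(channels * len(kernel_sizes), channels, 1)

        def forward(self, x):                       # x: (batch, channels, length)
            multi = torch.cat([b(x) for b in self.branches], dim=1)
            return self.pointwise(multi)

    x = torch.randn(8, 16, 1024)                    # e.g., 16-channel vibration segments
    print(SeparableMultiscaleDWConv(16)(x).shape)   # torch.Size([8, 16, 1024])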
2. Occluded Gait Emotion Recognition Based on Multi-Scale Suppression Graph Convolutional Network
Authors: Yuxiang Zou, Ning He, Jiwu Sun, Xunrui Huang, Wenhua Wang. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 1255-1276, 22 pages
In recent years, gait-based emotion recognition has been widely applied in the field of computer vision. However, existing gait emotion recognition methods typically rely on complete human skeleton data, and their accuracy significantly declines when the data are occluded. To enhance the accuracy of gait emotion recognition under occlusion, this paper proposes a Multi-scale Suppression Graph Convolutional Network (MS-GCN). The MS-GCN consists of three main components: a Joint Interpolation Module (JI Module), a Multi-scale Temporal Convolution Network (MS-TCN), and a Suppression Graph Convolutional Network (SGCN). The JI Module completes the spatially occluded skeletal joints using the K-Nearest Neighbors (KNN) interpolation method. The MS-TCN employs convolutional kernels of various sizes to comprehensively capture the emotional information embedded in the gait, compensating for the temporal occlusion of gait information. The SGCN extracts more non-prominent human gait features by suppressing the extraction of key body part features, thereby reducing the negative impact of occlusion on emotion recognition results. The proposed method is evaluated on two comprehensive datasets: Emotion-Gait, containing 4227 real gaits from sources such as BML, ICT-Pollick, and ELMD plus 1000 synthetic gaits generated using STEP-Gen technology, and ELMB, consisting of 3924 gaits, 1835 of which are labeled with emotions such as "Happy," "Sad," "Angry," and "Neutral." On the standard Emotion-Gait and ELMB datasets, the proposed method achieved accuracies of 0.900 and 0.896, respectively, comparable to other state-of-the-art methods. Furthermore, on occluded datasets, the proposed method significantly mitigates the performance degradation caused by occlusion, and its accuracy remains significantly higher than that of other methods.
Keywords: KNN interpolation; multi-scale temporal convolution; suppression graph convolutional network; gait emotion recognition; human skeleton
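The paper's exact KNN interpolation scheme is not detailed in the abstract; the sketch below is one hedged reading in which each occluded joint is filled with the average of its coordinates in the k temporally nearest frames where that joint is visible. The array shapes and the choice of k are assumptions.

    import numpy as np

    def knn_interpolate(seq, visible, k=3):
        """Sketch: fill each occluded joint with the mean of its coordinates in the
        k temporally nearest visible frames. seq: (T, J, C), visible: (T, J) bool."""
        out = seq.copy()
        T, J, _ = seq.shape
        for j in range(J):
            vis_t = np.where(visible[:, j])[0]
            if vis_t.size == 0:
                continue                               # joint never observed; leave as-is
            for t in np.where(~visible[:, j])[0]:
                nearest = vis_t[np.argsort(np.abs(vis_t - t))[:k]]
                out[t, j] = seq[nearest, j].mean(axis=0)
        return out

    seq = np.random.rand(60, 16, 3)                    # 60 frames, 16 joints, 3-D coordinates
    mask = np.random.rand(60, 16) > 0.2                # ~20% of joints marked occluded
    print(knn_interpolate(seq, mask).shape)            # (60, 16, 3)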
3. Dynamic Multi-Graph Spatio-Temporal Graph Traffic Flow Prediction in Bangkok: An Application of a Continuous Convolutional Neural Network
Authors: Pongsakon Promsawat, Weerapan Sae-dan, Marisa Kaewsuwan, Weerawat Sudsutad, Aphirak Aphithana. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, Issue 1, pp. 579-607, 29 pages
The ability to accurately predict urban traffic flows is crucial for optimising city operations. Consequently, various methods for forecasting urban traffic have been developed, focusing on analysing historical data to understand complex mobility patterns. Deep learning techniques, such as graph neural networks (GNNs), are popular for their ability to capture spatio-temporal dependencies. However, these models often become overly complex due to the large number of hyper-parameters involved. In this study, we introduce Dynamic Multi-Graph Spatial-Temporal Graph Neural Ordinary Differential Equation Networks (DMST-GNODE), a framework based on ordinary differential equations (ODEs) that autonomously discovers effective spatial-temporal graph neural network (STGNN) architectures for traffic prediction tasks. The comparative analysis of DMST-GNODE and baseline models indicates that the DMST-GNODE model demonstrates superior performance across multiple datasets, consistently achieving the lowest Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) values alongside the highest accuracy. On the BKK (Bangkok) dataset, it outperformed other models with an RMSE of 3.3165 and an accuracy of 0.9367 for a 20-min interval, maintaining this trend across 40 and 60 min. Similarly, on the PeMS08 dataset, DMST-GNODE achieved the best performance with an RMSE of 19.4863 and an accuracy of 0.9377 at 20 min, demonstrating its effectiveness over longer periods. The Los_Loop dataset results further emphasise this model's advantage, with an RMSE of 3.3422 and an accuracy of 0.7643 at 20 min, consistently maintaining superiority across all time intervals. These numerical highlights indicate that DMST-GNODE not only outperforms baseline models but also achieves higher accuracy and lower errors across different time intervals and datasets.
Keywords: graph neural networks; convolutional neural network; deep learning; dynamic multi-graph; spatio-temporal
4. IDSSCNN-XgBoost: Improved Dual-Stream Shallow Convolutional Neural Network Based on Extreme Gradient Boosting Algorithm for Micro Expression Recognition
Authors: Adnan Ahmad, Zhao Li, Irfan Tariq, Zhengran He. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 729-749, 21 pages
Micro-expression (ME) recognition is a complex task that requires advanced techniques to extract informative features from facial expressions. Numerous deep neural networks (DNNs) with convolutional structures have been proposed; however, shallow convolutional neural networks often outperform deeper models in mitigating overfitting, particularly with small datasets. Still, many of these methods rely on a single feature for recognition, resulting in an insufficient ability to extract highly effective features. To address this limitation, this paper introduces an Improved Dual-stream Shallow Convolutional Neural Network based on an Extreme Gradient Boosting Algorithm (IDSSCNN-XgBoost) for ME recognition. The proposed method utilizes a dual-stream architecture in which motion vectors (temporal features) are extracted using TV-L1 optical flow and subtle changes (spatial features) are amplified via Eulerian Video Magnification (EVM). These features are processed by the IDSSCNN, with an attention mechanism applied to refine the extracted features. The outputs are then fused, concatenated, and classified using the XgBoost algorithm. This approach significantly improves recognition accuracy by leveraging the strengths of both temporal and spatial information, supported by the robust classification power of XgBoost. The proposed method is evaluated on three publicly available ME databases: the Chinese Academy of Sciences Micro-expression Database (CASME II), the Spontaneous Micro-Expression Database (SMIC-HS), and the Spontaneous Actions and Micro-Movements (SAMM) database. Experimental results indicate that the proposed model achieves outstanding results compared to recent models. The accuracies are 79.01%, 69.22%, and 68.99% on CASME II, SMIC-HS, and SAMM, and the F1-scores are 75.47%, 68.91%, and 63.84%, respectively. The proposed method also offers operational efficiency and low computational time.
Keywords: ME recognition; dual-stream shallow convolutional neural network; Eulerian video magnification; TV-L1; XgBoost
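The fusion-and-classification step stated in the abstract (features from the two streams fused, concatenated, and passed to XgBoost) can be sketched as follows; the random feature matrices merely stand in for the CNN stream outputs, and the XGBClassifier hyperparameters are assumptions.

    import numpy as np
    from xgboost import XGBClassifier

    # Stand-ins for features produced by the two streams
    # (TV-L1 optical-flow stream and EVM-magnified spatial stream).
    n_samples, dim = 200, 128
    temporal_feat = np.random.rand(n_samples, dim)
    spatial_feat = np.random.rand(n_samples, dim)
    labels = np.random.randint(0, 3, n_samples)        # e.g., 3 micro-expression classes

    fused = np.concatenate([temporal_feat, spatial_feat], axis=1)   # feature-level fusion
    clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
    clf.fit(fused, labels)
    print(clf.predict(fused[:5]))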
5. Calculation of Compound Hazard Probability Using Convolution of Distributions
Authors: Luis Augusto Sanabria. Atmospheric and Climate Sciences, 2025, Issue 1, pp. 1-19, 19 pages
The hazard produced by natural phenomena on infrastructure and urban populations has been widely studied in the last 50 years. Researchers have recognised that the real danger posed by these phenomena depends on their extreme values. Most researchers focus on the extremes of natural phenomena considered in isolation, one variable at a time. However, what is relevant in hazard studies is coincident extremes of several climatic variables, i.e., the presence of compound extremes. The peak values of these extremes seldom coincide, but off-peak values located in the tails of the distributions are often concurrent and can lead to catastrophic events. What is essential in hazard studies is to calculate the probabilistic distribution of the extremes of coincident climatic variables. The presence of correlations between these variables complicates the problem. This paper presents a computationally efficient and robust mathematical methodology to solve the problem. The procedure is based on the convolution of the distributions of the climatic variables. Once the probabilistic distribution of the compound variables is found, it is possible to calculate the return period curves, which are the indicator of importance in hazard and risk studies. This compound return period is computed using the statistics of extreme values. To illustrate the problem, the case of a cyclone landing close to a low-gradient coastal city is discussed, and its probability of flooding and recurrence period is calculated. We show that failure to correctly model the correlation between variables can result in overestimating the return period curve, consequently increasing mitigation costs.
Keywords: natural hazards; compound extremes; mathematical convolution
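A minimal numerical illustration of the convolution idea, assuming two independent hazard variables with made-up marginal densities (the paper additionally handles correlation between the variables): the density of their sum is obtained by discrete convolution, from which an exceedance probability and a return-period curve follow.

    import numpy as np

    dx = 0.01
    x = np.arange(0, 10, dx)
    pdf_x = np.exp(-x)                            # assumed exponential-like marginal
    pdf_y = x * np.exp(-x)                        # assumed gamma(2)-like marginal
    pdf_x /= pdf_x.sum() * dx                     # normalise on the grid
    pdf_y /= pdf_y.sum() * dx

    pdf_z = np.convolve(pdf_x, pdf_y) * dx        # density of the compound variable Z = X + Y
    z = np.arange(len(pdf_z)) * dx
    exceed = 1.0 - np.cumsum(pdf_z) * dx          # exceedance probability P(Z > z)
    return_period = 1.0 / np.maximum(exceed, 1e-12)          # return period per event
    print(z[np.searchsorted(return_period, 100.0)])          # level with ~100-event return period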
6. Aspect-Level Sentiment Analysis of Bi-Graph Convolutional Networks Based on Enhanced Syntactic Structural Information
Authors: Junpeng Hu, Yegang Li. Journal of Computer and Communications, 2025, Issue 1, pp. 72-89, 18 pages
Aspect-oriented sentiment analysis is a fine-grained sentiment analysis task that aims to analyse the sentiment polarity of specific aspects. Most current research builds graph convolutional networks based on dependency syntax trees, which improves the classification performance of the models to some extent. However, the technical limitations of dependency syntax trees can introduce considerable noise into the model. Meanwhile, it is difficult for a single graph convolutional network to aggregate both the semantic and the syntactic structural information of nodes, which affects the final sentence classification. To cope with these problems, this paper proposes a bi-channel graph convolutional network model. The model introduces a phrase structure tree and transforms it into a hierarchical phrase matrix. The adjacency matrix of the dependency syntax tree and the hierarchical phrase matrix are combined as the initial matrix of the graph convolutional network to enhance the syntactic information. The semantic feature representations of the sentences are obtained by a graph convolutional network with a multi-head attention mechanism and fused to achieve complementary learning of dual-channel features. Experimental results show that the model performs well and improves the accuracy of sentiment classification on three public benchmark datasets, namely Rest14, Lap14 and Twitter.
Keywords: aspect-level sentiment analysis; sentiment knowledge; multi-head attention mechanism; graph convolutional networks
7. Multi-Head Attention Enhanced Parallel Dilated Convolution and Residual Learning for Network Traffic Anomaly Detection
Authors: Guorong Qi, Jian Mao, Kai Huang, Zhengxian You, Jinliang Lin. Computers, Materials & Continua, 2025, Issue 2, pp. 2159-2176, 18 pages
Abnormal network traffic, as a frequent security risk, requires a range of techniques for categorization and detection. Existing network traffic anomaly detection still faces challenges: local and global features cannot be fully extracted, and effective mechanisms to capture complex interactions between features are lacking. Additionally, when the receptive field is enlarged to obtain deeper feature representations, the reliance on increasing network depth leads to a significant increase in computational resource consumption, affecting detection efficiency and performance. To address these issues, this paper first proposes a network traffic anomaly detection model based on parallel dilated convolution and residual learning (Res-PDC). To better explore the interactive relationships between features, the traffic samples are converted into two-dimensional matrices. A module combining parallel dilated convolutions and residual learning (res-pdc) is designed to extract local and global features of traffic at different scales. By utilizing res-pdc modules with different dilation rates, spatial features at different scales can be captured effectively and feature dependencies spanning wider regions can be explored without increasing computational resources. Second, to focus on and integrate the information in different feature subspaces and further enhance the interactions among features, multi-head attention is added to Res-PDC, resulting in the final model: multi-head attention enhanced parallel dilated convolution and residual learning (MHA-Res-PDC) for network traffic anomaly detection. Finally, comparisons with other machine learning and deep learning algorithms are conducted on the NSL-KDD and CIC-IDS-2018 datasets. The experimental results demonstrate that the proposed method can effectively improve detection performance.
Keywords: network traffic anomaly detection; multi-head attention; parallel dilated convolution; residual learning
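One plausible PyTorch form of the res-pdc module described above: parallel 3x3 convolutions with different dilation rates applied to the two-dimensional traffic matrix, fused by a 1x1 convolution and added back through a residual connection. The dilation rates and channel sizes are assumptions.

    import torch
    import torch.nn as nn

    class ResPDC(nn.Module):
        """Sketch: parallel dilated 2-D convolutions plus a residual connection."""
        def __init__(self, channels, dilations=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
                for d in dilations
            ])
            self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)
            self.act = nn.ReLU()

        def forward(self, x):
            y = torch.cat([b(x) for b in self.branches], dim=1)
            return self.act(self.fuse(y) + x)          # residual learning

    x = torch.randn(4, 8, 16, 16)      # e.g., traffic records reshaped into 16x16 matrices
    print(ResPDC(8)(x).shape)          # torch.Size([4, 8, 16, 16])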
8. A Modified Deep Residual-Convolutional Neural Network for Accurate Imputation of Missing Data
Authors: Firdaus Firdaus, Siti Nurmaini, Anggun Islami, Annisa Darmawahyuni, Ade Iriani Sapitri, Muhammad Naufal Rachmatullah, Bambang Tutuko, Akhiar Wista Arum, Muhammad Irfan Karim, Yultrien Yultrien, Ramadhana Noor Salassa Wandya. Computers, Materials & Continua, 2025, Issue 2, pp. 3419-3441, 23 pages
Handling missing data accurately is critical in clinical research, where data quality directly impacts decision-making and patient outcomes. While deep learning (DL) techniques for data imputation have gained attention, challenges remain, especially when dealing with diverse data types. In this study, we introduce a novel data imputation method based on a modified convolutional neural network, specifically a Deep Residual-Convolutional Neural Network (DRes-CNN) architecture designed to handle missing values across various datasets. Our approach demonstrates substantial improvements over existing imputation techniques by leveraging residual connections and optimized convolutional layers to capture complex data patterns. We evaluated the model on publicly available datasets, including the Medical Information Mart for Intensive Care (MIMIC-III and MIMIC-IV) databases, which contain critical care patient data, and the Beijing Multi-Site Air Quality dataset, which measures environmental air quality. The proposed DRes-CNN method achieved a root mean square error (RMSE) of 0.00006, highlighting its high accuracy and robustness. We also compared it with the Low Light-Convolutional Neural Network (LL-CNN) and U-Net methods, which had RMSE values of 0.00075 and 0.00073, respectively. This represented an improvement of approximately 92% over LL-CNN and 91% over U-Net. The results show that this DRes-CNN-based imputation method outperforms current state-of-the-art models and establish DRes-CNN as a reliable solution for addressing missing data.
Keywords: data imputation; missing data; deep learning; deep residual convolutional neural network
9. MSSTGCN: Multi-Head Self-Attention and Spatial-Temporal Graph Convolutional Network for Multi-Scale Traffic Flow Prediction
Authors: Xinlu Zong, Fan Yu, Zhen Chen, Xue Xia. Computers, Materials & Continua, 2025, Issue 2, pp. 3517-3537, 21 pages
Accurate traffic flow prediction has a profound impact on modern traffic management. Traffic flow has complex spatial-temporal correlations and periodicity, which poses difficulties for precise prediction. To address this problem, a Multi-head Self-attention and Spatial-Temporal Graph Convolutional Network (MSSTGCN) for multiscale traffic flow prediction is proposed. First, to capture the hidden periodicity of traffic flow, the data are divided into three kinds of periods: hourly, daily, and weekly. Second, a graph attention residual layer is constructed to learn the global spatial features across regions, while local spatial-temporal dependence is captured using a T-GCN module. Third, a transformer layer is introduced to learn the long-term dependence in time, and a position embedding mechanism is introduced to label position information for all traffic sequences. Thus, the multi-head self-attention mechanism can recognize the sequence order and allocate weights to different time nodes. Experimental results on four real-world datasets show that MSSTGCN performs better than the baseline methods and can be successfully adapted to traffic prediction tasks.
Keywords: graph convolutional network; traffic flow prediction; multi-scale traffic flow; spatial-temporal model
10. Atmospheric neutron single event effects for multiple convolutional neural networks based on 28-nm and 16-nm SoC
Authors: Xu Zhao, Xuecheng Du, Chao Ma, Zhiliang Hu, Weitao Yang, Bo Zheng. Chinese Physics B, 2025, Issue 1, pp. 477-484, 8 pages
Single event effect (SEE) evaluations under atmospheric neutrons were conducted on three different convolutional neural network (CNN) models (Yolov3, MNIST, and ResNet50) at the atmospheric neutron irradiation spectrometer (ANIS) of the China Spallation Neutron Source (CSNS). The Yolov3 and MNIST models were implemented on a XILINX 28-nm system-on-chip (SoC), while the Yolov3 and ResNet50 models were deployed on a XILINX 16-nm FinFET UltraScale+ MPSoC. The atmospheric neutron SEEs on the tested CNN systems were comprehensively evaluated from six aspects: chip type, network architecture, deployment method, inference time, dataset, and the position of the anchor boxes. The various types of SEE soft errors, SEE cross-sections, and their distributions were analyzed to explore the radiation sensitivities and patterns of the 28-nm and 16-nm SoCs. This research can provide technical support for the radiation-tolerant design of CNN systems, aiding the development and application of high-reliability, long-lifespan domestic artificial intelligence chips.
Keywords: single event effects; atmospheric neutron; system on chip; convolutional neural network
11. Convolutional Graph Neural Network with Novel Loss Strategies for Daily Temperature and Precipitation Statistical Downscaling over South China
Authors: Wenjie YAN, Shengjun LIU, Yulin ZOU, Xinru LIU, Diyao WEN, Yamin HU, Dangfu YANG, Jiehong XIE, Liang ZHAO. Advances in Atmospheric Sciences, 2025, Issue 1, pp. 232-247, 16 pages
Traditional meteorological downscaling methods face limitations due to the complex distribution of meteorological variables, which can lead to unstable forecasting results, especially in extreme scenarios. To overcome this issue, we propose a convolutional graph neural network (CGNN) model, which we enhance with multilayer feature fusion and a squeeze-and-excitation block. Additionally, we introduce a spatially balanced mean squared error (SBMSE) loss function to address the imbalanced distribution and spatial variability of meteorological variables. The CGNN is capable of extracting essential spatial features and aggregating them from a global perspective, thereby improving the accuracy of prediction and enhancing the model's generalization ability. Based on the experimental results, CGNN has certain advantages in terms of bias distribution, exhibiting a smaller variance. When it comes to precipitation, both UNet and AE also demonstrate relatively small biases. As for temperature, AE and CNNdense perform outstandingly during the winter. The time correlation coefficients show an improvement of at least 10% at daily and monthly scales for both temperature and precipitation. Furthermore, the SBMSE loss function displays an advantage over existing loss functions in predicting the 98th percentile and identifying areas where extreme events occur. However, the SBMSE tends to overestimate the distribution of extreme precipitation, which may be due to the theoretical assumptions about the posterior distribution of the data that partially limit the effectiveness of the loss function. In future work, we will further optimize the SBMSE to improve prediction accuracy.
Keywords: statistical downscaling; convolutional graph neural network; feature processing; SBMSE loss function
12. Container cluster placement in edge computing based on reinforcement learning incorporating graph convolutional networks scheme
Authors: Zhuo Chen, Bowen Zhu, Chuan Zhou. Digital Communications and Networks, 2025, Issue 1, pp. 60-70, 11 pages
Container-based virtualization technology has recently been used more widely in edge computing environments due to its advantages of lighter resource occupation, faster startup, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions as containers and interconnect the generated containers to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for the resources occupied in providing services as revenue and the service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, through the introduction of GCN, the features of the association relationships between the containers in a CC can be effectively extracted to improve the quality of placement. The experimental results show that, under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
Keywords: edge computing; network virtualization; container cluster; deep reinforcement learning; graph convolutional network
13. Study on Sediment Removal Method of Reservoir Based on Double Branch Convolution
Authors: Hailong Wang, Junchao Shi, Xinjie Li. Computers, Materials & Continua, 2025, Issue 2, pp. 2951-2967, 17 pages
In response to the limitations and low computational efficiency of one-dimensional water and sediment models in representing complex hydrological conditions, this paper proposes a dual-branch convolution method based on deep learning. The method utilizes the ability of deep learning to extract data features and introduces a dual-branch convolutional network to handle the non-stationary and nonlinear characteristics of noise and reservoir sediment transport data. It combines a permutation-variant structure to preserve the original time-series information, constructs a corresponding time-series model, models and analyzes the changes in the outbound water and sediment sequences, and can more accurately predict the future trend of outbound sediment changes based on the current sequence. The experimental results show that the DCON model established in this paper has good predictive performance for monthly, bimonthly, seasonal, and semi-annual predictions, with determination coefficients of 0.891, 0.898, 0.921, and 0.931, respectively. The results can provide additional reference schemes for personnel formulating reservoir scheduling plans. Although this study shows good applicability in predicting sediment discharge, it has not been able to make timely predictions for some non-periodic events in reservoirs. Therefore, future research will gradually incorporate monitoring devices to obtain more comprehensive data, in order to further validate and extend the conclusions of this study.
Keywords: prediction of reservoir sediment discharge; double-branch convolution; double prediction head; deep learning
14. TMC-GCN: Encrypted Traffic Mapping Classification Method Based on Graph Convolutional Networks
Authors: Baoquan Liu, Xi Chen, Qingjun Yuan, Degang Li, Chunxiang Gu. Computers, Materials & Continua, 2025, Issue 2, pp. 3179-3201, 23 pages
With the emphasis on user privacy and communication security, encrypted traffic has increased dramatically, which brings great challenges to traffic classification. GNN-based classification methods can deal with encrypted traffic well; however, existing GNN-based approaches ignore the relationships between client or server packets. In this paper, we design a network traffic topology based on GCN, called the Flow Mapping Graph (FMG). The FMG establishes sequential edges between vertices according to the arrival order of packets, and establishes jump-order edges between vertices by connecting packets in different bursts with the same direction. It not only reflects the temporal characteristics of packets but also strengthens the relationships between client or server packets. Based on the FMG, a Traffic Mapping Classification model (TMC-GCN) is designed, which can automatically capture and learn the characteristics and structure information of the top vertex in the FMG. The TMC-GCN model is used to classify the encrypted traffic, transforming the encrypted stream classification problem into a graph classification problem that can effectively deal with data from different sources and application scenarios. The effectiveness of the FMG algorithm is verified by comparing the performance of TMC-GCN with other classical models on four public datasets, including CICIOT2023, ISCXVPN2016, CICAAGM2017, and GraphDapp. The experimental results show that the accuracy of the TMC-GCN model is 96.13%, the recall is 95.04%, and the F1-score is 94.54%.
Keywords: encrypted traffic classification; deep learning; graph neural networks; multi-layer perceptron; graph convolutional networks
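A hedged sketch of the Flow Mapping Graph edge construction described above: sequential edges follow packet arrival order, and jump-order edges connect a packet to a later packet with the same direction in a different burst (restricted here to the nearest such packet, which is an assumption; the paper's exact rule may differ).

    # 'directions' holds +1/-1 per packet (client/server); bursts are taken to be
    # maximal runs of packets with the same direction.
    def build_fmg_edges(directions):
        bursts, burst_id = [], 0
        for i, d in enumerate(directions):
            if i > 0 and d != directions[i - 1]:
                burst_id += 1
            bursts.append(burst_id)

        seq_edges = [(i, i + 1) for i in range(len(directions) - 1)]   # arrival order
        jump_edges = []
        for i, d in enumerate(directions):
            for j in range(i + 1, len(directions)):
                if bursts[j] != bursts[i] and directions[j] == d:      # same direction,
                    jump_edges.append((i, j))                          # different burst
                    break                                              # nearest only (assumed)
        return seq_edges, jump_edges

    seq_e, jump_e = build_fmg_edges([+1, +1, -1, -1, -1, +1, -1])
    print(seq_e)
    print(jump_e)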
15. Handling class imbalance of radio frequency interference in deep learning-based fast radio burst search pipelines using a deep convolutional generative adversarial network
Authors: Wenlong Du, Yanling Liu, Maozheng Chen. Astronomical Techniques and Instruments, 2025, Issue 1, pp. 10-15, 6 pages
This paper addresses the performance degradation issue in a deep learning-based fast radio burst search pipeline. This issue is caused by the class imbalance of the radio frequency interference samples in the training dataset, and one solution is applied to improve the distribution of the training data by augmenting minority-class samples using a deep convolutional generative adversarial network. Experimental results demonstrate that retraining the deep learning model with the newly generated dataset leads to a new fast radio burst classifier, which effectively reduces false positives caused by periodic wide-band impulsive radio frequency interference, thereby enhancing the performance of the search pipeline.
Keywords: fast radio burst; deep convolutional generative adversarial network; class imbalance; radio frequency interference; deep learning
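The rebalancing step described above can be sketched as follows, assuming a DCGAN generator has already been trained on the minority radio-frequency-interference class; the tiny stand-in generator, tensor sizes, and class counts below are placeholders, not the pipeline's actual configuration.

    import torch
    import torch.nn as nn

    latent_dim, img_size = 64, 32
    # Stand-in for a trained DCGAN generator (the real one is a deep convolutional net).
    generator = nn.Sequential(nn.Linear(latent_dim, img_size * img_size), nn.Tanh())

    majority = torch.rand(1000, 1, img_size, img_size)     # well-represented class
    minority = torch.rand(150, 1, img_size, img_size)      # under-represented RFI class

    n_extra = len(majority) - len(minority)
    with torch.no_grad():
        z = torch.randn(n_extra, latent_dim)
        synthetic = generator(z).view(n_extra, 1, img_size, img_size)

    balanced_minority = torch.cat([minority, synthetic], dim=0)
    print(len(majority), len(balanced_minority))            # classes now the same size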
16. SGP-GCN: A Spatial-Geological Perception Graph Convolutional Neural Network for Long-Term Petroleum Production Forecasting
Authors: Xin Liu, Meng Sun, Bo Lin, Shibo Gu. Energy Engineering, 2025, Issue 3, pp. 1053-1072, 20 pages
Long-term petroleum production forecasting is essential for the effective development and management of oilfields. Due to its ability to extract complex patterns, deep learning has gained popularity for production forecasting. However, existing deep learning models frequently overlook the selective utilization of information from other production wells, resulting in suboptimal performance in long-term production forecasting across multiple wells. To achieve accurate long-term petroleum production forecasts, we propose a spatial-geological perception graph convolutional neural network (SGP-GCN) that accounts for the temporal, spatial, and geological dependencies inherent in petroleum production. Utilizing the attention mechanism, the SGP-GCN effectively captures intricate correlations within production and geological data, forming the representations of each production well. Based on the spatial distances and geological feature correlations, we construct a spatial-geological matrix as the weight matrix to enable differential utilization of information from other wells. Additionally, a matrix sparsification algorithm based on production clustering (SPC) is proposed to optimize the weight distribution within the spatial-geological matrix, thereby enhancing long-term forecasting performance. Empirical evaluations have shown that the SGP-GCN outperforms existing deep learning models, such as CNN-LSTM-SA, in long-term petroleum production forecasting. This demonstrates the potential of the SGP-GCN as a valuable tool for long-term petroleum production forecasting across multiple wells.
Keywords: petroleum production forecast; graph convolutional neural networks (GCNs); spatial-geological relationships; production clustering; attention mechanism
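The abstract states that the spatial-geological weight matrix combines spatial distances with geological feature correlations but gives no formula; the sketch below is one speculative construction (a Gaussian distance kernel blended with absolute feature correlation, then thresholded in place of the SPC sparsification), with alpha, sigma, and the threshold all assumed.

    import numpy as np

    def spatial_geological_matrix(coords, geo_features, alpha=0.5, sigma=1.0):
        """Speculative sketch: blend a distance kernel with feature correlations."""
        diff = coords[:, None, :] - coords[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        spatial = np.exp(-(dist ** 2) / (2 * sigma ** 2))     # distance-based weights
        geological = np.abs(np.corrcoef(geo_features))        # correlation between wells
        return alpha * spatial + (1 - alpha) * geological

    coords = np.random.rand(10, 2) * 5.0      # 10 wells, x/y locations (km, made up)
    geo = np.random.rand(10, 6)               # 6 geological attributes per well
    W = spatial_geological_matrix(coords, geo)
    W[W < 0.3] = 0.0                          # crude sparsification (stand-in for SPC)
    print(W.shape)                            # (10, 10)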
17. Anatomic Boundary-Aware Explanation for Convolutional Neural Networks in Diagnostic Radiology
Authors: Han Yuan. iRADIOLOGY, 2025, Issue 1, pp. 47-60, 14 pages
Background: Convolutional neural networks (CNN) have achieved remarkable success in medical image analysis. However, unlike some general-domain tasks where model accuracy is paramount, medical applications demand both accuracy and explainability due to the high stakes affecting patients' lives. Based on model explanations, clinicians can evaluate the diagnostic decisions suggested by a CNN. Nevertheless, prior explainable artificial intelligence methods treat medical image tasks akin to general vision tasks, following end-to-end paradigms to generate explanations and frequently overlooking crucial clinical domain knowledge. Methods: We propose a plug-and-play module that explicitly integrates anatomic boundary information into the explanation process for CNN-based thoracopathy classifiers. To generate the anatomic boundary of the lung parenchyma, we utilize a lung segmentation model developed on external public datasets and deploy it on the unseen target dataset to constrain model explanations within the lung parenchyma for the clinical task of thoracopathy classification. Results: Assessed by the intersection over union and Dice similarity coefficient between model-extracted explanations and expert-annotated lesion areas, our method consistently outperformed the baseline devoid of clinical domain knowledge in 71 out of 72 scenarios, encompassing 3 CNN architectures (VGG-11, ResNet-18, and AlexNet), 2 classification settings (binary and multi-label), 3 explanation methods (Saliency Map, Grad-CAM, and Integrated Gradients), and 4 co-occurring thoracic diseases (Atelectasis, Fracture, Mass, and Pneumothorax). Conclusions: We underscore the effectiveness of leveraging radiology knowledge to improve model explanations for CNNs and envisage that it could inspire future efforts to integrate clinical domain knowledge into medical image analysis.
Keywords: atelectasis; convolutional neural networks; diagnostic radiology; explainable artificial intelligence; fracture; Grad-CAM; integrated gradients; mass; pneumothorax; saliency map
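In essence, the plug-and-play step constrains an explanation heatmap to the lung parenchyma before evaluation. A minimal sketch follows, with random arrays standing in for the heatmap, the segmentation mask, and the lesion annotation, and an assumed threshold for binarizing the explanation.

    import numpy as np

    def constrain_to_lungs(heatmap, lung_mask):
        return heatmap * lung_mask                     # zero out non-lung regions

    def iou(binary_a, binary_b):
        inter = np.logical_and(binary_a, binary_b).sum()
        union = np.logical_or(binary_a, binary_b).sum()
        return inter / union if union else 0.0

    heatmap = np.random.rand(224, 224)                 # explanation for one chest X-ray
    lung_mask = np.zeros((224, 224))
    lung_mask[40:200, 30:190] = 1                      # stand-in lung segmentation
    lesion = np.zeros((224, 224), bool)
    lesion[80:120, 60:110] = True                      # stand-in expert annotation

    constrained = constrain_to_lungs(heatmap, lung_mask)
    explained = constrained > 0.7                      # assumed binarization threshold
    print(round(iou(explained, lesion), 3))            # overlap with the annotated lesion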
18. Reconstruction of pile-up events using a one-dimensional convolutional autoencoder for the NEDA detector array
Authors: J. M. Deltoro, G. Jaworski, A. Goasduff, V. González, A. Gadea, M. Palacz, J. J. Valiente-Dobón, J. Nyberg, S. Casans, A. E. Navarro-Antón, E. Sanchis, G. de Angelis, A. Boujrad, S. Coudert, T. Dupasquier, S. Ertürk, O. Stezowski, R. Wadsworth. Nuclear Science and Techniques, 2025, Issue 2, pp. 62-70, 9 pages
Pulse pile-up is a problem in nuclear spectroscopy and nuclear reaction studies that occurs when two pulses overlap and distort each other, degrading the quality of energy and timing information. Different methods have been used for pile-up rejection, both digital and analogue, but some pile-up events may contain pulses of interest and need to be reconstructed. This paper proposes a new method for reconstructing pile-up events acquired with a neutron detector array (NEDA) using a one-dimensional convolutional autoencoder (1D-CAE). The datasets for training and testing the 1D-CAE are created from data acquired with NEDA. The new pile-up signal reconstruction method is evaluated in terms of how similar the reconstructed signals are to the original ones. Furthermore, it is analysed considering the result of neutron-gamma discrimination based on charge comparison, comparing the results obtained from the original and reconstructed signals.
Keywords: 1D-CAE; autoencoder; CAE; convolutional neural network (CNN); neutron detector; neutron-gamma discrimination (NGD); machine learning; pulse shape discrimination; pile-up pulse
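A minimal PyTorch sketch of a one-dimensional convolutional autoencoder of the kind described above; the layer sizes, kernel widths, and 256-sample waveform length are illustrative rather than those used for NEDA.

    import torch
    import torch.nn as nn

    class CAE1D(nn.Module):
        """Sketch of a 1-D convolutional autoencoder for detector pulses."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv1d(1, 16, 9, stride=2, padding=4), nn.ReLU(),
                nn.Conv1d(16, 32, 9, stride=2, padding=4), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose1d(32, 16, 9, stride=2, padding=4, output_padding=1), nn.ReLU(),
                nn.ConvTranspose1d(16, 1, 9, stride=2, padding=4, output_padding=1),
            )

        def forward(self, x):                      # x: (batch, 1, samples)
            return self.decoder(self.encoder(x))

    pulses = torch.randn(32, 1, 256)               # stand-in digitised waveforms
    model = CAE1D()
    recon = model(pulses)
    loss = nn.functional.mse_loss(recon, pulses)   # reconstruction objective
    print(recon.shape, float(loss))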
19. Experiments on image data augmentation techniques for geological rock type classification with convolutional neural networks
Authors: Afshin Tatar, Manouchehr Haghighi, Abbas Zeinijahromi. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 1, pp. 106-125, 20 pages
The integration of image analysis through deep learning (DL) into rock classification represents a significant leap forward in geological research. While traditional methods remain invaluable for their expertise and historical context, DL offers a powerful complement by enhancing the speed, objectivity, and precision of the classification process. This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks (CNNs) for geological image analysis, particularly in the classification of igneous, metamorphic, and sedimentary rock types from rock thin section (RTS) images. The study primarily focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision. Results demonstrate that augmentation techniques such as Equalize significantly enhance the model's classification capabilities, achieving an F1-score of 0.9869 for igneous rocks, 0.9884 for metamorphic rocks, and 0.9929 for sedimentary rocks, representing improvements over the baseline results. Moreover, the weighted average F1-score across all classes and techniques is 0.9886, indicating an enhancement. Conversely, methods such as Distort lead to decreased accuracy and F1-score, with an F1-score of 0.949 for igneous rocks, 0.954 for metamorphic rocks, and 0.9416 for sedimentary rocks, degrading performance relative to the baseline. The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results. The findings can benefit various fields, including remote sensing, mineral exploration, and environmental monitoring, by enhancing the accuracy of geological image analysis for both scientific research and industrial applications.
Keywords: deep learning (DL); image analysis; image data augmentation; convolutional neural networks (CNNs); geological image analysis; rock classification; rock thin section (RTS) images
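The Equalize augmentation highlighted in the abstract corresponds to standard histogram equalisation, for example via Pillow's ImageOps.equalize; the random image below merely stands in for a real rock thin-section image.

    from PIL import Image, ImageOps
    import numpy as np

    rng = np.random.default_rng(0)
    rts = Image.fromarray(rng.integers(0, 256, (224, 224, 3), dtype=np.uint8))  # stand-in RTS image

    equalized = ImageOps.equalize(rts)           # spreads intensities over the full range
    augmented_pair = [rts, equalized]            # original plus augmented copy for training
    print(equalized.size, equalized.mode)        # (224, 224) RGB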
20. Convolution-Transformer for Image Feature Extraction (Cited by: 1)
Authors: Lirong Yin, Lei Wang, Siyu Lu, Ruiyang Wang, Youshuai Yang, Bo Yang, Shan Liu, Ahmed AlSanad, Salman A. AlQahtani, Zhengtong Yin, Xiaolu Li, Xiaobing Chen, Wenfeng Zheng. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 10, pp. 87-106, 20 pages
This study addresses the limitations of Transformer models in image feature extraction, particularly their lack of inductive bias for visual structures. Compared to convolutional neural networks (CNNs), Transformers are more sensitive to different optimizer hyperparameters, which leads to a lack of stability and slow convergence. To tackle these challenges, we propose the Convolution-based Efficient Transformer Image Feature Extraction Network (CEFormer) as an enhancement of the Transformer architecture. Our model incorporates E-Attention, depthwise separable convolution, and dilated convolution to introduce crucial inductive biases, such as translation invariance, locality, and scale invariance, into the Transformer framework. Additionally, we implement a lightweight convolution module to process the input images, resulting in faster convergence and improved stability. This yields an efficient Transformer network combined with convolution for image feature extraction. Experimental results on the ImageNet-1k dataset demonstrate that the proposed network achieves better Top-1 accuracy while maintaining high computational speed, reaching up to 85.0% accuracy across various model sizes on image classification and outperforming various baseline models. When integrated into the Mask Region-based Convolutional Neural Network (Mask R-CNN) framework as a backbone network, CEFormer outperforms other models and achieves the highest mean Average Precision (mAP) scores. This research presents a significant advancement in Transformer-based image feature extraction, balancing performance and computational efficiency.
Keywords: Transformer; E-Attention; depthwise convolution; dilated convolution; CEFormer