Medical procedures are inherently invasive and carry the risk of inducing pain in the mind and body. Recently, efforts have been made to alleviate the discomfort associated with invasive medical procedures through the use of virtual reality (VR) technology. VR has been demonstrated to be an effective treatment for pain associated with medical procedures, as well as for chronic pain conditions for which no effective treatment has been established. The precise mechanism by which the diversion from reality facilitated by VR contributes to the diminution of pain and anxiety has yet to be elucidated. However, the provision of positive images through VR-based visual stimulation may enhance the functionality of brain networks: activity in the salience network is diminished, while the default mode network is enhanced. Additionally, the medial prefrontal cortex may establish a stronger connection with the default mode network, which could result in a reduction of pain and anxiety. Further research into the potential of VR technology to alleviate pain could lead to a reduction in the number of individuals who overdose on painkillers and contribute to positive change in the medical field.
The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats. Intrusion detection systems are crucial to network security, playing a pivotal role in safeguarding networks from potential threats. However, in the context of an evolving landscape of sophisticated and elusive attacks, existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts. To address these issues, this paper proposes a real-time network intrusion detection method based on graph neural networks. The proposed method leverages the advantages of graph neural networks and employs a straightforward graph construction method to represent network traffic as dynamic graph-structured data. Additionally, a graph convolution operation with a multi-head attention mechanism is utilized to enhance the model's ability to capture the intricate relationships within the graph structure comprehensively. Furthermore, it uses an integrated graph neural network to address dynamic graphs' structural and topological changes at different time points and the challenges of edge embedding in intrusion detection data. The edge classification problem is effectively transformed into node classification by employing a line graph data representation, which facilitates fine-grained intrusion detection tasks on dynamic graph node feature representations. The efficacy of the proposed method is evaluated using two commonly used intrusion detection datasets, UNSW-NB15 and NF-ToN-IoT-v2, and results are compared with previous studies in this field. The experimental results demonstrate that our proposed method achieves 99.3% and 99.96% accuracy on the two datasets, respectively, and outperforms the benchmark model in several evaluation metrics.
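To make the line-graph idea above concrete, here is a minimal sketch (not the paper's actual pipeline) of how edge classification becomes node classification: networkx's line graph turns every flow (edge) of a hypothetical traffic graph into a node, and the flow features ride along so an ordinary node classifier can label them.

```python
import networkx as nx

# Hypothetical flow graph: nodes are hosts, edges are traffic flows
# carrying feature vectors (e.g., byte counts, packet counts).
G = nx.DiGraph()
G.add_edge("10.0.0.1", "10.0.0.2", features=[1200, 15])
G.add_edge("10.0.0.2", "10.0.0.3", features=[80, 2])
G.add_edge("10.0.0.1", "10.0.0.3", features=[56000, 430])

# The line graph L(G) has one node per edge of G; two such nodes are linked
# when the corresponding flows share an endpoint. Edge classification on G
# thus becomes node classification on L(G).
L = nx.line_graph(G)

# Carry each flow's features over to its line-graph node.
for u, v in L.nodes():
    L.nodes[(u, v)]["features"] = G.edges[u, v]["features"]

print(L.nodes(data=True))
```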
The emergence of next generation networks (NextG), including 5G and beyond, is reshaping the technological landscape of cellular and mobile networks. These networks are sufficiently scaled to interconnect billions of users and devices. Researchers in academia and industry are focusing on technological advancements to achieve high-speed transmission, cell planning, and latency reduction to facilitate emerging applications such as virtual reality, the metaverse, smart cities, smart health, and autonomous vehicles. NextG continuously improves its network functionality to support these applications. Multiple input multiple output (MIMO) technology offers spectral efficiency, dependability, and overall performance in conjunction with NextG. This article proposes a secure channel estimation technique in MIMO topology using a norm-estimation model to provide comprehensive insights into protecting NextG network components against adversarial attacks. The technique aims to create long-lasting and secure NextG networks using this extended approach. The viability of MIMO applications and modern AI-driven methodologies to combat cybersecurity threats are explored in this research. Moreover, the proposed model demonstrates high performance in terms of reliability and accuracy, with a 20% reduction in the MalOut-RealOut-Diff metric compared to existing state-of-the-art techniques.
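The abstract does not spell out the norm-estimation model itself, but the general shape of pilot-based MIMO channel estimation can be illustrated with a plain least-squares estimate. The sketch below uses a hypothetical 4x4 link and checks the Frobenius-norm error of the estimate, in the spirit of a norm-based reliability metric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x4 MIMO link: Y = H X + N, with a known pilot matrix X.
nt, nr, n_pilots = 4, 4, 16
H_true = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
X = (rng.standard_normal((nt, n_pilots)) + 1j * rng.standard_normal((nt, n_pilots))) / np.sqrt(2)
N = 0.05 * (rng.standard_normal((nr, n_pilots)) + 1j * rng.standard_normal((nr, n_pilots)))
Y = H_true @ X + N

# Least-squares channel estimate: H_hat = Y X^H (X X^H)^{-1}.
H_hat = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)

# Frobenius-norm error between estimate and ground truth.
print("relative error:", np.linalg.norm(H_hat - H_true) / np.linalg.norm(H_true))
```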
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling-Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
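As background for the Dueling-DDQN component, a minimal dueling Q-network head is sketched below; the state and action dimensions are hypothetical placeholders, and prioritized replay and the double-Q target update are omitted.

```python
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    """Minimal dueling Q-network: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        v = self.value(h)
        a = self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)

# Hypothetical sizes: a 10-dimensional offloading state, 5 candidate servers.
q_net = DuelingDQN(state_dim=10, n_actions=5)
print(q_net(torch.randn(2, 10)).shape)  # torch.Size([2, 5])
```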
Software-defined networking (SDN) is an innovative paradigm that separates the control and data planes, introducing centralized network control. SDN is increasingly being adopted by Carrier Grade networks, offering enhanced network management capabilities compared with those of traditional networks. However, because SDN is designed to ensure high-level service availability, it faces additional challenges. One of the most critical challenges is ensuring efficient detection of and recovery from link failures in the data plane. Such failures can significantly impact network performance and lead to service outages, making resiliency a key concern for the effective adoption of SDN. Since the recovery process is intrinsically dependent on timely failure detection, this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN. The survey provides a critical comparison of existing failure detection techniques, highlighting their advantages and disadvantages. Additionally, it examines the current failure recovery methods, categorized as either restoration-based or protection-based, and offers a comprehensive comparison of their strengths and limitations. Lastly, future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
Link failure is a critical issue in large networks and must be effectively addressed. In software-defined networks (SDN), link failure recovery schemes can be categorized into proactive and reactive approaches. Reactive schemes have longer recovery times, while proactive schemes provide faster recovery but overwhelm the memory of switches with flow entries. As SDN adoption grows, ensuring efficient recovery from link failures in the data plane becomes crucial. In particular, data center networks (DCNs) demand rapid recovery times and efficient resource utilization to meet carrier-grade requirements. This paper proposes an efficient Decentralized Failure Recovery (DFR) model for SDNs that meets recovery time requirements and optimizes switch memory resource consumption. The DFR model enables switches to autonomously reroute traffic upon link failures without involving the controller, achieving fast recovery times while minimizing memory usage. DFR employs the Fast Failover Group in the OpenFlow standard for local recovery without requiring controller communication and utilizes the k-shortest path algorithm to proactively install backup paths, allowing immediate local recovery without controller intervention and enhancing overall network stability and scalability. DFR also employs flow entry aggregation techniques to reduce switch memory usage: instead of matching flow entries to the destination host's MAC address, DFR matches packets to the destination switch's MAC address, which reduces the switches' Ternary Content-Addressable Memory (TCAM) consumption. Additionally, DFR modifies Address Resolution Protocol (ARP) replies to provide source hosts with the destination switch's MAC address, facilitating flow entry aggregation without affecting normal network operations. The performance of DFR is evaluated using the Mininet 2.3.1 network emulator with Ryu 3.1 as the SDN controller. For different numbers of active flows, hosts per edge switch, and network sizes, the proposed model outperformed various failure recovery models (restoration-based, protection by flow entries, protection by group entries, and protection by VLAN-tagging) in terms of recovery time, switch memory consumption, and controller overhead, measured as the number of flow entry updates needed to recover from a failure. Experimental results demonstrate that DFR achieves recovery times under 20 milliseconds, satisfying carrier-grade requirements for rapid failure recovery. Additionally, DFR reduces switch memory usage by up to 95% compared to traditional protection methods and minimizes controller load by eliminating the need for controller intervention during failure recovery. The results underscore the efficiency and scalability of the DFR model, making it a practical solution for enhancing network resilience in SDN environments.
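The k-shortest-path step that DFR uses to pre-install backup paths can be sketched with networkx's loop-free shortest-path enumeration; the four-switch topology and link weights below are hypothetical.

```python
from itertools import islice
import networkx as nx

def k_shortest_paths(G: nx.Graph, src, dst, k: int):
    """First k loop-free shortest paths, enumerated in order of total weight."""
    return list(islice(nx.shortest_simple_paths(G, src, dst, weight="weight"), k))

# Hypothetical 4-switch topology; weights model link costs.
G = nx.Graph()
G.add_weighted_edges_from([("s1", "s2", 1), ("s2", "s4", 1),
                           ("s1", "s3", 1), ("s3", "s4", 2),
                           ("s2", "s3", 1)])

# Primary path plus backups that a controller could pre-install as
# fast-failover group buckets.
for path in k_shortest_paths(G, "s1", "s4", k=3):
    print(path)
```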
Control signaling is mandatory for the operation and management of all types of communication networks, including the Third Generation Partnership Project (3GPP) mobile broadband networks. However, it consumes important and scarce network resources such as bandwidth and processing power. There have been several reports of control signaling escalating into signaling storms that halt network operations and cause the respective telecom companies large financial losses. This paper draws its motivation from such real network disaster incidents attributed to signaling storms. In this paper, we present a thorough survey of the causes of signaling storm problems in 3GPP-based mobile broadband networks and discuss in detail their possible solutions and countermeasures. We provide relevant analytical models to help quantify the effect of the potential causes and the benefits of their corresponding solutions. Another important contribution of this paper is the comparison, in the form of a table, of the possible causes and solutions/countermeasures with respect to their effect on several important network aspects such as architecture, additional signaling, and fidelity. This paper presents an update and an extension of our earlier conference publication. To our knowledge, no similar survey study exists on the subject.
BACKGROUND: Mitochondrial genes are involved in tumor metabolism in ovarian cancer (OC) and affect immune cell infiltration and treatment responses. AIM: To predict prognosis and immunotherapy response in patients diagnosed with OC using mitochondrial genes and neural networks. METHODS: Prognosis, immunotherapy efficacy, and next-generation sequencing data of patients with OC were downloaded from The Cancer Genome Atlas and Gene Expression Omnibus. Mitochondrial genes were sourced from the MitoCarta3.0 database. The discovery cohort for model construction was created from 70% of the patients, whereas the remaining 30% constituted the validation cohort. Using the expression of mitochondrial genes as the predictor variable and based on a neural network algorithm, the overall survival time and immunotherapy efficacy (complete or partial response) of patients were predicted. RESULTS: In total, 375 patients with OC were included to construct the prognostic model, and 26 patients were included to construct the immune efficacy model. The average area under the receiver operating characteristic curve (AUC) of the prognostic model was 0.7268 (95% confidence interval (CI): 0.7258-0.7278) in the discovery cohort and 0.6475 (95% CI: 0.6466-0.6484) in the validation cohort. The average AUC of the immunotherapy efficacy model was 0.9444 (95% CI: 0.8333-1.0000) in the discovery cohort and 0.9167 (95% CI: 0.6667-1.0000) in the validation cohort. CONCLUSION: The application of mitochondrial genes and neural networks has the potential to predict prognosis and immunotherapy response in patients with OC, providing valuable insights into personalized treatment strategies.
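As an illustration of the modeling setup (not the study's actual code or data), the sketch below trains a small neural network on a synthetic stand-in for a gene-expression matrix, using the same 70/30 discovery/validation split and reporting the AUC metric from the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic stand-in for a mitochondrial-gene expression matrix:
# 375 samples x 50 genes, with a binary outcome (e.g., survival past a cutoff).
X = rng.standard_normal((375, 50))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(375) > 0).astype(int)

# 70/30 discovery/validation split, mirroring the study design.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)

# Area under the ROC curve, the metric reported in the abstract.
print("validation AUC:", roc_auc_score(y_va, clf.predict_proba(X_va)[:, 1]))
```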
This article examines the architecture of software-defined networks (SDN) and its implications for the modern management of communications infrastructures. By decoupling the control plane from the data plane, SDN offers increased flexibility and programmability, enabling rapid adaptation to changing user requirements. However, this new approach poses significant challenges in terms of security, fault tolerance, and interoperability. This paper highlights these challenges and explores current strategies to ensure the resilience and reliability of SDN networks in the face of threats and failures. In addition, we analyze the future outlook for SDN and the importance of integrating robust security solutions into these infrastructures.
In this paper, we propose a neural network approach to learn the parameters of a class of stochastic Lotka-Volterra systems. Approximations of the mean and covariance matrix of the observational variables are obtained from the Euler-Maruyama discretization of the underlying stochastic differential equations (SDEs), based on which the loss function is built. The stochastic gradient descent method is applied in the neural network training. Numerical experiments demonstrate the effectiveness of our method.
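A minimal sketch of the Euler-Maruyama discretization on which such a loss is built, using a hypothetical two-species stochastic Lotka-Volterra system (the coefficients and noise levels are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-species stochastic Lotka-Volterra SDE:
#   dX = X (a - b Y) dt + sigma_x X dW1
#   dY = Y (c X - d) dt + sigma_y Y dW2
a, b, c, d = 1.0, 0.5, 0.4, 1.2
sigma_x, sigma_y = 0.1, 0.1
dt, n_steps = 1e-3, 10_000

x, y = 2.0, 1.0
traj = np.empty((n_steps, 2))
for i in range(n_steps):
    dw1, dw2 = rng.normal(0.0, np.sqrt(dt), size=2)
    # Euler-Maruyama update: drift * dt + diffusion * dW.
    x += x * (a - b * y) * dt + sigma_x * x * dw1
    y += y * (c * x - d) * dt + sigma_y * y * dw2
    traj[i] = (x, y)

# Sample mean and covariance of the observed variables, the quantities
# from which the paper's loss function is assembled.
print(traj.mean(axis=0), np.cov(traj.T))
```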
The high proportion of uncertain distributed power sources and the access of large-scale random electric vehicle (EV) charging resources further aggravate the voltage fluctuation of the distribution network. Existing research has not deeply explored the active-reactive synergistic regulating characteristics of EVs and has failed to realize multi-timescale synergistic control with other regulating means. For this reason, this paper proposes a multilevel linkage coordinated optimization strategy to reduce the voltage deviation of the distribution network. Firstly, a capacitor bank reactive power compensation voltage control model and a distributed photovoltaic (PV) active-reactive power regulation model are established. Additionally, an external characteristic model of EV active-reactive power regulation is developed considering the four-quadrant operational characteristics of the EV charger. A multi-objective optimization model of the distribution network is then constructed considering the time-series coupling constraints of multiple types of voltage regulators. A multi-timescale control strategy is proposed by considering the impact of voltage regulators on active-reactive EV energy consumption and PV energy consumption. Then, a four-stage voltage control optimization strategy is proposed for various types of voltage regulators at multiple time scales. The multi-objective optimization is solved with the improved Drosophila algorithm to realize power fluctuation control of the distribution network and multi-stage voltage control optimization. Simulation results validate that the proposed voltage control optimization strategy achieves coordinated control of decentralized voltage control resources in the distribution network. It effectively reduces the voltage deviation of the distribution network while ensuring the energy demand of EV users and enhancing the stability and economic efficiency of the distribution network.
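One concrete piece of the EV model above is the four-quadrant charger's capability circle: at active power P, the reactive power available for voltage support is bounded by the apparent-power rating, P^2 + Q^2 <= S^2. A small sketch, with a hypothetical 7 kVA charger:

```python
import numpy as np

def feasible_reactive_range(p_kw: float, s_rating_kva: float) -> tuple[float, float]:
    """Reactive-power range of a four-quadrant EV charger at active power p.

    The charger can absorb or inject Q anywhere on the apparent-power circle:
    P^2 + Q^2 <= S^2, so |Q| <= sqrt(S^2 - P^2).
    """
    if abs(p_kw) > s_rating_kva:
        raise ValueError("active power exceeds charger rating")
    q_max = np.sqrt(s_rating_kva**2 - p_kw**2)
    return -q_max, q_max

# Hypothetical 7 kVA charger delivering 5 kW: remaining headroom for
# voltage-support reactive power.
print(feasible_reactive_range(5.0, 7.0))  # about (-4.9, 4.9) kvar
```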
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, this method is applied to solve the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered even with sparse or noisy data.
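Hard parameter sharing is one of the parameter-sharing structures such an MTL setup can test; the following minimal sketch shows the shape of it: a shared trunk with one output head per task, where the extra heads could stand in for the auxiliary integral-term outputs. The sizes and the three-task split are hypothetical.

```python
import torch
import torch.nn as nn

class SharedTrunkMTL(nn.Module):
    """Hard parameter sharing: one shared trunk, one output head per task.

    Head 0 could approximate the solution u(t); the remaining heads act as
    auxiliary outputs for the integral terms on each delay interval.
    """

    def __init__(self, n_tasks: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_tasks))

    def forward(self, t: torch.Tensor) -> list[torch.Tensor]:
        h = self.trunk(t)
        return [head(h) for head in self.heads]

# Hypothetical setup: 3 tasks (e.g., the solution plus two integral terms).
model = SharedTrunkMTL(n_tasks=3)
outputs = model(torch.linspace(0.0, 1.0, 5).unsqueeze(-1))
print([o.shape for o in outputs])  # three [5, 1] tensors
```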
Time series forecasting is essential for generating predictive insights across various domains, including healthcare, finance, and energy. This study focuses on forecasting patient health data by comparing the performance of traditional linear time series models, namely Autoregressive Integrated Moving Average (ARIMA), Seasonal ARIMA (SARIMA), and Moving Average (MA), against neural network architectures. The primary goal is to evaluate the effectiveness of these models in predicting healthcare outcomes using patient records, specifically the Cancerpatient.xlsx dataset, which tracks variables such as patient age, symptoms, genetic risk factors, and environmental exposures over time. The proposed strategy involves training each model on historical patient data to predict age progression and other related health indicators, with performance evaluated using Mean Squared Error (MSE) and Root Mean Squared Error (RMSE) metrics. Our findings reveal that neural networks consistently outperform ARIMA and SARIMA by capturing non-linear patterns and complex temporal dependencies within the dataset, resulting in lower forecasting errors. This research highlights the potential of neural networks to enhance predictive accuracy in healthcare applications, supporting better resource allocation, patient monitoring, and long-term health outcome predictions.
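A minimal reproduction of the comparison methodology on synthetic data (the patient dataset itself is not reproduced here): fit an ARIMA baseline and a sliding-window neural network, then score both by RMSE. The model order, window length, and series shape are illustrative choices, not the study's.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# Synthetic nonlinear health-indicator series, a stand-in for the patient data.
t = np.arange(300, dtype=float)
series = 50 + 0.05 * t + 5 * np.sin(t / 10) ** 2 + rng.normal(0, 0.5, 300)
train, test = series[:250], series[250:]

# Linear baseline: ARIMA(2,1,1) forecast over the test horizon.
arima_fc = ARIMA(train, order=(2, 1, 1)).fit().forecast(steps=len(test))

# Neural baseline: MLP trained on sliding windows of the last 10 observations.
w = 10
X = np.stack([train[i:i + w] for i in range(len(train) - w)])
y = train[w:]
mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0).fit(X, y)

history = list(train[-w:])
mlp_fc = []
for _ in range(len(test)):  # recursive one-step-ahead forecasting
    pred = mlp.predict(np.array(history[-w:])[None, :])[0]
    mlp_fc.append(pred)
    history.append(pred)

rmse = lambda f: np.sqrt(np.mean((np.asarray(f) - test) ** 2))
print("ARIMA RMSE:", rmse(arima_fc), "MLP RMSE:", rmse(mlp_fc))
```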
The friendship paradox states that individuals are likely to have fewer friends than their friends do, on average. Despite its wide existence and appealing applications in real social networks, the mathematical understanding of the friendship paradox is very limited. Only a few works provide theoretical evidence of the single-step and multi-step friendship paradoxes, given that the neighbors of interest are one hop or multiple hops away from the target node. However, they consider non-evolving networks, as opposed to the topology of real social networks that are constantly growing over time. We are thus motivated to present a first look into the friendship paradox in evolving networks, where newly added nodes preferentially attach themselves to those with higher degrees. Our analytical verification of both single-step and multi-step friendship paradoxes in evolving networks, along with comparison to the non-evolving counterparts, discloses that "the friendship paradox is even more paradoxical in evolving networks", primarily from three aspects: 1) we demonstrate a strengthened effect of the single-step friendship paradox in evolving networks, with a larger probability (more than 0.8) of a random node's neighbors having a higher average degree than the random node itself; 2) we unravel the higher effectiveness of the multi-step friendship paradox in seeking influential nodes in evolving networks, as the rate of reaching the maximum-degree node can be improved by a factor of at least Θ(t^(2/3)), with t being the network size; 3) we empirically verify our findings through both synthetic and real datasets, which show high agreement of results and consolidate the reasonableness of the evolving model for real social networks.
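The single-step paradox is easy to check empirically on a preferential-attachment graph, which matches the evolving-network model described above. The sketch below measures the fraction of nodes whose neighbors' mean degree exceeds their own; the 10,000-node size and m=3 are arbitrary choices.

```python
import networkx as nx

# Preferential-attachment (Barabási-Albert) graph as a stand-in for the
# evolving-network model: each new node attaches to m high-degree nodes.
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=0)

# Single-step friendship paradox: fraction of nodes whose neighbors' mean
# degree exceeds their own degree.
count = 0
for v in G.nodes():
    nbrs = list(G.neighbors(v))
    if sum(G.degree(u) for u in nbrs) / len(nbrs) > G.degree(v):
        count += 1

print("fraction in paradox:", count / G.number_of_nodes())  # typically > 0.8
```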
A Wireless Sensor Network (WSN) comprises a set of interconnected, compact, autonomous, and resource-constrained sensor nodes that are wirelessly linked to monitor and gather data from the physical environment. WSNs are commonly used in various applications such as environmental monitoring, surveillance, healthcare, agriculture, and industrial automation. Despite the benefits of WSNs, energy efficiency remains a challenging problem that needs to be addressed. Clustering and routing can be considered effective solutions to accomplish energy efficiency in WSNs. Recent studies have reported that metaheuristic algorithms can be applied to optimize cluster formation and routing decisions. This study introduces a new Northern Goshawk Optimization with boosted coati optimization algorithm for cluster-based routing (NGOBCO-CBR) method for WSN. The proposed NGOBCO-CBR method resolves the hot spot problem, uneven load balancing, and energy consumption in WSN. The NGOBCO-CBR technique comprises two major processes: NGO-based clustering and BCO-based routing. In the initial phase, the NGO-based clustering method is designed for cluster head (CH) selection and cluster construction using five input variables: residual energy (RE), node proximity, load balancing, network average energy, and distance to BS (DBS). Besides, the NGOBCO-CBR technique applies the BCO algorithm for the optimum selection of routes to the BS. The experimental results of the NGOBCO-CBR technique are studied under different scenarios, and the obtained results showcased the improved efficiency of the NGOBCO-CBR technique over recent approaches in terms of different measures.
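The abstract names the five clustering inputs but not the fitness function itself, so the following is a purely hypothetical weighted fitness for CH selection, included only to show how such inputs are typically combined; the weights and field names are invented, not NGOBCO-CBR's.

```python
def ch_fitness(node: dict, weights=(0.35, 0.2, 0.15, 0.15, 0.15)) -> float:
    """Hypothetical weighted fitness for cluster-head selection.

    Combines the five abstract-listed inputs: residual energy (maximized),
    node proximity, load, deviation from the network average energy, and
    distance to the base station (all minimized). Inputs are assumed
    normalized to [0, 1]; a higher score means a better CH candidate.
    """
    w1, w2, w3, w4, w5 = weights
    return (w1 * node["residual_energy"]
            - w2 * node["proximity"]
            - w3 * node["load"]
            - w4 * abs(node["residual_energy"] - node["network_avg_energy"])
            - w5 * node["dist_to_bs"])

candidate = {"residual_energy": 0.8, "proximity": 0.3, "load": 0.4,
             "network_avg_energy": 0.6, "dist_to_bs": 0.5}
print(ch_fitness(candidate))
```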
Pollution from heavy metals (HMs) (Cd, As, Cr, Ni, etc.) has become a serious environmental issue in urban wetland ecosystems and is attracting increasing attention. Previous studies conducted in agricultural soils, rivers, and lakes demonstrated that microbial communities exhibit a response to HM pollution. Yet, little is known about the response of microbial communities to HM pollution in urban wetland ecosystems. We examined how heavy metals affect the stability of the microbial networks in the sediments of the Sanyang wetland, Wenzhou, China. Key environmental parameters, including HMs, TC (total carbon), TN (total nitrogen), TP (total phosphorus), S, and pH, varied profoundly between moderately and heavily polluted areas in shaping microbial communities. Specifically, the microbial community composition in moderately polluted sites correlated significantly (P<0.05) with Ni, Cu, Cd, and TP, whereas in heavily polluted sites, it correlated significantly with Cd, TN, TP, and S. Results show that the heavily polluted sites demonstrated more intricate and more stable microbial networks than those of the moderately polluted area. The heavily polluted sites exhibited higher values for various network parameters, including total nodes, total links, average degree, average clustering coefficient, connectance, relative modularity, robustness, and cohesion. Moreover, structural equation modeling analysis demonstrated a positive correlation between the stability of microbial networks and Cd, TN, TP, and S in heavily polluted sites. Conversely, in moderately polluted sites, the correlation was positively linked to Cd, Ni, and sediment pH. This implies that Cd could potentially play a crucial role in affecting the stability of microbial networks. This study should enhance our comprehension of microbial co-occurrence patterns in urban wetland ecosystems and offer insights into the ways in which microbial communities respond to environmental factors under varying levels of HM pollution.
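The network parameters listed above are standard graph statistics; as an illustration (on a random stand-in, not real co-occurrence data), they can be computed with networkx as follows.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical microbial co-occurrence network: nodes are taxa, edges are
# significant correlations between their abundances.
G = nx.erdos_renyi_graph(n=60, p=0.08, seed=3)

n, m = G.number_of_nodes(), G.number_of_edges()
avg_degree = 2 * m / n
connectance = m / (n * (n - 1) / 2)  # realized fraction of possible links
avg_clustering = nx.average_clustering(G)
communities = greedy_modularity_communities(G)
modularity = nx.algorithms.community.modularity(G, communities)

print(f"nodes={n} links={m} avg_degree={avg_degree:.2f} "
      f"connectance={connectance:.3f} clustering={avg_clustering:.3f} "
      f"modularity={modularity:.3f}")
```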
Aspect-oriented sentiment analysis is a meticulous sentiment analysis task that aims to analyse the sentiment polarity of specific aspects. Most of the current research builds graph convolutional networks based on dependency syntactic trees, which improves the classification performance of the models to some extent. However, the technical limitations of dependency syntactic trees can introduce considerable noise into the model. Meanwhile, it is difficult for a single graph convolutional network to aggregate both semantic and syntactic structural information of nodes, which affects the final sentence classification. To cope with the above problems, this paper proposes a bi-channel graph convolutional network model. The model introduces a phrase structure tree and transforms it into a hierarchical phrase matrix. The adjacency matrix of the dependency syntactic tree and the hierarchical phrase matrix are combined as the initial matrix of the graph convolutional network to enhance the syntactic information. The semantic information feature representations of the sentences are obtained by the graph convolutional network with a multi-head attention mechanism and fused to achieve complementary learning of dual-channel features. Experimental results show that the model performs well and improves the accuracy of sentiment classification on three public benchmark datasets, namely Rest14, Lap14 and Twitter.
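A minimal sketch of the matrix-combination step described above: two structural views of a sentence (a stand-in dependency adjacency and a stand-in phrase matrix) are merged into one initial adjacency and passed through a single normalized graph-convolution layer. All sizes and matrices are random placeholders, and the element-wise union is one simple way to combine the views, not necessarily the paper's.

```python
import torch

def gcn_layer(A: torch.Tensor, H: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """One graph-convolution step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return torch.relu(A_norm @ H @ W)

n, d_in, d_out = 6, 16, 8                    # 6 words, hypothetical feature sizes
A_dep = (torch.rand(n, n) > 0.7).float()     # dependency-tree adjacency (stand-in)
A_dep = ((A_dep + A_dep.T) > 0).float()
A_phrase = (torch.rand(n, n) > 0.8).float()  # hierarchical phrase matrix (stand-in)
A_phrase = ((A_phrase + A_phrase.T) > 0).float()

# Combine the two structural views into one initial adjacency matrix.
A_init = ((A_dep + A_phrase) > 0).float()

H = torch.randn(n, d_in)
W = torch.randn(d_in, d_out)
print(gcn_layer(A_init, H, W).shape)  # torch.Size([6, 8])
```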
Container-based virtualization technology has been more widely used in edge computing environments recently due to its advantages of lighter resource occupation, faster startup capability, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect the generated containers to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for the various resources occupied by providing services as revenue and the service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, through the introduction of GCN, the features of the association relationships between the multiple containers in a CC can be effectively extracted to improve the quality of placement. The experimental results show that, under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
With the emphasis on user privacy and communication security, encrypted traffic has increased dramatically, which brings great challenges to traffic classification. Classification methods based on graph neural networks (GNNs) can deal with encrypted traffic well. However, existing GNN-based approaches ignore the relationship between client or server packets. In this paper, we design a network traffic topology for GCN-based classification, called the Flow Mapping Graph (FMG). FMG establishes sequential edges between vertexes by the arrival order of packets and establishes jump-order edges between vertexes by connecting packets in different bursts with the same direction. It not only reflects the time characteristics of the packet but also strengthens the relationship between the client or server packets. Based on FMG, a Traffic Mapping Classification model (TMC-GCN) is designed, which can automatically capture and learn the characteristics and structure information of the top vertex in FMG. The TMC-GCN model is used to classify the encrypted traffic. The encryption stream classification problem is transformed into a graph classification problem, which can effectively deal with data from different data sources and application scenarios. By comparing the performance of TMC-GCN with other classical models on four public datasets, including CICIOT2023, ISCXVPN2016, CICAAGM2017, and GraphDapp, the effectiveness of the FMG algorithm is verified. The experimental results show that the accuracy rate of the TMC-GCN model is 96.13%, the recall rate is 95.04%, and the F1 score is 94.54%.
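One plausible reading of the FMG construction above, sketched on a hypothetical six-packet trace: sequential edges follow arrival order, and jump-order edges link a burst's last packet to the next same-direction packet in a later burst. The exact burst and edge rules are an assumption based on the abstract's wording, not the paper's definition.

```python
import networkx as nx

# Hypothetical packet trace: (packet_id, direction) with direction
# +1 = client->server, -1 = server->client. Consecutive same-direction
# packets form a burst.
packets = [(0, +1), (1, +1), (2, -1), (3, -1), (4, +1), (5, -1)]

fmg = nx.DiGraph()
fmg.add_nodes_from(pid for pid, _ in packets)

# Sequential edges follow packet arrival order.
for (p, _), (q, _) in zip(packets, packets[1:]):
    fmg.add_edge(p, q, kind="sequential")

# Jump-order edges connect the last packet of a burst to the first
# same-direction packet of a later burst.
for i, (p, d) in enumerate(packets[:-1]):
    if packets[i + 1][1] != d:              # p ends its burst
        for q, dq in packets[i + 1:]:
            if dq == d:                     # first later packet with same direction
                fmg.add_edge(p, q, kind="jump")
                break

print(fmg.edges(data=True))
```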
Low Earth orbit (LEO) satellite networks have outstanding advantages, such as wide coverage and independence from the geographic environment, which allow them to provide a broader range of communication services; they have become an essential supplement to the terrestrial network. However, the dynamic changes and uneven distribution of satellite network traffic inevitably bring challenges to multipath routing. Even worse, the harsh space environment often leads to incomplete collection of the network state data used for routing decision-making, which further complicates this challenge. To address this problem, this paper proposes a state-incomplete intelligent dynamic multipath routing algorithm (SIDMRA) to maximize network efficiency even with incomplete state data as input. Specifically, we model the multipath routing problem as a Markov decision process (MDP) and then combine the deep deterministic policy gradient (DDPG) and the K shortest paths (KSP) algorithms to solve for the optimal multipath routing policy. We use the temporal correlation of the satellite network state to fit the incomplete state data and then use a message passing neural network (MPNN) for data enhancement. Simulation results show that the proposed algorithm outperforms baseline algorithms regarding average end-to-end delay and packet loss rate and performs stably under certain missing rates of state data.
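The paper fits missing state data using temporal correlation before MPNN-based enhancement; as a simplified stand-in for that first step, the sketch below fills missing entries of a link-delay matrix with an exponentially weighted estimate from past snapshots. The matrices, sizes, and smoothing factor are hypothetical.

```python
import numpy as np

def fill_missing(history: np.ndarray, current: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """Fill missing entries (NaN) of the current state using temporal correlation.

    A simple stand-in for the paper's approach: missing link metrics are
    replaced by an exponentially weighted estimate from past observations.
    """
    est = history[0].copy()
    for snapshot in history[1:]:
        est = alpha * snapshot + (1 - alpha) * est  # smoothed past state
    return np.where(np.isnan(current), est, current)

rng = np.random.default_rng(5)
history = [rng.uniform(10, 20, size=(4, 4)) for _ in range(3)]  # past delay matrices (ms)
current = rng.uniform(10, 20, size=(4, 4))
current[0, 2] = np.nan  # link reports lost in transit
current[3, 1] = np.nan

print(fill_missing(np.stack(history), current))
```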
文摘Medical procedures are inherently invasive and carry the risk of inducing pain to the mind and body.Recently,efforts have been made to alleviate the discomfort associated with invasive medical procedures through the use of virtual reality(VR)technology.VR has been demonstrated to be an effective treatment for pain associated with medical procedures,as well as for chronic pain conditions for which no effective treatment has been established.The precise mechanism by which the diversion from reality facilitated by VR contributes to the diminution of pain and anxiety has yet to be elucidated.However,the provision of positive images through VR-based visual stimulation may enhance the functionality of brain networks.The salience network is diminished,while the default mode network is enhanced.Additionally,the medial prefrontal cortex may establish a stronger connection with the default mode network,which could result in a reduction of pain and anxiety.Further research into the potential of VR technology to alleviate pain could lead to a reduction in the number of individuals who overdose on painkillers and contribute to positive change in the medical field.
文摘The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats.Intrusion detection systems are crucial to network security,playing a pivotal role in safeguarding networks from potential threats.However,in the context of an evolving landscape of sophisticated and elusive attacks,existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts.To address these issues,this paper proposes a real-time network intrusion detection method based on graph neural networks.The proposedmethod leverages the advantages of graph neural networks and employs a straightforward graph construction method to represent network traffic as dynamic graph-structured data.Additionally,a graph convolution operation with a multi-head attention mechanism is utilized to enhance the model’s ability to capture the intricate relationships within the graph structure comprehensively.Furthermore,it uses an integrated graph neural network to address dynamic graphs’structural and topological changes at different time points and the challenges of edge embedding in intrusion detection data.The edge classification problem is effectively transformed into node classification by employing a line graph data representation,which facilitates fine-grained intrusion detection tasks on dynamic graph node feature representations.The efficacy of the proposed method is evaluated using two commonly used intrusion detection datasets,UNSW-NB15 and NF-ToN-IoT-v2,and results are compared with previous studies in this field.The experimental results demonstrate that our proposed method achieves 99.3%and 99.96%accuracy on the two datasets,respectively,and outperforms the benchmark model in several evaluation metrics.
基金funding from King Saud University through Researchers Supporting Project number(RSP2024R387),King Saud University,Riyadh,Saudi Arabia.
文摘The emergence of next generation networks(NextG),including 5G and beyond,is reshaping the technological landscape of cellular and mobile networks.These networks are sufficiently scaled to interconnect billions of users and devices.Researchers in academia and industry are focusing on technological advancements to achieve highspeed transmission,cell planning,and latency reduction to facilitate emerging applications such as virtual reality,the metaverse,smart cities,smart health,and autonomous vehicles.NextG continuously improves its network functionality to support these applications.Multiple input multiple output(MIMO)technology offers spectral efficiency,dependability,and overall performance in conjunctionwithNextG.This article proposes a secure channel estimation technique in MIMO topology using a norm-estimation model to provide comprehensive insights into protecting NextG network components against adversarial attacks.The technique aims to create long-lasting and secure NextG networks using this extended approach.The viability of MIMO applications and modern AI-driven methodologies to combat cybersecurity threats are explored in this research.Moreover,the proposed model demonstrates high performance in terms of reliability and accuracy,with a 20%reduction in the MalOut-RealOut-Diff metric compared to existing state-of-the-art techniques.
文摘Satellite edge computing has garnered significant attention from researchers;however,processing a large volume of tasks within multi-node satellite networks still poses considerable challenges.The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers,making it necessary to implement effective task offloading scheduling to enhance user experience.In this paper,we propose a priority-based task scheduling strategy based on a Software-Defined Network(SDN)framework for satellite-terrestrial integrated networks,which clarifies the execution order of tasks based on their priority.Subsequently,we apply a Dueling-Double Deep Q-Network(DDQN)algorithm enhanced with prioritized experience replay to derive a computation offloading strategy,improving the experience replay mechanism within the Dueling-DDQN framework.Next,we utilize the Deep Deterministic Policy Gradient(DDPG)algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks.Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches,effectively reducing task processing latency and thus improving user experience and system efficiency.
文摘Software-defined networking(SDN)is an innovative paradigm that separates the control and data planes,introducing centralized network control.SDN is increasingly being adopted by Carrier Grade networks,offering enhanced networkmanagement capabilities than those of traditional networks.However,because SDN is designed to ensure high-level service availability,it faces additional challenges.One of themost critical challenges is ensuring efficient detection and recovery from link failures in the data plane.Such failures can significantly impact network performance and lead to service outages,making resiliency a key concern for the effective adoption of SDN.Since the recovery process is intrinsically dependent on timely failure detection,this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN.The survey provides a critical comparison of existing failure detection techniques,highlighting their advantages and disadvantages.Additionally,it examines the current failure recovery methods,categorized as either restoration-based or protection-based,and offers a comprehensive comparison of their strengths and limitations.Lastly,future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
文摘Link failure is a critical issue in large networks and must be effectively addressed.In software-defined networks(SDN),link failure recovery schemes can be categorized into proactive and reactive approaches.Reactive schemes have longer recovery times while proactive schemes provide faster recovery but overwhelm the memory of switches by flow entries.As SDN adoption grows,ensuring efficient recovery from link failures in the data plane becomes crucial.In particular,data center networks(DCNs)demand rapid recovery times and efficient resource utilization to meet carrier-grade requirements.This paper proposes an efficient Decentralized Failure Recovery(DFR)model for SDNs,meeting recovery time requirements and optimizing switch memory resource consumption.The DFR model enables switches to autonomously reroute traffic upon link failures without involving the controller,achieving fast recovery times while minimizing memory usage.DFR employs the Fast Failover Group in the OpenFlow standard for local recovery without requiring controller communication and utilizes the k-shortest path algorithm to proactively install backup paths,allowing immediate local recovery without controller intervention and enhancing overall network stability and scalability.DFR employs flow entry aggregation techniques to reduce switch memory usage.Instead of matching flow entries to the destination host’s MAC address,DFR matches packets to the destination switch’s MAC address.This reduces the switches’Ternary Content-Addressable Memory(TCAM)consumption.Additionally,DFR modifies Address Resolution Protocol(ARP)replies to provide source hosts with the destination switch’s MAC address,facilitating flow entry aggregation without affecting normal network operations.The performance of DFR is evaluated through the network emulator Mininet 2.3.1 and Ryu 3.1 as SDN controller.For different number of active flows,number of hosts per edge switch,and different network sizes,the proposed model outperformed various failure recovery models:restoration-based,protection by flow entries,protection by group entries and protection by Vlan-tagging model in terms of recovery time,switch memory consumption and controller overhead which represented the number of flow entry updates to recover from the failure.Experimental results demonstrate that DFR achieves recovery times under 20 milliseconds,satisfying carrier-grade requirements for rapid failure recovery.Additionally,DFR reduces switch memory usage by up to 95%compared to traditional protection methods and minimizes controller load by eliminating the need for controller intervention during failure recovery.Theresults underscore the efficiency and scalability of the DFR model,making it a practical solution for enhancing network resilience in SDN environments.
基金the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support(QU-APC-2024-9/1).
文摘Control signaling is mandatory for the operation and management of all types of communication networks,including the Third Generation Partnership Project(3GPP)mobile broadband networks.However,they consume important and scarce network resources such as bandwidth and processing power.There have been several reports of these control signaling turning into signaling storms halting network operations and causing the respective Telecom companies big financial losses.This paper draws its motivation from such real network disaster incidents attributed to signaling storms.In this paper,we present a thorough survey of the causes,of the signaling storm problems in 3GPP-based mobile broadband networks and discuss in detail their possible solutions and countermeasures.We provide relevant analytical models to help quantify the effect of the potential causes and benefits of their corresponding solutions.Another important contribution of this paper is the comparison of the possible causes and solutions/countermeasures,concerning their effect on several important network aspects such as architecture,additional signaling,fidelity,etc.,in the form of a table.This paper presents an update and an extension of our earlier conference publication.To our knowledge,no similar survey study exists on the subject.
基金Supported by National Key Technology Research and Developmental Program of China,No.2022YFC2704400 and No.2022YFC2704405.
文摘BACKGROUND Mitochondrial genes are involved in tumor metabolism in ovarian cancer(OC)and affect immune cell infiltration and treatment responses.AIM To predict prognosis and immunotherapy response in patients diagnosed with OC using mitochondrial genes and neural networks.METHODS Prognosis,immunotherapy efficacy,and next-generation sequencing data of patients with OC were downloaded from The Cancer Genome Atlas and Gene Expression Omnibus.Mitochondrial genes were sourced from the MitoCarta3.0 database.The discovery cohort for model construction was created from 70% of the patients,whereas the remaining 30% constituted the validation cohort.Using the expression of mitochondrial genes as the predictor variable and based on neural network algorithm,the overall survival time and immunotherapy efficacy(complete or partial response)of patients were predicted.RESULTS In total,375 patients with OC were included to construct the prognostic model,and 26 patients were included to construct the immune efficacy model.The average area under the receiver operating characteristic curve of the prognostic model was 0.7268[95% confidence interval(CI):0.7258-0.7278]in the discovery cohort and 0.6475(95%CI:0.6466-0.6484)in the validation cohort.The average area under the receiver operating characteristic curve of the immunotherapy efficacy model was 0.9444(95%CI:0.8333-1.0000)in the discovery cohort and 0.9167(95%CI:0.6667-1.0000)in the validation cohort.CONCLUSION The application of mitochondrial genes and neural networks has the potential to predict prognosis and immunotherapy response in patients with OC,providing valuable insights into personalized treatment strategies.
文摘This article examines the architecture of software-defined networks (SDN) and its implications for the modern management of communications infrastructures. By decoupling the control plane from the data plane, SDN offers increased flexibility and programmability, enabling rapid adaptation to changing user requirements. However, this new approach poses significant challenges in terms of security, fault tolerance, and interoperability. This paper highlights these challenges and explores current strategies to ensure the resilience and reliability of SDN networks in the face of threats and failures. In addition, we analyze the future outlook for SDN and the importance of integrating robust security solutions into these infrastructures.
基金Supported by the National Natural Science Foundation of China(11971458,11471310)。
文摘In this paper,we propose a neural network approach to learn the parameters of a class of stochastic Lotka-Volterra systems.Approximations of the mean and covariance matrix of the observational variables are obtained from the Euler-Maruyama discretization of the underlying stochastic differential equations(SDEs),based on which the loss function is built.The stochastic gradient descent method is applied in the neural network training.Numerical experiments demonstrate the effectiveness of our method.
基金funded by the State Grid Corporation Science and Technology Project(5108-202218280A-2-391-XG).
文摘The high proportion of uncertain distributed power sources and the access to large-scale random electric vehicle(EV)charging resources further aggravate the voltage fluctuation of the distribution network,and the existing research has not deeply explored the EV active-reactive synergistic regulating characteristics,and failed to realize themulti-timescale synergistic control with other regulatingmeans,For this reason,this paper proposes amultilevel linkage coordinated optimization strategy to reduce the voltage deviation of the distribution network.Firstly,a capacitor bank reactive power compensation voltage control model and a distributed photovoltaic(PV)activereactive power regulationmodel are established.Additionally,an external characteristicmodel of EVactive-reactive power regulation is developed considering the four-quadrant operational characteristics of the EVcharger.Amultiobjective optimization model of the distribution network is then constructed considering the time-series coupling constraints of multiple types of voltage regulators.A multi-timescale control strategy is proposed by considering the impact of voltage regulators on active-reactive EV energy consumption and PV energy consumption.Then,a four-stage voltage control optimization strategy is proposed for various types of voltage regulators with multiple time scales.Themulti-objective optimization is solved with the improvedDrosophila algorithmto realize the power fluctuation control of the distribution network and themulti-stage voltage control optimization.Simulation results validate that the proposed voltage control optimization strategy achieves the coordinated control of decentralized voltage control resources in the distribution network.It effectively reduces the voltage deviation of the distribution network while ensuring the energy demand of EV users and enhancing the stability and economic efficiency of the distribution network.
文摘Deep neural networks(DNNs)are effective in solving both forward and inverse problems for nonlinear partial differential equations(PDEs).However,conventional DNNs are not effective in handling problems such as delay differential equations(DDEs)and delay integrodifferential equations(DIDEs)with constant delays,primarily due to their low regularity at delayinduced breaking points.In this paper,a DNN method that combines multi-task learning(MTL)which is proposed to solve both the forward and inverse problems of DIDEs.The core idea of this approach is to divide the original equation into multiple tasks based on the delay,using auxiliary outputs to represent the integral terms,followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function.Furthermore,given the increased training dificulty associated with multiple tasks and outputs,we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks.This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs,as demonstrated by comparisons with traditional DNN methods.We validate the effectiveness of this method through several numerical experiments,test various parameter sharing structures in MTL and compare the testing results of these structures.Finally,this method is implemented to solve the inverse problem of nonlinear DIDE and the results show that the unknown parameters of DIDE can be discovered with sparse or noisy data.
文摘Time series forecasting is essential for generating predictive insights across various domains, including healthcare, finance, and energy. This study focuses on forecasting patient health data by comparing the performance of traditional linear time series models, namely Autoregressive Integrated Moving Average (ARIMA), Seasonal ARIMA, and Moving Average (MA) against neural network architectures. The primary goal is to evaluate the effectiveness of these models in predicting healthcare outcomes using patient records, specifically the Cancerpatient.xlsx dataset, which tracks variables such as patient age, symptoms, genetic risk factors, and environmental exposures over time. The proposed strategy involves training each model on historical patient data to predict age progression and other related health indicators, with performance evaluated using Mean Squared Error (MSE) and Root Mean Squared Error (RMSE) metrics. Our findings reveal that neural networks consistently outperform ARIMA and SARIMA by capturing non-linear patterns and complex temporal dependencies within the dataset, resulting in lower forecasting errors. This research highlights the potential of neural networks to enhance predictive accuracy in healthcare applications, supporting better resource allocation, patient monitoring, and long-term health outcome predictions.
基金supported by NSF China(No.61960206002,62020106005,42050105,62061146002)Shanghai Pilot Program for Basic Research–Shanghai Jiao Tong University.
文摘Friendship paradox states that individuals are likely to have fewer friends than their friends do,on average.Despite of its wide existence and appealing applications in real social networks,the mathematical understanding of friendship paradox is very limited.Only few works provide theoretical evidence of single-step and multi-step friendship paradoxes,given that the neighbors of interest are onehop and multi-hop away from the target node.However,they consider non-evolving networks,as opposed to the topology of real social networks that are constantly growing over time.We are thus motivated to present a first look into friendship paradox in evolving networks,where newly added nodes preferentially attach themselves to those with higher degrees.Our analytical verification of both single-step and multistep friendship paradoxes in evolving networks,along with comparison to the non-evolving counterparts,discloses that“friendship paradox is even more paradoxical in evolving networks”,primarily from three aspects:1)we demonstrate a strengthened effect of single-step friendship paradox in evolving networks,with a larger probability(more than 0.8)of a random node’s neighbors having higher average degree than the random node itself;2)we unravel higher effectiveness of multi-step friendship paradox in seeking for influential nodes in evolving networks,as the rate of reaching the max degree node can be improved by a factor of at least Θ(t^(2/3))with t being the network size;3)we empirically verify our findings through both synthetic and real datasets,which suggest high agreements of results and consolidate the reasonability of evolving model for real social networks.
文摘Wireless Sensor Network(WSN)comprises a set of interconnected,compact,autonomous,and resource-constrained sensor nodes that are wirelessly linked to monitor and gather data from the physical environment.WSNs are commonly used in various applications such as environmental monitoring,surveillance,healthcare,agriculture,and industrial automation.Despite the benefits of WSN,energy efficiency remains a challenging problem that needs to be addressed.Clustering and routing can be considered effective solutions to accomplish energy efficiency in WSNs.Recent studies have reported that metaheuristic algorithms can be applied to optimize cluster formation and routing decisions.This study introduces a new Northern Goshawk Optimization with boosted coati optimization algorithm for cluster-based routing(NGOBCO-CBR)method for WSN.The proposed NGOBCO-CBR method resolves the hot spot problem,uneven load balancing,and energy consumption in WSN.The NGOBCO-CBR technique comprises two major processes such as NGO based clustering and BCO-based routing.In the initial phase,the NGObased clustering method is designed for cluster head(CH)selection and cluster construction using five input variables such as residual energy(RE),node proximity,load balancing,network average energy,and distance to BS(DBS).Besides,the NGOBCO-CBR technique applies the BCO algorithm for the optimum selection of routes to BS.The experimental results of the NGOBCOCBR technique are studied under different scenarios,and the obtained results showcased the improved efficiency of the NGOBCO-CBR technique over recent approaches in terms of different measures.
Funding: Supported by the Major Program of the Institute for Eco-environmental Research of Sanyang Wetland (No. SY2022ZD-1001-05).
Abstract: Pollution from heavy metals (HMs) such as Cd, As, Cr, and Ni has become a serious environmental issue in urban wetland ecosystems and is attracting growing attention. Previous studies conducted in agricultural soils, rivers, and lakes demonstrated that microbial communities respond to HM pollution, yet little is known about this response in urban wetland ecosystems. We examined how heavy metals affect the stability of microbial networks in the sediments of the Sanyang wetland, Wenzhou, China. Key environmental parameters, including HMs, TC (total carbon), TN (total nitrogen), TP (total phosphorus), S, and pH, differed profoundly between moderately and heavily polluted areas and shaped the microbial communities. Specifically, the microbial community composition in moderately polluted sites correlated significantly (P<0.05) with Ni, Cu, Cd, and TP, whereas in heavily polluted sites it correlated significantly with Cd, TN, TP, and S. The heavily polluted sites exhibited more intricate and more stable microbial networks than the moderately polluted sites, with higher values for various network parameters, including total nodes, total links, average degree, average clustering coefficient, connectance, relative modularity, robustness, and cohesion. Moreover, structural equation modeling demonstrated a positive correlation between the stability of microbial networks and Cd, TN, TP, and S in heavily polluted sites, whereas in moderately polluted sites the stability was positively linked to Cd, Ni, and sediment pH. This implies that Cd could play a crucial role in shaping the stability of microbial networks. This study enhances our comprehension of microbial co-occurrence patterns in urban wetland ecosystems and offers insights into how microbial communities respond to environmental factors under varying levels of HM pollution.
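The network parameters listed above are standard graph statistics on a correlation-thresholded co-occurrence network. A minimal sketch, assuming a random abundance table and a |Spearman rho| > 0.3 edge threshold in place of the study's real data and cutoffs:

```python
# Sketch: build a microbial co-occurrence network from a taxa-by-sample
# abundance table and report the metrics named in the abstract.
import numpy as np
import networkx as nx
from networkx.algorithms import community
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
abundance = rng.poisson(5, size=(40, 30))      # 40 taxa x 30 samples (assumed)
rho, _ = spearmanr(abundance, axis=1)          # taxon-taxon rank correlations

G = nx.Graph()
G.add_nodes_from(range(abundance.shape[0]))
for i in range(rho.shape[0]):
    for j in range(i + 1, rho.shape[0]):
        if abs(rho[i, j]) > 0.3:               # co-occurrence edge (assumed cutoff)
            G.add_edge(i, j)

n, m = G.number_of_nodes(), G.number_of_edges()
parts = community.greedy_modularity_communities(G)
print("total nodes:", n)
print("total links:", m)
print("average degree:", 2 * m / n)
print("avg clustering coefficient:", nx.average_clustering(G))
print("connectance:", 2 * m / (n * (n - 1)))
print("modularity:", community.modularity(G, parts))
```

Comparing these values between site groups (heavily vs. moderately polluted) is what supports the "more intricate and more stable" conclusion.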
Abstract: Aspect-oriented sentiment analysis is a fine-grained sentiment analysis task that aims to analyse the sentiment polarity of specific aspects. Most current research builds graph convolutional networks on dependency syntax trees, which improves classification performance to some extent. However, the technical limitations of dependency trees can introduce considerable noise into the model. Meanwhile, it is difficult for a single graph convolutional network to aggregate both the semantic and the syntactic structural information of nodes, which affects the final sentence classification. To cope with these problems, this paper proposes a bi-channel graph convolutional network model. The model introduces a phrase structure tree and transforms it into a hierarchical phrase matrix. The adjacency matrix of the dependency tree and the hierarchical phrase matrix are combined as the initial matrix of the graph convolutional network to enhance the syntactic information. The semantic feature representations of the sentences are obtained by a graph convolutional network with a multi-head attention mechanism and fused to achieve complementary learning of the dual-channel features. Experimental results show that the model performs well and improves the accuracy of sentiment classification on three public benchmark datasets, namely Rest14, Lap14, and Twitter.
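A minimal sketch of the matrix-combination step, assuming a five-token sentence, hand-made dependency and phrase matrices, and element-wise max as the combination operator (the paper does not specify the operator, so this is an assumption):

```python
# Sketch: fuse a dependency-tree adjacency matrix with a hierarchical
# phrase matrix and run one GCN layer over the fused graph.
import numpy as np

n, d = 5, 8                                   # tokens, feature dimension
A_dep = np.zeros((n, n))                      # dependency edges (symmetric)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A_dep[i, j] = A_dep[j, i] = 1.0
A_phr = np.zeros((n, n))                      # tokens sharing a phrase span
for i, j in [(0, 1), (3, 4), (2, 4)]:
    A_phr[i, j] = A_phr[j, i] = 1.0

A = np.maximum(A_dep, A_phr) + np.eye(n)      # fused adjacency + self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt           # symmetric normalization

rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))                   # token features (e.g., encoder output)
W = rng.normal(size=(d, d)) * 0.1             # layer weights
H = np.maximum(A_hat @ X @ W, 0.0)            # one GCN layer with ReLU
print(H.shape)                                # (5, 8) updated token states
```

The phrase matrix contributes edges the dependency tree misses, which is how the fused channel enriches the syntactic signal before the attention-based semantic channel is fused in.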
Abstract: Container-based virtualization technology has recently seen wider use in edge computing environments due to its advantages of lighter resource occupation, faster startup, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect them to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks poses great challenges to the optimal placement of CCs. This paper regards the charges for the various resources occupied in providing services as revenue and the service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic-based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, the introduction of the GCN allows the features of the association relationships between the containers in a CC to be effectively extracted, improving the quality of placement. The experimental results show that, under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
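A minimal sketch of a revenue-minus-cost placement MIP in that spirit; the instance data, prices, and node-dependent energy costs are assumptions, and the paper's full model additionally captures service efficiency and inter-container relationships:

```python
# Sketch: toy MIP placing a container cluster on edge nodes, maximizing
# resource revenue minus an energy-cost term (solved with PuLP/CBC).
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum

containers = {"c0": 2, "c1": 1, "c2": 3}          # CPU demand per container
capacity = {"n0": 4, "n1": 4}                     # CPU capacity per edge node
price = 1.5                                       # revenue per CPU unit served
ecost = {"n0": 0.3, "n1": 0.6}                    # energy cost per CPU unit

prob = LpProblem("cc_placement", LpMaximize)
x = {(c, n): LpVariable(f"x_{c}_{n}", cat=LpBinary)
     for c in containers for n in capacity}

# Objective: resource revenue minus node-dependent energy cost.
prob += lpSum((price - ecost[n]) * containers[c] * x[c, n]
              for c in containers for n in capacity)

# Each container is placed on exactly one node.
for c in containers:
    prob += lpSum(x[c, n] for n in capacity) == 1
# Node capacity limits.
for n, cap in capacity.items():
    prob += lpSum(containers[c] * x[c, n] for c in containers) <= cap

prob.solve()
for (c, n), var in x.items():
    if var.value() == 1:
        print(f"{c} -> {n}")
```

Exact solvers like this become slow as nodes and requests scale, which is the motivation for training the RL-GCN policy to produce placements quickly instead.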
Funding: Supported by the National Key Research and Development Program of China (No. 2023YFA1009500).
Abstract: With the emphasis on user privacy and communication security, encrypted traffic has increased dramatically, which brings great challenges to traffic classification. GNN-based classification methods can handle encrypted traffic well, but existing approaches ignore the relationship between client or server packets. In this paper, we design a network traffic topology for GCN-based classification, called the Flow Mapping Graph (FMG). FMG establishes sequential edges between vertices according to the arrival order of packets and establishes jump-order edges between vertices by connecting same-direction packets in different bursts. It not only reflects the temporal characteristics of packets but also strengthens the relationship between client or server packets. Based on FMG, a Traffic Mapping Classification model (TMC-GCN) is designed, which can automatically capture and learn the features and structural information of the top vertex in FMG. The TMC-GCN model is used to classify the encrypted traffic, transforming the encrypted-stream classification problem into a graph classification problem that can effectively deal with data from different sources and application scenarios. By comparing the performance of TMC-GCN with other classical models on four public datasets, including CICIOT2023, ISCXVPN2016, CICAAGM2017, and GraphDapp, the effectiveness of the FMG algorithm is verified. The experimental results show that the TMC-GCN model achieves an accuracy of 96.13%, a recall of 95.04%, and an F1 score of 94.54%.
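The FMG construction follows directly from that description. A minimal sketch, assuming a toy seven-packet trace where +1 marks client-to-server packets and -1 server-to-client:

```python
# Sketch: build an FMG-style graph with sequential edges (arrival order)
# and jump-order edges (same-direction packets across bursts).
import networkx as nx

dirs = [+1, +1, -1, -1, -1, +1, -1]    # toy packet directions (assumed)

G = nx.DiGraph()
G.add_nodes_from(range(len(dirs)))

# Sequential edges: consecutive packets in arrival order.
for i in range(len(dirs) - 1):
    G.add_edge(i, i + 1, kind="seq")

# Bursts: maximal same-direction runs, stored as (start, end, direction).
bursts, start = [], 0
for i in range(1, len(dirs) + 1):
    if i == len(dirs) or dirs[i] != dirs[start]:
        bursts.append((start, i - 1, dirs[start]))
        start = i

# Jump-order edges: last packet of a burst -> first packet of the next
# burst with the same direction.
for bi, (s1, e1, d1) in enumerate(bursts):
    for s2, e2, d2 in bursts[bi + 1:]:
        if d2 == d1:
            G.add_edge(e1, s2, kind="jump")
            break

print(sorted(G.edges(data="kind")))
```

The sequential edges carry the timing structure while the jump-order edges tie together the client-side (or server-side) packets that a plain chain graph would leave unrelated.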
Abstract: Low Earth orbit (LEO) satellite networks have outstanding advantages, such as a wide coverage area and independence from the geographic environment, which allow them to provide a broader range of communication services; they have become an essential supplement to terrestrial networks. However, the dynamic changes and uneven distribution of satellite network traffic inevitably pose challenges to multipath routing. Even worse, the harsh space environment often leads to incomplete collection of the network state data needed for routing decisions, which further complicates the challenge. To address this problem, this paper proposes a state-incomplete intelligent dynamic multipath routing algorithm (SIDMRA) that maximizes network efficiency even with incomplete state data as input. Specifically, we model the multipath routing problem as a Markov decision process (MDP) and then combine the deep deterministic policy gradient (DDPG) and the K shortest paths (KSP) algorithms to solve for the optimal multipath routing policy. We use the temporal correlation of the satellite network state to fit the incomplete state data and then use a message passing neural network (MPNN) for data enhancement. Simulation results show that the proposed algorithm outperforms baseline algorithms in terms of average end-to-end delay and packet loss rate, and performs stably under certain missing rates of state data.
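A minimal sketch of the KSP side of that pipeline: enumerate K candidate paths, then split traffic across them with a softmax over policy scores. In the paper those scores come from a trained DDPG actor; the random scores and the toy topology here are assumptions:

```python
# Sketch: K shortest candidate paths plus softmax traffic-split ratios.
import networkx as nx
import numpy as np

G = nx.Graph()
G.add_weighted_edges_from([
    ("s", "a", 1.0), ("a", "t", 2.0),
    ("s", "b", 2.0), ("b", "t", 1.0),
    ("a", "b", 1.0), ("s", "t", 5.0),
])

K = 3
paths = nx.shortest_simple_paths(G, "s", "t", weight="weight")
candidates = [next(paths) for _ in range(K)]       # K shortest paths

rng = np.random.default_rng(0)
scores = rng.normal(size=K)                        # actor output (stand-in)
split = np.exp(scores) / np.exp(scores).sum()      # softmax split ratios

for path, ratio in zip(candidates, split):
    print(" -> ".join(path), f"carries {ratio:.2%} of the flow")
```

Restricting the action space to KSP candidates keeps the DDPG actor's output small and well-formed even when parts of the network state must be imputed.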