We study the target inactivation and recovery in two-layer networks. Five attack strategies are applied to the two-layer networks, and the activity of the networks is then recovered by increasing the inter-layer coupling strength. The results show that the dying state can be controlled effectively under random attack. We then investigate the recovery of network activity as the inter-layer coupling strength increases. Optimal values of the inter-layer coupling strength are found, which provide an effective range for recovering the activity of complex networks. As multilayer systems composed of active and inactive elements raise important and interesting problems, our results on target inactivation and recovery in two-layer networks could be extended to related studies. Funding: Supported by the National Basic Research Program of China under Grant Nos 2013CBA01502, 2011CB921503 and 2013CB834100, and by the National Natural Science Foundation of China under Grant Nos 11374040 and 11274051.
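To make the setup concrete, here is a minimal sketch of inactivation and coupling-driven recovery in a two-layer network, assuming Stuart-Landau-type oscillators as the active/inactive units and diffusive coupling; the model form, parameters, and attack size are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                    # nodes per layer (illustrative)
eps_intra, eps_inter = 0.5, 0.1           # intra-/inter-layer coupling strengths

alpha = np.ones((2, N))                   # alpha > 0: active unit, alpha < 0: inactive
attacked = rng.choice(N, size=20, replace=False)
alpha[0, attacked] = -1.0                 # random attack inactivates units in layer 0

A = np.triu(rng.random((N, N)) < 0.1, 1)  # random intra-layer topology
A = (A | A.T).astype(float)
deg = np.maximum(A.sum(1), 1.0)

z = 0.1 * (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N)))
dt, omega = 0.01, 3.0
for _ in range(20000):                    # Euler integration of the coupled units
    intra = eps_intra * ((z @ A) / deg - z)   # diffusive coupling within a layer
    inter = eps_inter * (z[::-1] - z)         # diffusive coupling across layers
    z = z + dt * ((alpha + 1j * omega - np.abs(z) ** 2) * z + intra + inter)

print("mean oscillation amplitude per layer:", np.abs(z).mean(axis=1))
```

Sweeping eps_inter and re-running shows how stronger inter-layer coupling can restore the attacked layer's activity, which is the qualitative effect the abstract describes.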
We study evolutionary games in two-layer networks by introducing a correlation between the two layers through C-dominance or D-dominance. We assume that individuals play the prisoner's dilemma game (PDG) in one layer and the snowdrift game (SDG) in the other. We explore how the fraction of the cooperation strategy in each layer depends on the game parameter and on the initial conditions. The results on two-layer square lattices show that, when cooperation is the dominant strategy, initial conditions strongly influence cooperation in the PDG layer but have no impact in the SDG layer. Moreover, in contrast to the result for the PDG in single-layer square lattices, the parameter regime in which cooperation can be maintained expands significantly in the PDG layer. We also investigate the effects of mutation and network topology. We find that different mutation rates do not change the cooperation behavior. Moreover, similar cooperation behavior is found in two-layer random networks. Funding: Supported by the National Natural Science Foundation of China under Grant Nos 11575036, 71301012, and 11505016.
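A single-layer toy version of the PDG dynamics conveys the update mechanics; the weak prisoner's dilemma payoffs and the Fermi imitation rule used here are common conventions in this literature, and are assumptions rather than the paper's exact two-layer protocol:

```python
import numpy as np

rng = np.random.default_rng(1)
L, b, K = 50, 1.05, 0.1            # lattice size, temptation to defect, Fermi noise
S = rng.integers(0, 2, (L, L))     # 1 = cooperator, 0 = defector

def payoffs(S):
    # Weak PDG payoffs (R=1, T=b, S=P=0) summed over the 4 von Neumann neighbors.
    P = np.zeros(S.shape)
    for ax, sh in ((0, 1), (0, -1), (1, 1), (1, -1)):
        nb = np.roll(S, sh, axis=ax)
        P += np.where(S == 1, 1.0 * nb, b * nb)
    return P

neigh = [(1, 0), (-1, 0), (0, 1), (0, -1)]
for sweep in range(100):
    P = payoffs(S)                 # payoffs held fixed within a sweep (approximation)
    for _ in range(L * L):
        i, j = rng.integers(0, L, 2)
        di, dj = neigh[rng.integers(4)]
        ni, nj = (i + di) % L, (j + dj) % L
        # Fermi rule: imitate the neighbor with a payoff-dependent probability.
        if rng.random() < 1.0 / (1.0 + np.exp((P[i, j] - P[ni, nj]) / K)):
            S[i, j] = S[ni, nj]

print("final cooperator fraction:", S.mean())
```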
Medical procedures are inherently invasive and carry the risk of inducing pain in the mind and body. Recently, efforts have been made to alleviate the discomfort associated with invasive medical procedures through the use of virtual reality (VR) technology. VR has been demonstrated to be an effective treatment for pain associated with medical procedures, as well as for chronic pain conditions for which no effective treatment has been established. The precise mechanism by which the diversion from reality facilitated by VR contributes to the diminution of pain and anxiety has yet to be elucidated. However, the provision of positive images through VR-based visual stimulation may modulate the functionality of brain networks: the salience network is diminished, while the default mode network is enhanced. Additionally, the medial prefrontal cortex may establish a stronger connection with the default mode network, which could result in a reduction of pain and anxiety. Further research into the potential of VR technology to alleviate pain could reduce the number of individuals who overdose on painkillers and contribute to positive change in the medical field.
The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats. Intrusion detection systems are crucial to network security, playing a pivotal role in safeguarding networks from potential threats. However, in an evolving landscape of sophisticated and elusive attacks, existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts. To address these issues, this paper proposes a real-time network intrusion detection method based on graph neural networks. The proposed method leverages the advantages of graph neural networks and employs a straightforward graph construction method to represent network traffic as dynamic graph-structured data. Additionally, a graph convolution operation with a multi-head attention mechanism is used to enhance the model's ability to capture the intricate relationships within the graph structure. Furthermore, an integrated graph neural network is used to handle the structural and topological changes of dynamic graphs at different time points and the challenges of edge embedding in intrusion detection data. The edge classification problem is transformed into node classification by employing a line graph representation, which facilitates fine-grained intrusion detection on dynamic graph node feature representations. The efficacy of the proposed method is evaluated on two commonly used intrusion detection datasets, UNSW-NB15 and NF-ToN-IoT-v2, and the results are compared with previous studies in this field. The experimental results demonstrate that the proposed method achieves 99.3% and 99.96% accuracy on the two datasets, respectively, and outperforms the benchmark model on several evaluation metrics.
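The edge-to-node reformulation via a line graph can be illustrated with networkx; the toy flow graph and features below are hypothetical, not the paper's pipeline:

```python
import networkx as nx

# Toy traffic graph: hosts as nodes, flows as directed edges with feature vectors.
G = nx.DiGraph()
G.add_edge("10.0.0.1", "10.0.0.2", feat=[1500, 0.2])   # e.g., bytes, duration
G.add_edge("10.0.0.2", "10.0.0.3", feat=[80, 1.1])
G.add_edge("10.0.0.1", "10.0.0.3", feat=[4096, 0.5])

LG = nx.line_graph(G)            # every flow (edge of G) becomes a node of LG
for u, v in LG.nodes():          # attributes are not copied automatically,
    LG.nodes[(u, v)]["feat"] = G.edges[u, v]["feat"]   # so move edge features over

print(list(LG.nodes(data=True)))
print("line-graph edges (flow adjacencies):", list(LG.edges()))
```

Each flow becomes a line-graph node, so a node classifier over LG labels individual flows, which is what makes fine-grained edge-level detection possible.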
The emergence of next-generation networks (NextG), including 5G and beyond, is reshaping the technological landscape of cellular and mobile networks. These networks are sufficiently scaled to interconnect billions of users and devices. Researchers in academia and industry are focusing on technological advancements to achieve high-speed transmission, cell planning, and latency reduction to facilitate emerging applications such as virtual reality, the metaverse, smart cities, smart health, and autonomous vehicles. NextG continuously improves its network functionality to support these applications. Multiple-input multiple-output (MIMO) technology offers spectral efficiency, dependability, and overall performance in conjunction with NextG. This article proposes a secure channel estimation technique for MIMO topologies using a norm-estimation model and provides comprehensive insights into protecting NextG network components against adversarial attacks. The technique aims to create long-lasting and secure NextG networks using this extended approach. The viability of MIMO applications and modern AI-driven methodologies to combat cybersecurity threats are explored in this research. Moreover, the proposed model demonstrates high performance in terms of reliability and accuracy, with a 20% reduction in the MalOut-RealOut-Diff metric compared to existing state-of-the-art techniques. Funding: This work received funding from King Saud University through Researchers Supporting Project number (RSP2024R387), King Saud University, Riyadh, Saudi Arabia.
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making effective task offloading scheduling necessary to enhance the user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which determines the execution order of tasks according to their priority. We then apply a Dueling Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we use the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy and reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
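The prioritized experience replay component can be sketched on its own; this is a simplified proportional-prioritization buffer (no sum-tree) with importance-sampling weights, and the hyperparameter names and values (alpha, beta) are standard conventions rather than the paper's settings:

```python
import numpy as np

class PrioritizedReplay:
    """Proportional prioritized experience replay (simplified, list-based)."""
    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-5):
        self.capacity, self.alpha, self.beta, self.eps = capacity, alpha, beta, eps
        self.buffer, self.prios = [], []

    def add(self, transition):
        p = max(self.prios, default=1.0)       # new samples get the current max priority
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0); self.prios.pop(0)
        self.buffer.append(transition); self.prios.append(p)

    def sample(self, batch_size, rng):
        probs = np.array(self.prios) ** self.alpha
        probs /= probs.sum()
        idx = rng.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        w = (len(self.buffer) * probs[idx]) ** (-self.beta)
        return idx, [self.buffer[i] for i in idx], w / w.max()

    def update(self, idx, td_errors):          # priority is proportional to |TD error|
        for i, e in zip(idx, td_errors):
            self.prios[i] = abs(e) + self.eps

buf = PrioritizedReplay(capacity=10000)
buf.add(("state", "action", 1.0, "next_state"))
idx, batch, w = buf.sample(1, np.random.default_rng(0))
buf.update(idx, td_errors=[0.5])
```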
Software-defined networking (SDN) is an innovative paradigm that separates the control and data planes, introducing centralized network control. SDN is increasingly being adopted by carrier-grade networks, offering enhanced network management capabilities compared with traditional networks. However, because SDN is designed to ensure high-level service availability, it faces additional challenges. One of the most critical is ensuring efficient detection of, and recovery from, link failures in the data plane. Such failures can significantly impact network performance and lead to service outages, making resiliency a key concern for the effective adoption of SDN. Since the recovery process is intrinsically dependent on timely failure detection, this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN. The survey provides a critical comparison of existing failure detection techniques, highlighting their advantages and disadvantages. Additionally, it examines current failure recovery methods, categorized as either restoration-based or protection-based, and offers a comprehensive comparison of their strengths and limitations. Lastly, future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
Link failure is a critical issue in large networks and must be effectively addressed. In software-defined networks (SDN), link failure recovery schemes can be categorized into proactive and reactive approaches. Reactive schemes have longer recovery times, while proactive schemes provide faster recovery but overwhelm switch memory with flow entries. As SDN adoption grows, ensuring efficient recovery from link failures in the data plane becomes crucial. In particular, data center networks (DCNs) demand rapid recovery and efficient resource utilization to meet carrier-grade requirements. This paper proposes an efficient Decentralized Failure Recovery (DFR) model for SDNs that meets recovery time requirements while optimizing switch memory consumption. The DFR model enables switches to autonomously reroute traffic upon link failures without involving the controller, achieving fast recovery times while minimizing memory usage. DFR employs the Fast Failover Group of the OpenFlow standard for local recovery without controller communication and uses the k-shortest path algorithm to proactively install backup paths, allowing immediate local recovery and enhancing overall network stability and scalability. To reduce switch memory usage, DFR applies flow entry aggregation: instead of matching flow entries to the destination host's MAC address, DFR matches packets to the destination switch's MAC address, which reduces the switches' Ternary Content-Addressable Memory (TCAM) consumption. Additionally, DFR modifies Address Resolution Protocol (ARP) replies to provide source hosts with the destination switch's MAC address, facilitating flow entry aggregation without affecting normal network operation. The performance of DFR is evaluated with the Mininet 2.3.1 network emulator and Ryu 3.1 as the SDN controller. Across different numbers of active flows, hosts per edge switch, and network sizes, the proposed model outperformed various failure recovery models (restoration-based, protection by flow entries, protection by group entries, and protection by VLAN tagging) in terms of recovery time, switch memory consumption, and controller overhead, measured as the number of flow entry updates needed to recover from a failure. Experimental results demonstrate that DFR achieves recovery times under 20 milliseconds, satisfying carrier-grade requirements for rapid failure recovery. Additionally, DFR reduces switch memory usage by up to 95% compared to traditional protection methods and minimizes controller load by eliminating controller intervention during failure recovery. The results underscore the efficiency and scalability of the DFR model, making it a practical solution for enhancing network resilience in SDN environments.
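The proactive backup-path computation can be sketched with networkx, whose shortest_simple_paths enumerates loop-free paths in cost order (a Yen-style algorithm); the topology below is a toy fragment, not the evaluated DCN:

```python
import itertools
import networkx as nx

# Toy data-center fragment: switches as nodes, links as weighted edges.
G = nx.Graph()
G.add_weighted_edges_from([("s1", "s2", 1), ("s2", "s4", 1), ("s1", "s3", 1),
                           ("s3", "s4", 1), ("s2", "s3", 1)])

def k_backup_paths(G, src, dst, k=3):
    # Enumerate loop-free paths in increasing cost: the first is the primary
    # path, the rest are pre-installable backups for local fast failover.
    return list(itertools.islice(
        nx.shortest_simple_paths(G, src, dst, weight="weight"), k))

for path in k_backup_paths(G, "s1", "s4"):
    print(path)
# Aggregation idea: match on the *destination switch* rather than on each
# destination host, so one TCAM entry (plus rewritten ARP replies) covers many hosts.
```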
Control signaling is mandatory for the operation and management of all types of communication networks, including Third Generation Partnership Project (3GPP) mobile broadband networks. However, it consumes important and scarce network resources such as bandwidth and processing power. There have been several reports of control signaling turning into signaling storms that halt network operations and cause the affected telecom companies large financial losses. This paper draws its motivation from such real network disaster incidents attributed to signaling storms. We present a thorough survey of the causes of signaling storm problems in 3GPP-based mobile broadband networks and discuss their possible solutions and countermeasures in detail. We provide relevant analytical models to help quantify the effect of the potential causes and the benefits of their corresponding solutions. Another important contribution of this paper is a tabular comparison of the possible causes and solutions/countermeasures with respect to their effect on several important network aspects, such as architecture, additional signaling, and fidelity. This paper presents an update and extension of our earlier conference publication. To our knowledge, no similar survey study exists on the subject. Funding: The authors thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2024-9/1).
BACKGROUND: Mitochondrial genes are involved in tumor metabolism in ovarian cancer (OC) and affect immune cell infiltration and treatment responses. AIM: To predict prognosis and immunotherapy response in patients diagnosed with OC using mitochondrial genes and neural networks. METHODS: Prognosis, immunotherapy efficacy, and next-generation sequencing data of patients with OC were downloaded from The Cancer Genome Atlas and Gene Expression Omnibus. Mitochondrial genes were sourced from the MitoCarta3.0 database. The discovery cohort for model construction was created from 70% of the patients, whereas the remaining 30% constituted the validation cohort. Using the expression of mitochondrial genes as the predictor variable and a neural network algorithm, the overall survival time and immunotherapy efficacy (complete or partial response) of patients were predicted. RESULTS: In total, 375 patients with OC were included to construct the prognostic model, and 26 patients were included to construct the immune efficacy model. The average area under the receiver operating characteristic curve (AUC) of the prognostic model was 0.7268 [95% confidence interval (CI): 0.7258-0.7278] in the discovery cohort and 0.6475 (95% CI: 0.6466-0.6484) in the validation cohort. The average AUC of the immunotherapy efficacy model was 0.9444 (95% CI: 0.8333-1.0000) in the discovery cohort and 0.9167 (95% CI: 0.6667-1.0000) in the validation cohort. CONCLUSION: The application of mitochondrial genes and neural networks has the potential to predict prognosis and immunotherapy response in patients with OC, providing valuable insights into personalized treatment strategies. Funding: Supported by the National Key Technology Research and Development Program of China, No. 2022YFC2704400 and No. 2022YFC2704405.
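The modeling recipe (70/30 split, neural network classifier, AUC evaluation) can be sketched with scikit-learn on synthetic stand-in data; the feature matrix and outcome below are simulated, not the TCGA/GEO cohorts:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.standard_normal((375, 200))      # toy stand-in for mitochondrial gene expression
y = (X[:, :5].sum(axis=1) + rng.standard_normal(375) > 0).astype(int)  # toy outcome

# 70/30 discovery/validation split, as described in the abstract.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_va, clf.predict_proba(X_va)[:, 1]))
```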
This article examines the architecture of software-defined networks (SDN) and its implications for the modern management of communications infrastructures. By decoupling the control plane from the data plane, SDN offers increased flexibility and programmability, enabling rapid adaptation to changing user requirements. However, this new approach poses significant challenges in terms of security, fault tolerance, and interoperability. This paper highlights these challenges and explores current strategies to ensure the resilience and reliability of SDN networks in the face of threats and failures. In addition, we analyze the future outlook for SDN and the importance of integrating robust security solutions into these infrastructures.
In this paper, we propose a neural network approach to learn the parameters of a class of stochastic Lotka-Volterra systems. Approximations of the mean and covariance matrix of the observational variables are obtained from the Euler-Maruyama discretization of the underlying stochastic differential equations (SDEs), based on which the loss function is built. The stochastic gradient descent method is applied in the neural network training. Numerical experiments demonstrate the effectiveness of our method. Funding: Supported by the National Natural Science Foundation of China (11971458, 11471310).
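The Euler-Maruyama step that generates the mean and covariance targets can be illustrated for a two-species stochastic Lotka-Volterra system with multiplicative noise; this drift/diffusion form and the parameter values are illustrative assumptions about the class of systems considered:

```python
import numpy as np

def euler_maruyama_lv(theta, x0, dt, n_steps, n_paths, rng):
    """Simulate dX = X(a - bY)dt + s1*X dW1, dY = Y(cX - d)dt + s2*Y dW2."""
    a, b, c, d, s1, s2 = theta
    x = np.tile(np.asarray(x0, float), (n_paths, 1))
    for _ in range(n_steps):
        dW = rng.standard_normal((n_paths, 2)) * np.sqrt(dt)
        drift = np.stack([x[:, 0] * (a - b * x[:, 1]),
                          x[:, 1] * (c * x[:, 0] - d)], axis=1)
        diff = np.stack([s1 * x[:, 0], s2 * x[:, 1]], axis=1)
        x = x + drift * dt + diff * dW
    return x

rng = np.random.default_rng(0)
paths = euler_maruyama_lv((1.0, 0.5, 0.3, 0.8, 0.1, 0.1), (2.0, 1.0),
                          dt=1e-3, n_steps=1000, n_paths=5000, rng=rng)
# Sample mean and covariance of the observables: the statistics against which
# the network-predicted parameters are matched in the loss.
print(paths.mean(axis=0))
print(np.cov(paths.T))
```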
The high proportion of uncertain distributed power sources and the access of large-scale random electric vehicle (EV) charging resources further aggravate voltage fluctuation in the distribution network. Existing research has not deeply explored the active-reactive synergistic regulating characteristics of EVs, nor realized multi-timescale synergistic control with other regulating means. For this reason, this paper proposes a multilevel linkage coordinated optimization strategy to reduce the voltage deviation of the distribution network. First, a capacitor bank reactive power compensation voltage control model and a distributed photovoltaic (PV) active-reactive power regulation model are established. Additionally, an external characteristic model of EV active-reactive power regulation is developed, considering the four-quadrant operational characteristics of the EV charger. A multi-objective optimization model of the distribution network is then constructed, considering the time-series coupling constraints of multiple types of voltage regulators. A multi-timescale control strategy is proposed by considering the impact of voltage regulators on active-reactive EV energy consumption and PV energy consumption. Then, a four-stage voltage control optimization strategy is proposed for various types of voltage regulators over multiple time scales. The multi-objective optimization is solved with an improved Drosophila algorithm to realize power fluctuation control of the distribution network and multi-stage voltage control optimization. Simulation results validate that the proposed voltage control optimization strategy achieves coordinated control of decentralized voltage control resources in the distribution network. It effectively reduces the voltage deviation of the distribution network while ensuring the energy demand of EV users and enhancing the stability and economic efficiency of the distribution network. Funding: This work was funded by the State Grid Corporation Science and Technology Project (5108-202218280A-2-391-XG).
Effective small object detection is crucial in various applications, including urban intelligent transportation and pedestrian detection. However, small objects are difficult to detect accurately because they contain little information. Many current methods, particularly those based on the Feature Pyramid Network (FPN), address this challenge by leveraging multi-scale feature fusion. However, existing FPN-based methods often suffer from inadequate feature fusion due to varying resolutions across layers, leading to suboptimal small object detection. To address this problem, we propose the Two-layer Attention Feature Pyramid Network (TA-FPN), featuring two key modules: the Two-layer Attention Module (TAM) and the Small Object Detail Enhancement Module (SODEM). TAM uses attention to focus the network on the semantic information of the object and fuses it into the lower layers, so that each layer contains similar semantic information; this alleviates the problem of small object information being submerged by the semantic gaps between layers. Meanwhile, SODEM is introduced to strengthen the local features of the object, suppress background noise, and enhance the fine details of small objects, fusing the enhanced features into the other feature layers so that each layer is rich in small object information and detection accuracy improves. Extensive experiments on challenging datasets such as Microsoft Common Objects in Context (MS COCO) and Pattern Analysis, Statistical Modelling and Computational Learning Visual Object Classes (PASCAL VOC) demonstrate the validity of the proposed method. Experimental results show a significant improvement in small object detection accuracy compared to state-of-the-art detectors.
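The attention-weighted fusion idea behind TAM can be sketched in PyTorch; the channel sizes and the single-convolution attention map are simplifications for illustration, not the module as published:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFuse(nn.Module):
    """Fuse a semantically rich coarse FPN level into a finer lower level."""
    def __init__(self, channels=256):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)   # spatial attention map
        self.out = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, low, high):
        # Upsample the coarse level to the fine level's resolution.
        high_up = F.interpolate(high, size=low.shape[-2:], mode="nearest")
        w = torch.sigmoid(self.attn(high_up))               # where semantics matter
        return self.out(low + w * high_up)                  # attention-weighted fusion

p3, p4 = torch.randn(1, 256, 80, 80), torch.randn(1, 256, 40, 40)
print(AttentionFuse()(p3, p4).shape)   # torch.Size([1, 256, 80, 80])
```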
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily because of the low regularity of solutions at delay-induced breaking points. In this paper, a DNN method that incorporates multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, and then to use MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of the method through several numerical experiments, test various parameter-sharing structures in MTL, and compare their testing results. Finally, the method is applied to the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered from sparse or noisy data.
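The task decomposition can be sketched in PyTorch: for a constant delay tau, the domain is split at the breaking points t = k*tau, and a shared trunk feeds one head per sub-interval, with an auxiliary output standing in for the integral term; the architecture sizes and the exact sharing structure here are illustrative assumptions:

```python
import torch
import torch.nn as nn

tau, T = 1.0, 3.0                              # constant delay, time horizon (toy)
breakpoints = torch.arange(0.0, T + tau, tau)  # delay-induced breaking points 0, tau, ...

class MTLNet(nn.Module):
    """Shared trunk with one head per inter-breakpoint task; each head emits the
    solution value plus an auxiliary output representing the integral term."""
    def __init__(self, n_tasks, width=32):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                                   nn.Linear(width, width), nn.Tanh())
        self.heads = nn.ModuleList(nn.Linear(width, 2) for _ in range(n_tasks))

    def forward(self, t, task):
        return self.heads[task](self.trunk(t))

net = MTLNet(n_tasks=len(breakpoints) - 1)
t = torch.rand(16, 1) * tau                    # collocation points in one sub-interval
u_and_integral = net(t, task=0)                # trained sequentially, task by task
print(u_and_integral.shape)                    # torch.Size([16, 2])
```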
Time series forecasting is essential for generating predictive insights across various domains, including healthcare, finance, and energy. This study focuses on forecasting patient health data by comparing the performance of traditional linear time series models, namely Autoregressive Integrated Moving Average (ARIMA), Seasonal ARIMA (SARIMA), and Moving Average (MA), against neural network architectures. The primary goal is to evaluate the effectiveness of these models in predicting healthcare outcomes using patient records, specifically the Cancerpatient.xlsx dataset, which tracks variables such as patient age, symptoms, genetic risk factors, and environmental exposures over time. The proposed strategy involves training each model on historical patient data to predict age progression and other related health indicators, with performance evaluated using the Mean Squared Error (MSE) and Root Mean Squared Error (RMSE) metrics. Our findings reveal that neural networks consistently outperform ARIMA and SARIMA by capturing non-linear patterns and complex temporal dependencies within the dataset, resulting in lower forecasting errors. This research highlights the potential of neural networks to enhance predictive accuracy in healthcare applications, supporting better resource allocation, patient monitoring, and long-term health outcome prediction.
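The linear-baseline side of the comparison can be sketched with statsmodels; the series below is synthetic, and the ARIMA order is chosen for illustration rather than tuned to the study's data:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
series = np.cumsum(rng.standard_normal(120)) + 50   # synthetic health indicator
train, test = series[:100], series[100:]

fit = ARIMA(train, order=(1, 1, 1)).fit()           # (p, d, q) chosen for illustration
pred = fit.forecast(steps=len(test))

mse = np.mean((pred - test) ** 2)
print("ARIMA  MSE:", mse, " RMSE:", np.sqrt(mse))
# A neural baseline (e.g., a small LSTM) would be trained on the same split and
# scored with the same MSE/RMSE for a like-for-like comparison.
```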
Aspect-oriented sentiment analysis is a fine-grained sentiment analysis task that aims to determine the sentiment polarity of specific aspects. Most current research builds graph convolutional networks based on dependency syntactic trees, which improves the classification performance of the models to some extent. However, the technical limitations of dependency syntactic trees can introduce considerable noise into the model. Meanwhile, it is difficult for a single graph convolutional network to aggregate both the semantic and the syntactic structural information of nodes, which affects the final sentence classification. To cope with these problems, this paper proposes a bi-channel graph convolutional network model. The model introduces a phrase structure tree and transforms it into a hierarchical phrase matrix. The adjacency matrix of the dependency syntactic tree and the hierarchical phrase matrix are combined as the initial matrix of the graph convolutional network to enhance the syntactic information. The semantic feature representations of the sentences are obtained by a graph convolutional network with a multi-head attention mechanism and fused to achieve complementary learning of the dual-channel features. Experimental results show that the model performs well and improves the accuracy of sentiment classification on three public benchmark datasets, namely Rest14, Lap14, and Twitter.
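Combining the two structural views and applying one graph-convolution step can be sketched with numpy; taking the element-wise maximum of the two matrices is one plausible reading of "combined", and the tiny matrices here are illustrative:

```python
import numpy as np

n, d = 5, 8                                   # tokens, feature width (toy sizes)
A_dep = np.eye(n)                             # dependency-tree adjacency + self-loops
A_dep[0, 1] = A_dep[1, 0] = 1
A_phr = np.eye(n)                             # hierarchical phrase matrix
A_phr[1, 2] = A_phr[2, 1] = 1

A = np.maximum(A_dep, A_phr)                  # fused initial matrix (assumed max-combine)
D_inv_sqrt = np.diag(A.sum(1) ** -0.5)
A_hat = D_inv_sqrt @ A @ D_inv_sqrt           # symmetric normalization

rng = np.random.default_rng(0)
H, W = rng.standard_normal((n, d)), rng.standard_normal((d, d))
H_next = np.maximum(A_hat @ H @ W, 0)         # one GCN layer: ReLU(A_hat H W)
print(H_next.shape)
```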
Container-based virtualization technology has recently seen wider use in edge computing environments due to its advantages of lighter resource occupation, faster startup, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions as containers and interconnect the generated containers to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks makes the optimal placement of CCs a great challenge. This paper regards the charges for the various resources occupied in providing services as revenue and the service efficiency and energy consumption as cost, and formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, the introduction of the GCN allows the features of the association relationships between the containers in a CC to be effectively extracted, improving placement quality. Experimental results show that, for different scales of service nodes and task requests, the proposed method improves system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
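The placement MIP can be sketched with PuLP; the revenue coefficients and capacities are toy numbers, and the paper's full model also accounts for service efficiency and energy cost:

```python
import pulp

containers = ["c1", "c2", "c3"]
nodes = ["n1", "n2"]
cpu_req = {"c1": 2, "c2": 1, "c3": 3}
cpu_cap = {"n1": 4, "n2": 4}
revenue = {("c1", "n1"): 5, ("c1", "n2"): 4, ("c2", "n1"): 3,
           ("c2", "n2"): 3, ("c3", "n1"): 6, ("c3", "n2"): 7}

prob = pulp.LpProblem("cc_placement", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (containers, nodes), cat="Binary")

# Objective: net revenue of the chosen placement.
prob += pulp.lpSum(revenue[c, n] * x[c][n] for c in containers for n in nodes)
for c in containers:                      # each container placed exactly once
    prob += pulp.lpSum(x[c][n] for n in nodes) == 1
for n in nodes:                           # node CPU capacity constraint
    prob += pulp.lpSum(cpu_req[c] * x[c][n] for c in containers) <= cpu_cap[n]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(c, n) for c in containers for n in nodes if x[c][n].value() == 1])
```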
Low Earth orbit (LEO) satellite networks have outstanding advantages, such as wide coverage and independence from the geographic environment, that allow them to provide a broader range of communication services; they have become an essential supplement to terrestrial networks. However, the dynamic changes and uneven distribution of satellite network traffic inevitably challenge multipath routing. Even worse, the harsh space environment often leads to incomplete collection of the network state data needed for routing decisions, which further complicates the challenge. To address this problem, this paper proposes a state-incomplete intelligent dynamic multipath routing algorithm (SIDMRA) that maximizes network efficiency even with incomplete state data as input. Specifically, we model the multipath routing problem as a Markov decision process (MDP) and then combine the deep deterministic policy gradient (DDPG) and the K shortest paths (KSP) algorithms to solve for the optimal multipath routing policy. We use the temporal correlation of the satellite network state to fit the incomplete state data and then use a message passing neural network (MPNN) for data enhancement. Simulation results show that the proposed algorithm outperforms baseline algorithms in average end-to-end delay and packet loss rate and performs stably under certain missing rates of state data.
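The "fit incomplete state data from temporal correlation" step can be sketched with a last-observation-carry-forward fill over simulated link-delay telemetry; the paper refines such estimates with an MPNN, which is omitted here, and the missing rate and series are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_links = 30, 8
delay = 10 + np.cumsum(rng.standard_normal((T, n_links)), axis=0)  # true link delays
obs = delay.copy()
obs[rng.random((T, n_links)) < 0.3] = np.nan     # 30% of telemetry missing

# Exploit temporal correlation: carry the last observation forward as a simple
# fit for missing entries before routing decisions are made.
filled = obs.copy()
filled[0, np.isnan(filled[0])] = np.nanmean(obs)  # seed the first snapshot
for t in range(1, T):
    mask = np.isnan(filled[t])
    filled[t, mask] = filled[t - 1, mask]

print("mean absolute fill error:", np.mean(np.abs(filled - delay)))
```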
This paper proposes an efficient strategy for resource utilization in Elastic Optical Networks (EONs) to minimize spectrum fragmentation and reduce connection blocking probability during Routing and Spectrum Allocation (RSA). The proposed method, Dynamic Threshold-Based Routing and Spectrum Allocation with Fragmentation Awareness (DT-RSAF), integrates rerouting and spectrum defragmentation as needed. By leveraging Yen's shortest path algorithm, DT-RSAF enhances resource utilization while ensuring improved service continuity. A dynamic threshold mechanism enables the algorithm to adapt to varying network conditions, while its fragmentation awareness effectively mitigates spectrum fragmentation. Simulation results on the NSFNET and COST 239 topologies demonstrate that DT-RSAF significantly outperforms methods such as K-Shortest Path Routing and Spectrum Allocation (KSP-RSA), Load Balanced and Fragmentation-Aware (LBFA), and Invasive Weed Optimization-based RSA (IWO-RSA). Under heavy traffic, DT-RSAF reduces the blocking probability by up to 15% and achieves lower Bandwidth Fragmentation Ratios (BFR), ranging from 74% to 75%, compared to 77%-80% for KSP-RSA, 75%-77% for LBFA, and approximately 76% for IWO-RSA. DT-RSAF also demonstrates reasonable computation times: on a small network, it ran 8710 times faster than Integer Linear Programming (ILP) on the same topology, achieved an execution time similar to that of LBFA, and outperformed IWO-RSA in efficiency. These results highlight DT-RSAF's ability to maintain large contiguous frequency blocks, making it highly effective for accommodating high-bandwidth requests in EONs while maintaining reasonable execution times.
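First-fit contiguous-slot allocation and a simple fragmentation metric can be sketched as follows; defining the ratio as one minus the largest free block over the total free slots is one common BFR-style variant and is an assumption, as is the toy arrival/departure process:

```python
import numpy as np

spectrum = np.zeros(320, dtype=bool)          # one link, 320 slots; True = occupied

def first_fit(spectrum, demand):
    """Allocate the first contiguous free block of `demand` slots, if any."""
    run = 0
    for i, used in enumerate(spectrum):
        run = 0 if used else run + 1
        if run == demand:
            spectrum[i - demand + 1:i + 1] = True
            return i - demand + 1
    return None                               # blocked: no contiguous block fits

def free_blocks(spectrum):
    blocks, run = [], 0
    for used in spectrum:
        if used:
            if run:
                blocks.append(run)
            run = 0
        else:
            run += 1
    if run:
        blocks.append(run)
    return blocks

def fragmentation_ratio(spectrum):
    # 1 - (largest free block / total free slots): a common BFR-style metric.
    blocks = free_blocks(spectrum)
    return 0.0 if not blocks else 1.0 - max(blocks) / sum(blocks)

rng = np.random.default_rng(3)
active = []
for _ in range(300):                          # random arrivals and departures
    d = int(rng.integers(2, 8))
    s = first_fit(spectrum, d)
    if s is not None:
        active.append((s, d))
    if active and rng.random() < 0.5:         # departures create fragmentation
        s, d = active.pop(int(rng.integers(len(active))))
        spectrum[s:s + d] = False

print("occupancy:", round(float(spectrum.mean()), 2),
      "fragmentation:", round(fragmentation_ratio(spectrum), 3))
```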