Medical procedures are inherently invasive and carry the risk of inducing pain to the mind and body. Recently, efforts have been made to alleviate the discomfort associated with invasive medical procedures through the use of virtual reality (VR) technology. VR has been demonstrated to be an effective treatment for pain associated with medical procedures, as well as for chronic pain conditions for which no effective treatment has been established. The precise mechanism by which the diversion from reality facilitated by VR contributes to the diminution of pain and anxiety has yet to be elucidated. However, the provision of positive images through VR-based visual stimulation may enhance the functionality of brain networks: activity in the salience network is diminished, while the default mode network is enhanced. Additionally, the medial prefrontal cortex may establish a stronger connection with the default mode network, which could result in a reduction of pain and anxiety. Further research into the potential of VR technology to alleviate pain could lead to a reduction in the number of individuals who overdose on painkillers and contribute to positive change in the medical field.
The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats. Intrusion detection systems are crucial to network security, playing a pivotal role in safeguarding networks from potential threats. However, in the context of an evolving landscape of sophisticated and elusive attacks, existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts. To address these issues, this paper proposes a real-time network intrusion detection method based on graph neural networks. The proposed method leverages the advantages of graph neural networks and employs a straightforward graph construction method to represent network traffic as dynamic graph-structured data. Additionally, a graph convolution operation with a multi-head attention mechanism is utilized to enhance the model's ability to capture the intricate relationships within the graph structure comprehensively. Furthermore, it uses an integrated graph neural network to address the structural and topological changes of dynamic graphs at different time points and the challenges of edge embedding in intrusion detection data. The edge classification problem is effectively transformed into node classification by employing a line graph data representation, which facilitates fine-grained intrusion detection tasks on dynamic graph node feature representations. The efficacy of the proposed method is evaluated using two commonly used intrusion detection datasets, UNSW-NB15 and NF-ToN-IoT-v2, and results are compared with previous studies in this field. The experimental results demonstrate that our proposed method achieves 99.3% and 99.96% accuracy on the two datasets, respectively, and outperforms the benchmark model in several evaluation metrics.
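To make the line-graph step concrete, the sketch below (not the paper's code; the hosts and feature values are invented) shows how a directed flow graph becomes a graph whose nodes are the flows themselves, so that edge classification reduces to node classification:

```python
# Illustrative sketch: turning edge (flow) classification into node
# classification with a line-graph transform. Hosts and features are made up.
import networkx as nx

# Each edge is a traffic flow between hosts, carrying a feature vector.
G = nx.DiGraph()
G.add_edge("10.0.0.1", "10.0.0.2", feat=[0.3, 1.0, 0.0])  # e.g. bytes, pkts, flags
G.add_edge("10.0.0.2", "10.0.0.3", feat=[0.9, 0.2, 1.0])

# In the line graph L(G), every edge of G becomes a node; two nodes are
# adjacent when the corresponding flows share an endpoint, so edge features
# become node features and any node classifier (e.g. a GNN) can label flows.
L = nx.line_graph(G)
for u, v in L.nodes():
    L.nodes[(u, v)]["feat"] = G.edges[u, v]["feat"]

print(list(L.nodes(data=True)))
print(list(L.edges()))
```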
The emergence of next generation networks (NextG), including 5G and beyond, is reshaping the technological landscape of cellular and mobile networks. These networks are sufficiently scaled to interconnect billions of users and devices. Researchers in academia and industry are focusing on technological advancements to achieve high-speed transmission, cell planning, and latency reduction to facilitate emerging applications such as virtual reality, the metaverse, smart cities, smart health, and autonomous vehicles. NextG continuously improves its network functionality to support these applications. Multiple input multiple output (MIMO) technology offers spectral efficiency, dependability, and overall performance in conjunction with NextG. This article proposes a secure channel estimation technique in MIMO topology using a norm-estimation model to provide comprehensive insights into protecting NextG network components against adversarial attacks. The technique aims to create long-lasting and secure NextG networks using this extended approach. The viability of MIMO applications and modern AI-driven methodologies to combat cybersecurity threats are explored in this research. Moreover, the proposed model demonstrates high performance in terms of reliability and accuracy, with a 20% reduction in the MalOut-RealOut-Diff metric compared to existing state-of-the-art techniques.
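As background on the channel-estimation setting, here is a textbook least-squares pilot-based MIMO estimator in Python; it is a standard baseline, not the paper's norm-estimation defense, and the antenna and pilot dimensions are illustrative:

```python
# Standard least-squares MIMO channel estimation from pilot symbols.
# All dimensions and the noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rx, n_pilots = 4, 4, 16

H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
X = rng.normal(size=(n_tx, n_pilots)) + 1j * rng.normal(size=(n_tx, n_pilots))
noise = 0.05 * (rng.normal(size=(n_rx, n_pilots)) + 1j * rng.normal(size=(n_rx, n_pilots)))
Y = H @ X + noise                                  # received pilot observations

# LS estimate: H_hat = Y X^H (X X^H)^{-1}
H_hat = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)
print("relative error:", np.linalg.norm(H_hat - H) / np.linalg.norm(H))
```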
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling-Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
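A minimal PyTorch sketch of the dueling value/advantage head that a Dueling-DDQN agent builds on may help; the layer sizes and state/action dimensions are assumptions, not the paper's architecture:

```python
# A minimal Dueling Q-network head in PyTorch, sketching the value/advantage
# decomposition behind Dueling-DDQN. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        h = self.trunk(s)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)

q = DuelingQNet(state_dim=8, n_actions=4)
print(q(torch.randn(2, 8)).shape)  # torch.Size([2, 4])
```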
Software-defined networking (SDN) is an innovative paradigm that separates the control and data planes, introducing centralized network control. SDN is increasingly being adopted by carrier-grade networks, offering enhanced network management capabilities compared with those of traditional networks. However, because SDN is designed to ensure high-level service availability, it faces additional challenges. One of the most critical challenges is ensuring efficient detection of, and recovery from, link failures in the data plane. Such failures can significantly impact network performance and lead to service outages, making resiliency a key concern for the effective adoption of SDN. Since the recovery process is intrinsically dependent on timely failure detection, this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN. The survey provides a critical comparison of existing failure detection techniques, highlighting their advantages and disadvantages. Additionally, it examines the current failure recovery methods, categorized as either restoration-based or protection-based, and offers a comprehensive comparison of their strengths and limitations. Lastly, future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
Link failure is a critical issue in large networks and must be effectively addressed. In software-defined networks (SDN), link failure recovery schemes can be categorized into proactive and reactive approaches. Reactive schemes have longer recovery times, while proactive schemes provide faster recovery but overwhelm the memory of switches with flow entries. As SDN adoption grows, ensuring efficient recovery from link failures in the data plane becomes crucial. In particular, data center networks (DCNs) demand rapid recovery times and efficient resource utilization to meet carrier-grade requirements. This paper proposes an efficient Decentralized Failure Recovery (DFR) model for SDNs that meets recovery time requirements and optimizes switch memory consumption. The DFR model enables switches to autonomously reroute traffic upon link failures without involving the controller, achieving fast recovery times while minimizing memory usage. DFR employs the Fast Failover Group in the OpenFlow standard for local recovery without requiring controller communication, and utilizes the k-shortest path algorithm to proactively install backup paths, allowing immediate local recovery without controller intervention and enhancing overall network stability and scalability. DFR also employs flow entry aggregation techniques to reduce switch memory usage: instead of matching flow entries to the destination host's MAC address, DFR matches packets to the destination switch's MAC address. This reduces the switches' Ternary Content-Addressable Memory (TCAM) consumption. Additionally, DFR modifies Address Resolution Protocol (ARP) replies to provide source hosts with the destination switch's MAC address, facilitating flow entry aggregation without affecting normal network operations. The performance of DFR is evaluated using the Mininet 2.3.1 network emulator with Ryu 3.1 as the SDN controller. For different numbers of active flows, hosts per edge switch, and network sizes, the proposed model outperformed various failure recovery models (restoration-based, protection by flow entries, protection by group entries, and protection by VLAN tagging) in terms of recovery time, switch memory consumption, and controller overhead, measured as the number of flow entry updates required to recover from a failure. Experimental results demonstrate that DFR achieves recovery times under 20 milliseconds, satisfying carrier-grade requirements for rapid failure recovery. Additionally, DFR reduces switch memory usage by up to 95% compared to traditional protection methods and minimizes controller load by eliminating the need for controller intervention during failure recovery. The results underscore the efficiency and scalability of the DFR model, making it a practical solution for enhancing network resilience in SDN environments.
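The proactive backup-path computation can be illustrated with NetworkX, whose shortest_simple_paths implements a Yen-style loopless path enumeration; the toy topology and choice of k below are invented, and this is not the DFR implementation itself:

```python
# Sketch of the proactive backup-path step: compute k shortest paths between
# two switches so the first serves as primary and the rest as fast-failover
# backups. Topology and k are illustrative assumptions.
from itertools import islice
import networkx as nx

topo = nx.Graph()
topo.add_edges_from([("s1", "s2"), ("s2", "s4"), ("s1", "s3"), ("s3", "s4"), ("s2", "s3")])

def k_shortest_paths(g: nx.Graph, src: str, dst: str, k: int):
    # shortest_simple_paths yields loopless paths in order of increasing length.
    return list(islice(nx.shortest_simple_paths(g, src, dst), k))

primary, *backups = k_shortest_paths(topo, "s1", "s4", k=3)
print("primary:", primary)   # e.g. ['s1', 's2', 's4']
print("backups:", backups)   # alternates to preinstall as fast-failover buckets
```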
Control signaling is mandatory for the operation and management of all types of communication networks, including the Third Generation Partnership Project (3GPP) mobile broadband networks. However, it consumes important and scarce network resources such as bandwidth and processing power. There have been several reports of control signaling turning into signaling storms, halting network operations and causing the respective telecom companies significant financial losses. This paper draws its motivation from such real network disaster incidents attributed to signaling storms. In this paper, we present a thorough survey of the causes of signaling storm problems in 3GPP-based mobile broadband networks and discuss in detail their possible solutions and countermeasures. We provide relevant analytical models to help quantify the effect of the potential causes and the benefits of their corresponding solutions. Another important contribution of this paper is a tabular comparison of the possible causes and solutions/countermeasures with respect to their effect on several important network aspects, such as architecture, additional signaling, and fidelity. This paper presents an update and an extension of our earlier conference publication. To our knowledge, no similar survey study exists on the subject.
BACKGROUND: Mitochondrial genes are involved in tumor metabolism in ovarian cancer (OC) and affect immune cell infiltration and treatment responses. AIM: To predict prognosis and immunotherapy response in patients diagnosed with OC using mitochondrial genes and neural networks. METHODS: Prognosis, immunotherapy efficacy, and next-generation sequencing data of patients with OC were downloaded from The Cancer Genome Atlas and Gene Expression Omnibus. Mitochondrial genes were sourced from the MitoCarta3.0 database. The discovery cohort for model construction was created from 70% of the patients, whereas the remaining 30% constituted the validation cohort. Using the expression of mitochondrial genes as the predictor variable and based on a neural network algorithm, the overall survival time and immunotherapy efficacy (complete or partial response) of patients were predicted. RESULTS: In total, 375 patients with OC were included to construct the prognostic model, and 26 patients were included to construct the immune efficacy model. The average area under the receiver operating characteristic curve of the prognostic model was 0.7268 [95% confidence interval (CI): 0.7258-0.7278] in the discovery cohort and 0.6475 (95% CI: 0.6466-0.6484) in the validation cohort. The average area under the receiver operating characteristic curve of the immunotherapy efficacy model was 0.9444 (95% CI: 0.8333-1.0000) in the discovery cohort and 0.9167 (95% CI: 0.6667-1.0000) in the validation cohort. CONCLUSION: The application of mitochondrial genes and neural networks has the potential to predict prognosis and immunotherapy response in patients with OC, providing valuable insights into personalized treatment strategies.
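A minimal sketch of this modeling setup, assuming synthetic data in place of the TCGA/GEO expression matrices and a generic scikit-learn network in place of the paper's unspecified architecture, looks like the following:

```python
# Sketch of the 70/30 split plus neural-network-with-AUC workflow described
# above. Data is synthetic; real features would be MitoCarta3.0 gene expression.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(375, 50))       # 375 patients x 50 mitochondrial genes (toy)
y = rng.integers(0, 2, size=375)     # e.g. survival beyond a chosen cutoff

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_va, clf.predict_proba(X_va)[:, 1]))
```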
This article examines the architecture of software-defined networks (SDN) and its implications for the modern management of communications infrastructures. By decoupling the control plane from the data plane, SDN offers increased flexibility and programmability, enabling rapid adaptation to changing user requirements. However, this new approach poses significant challenges in terms of security, fault tolerance, and interoperability. This paper highlights these challenges and explores current strategies to ensure the resilience and reliability of SDN networks in the face of threats and failures. In addition, we analyze the future outlook for SDN and the importance of integrating robust security solutions into these infrastructures.
In this paper, we propose a neural network approach to learn the parameters of a class of stochastic Lotka-Volterra systems. Approximations of the mean and covariance matrix of the observational variables are obtained from the Euler-Maruyama discretization of the underlying stochastic differential equations (SDEs), based on which the loss function is built. The stochastic gradient descent method is applied in the neural network training. Numerical experiments demonstrate the effectiveness of our method.
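As an illustration of the discretization step, the following sketch simulates a two-species stochastic Lotka-Volterra system with the Euler-Maruyama scheme; the drift parameters and multiplicative-noise form are illustrative choices, not necessarily the class studied in the paper:

```python
# Euler-Maruyama simulation of a 2-species stochastic Lotka-Volterra SDE,
# the kind of discretization the abstract builds its loss on. Coefficients
# below are illustrative assumptions.
import numpy as np

def euler_maruyama(x0, theta, sigma, dt=1e-3, n_steps=5000, seed=0):
    a, b, c, d = theta               # prey growth/predation, predator gain/death
    rng = np.random.default_rng(seed)
    x = np.empty((n_steps + 1, 2)); x[0] = x0
    for k in range(n_steps):
        u, v = x[k]
        drift = np.array([a * u - b * u * v, c * u * v - d * v])
        dW = rng.normal(scale=np.sqrt(dt), size=2)       # Brownian increments
        x[k + 1] = x[k] + drift * dt + sigma * x[k] * dW  # multiplicative noise
    return x

traj = euler_maruyama(x0=[1.0, 0.5], theta=(1.5, 1.0, 1.0, 3.0), sigma=0.1)
print("sample mean:", traj.mean(axis=0), "final state:", traj[-1])
```

Sample means and covariances computed from such trajectories are what the loss function compares against their discretized analytic approximations.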
The high proportion of uncertain distributed power sources and the access to large-scale random electric vehicle (EV) charging resources further aggravate the voltage fluctuation of the distribution network. Existing research has not deeply explored the active-reactive synergistic regulating characteristics of EVs, and has failed to realize multi-timescale synergistic control with other regulating means. For this reason, this paper proposes a multilevel linkage coordinated optimization strategy to reduce the voltage deviation of the distribution network. Firstly, a capacitor bank reactive power compensation voltage control model and a distributed photovoltaic (PV) active-reactive power regulation model are established. Additionally, an external characteristic model of EV active-reactive power regulation is developed considering the four-quadrant operational characteristics of the EV charger. A multi-objective optimization model of the distribution network is then constructed considering the time-series coupling constraints of multiple types of voltage regulators. A multi-timescale control strategy is proposed by considering the impact of voltage regulators on active-reactive EV energy consumption and PV energy consumption. Then, a four-stage voltage control optimization strategy is proposed for various types of voltage regulators with multiple time scales. The multi-objective optimization is solved with the improved Drosophila algorithm to realize the power fluctuation control of the distribution network and the multi-stage voltage control optimization. Simulation results validate that the proposed voltage control optimization strategy achieves the coordinated control of decentralized voltage control resources in the distribution network. It effectively reduces the voltage deviation of the distribution network while ensuring the energy demand of EV users and enhancing the stability and economic efficiency of the distribution network.
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, this method is implemented to solve the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
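A loose sketch of hard parameter sharing, the simplest of the parameter-sharing structures of the kind compared above, is shown below in PyTorch; the trunk/head sizes, the per-delay-segment heads, and the placeholder loss are all assumptions, not the paper's architecture:

```python
# Hard parameter sharing for MTL: one shared trunk, one head per task
# (here, per delay segment), and a summed loss. Sizes are placeholders.
import torch
import torch.nn as nn

class SharedTrunkMTL(nn.Module):
    def __init__(self, hidden: int = 64, n_tasks: int = 3):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                   nn.Linear(hidden, hidden), nn.Tanh())
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_tasks))

    def forward(self, t: torch.Tensor):
        h = self.trunk(t)
        return [head(h) for head in self.heads]   # one output per task

net = SharedTrunkMTL()
t = torch.linspace(0, 3, 64).unsqueeze(1)
outs = net(t)
# The total loss sums per-task residuals; zeros stand in for real residual terms.
loss = sum(nn.functional.mse_loss(o, torch.zeros_like(o)) for o in outs)
loss.backward()
print(loss.item())
```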
Time series forecasting is essential for generating predictive insights across various domains, including healthcare, finance, and energy. This study focuses on forecasting patient health data by comparing the performance of traditional linear time series models, namely Autoregressive Integrated Moving Average (ARIMA), Seasonal ARIMA, and Moving Average (MA) against neural network architectures. The primary goal is to evaluate the effectiveness of these models in predicting healthcare outcomes using patient records, specifically the Cancerpatient.xlsx dataset, which tracks variables such as patient age, symptoms, genetic risk factors, and environmental exposures over time. The proposed strategy involves training each model on historical patient data to predict age progression and other related health indicators, with performance evaluated using Mean Squared Error (MSE) and Root Mean Squared Error (RMSE) metrics. Our findings reveal that neural networks consistently outperform ARIMA and SARIMA by capturing non-linear patterns and complex temporal dependencies within the dataset, resulting in lower forecasting errors. This research highlights the potential of neural networks to enhance predictive accuracy in healthcare applications, supporting better resource allocation, patient monitoring, and long-term health outcome predictions.
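For reference, an ARIMA baseline of the kind compared in this study can be fit and scored with MSE/RMSE in a few lines; the series below is synthetic rather than the Cancerpatient.xlsx data, and the (1, 1, 1) order is an illustrative choice:

```python
# Baseline sketch: fit ARIMA on a univariate series and score with MSE/RMSE.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(0.1, 1.0, size=200))   # drifting synthetic series
train, test = series[:160], series[160:]

fit = ARIMA(train, order=(1, 1, 1)).fit()
pred = fit.forecast(steps=len(test))

mse = mean_squared_error(test, pred)
print(f"MSE={mse:.3f}  RMSE={np.sqrt(mse):.3f}")
```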
Aspect-oriented sentiment analysis is a meticulous sentiment analysis task that aims to analyse the sentiment polarity of specific aspects. Most of the current research builds graph convolutional networks based on dependent syntactic trees, which improves the classification performance of the models to some extent. However, the technical limitations of dependent syntactic trees can introduce considerable noise into the model. Meanwhile, it is difficult for a single graph convolutional network to aggregate both semantic and syntactic structural information of nodes, which affects the final sentence classification. To cope with the above problems, this paper proposes a bi-channel graph convolutional network model. The model introduces a phrase structure tree and transforms it into a hierarchical phrase matrix. The adjacency matrix of the dependent syntactic tree and the hierarchical phrase matrix are combined as the initial matrix of the graph convolutional network to enhance the syntactic information. The semantic information feature representations of the sentences are obtained by the graph convolutional network with a multi-head attention mechanism and fused to achieve complementary learning of dual-channel features. Experimental results show that the model performs well and improves the accuracy of sentiment classification on three public benchmark datasets, namely Rest14, Lap14 and Twitter.
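The initial-matrix construction can be sketched numerically: combine the dependency adjacency with the hierarchical phrase matrix, normalize, and propagate once through a graph-convolution layer. The matrices and feature sizes below are toy examples, not the paper's:

```python
# Sketch: merge dependency adjacency with a phrase-hierarchy matrix, then
# apply one symmetrically normalized GCN propagation. All values are toys.
import numpy as np

A_dep = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # dependency edges
A_phr = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # phrase hierarchy

A = np.clip(A_dep + A_phr, 0, 1) + np.eye(3)     # union of edges + self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt              # symmetric normalization

H = np.random.default_rng(0).normal(size=(3, 4)) # node (word) features
W = np.random.default_rng(1).normal(size=(4, 4)) # layer weights
H_next = np.maximum(A_hat @ H @ W, 0)            # one GCN layer with ReLU
print(H_next.shape)
```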
This paper proposes an efficient strategy for resource utilization in Elastic Optical Networks (EONs) to minimize spectrum fragmentation and reduce connection blocking probability during Routing and Spectrum Allocation (RSA). The proposed method, Dynamic Threshold-Based Routing and Spectrum Allocation with Fragmentation Awareness (DT-RSAF), integrates rerouting and spectrum defragmentation as needed. By leveraging Yen's shortest path algorithm, DT-RSAF enhances resource utilization while ensuring improved service continuity. A dynamic threshold mechanism enables the algorithm to adapt to varying network conditions, while its fragmentation awareness effectively mitigates spectrum fragmentation. Simulation results on NSFNET and COST 239 topologies demonstrate that DT-RSAF significantly outperforms methods such as K-Shortest Path Routing and Spectrum Allocation (KSP-RSA), Load Balanced and Fragmentation-Aware (LBFA), and the Invasive Weed Optimization-based RSA (IWO-RSA). Under heavy traffic, DT-RSAF reduces the blocking probability by up to 15% and achieves lower Bandwidth Fragmentation Ratios (BFR), ranging from 74% to 75%, compared to 77% - 80% for KSP-RSA, 75% - 77% for LBFA, and approximately 76% for IWO-RSA. DT-RSAF also demonstrated reasonable computation times compared to KSP-RSA, LBFA, and IWO-RSA. On a small network, it ran 8710 times faster than Integer Linear Programming (ILP) on the same network topology. Additionally, it achieved a similar execution time to LBFA and outperformed IWO-RSA in terms of efficiency. These results highlight DT-RSAF's capability to maintain large contiguous frequency blocks, making it highly effective for accommodating high-bandwidth requests in EONs while maintaining reasonable execution times.
Objective: To explore a simple method for improving the diagnostic accuracy of malignant lung nodules based on imaging features of lung nodules. Methods: A retrospective analysis was conducted on the imaging data of 114 patients who underwent lung nodule surgery in the Thoracic Surgery Department of the First People's Hospital of Huzhou from June to September 2024. Imaging features of lung nodules were summarized and trained using a BP neural network. Results: Training with the BP neural network increased the diagnostic accuracy for distinguishing between benign and malignant lung nodules based on imaging features from 84.2% (manual assessment) to 94.1%. Conclusion: Training with the BP neural network significantly improves the diagnostic accuracy of lung nodule malignancy based solely on imaging features.
Patients in intensive care units (ICUs) require rapid critical decision making. Modern ICUs are data rich, with information streaming from diverse sources. Machine learning (ML) and neural networks (NN) can leverage the rich data for prognostication and clinical care. They can handle complex nonlinear relationships in medical data and have advantages over traditional predictive methods. A number of models are used: (1) feedforward networks; and (2) recurrent NN and convolutional NN to predict key outcomes such as mortality, length of stay in the ICU, and the likelihood of complications. Current NN models exist in silos; their integration into clinical workflow requires greater transparency on the data that are analyzed. Most models that are accurate enough for use in clinical care operate as 'black boxes' in which the logic behind their decision making is opaque. Advances have occurred to see through the opacity and peer into the processing of the black box. In the near future, ML is positioned to help in clinical decision making far beyond what is currently possible. Transparency is the first step toward validation, which is followed by clinical trust and adoption. In summary, NNs have the transformative ability to enhance predictive accuracy and improve patient management in ICUs. The concept should soon be turning into reality.
In wireless sensor networks (WSNs), nodes are usually powered by batteries. Since energy consumption directly impacts the network lifespan, energy saving is a vital issue in WSNs, especially in the design phase of cryptographic algorithms. As a complementary mechanism, reputation has been applied to WSNs. Unlike most reputation schemes, which are based on the beta distribution, a negative multinomial distribution was deduced and its feasibility for reputation modeling was proved. Comparison tests against beta-distribution-based reputation in terms of the update computation show that the proposed method is more energy-efficient for the reputation update and can thus better prolong the lifespan of WSNs.
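For contrast, the beta-based baseline referred to above maintains counts of positive and negative interactions and reports the posterior mean of a Beta distribution. Below is a minimal sketch of that baseline (not the paper's negative-multinomial scheme):

```python
# Beta-distribution reputation baseline: a node's reputation is the mean of a
# Beta(r + 1, s + 1) posterior over its observed behavior (uniform prior).
class BetaReputation:
    def __init__(self):
        self.r = 0.0   # cooperative observations
        self.s = 0.0   # misbehaving observations

    def update(self, positive: int, negative: int) -> None:
        self.r += positive
        self.s += negative

    @property
    def expected(self) -> float:
        # Posterior mean of Beta(r + 1, s + 1).
        return (self.r + 1.0) / (self.r + self.s + 2.0)

rep = BetaReputation()
rep.update(positive=8, negative=2)
print(round(rep.expected, 3))   # 0.75
```

The paper's argument is that replacing this per-interaction bookkeeping with a negative-multinomial formulation lowers the update computation, and hence the energy cost, of each reputation refresh.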
Wireless Sensor Networks (WSNs) play an indispensable role in human life in fields such as environment monitoring, manufacturing, education, and agriculture. However, the batteries of sensor nodes deployed in unattended or remote areas cannot be replaced owing to their untethered, wireless deployment. In this context, several researchers have contributed a diversified number of cluster-based routing schemes that concentrate on the objective of extending node survival time. However, there still exists room for improvement in Cluster Head (CH) selection based on the integration of critical parameters. Meta-heuristic methods that concentrate on guaranteeing both CH selection and data transmission for improving optimal network performance are predominant. In this paper, a hybrid Marine Predators Optimization and Improved Particle Swarm Optimization-based Optimal Cluster Routing (MPO-IPSO-OCR) scheme is proposed for ensuring both efficient CH selection and data transmission. The robust characteristic of the MPOA is used in optimized CH selection, while the improved PSO is used for determining the optimized route to ensure sink mobility. Specifically, a position-update strategy is included in the improved PSO to enhance the global searching efficiency of the MPOA. The high-speed ratio, unit speed rate, and low speed rate strategies inherited by the MPOA facilitate better exploitation by preventing solutions from being stuck at a local optimum. The simulation investigation and statistical results confirm that the proposed MPO-IPSO-OCR is capable of improving energy stability by 21.28%, prolonging network lifetime by 18.62%, and offering maximum throughput improvement of 16.79% when compared to the benchmarked cluster-based routing schemes.
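The canonical PSO velocity/position update that the improved PSO builds on can be sketched on a toy objective; the inertia and acceleration constants below are textbook defaults, and the sphere fitness is a stand-in, not the paper's routing objective:

```python
# Minimal PSO loop illustrating the velocity/position update. The objective,
# swarm size, and constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_particles, dim, w, c1, c2 = 10, 4, 0.7, 1.5, 1.5
f = lambda x: np.sum(x**2)                     # stand-in fitness (lower is better)

x = rng.uniform(-1, 1, size=(n_particles, dim))  # positions (candidate solutions)
v = np.zeros_like(x)                              # velocities
pbest = x.copy()
gbest = min(pbest, key=f).copy()

for _ in range(200):
    r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    better = np.array([f(a) < f(b) for a, b in zip(x, pbest)])
    pbest[better] = x[better]                     # refresh personal bests
    cand = min(pbest, key=f)
    gbest = cand.copy() if f(cand) < f(gbest) else gbest

print(gbest, f(gbest))   # converges toward the origin on this toy objective
```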