Journal Literature
528,554 articles found
1. Application of virtual reality technology improves the functionality of brain networks in individuals experiencing pain (Cited: 1)
Authors: Takahiko Nagamine 《World Journal of Clinical Cases》 SCIE 2025, No. 3, pp. 66-68 (3 pages)
Medical procedures are inherently invasive and carry the risk of inducing pain to the mind and body. Recently, efforts have been made to alleviate the discomfort associated with invasive medical procedures through the use of virtual reality (VR) technology. VR has been demonstrated to be an effective treatment for pain associated with medical procedures, as well as for chronic pain conditions for which no effective treatment has been established. The precise mechanism by which the diversion from reality facilitated by VR contributes to the diminution of pain and anxiety has yet to be elucidated. However, the provision of positive images through VR-based visual stimulation may enhance the functionality of brain networks. The salience network is diminished, while the default mode network is enhanced. Additionally, the medial prefrontal cortex may establish a stronger connection with the default mode network, which could result in a reduction of pain and anxiety. Further research into the potential of VR technology to alleviate pain could lead to a reduction in the number of individuals who overdose on painkillers and contribute to positive change in the medical field.
Keywords: Virtual reality; Pain; Anxiety; Salience network; Default mode network
2. DIGNN-A: Real-Time Network Intrusion Detection with Integrated Neural Networks Based on Dynamic Graph
Authors: Jizhao Liu Minghao Guo 《Computers, Materials & Continua》 SCIE EI 2025, No. 1, pp. 817-842 (26 pages)
The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats. Intrusion detection systems are crucial to network security, playing a pivotal role in safeguarding networks from potential threats. However, in the context of an evolving landscape of sophisticated and elusive attacks, existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts. To address these issues, this paper proposes a real-time network intrusion detection method based on graph neural networks. The proposed method leverages the advantages of graph neural networks and employs a straightforward graph construction method to represent network traffic as dynamic graph-structured data. Additionally, a graph convolution operation with a multi-head attention mechanism is utilized to enhance the model's ability to capture the intricate relationships within the graph structure comprehensively. Furthermore, it uses an integrated graph neural network to address the structural and topological changes of dynamic graphs at different time points and the challenges of edge embedding in intrusion detection data. The edge classification problem is effectively transformed into node classification by employing a line graph data representation, which facilitates fine-grained intrusion detection tasks on dynamic graph node feature representations. The efficacy of the proposed method is evaluated using two commonly used intrusion detection datasets, UNSW-NB15 and NF-ToN-IoT-v2, and results are compared with previous studies in this field. The experimental results demonstrate that our proposed method achieves 99.3% and 99.96% accuracy on the two datasets, respectively, and outperforms the benchmark model in several evaluation metrics.
Keywords: Intrusion detection; Graph neural networks; Attention mechanisms; Line graphs; Dynamic graph neural networks
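As a rough illustration of the line-graph step described above (turning edge classification into node classification), the sketch below builds a tiny flow graph and converts it with networkx; the hosts, flow keys, and feature names are invented for illustration, and this is not the authors' DIGNN-A implementation.

```python
# Minimal sketch: represent flows as edges of a traffic graph, then move to the
# line graph so each flow becomes a node that a node classifier (e.g., a GNN)
# can label as benign/attack. Toy data only; not the DIGNN-A code.
import networkx as nx

G = nx.MultiDiGraph()
# (src_host, dst_host, flow features) -- hypothetical toy values
G.add_edge("10.0.0.1", "10.0.0.2", key="f1", bytes=1200, label="benign")
G.add_edge("10.0.0.1", "10.0.0.3", key="f2", bytes=90000, label="attack")
G.add_edge("10.0.0.3", "10.0.0.2", key="f3", bytes=300, label="benign")

L = nx.line_graph(G)  # each original edge (flow) becomes a node of L
for node in L.nodes:
    u, v, k = node            # line-graph node = (src, dst, key) of one flow
    feats = G.edges[u, v, k]  # original edge attributes travel with it
    print(node, feats["bytes"], feats["label"])
```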
3. Secure Channel Estimation Using Norm Estimation Model for 5G Next Generation Wireless Networks
Authors: Khalil Ullah Song Jian +4 co-authors Muhammad Naeem Ul Hassan Suliman Khan Mohammad Babar Arshad Ahmad Shafiq Ahmad 《Computers, Materials & Continua》 SCIE EI 2025, No. 1, pp. 1151-1169 (19 pages)
The emergence of next generation networks (NextG), including 5G and beyond, is reshaping the technological landscape of cellular and mobile networks. These networks are sufficiently scaled to interconnect billions of users and devices. Researchers in academia and industry are focusing on technological advancements to achieve high-speed transmission, cell planning, and latency reduction to facilitate emerging applications such as virtual reality, the metaverse, smart cities, smart health, and autonomous vehicles. NextG continuously improves its network functionality to support these applications. Multiple input multiple output (MIMO) technology offers spectral efficiency, dependability, and overall performance in conjunction with NextG. This article proposes a secure channel estimation technique in MIMO topology using a norm-estimation model to provide comprehensive insights into protecting NextG network components against adversarial attacks. The technique aims to create long-lasting and secure NextG networks using this extended approach. The viability of MIMO applications and modern AI-driven methodologies to combat cybersecurity threats are explored in this research. Moreover, the proposed model demonstrates high performance in terms of reliability and accuracy, with a 20% reduction in the MalOut-RealOut-Diff metric compared to existing state-of-the-art techniques.
Keywords: Next generation networks; Massive MIMO; Communication network; Artificial intelligence; 5G; Adversarial attacks; Channel estimation; Information security
4. Offload Strategy for Edge Computing in Satellite Networks Based on Software Defined Network
Authors: Zhiguo Liu Yuqing Gui +1 co-author Lin Wang Yingru Jiang 《Computers, Materials & Continua》 SCIE EI 2025, No. 1, pp. 863-879 (17 pages)
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling-Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
Keywords: Satellite network; Edge computing; Task scheduling; Computing offloading
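For readers unfamiliar with the dueling architecture mentioned above, the following minimal PyTorch sketch shows the standard value/advantage aggregation Q(s, a) = V(s) + A(s, a) - mean_a A(s, a); the layer sizes are invented, and the paper's full d3-DDPG pipeline (prioritized replay, DDPG-based resource allocation) is not reproduced here.

```python
# Minimal dueling Q-network head (toy sizes; not the paper's d3-DDPG model).
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.body(state)
        v = self.value(h)
        a = self.advantage(h)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=-1, keepdim=True)

q = DuelingQNet(state_dim=8, n_actions=4)
print(q(torch.randn(2, 8)).shape)  # torch.Size([2, 4])
```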
5. A Survey of Link Failure Detection and Recovery in Software-Defined Networks
Authors: Suheib Alhiyari Siti Hafizah AB Hamid Nur Nasuha Daud 《Computers, Materials & Continua》 SCIE EI 2025, No. 1, pp. 103-137 (35 pages)
Software-defined networking (SDN) is an innovative paradigm that separates the control and data planes, introducing centralized network control. SDN is increasingly being adopted by carrier-grade networks, offering enhanced network management capabilities compared with those of traditional networks. However, because SDN is designed to ensure high-level service availability, it faces additional challenges. One of the most critical challenges is ensuring efficient detection of and recovery from link failures in the data plane. Such failures can significantly impact network performance and lead to service outages, making resiliency a key concern for the effective adoption of SDN. Since the recovery process is intrinsically dependent on timely failure detection, this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN. The survey provides a critical comparison of existing failure detection techniques, highlighting their advantages and disadvantages. Additionally, it examines current failure recovery methods, categorized as either restoration-based or protection-based, and offers a comprehensive comparison of their strengths and limitations. Lastly, future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
Keywords: Software defined networking; Failure detection; Failure recovery; Restoration; Protection
6. A Decentralized and TCAM-Aware Failure Recovery Model in Software Defined Data Center Networks
Authors: Suheib Alhiyari Siti Hafizah AB Hamid Nur Nasuha Daud 《Computers, Materials & Continua》 SCIE EI 2025, No. 1, pp. 1087-1107 (21 pages)
Link failure is a critical issue in large networks and must be effectively addressed. In software-defined networks (SDN), link failure recovery schemes can be categorized into proactive and reactive approaches. Reactive schemes have longer recovery times, while proactive schemes provide faster recovery but overwhelm switch memory with flow entries. As SDN adoption grows, ensuring efficient recovery from link failures in the data plane becomes crucial. In particular, data center networks (DCNs) demand rapid recovery times and efficient resource utilization to meet carrier-grade requirements. This paper proposes an efficient Decentralized Failure Recovery (DFR) model for SDNs, meeting recovery time requirements and optimizing switch memory resource consumption. The DFR model enables switches to autonomously reroute traffic upon link failures without involving the controller, achieving fast recovery times while minimizing memory usage. DFR employs the Fast Failover Group in the OpenFlow standard for local recovery without requiring controller communication and utilizes the k-shortest path algorithm to proactively install backup paths, allowing immediate local recovery without controller intervention and enhancing overall network stability and scalability. DFR also employs flow entry aggregation techniques to reduce switch memory usage. Instead of matching flow entries to the destination host's MAC address, DFR matches packets to the destination switch's MAC address, which reduces the switches' Ternary Content-Addressable Memory (TCAM) consumption. Additionally, DFR modifies Address Resolution Protocol (ARP) replies to provide source hosts with the destination switch's MAC address, facilitating flow entry aggregation without affecting normal network operations. The performance of DFR is evaluated using the network emulator Mininet 2.3.1 with Ryu 3.1 as the SDN controller. For different numbers of active flows, numbers of hosts per edge switch, and network sizes, the proposed model outperformed various failure recovery models (restoration-based, protection by flow entries, protection by group entries, and protection by VLAN tagging) in terms of recovery time, switch memory consumption, and controller overhead, which represents the number of flow entry updates required to recover from a failure. Experimental results demonstrate that DFR achieves recovery times under 20 milliseconds, satisfying carrier-grade requirements for rapid failure recovery. Additionally, DFR reduces switch memory usage by up to 95% compared to traditional protection methods and minimizes controller load by eliminating the need for controller intervention during failure recovery. The results underscore the efficiency and scalability of the DFR model, making it a practical solution for enhancing network resilience in SDN environments.
Keywords: Software defined networking; Failure detection; Failure recovery; Restoration; Protection; TCAM size
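The proactive computation of primary and backup paths via a k-shortest path algorithm, as described above, can be sketched as follows; the toy switch topology, the choice of k, and the use of networkx's shortest_simple_paths generator are assumptions for illustration, not the authors' Ryu/OpenFlow implementation.

```python
# Sketch: compute k candidate (primary + backup) paths between two switches so
# that backups can be installed ahead of a link failure. Toy topology only.
from itertools import islice
import networkx as nx

topo = nx.Graph()
topo.add_edges_from([("s1", "s2"), ("s2", "s4"), ("s1", "s3"), ("s3", "s4"), ("s2", "s3")])

def k_shortest_paths(g, src, dst, k):
    # shortest_simple_paths yields loop-free paths in order of increasing length
    return list(islice(nx.shortest_simple_paths(g, src, dst), k))

for path in k_shortest_paths(topo, "s1", "s4", k=3):
    print(path)  # e.g. ['s1', 's2', 's4'], ['s1', 's3', 's4'], ...
```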
7. Modeling and Comprehensive Review of Signaling Storms in 3GPP-Based Mobile Broadband Networks: Causes, Solutions, and Countermeasures
Authors: Muhammad Qasim Khan Fazal Malik +1 co-author Fahad Alturise Noor Rahman 《Computer Modeling in Engineering & Sciences》 SCIE EI 2025, No. 1, pp. 123-153 (31 pages)
Control signaling is mandatory for the operation and management of all types of communication networks, including Third Generation Partnership Project (3GPP) mobile broadband networks. However, it consumes important and scarce network resources such as bandwidth and processing power. There have been several reports of control signaling turning into signaling storms that halt network operations and cause the respective telecom companies large financial losses. This paper draws its motivation from such real network disaster incidents attributed to signaling storms. We present a thorough survey of the causes of the signaling storm problem in 3GPP-based mobile broadband networks and discuss in detail their possible solutions and countermeasures. We provide relevant analytical models to help quantify the effect of the potential causes and the benefits of their corresponding solutions. Another important contribution of this paper is a tabular comparison of the possible causes and solutions/countermeasures with respect to their effect on several important network aspects, such as architecture, additional signaling, and fidelity. This paper presents an update and an extension of our earlier conference publication. To our knowledge, no similar survey study exists on the subject.
Keywords: Signaling storm problems; Control signaling load; Analytical modeling; 3GPP networks; Smart devices; Diameter signaling; Mobile broadband; Data access; Data traffic; Mobility management signaling; Network architecture; 5G mobile communication
8. Unlocking the future: Mitochondrial genes and neural networks in predicting ovarian cancer prognosis and immunotherapy response
Authors: Zhi-Jian Tang Yuan-Ming Pan +2 co-authors Wei Li Rui-Qiong Ma Jian-Liu Wang 《World Journal of Clinical Oncology》 2025, No. 1, pp. 43-52 (10 pages)
BACKGROUND: Mitochondrial genes are involved in tumor metabolism in ovarian cancer (OC) and affect immune cell infiltration and treatment responses. AIM: To predict prognosis and immunotherapy response in patients diagnosed with OC using mitochondrial genes and neural networks. METHODS: Prognosis, immunotherapy efficacy, and next-generation sequencing data of patients with OC were downloaded from The Cancer Genome Atlas and Gene Expression Omnibus. Mitochondrial genes were sourced from the MitoCarta3.0 database. The discovery cohort for model construction was created from 70% of the patients, whereas the remaining 30% constituted the validation cohort. Using the expression of mitochondrial genes as the predictor variable and based on a neural network algorithm, the overall survival time and immunotherapy efficacy (complete or partial response) of patients were predicted. RESULTS: In total, 375 patients with OC were included to construct the prognostic model, and 26 patients were included to construct the immune efficacy model. The average area under the receiver operating characteristic curve of the prognostic model was 0.7268 [95% confidence interval (CI): 0.7258-0.7278] in the discovery cohort and 0.6475 (95% CI: 0.6466-0.6484) in the validation cohort. The average area under the receiver operating characteristic curve of the immunotherapy efficacy model was 0.9444 (95% CI: 0.8333-1.0000) in the discovery cohort and 0.9167 (95% CI: 0.6667-1.0000) in the validation cohort. CONCLUSION: The application of mitochondrial genes and neural networks has the potential to predict prognosis and immunotherapy response in patients with OC, providing valuable insights into personalized treatment strategies.
Keywords: Ovarian cancer; Mitochondria; Prognosis; Immunotherapy; Neural network
9. Software Defined Networks: Strengths, Weaknesses, and Resilience to Failures
Authors: Wendnéso Aïda Ouedraogo Rakissaga Hamidou Harouna Omar Pegdwindé Justin Kouraogo 《Engineering(科研)》 2025, No. 1, pp. 19-29 (11 pages)
This article examines the architecture of software-defined networks (SDN) and its implications for the modern management of communications infrastructures. By decoupling the control plane from the data plane, SDN offers increased flexibility and programmability, enabling rapid adaptation to changing user requirements. However, this new approach poses significant challenges in terms of security, fault tolerance, and interoperability. This paper highlights these challenges and explores current strategies to ensure the resilience and reliability of SDN networks in the face of threats and failures. In addition, we analyze the future outlook for SDN and the importance of integrating robust security solutions into these infrastructures.
Keywords: Software Defined Networking (SDN); SDN Architecture; Fault Tolerance; Network Security; Programmability; Interoperability; Communication Infrastructures
10. Learning the parameters of a class of stochastic Lotka-Volterra systems with neural networks
Authors: WANG Zhanpeng WANG Lijin 《中国科学院大学学报(中英文)》 北大核心 2025, No. 1, pp. 20-25 (6 pages)
In this paper, we propose a neural network approach to learn the parameters of a class of stochastic Lotka-Volterra systems. Approximations of the mean and covariance matrix of the observational variables are obtained from the Euler-Maruyama discretization of the underlying stochastic differential equations (SDEs), based on which the loss function is built. The stochastic gradient descent method is applied in the neural network training. Numerical experiments demonstrate the effectiveness of our method.
Keywords: Stochastic Lotka-Volterra systems; Neural networks; Euler-Maruyama scheme; Parameter estimation
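For reference, a generic Euler-Maruyama discretization of a two-species stochastic Lotka-Volterra SDE with step size \(\Delta t\) takes the form below; this particular drift/diffusion parametrization is a common textbook choice and is only assumed here, not taken from the paper.

\[
\begin{aligned}
X_{n+1} &= X_n + X_n\,(a - b\,Y_n)\,\Delta t + \sigma_1\,X_n\,\Delta W_n^{(1)},\\
Y_{n+1} &= Y_n + Y_n\,(c\,X_n - d)\,\Delta t + \sigma_2\,Y_n\,\Delta W_n^{(2)},
\end{aligned}
\qquad \Delta W_n^{(i)} \sim \mathcal{N}(0,\Delta t).
\]

In such a setting, the parameters \((a, b, c, d, \sigma_1, \sigma_2)\) would be the quantities a network is trained to recover from the sample means and covariances.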
11. Multi-Stage Voltage Control Optimization Strategy for Distribution Networks Considering Active-Reactive Co-Regulation of Electric Vehicles
Authors: Shukang Lyu Fei Zeng +3 co-authors Huachun Han Huiyu Miao Yi Pan Xiaodong Yuan 《Energy Engineering》 EI 2025, No. 1, pp. 221-242 (22 pages)
The high proportion of uncertain distributed power sources and the access of large-scale random electric vehicle (EV) charging resources further aggravate voltage fluctuation in the distribution network. Existing research has not deeply explored the active-reactive synergistic regulating characteristics of EVs and has failed to realize multi-timescale synergistic control with other regulating means. For this reason, this paper proposes a multilevel linkage coordinated optimization strategy to reduce the voltage deviation of the distribution network. Firstly, a capacitor bank reactive power compensation voltage control model and a distributed photovoltaic (PV) active-reactive power regulation model are established. Additionally, an external characteristic model of EV active-reactive power regulation is developed considering the four-quadrant operational characteristics of the EV charger. A multi-objective optimization model of the distribution network is then constructed considering the time-series coupling constraints of multiple types of voltage regulators. A multi-timescale control strategy is proposed by considering the impact of voltage regulators on active-reactive EV energy consumption and PV energy consumption. Then, a four-stage voltage control optimization strategy is proposed for various types of voltage regulators with multiple time scales. The multi-objective optimization is solved with an improved Drosophila algorithm to realize power fluctuation control of the distribution network and multi-stage voltage control optimization. Simulation results validate that the proposed voltage control optimization strategy achieves coordinated control of decentralized voltage control resources in the distribution network. It effectively reduces the voltage deviation of the distribution network while ensuring the energy demand of EV users and enhancing the stability and economic efficiency of the distribution network.
Keywords: Electric vehicle (EV); Distribution network; Multi-stage optimization; Active-reactive power regulation; Voltage control
12. Deep Neural Networks Combining Multi-Task Learning for Solving Delay Integro-Differential Equations
Authors: WANG Chen-yao SHI Feng 《数学杂志》 2025, No. 1, pp. 13-38 (26 pages)
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter sharing structures in MTL, and compare the testing results of these structures. Finally, this method is applied to solve the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered from sparse or noisy data.
Keywords: Delay integro-differential equation; Multi-task learning; Parameter sharing structure; Deep neural network; Sequential training scheme
13. Time Series Forecasting in Healthcare: A Comparative Study of Statistical Models and Neural Networks
Authors: Ghadah Alsheheri 《Journal of Applied Mathematics and Physics》 2025, No. 2, pp. 633-663 (31 pages)
Time series forecasting is essential for generating predictive insights across various domains, including healthcare, finance, and energy. This study focuses on forecasting patient health data by comparing the performance of traditional linear time series models, namely Autoregressive Integrated Moving Average (ARIMA), Seasonal ARIMA (SARIMA), and Moving Average (MA), against neural network architectures. The primary goal is to evaluate the effectiveness of these models in predicting healthcare outcomes using patient records, specifically the Cancerpatient.xlsx dataset, which tracks variables such as patient age, symptoms, genetic risk factors, and environmental exposures over time. The proposed strategy involves training each model on historical patient data to predict age progression and other related health indicators, with performance evaluated using Mean Squared Error (MSE) and Root Mean Squared Error (RMSE) metrics. Our findings reveal that neural networks consistently outperform ARIMA and SARIMA by capturing non-linear patterns and complex temporal dependencies within the dataset, resulting in lower forecasting errors. This research highlights the potential of neural networks to enhance predictive accuracy in healthcare applications, supporting better resource allocation, patient monitoring, and long-term health outcome predictions.
Keywords: Time Series Forecasting; ARIMA; SARIMA; Neural Network; Predictive Modeling; MSE
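A minimal sketch of the kind of ARIMA baseline and RMSE evaluation described above is given below; the synthetic series and the (p, d, q) order are placeholders and do not reflect the study's Cancerpatient.xlsx data or chosen model orders.

```python
# Baseline ARIMA forecast with RMSE, of the kind compared against in the study.
# Synthetic toy series; not the paper's dataset or configuration.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=120)) + 50.0   # toy "patient indicator" series
train, test = series[:100], series[100:]

model = ARIMA(train, order=(2, 1, 1)).fit()       # (p, d, q) chosen arbitrarily
forecast = model.forecast(steps=len(test))

rmse = np.sqrt(np.mean((forecast - test) ** 2))   # RMSE = sqrt(MSE)
print(f"RMSE: {rmse:.3f}")
```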
14. Aspect-Level Sentiment Analysis of Bi-Graph Convolutional Networks Based on Enhanced Syntactic Structural Information
Authors: Junpeng Hu Yegang Li 《Journal of Computer and Communications》 2025, No. 1, pp. 72-89 (18 pages)
Aspect-oriented sentiment analysis is a fine-grained sentiment analysis task that aims to analyse the sentiment polarity of specific aspects. Most current research builds graph convolutional networks based on dependency syntax trees, which improves the classification performance of the models to some extent. However, the technical limitations of dependency syntax trees can introduce considerable noise into the model. Meanwhile, it is difficult for a single graph convolutional network to aggregate both the semantic and the syntactic structural information of nodes, which affects the final sentence classification. To cope with the above problems, this paper proposes a bi-channel graph convolutional network model. The model introduces a phrase structure tree and transforms it into a hierarchical phrase matrix. The adjacency matrix of the dependency syntax tree and the hierarchical phrase matrix are combined as the initial matrix of the graph convolutional network to enhance the syntactic information. The semantic feature representations of the sentences are obtained by the graph convolutional network with a multi-head attention mechanism and fused to achieve complementary learning of dual-channel features. Experimental results show that the model performs well and improves the accuracy of sentiment classification on three public benchmark datasets, namely Rest14, Lap14 and Twitter.
Keywords: Aspect-Level Sentiment Analysis; Sentiment Knowledge; Multi-Head Attention Mechanism; Graph Convolutional Networks
15. Resource Optimization in Elastic Optical Networks Using Threshold-Based Routing and Fragmentation-Aware Spectrum Allocation
Authors: Kamagaté Beman Hamidja Kanga Koffi +2 co-authors Coulibaly Kpinan Tiekoura Konaté Adama Michel Babri 《Open Journal of Applied Sciences》 2025, No. 1, pp. 168-186 (19 pages)
This paper proposes an efficient strategy for resource utilization in Elastic Optical Networks (EONs) to minimize spectrum fragmentation and reduce connection blocking probability during Routing and Spectrum Allocation (RSA). The proposed method, Dynamic Threshold-Based Routing and Spectrum Allocation with Fragmentation Awareness (DT-RSAF), integrates rerouting and spectrum defragmentation as needed. By leveraging Yen's shortest path algorithm, DT-RSAF enhances resource utilization while ensuring improved service continuity. A dynamic threshold mechanism enables the algorithm to adapt to varying network conditions, while its fragmentation awareness effectively mitigates spectrum fragmentation. Simulation results on NSFNET and COST 239 topologies demonstrate that DT-RSAF significantly outperforms methods such as K-Shortest Path Routing and Spectrum Allocation (KSP-RSA), Load Balanced and Fragmentation-Aware (LBFA), and the Invasive Weed Optimization-based RSA (IWO-RSA). Under heavy traffic, DT-RSAF reduces the blocking probability by up to 15% and achieves lower Bandwidth Fragmentation Ratios (BFR), ranging from 74% to 75%, compared to 77% - 80% for KSP-RSA, 75% - 77% for LBFA, and approximately 76% for IWO-RSA. DT-RSAF also demonstrated reasonable computation times compared to KSP-RSA, LBFA, and IWO-RSA. On a small network, its computation time was 8710 times faster than that of Integer Linear Programming (ILP) on the same network topology. Additionally, it achieved a similar execution time to LBFA and outperformed IWO-RSA in terms of efficiency. These results highlight DT-RSAF's capability to maintain large contiguous frequency blocks, making it highly effective for accommodating high-bandwidth requests in EONs while maintaining reasonable execution times.
Keywords: Elastic Optical Networks (EONs); Spectrum Fragmentation; Routing and Spectrum Allocation (RSA); Connection Rerouting; Heuristic
16. Application of Artificial Neural Networks in Predicting Malignant Lung Nodules on Chest CT Scans
Authors: Wenhui Li Yuping Yang +2 co-authors Yixian Liang Pengliang Xu Qiuqiang Chen 《Proceedings of Anticancer Research》 2025, No. 1, pp. 115-121 (7 pages)
Objective: To explore a simple method for improving the diagnostic accuracy of malignant lung nodules based on the imaging features of lung nodules. Methods: A retrospective analysis was conducted on the imaging data of 114 patients who underwent lung nodule surgery in the Thoracic Surgery Department of the First People's Hospital of Huzhou from June to September 2024. Imaging features of the lung nodules were summarized and used to train a BP neural network. Results: Training with the BP neural network increased the diagnostic accuracy for distinguishing between benign and malignant lung nodules based on imaging features from 84.2% (manual assessment) to 94.1%. Conclusion: Training with the BP neural network significantly improves the diagnostic accuracy of lung nodule malignancy based solely on imaging features.
Keywords: Lung nodule; Malignant lung tumor; Neural network; Chest CT
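As a hedged illustration of training a small BP-style (multilayer perceptron) classifier on tabular imaging features, the sketch below uses scikit-learn with synthetic data; the feature count, labels, and network size are invented and are not the study's dataset or architecture.

```python
# Toy BP-style (multilayer perceptron) classifier on tabular nodule features.
# Random synthetic data stands in for the study's imaging features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(114, 6))          # e.g. diameter, density, margin scores (hypothetical)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=114) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```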
17. Predicting outcomes using neural networks in the intensive care unit
Authors: Gumpeny R Sridhar Venkat Yarabati Lakshmi Gumpeny 《World Journal of Clinical Cases》 2025, No. 11, pp. 1-11 (11 pages)
Patients in intensive care units (ICUs) require rapid, critical decision making. Modern ICUs are data rich, with information streaming from diverse sources. Machine learning (ML) and neural networks (NNs) can leverage this rich data for prognostication and clinical care. They can handle complex nonlinear relationships in medical data and have advantages over traditional predictive methods. A number of models are used, including feedforward networks, recurrent NNs, and convolutional NNs, to predict key outcomes such as mortality, length of stay in the ICU, and the likelihood of complications. Current NN models exist in silos; their integration into clinical workflows requires greater transparency about the data that are analyzed. Most models that are accurate enough for use in clinical care operate as "black boxes" in which the logic behind their decision making is opaque. Advances have occurred that see through the opacity and peer into the processing of the black box. In the near future, ML is positioned to help in clinical decision making far beyond what is currently possible. Transparency is the first step toward validation, which is followed by clinical trust and adoption. In summary, NNs have the transformative ability to enhance predictive accuracy and improve patient management in ICUs. The concept should soon be turning into reality.
Keywords: Large language models; Hallucinations; Supervised learning; Unsupervised learning; Convolutional neural networks; Black box; Workflow
18. Research on a Measurement and Control System for Intelligent Power-Station Switchgear Based on WSNs
Authors: 蒋池剑 黄凌宇 +2 co-authors 王博天 刘春祥 范文杰 《工业仪表与自动化装置》 2025, No. 1, pp. 93-97 (5 pages)
To achieve intelligent, networked, and unattended management of power-station switchgear, a measurement and control system for intelligent switchgear was built on wireless sensor network (WSNs) technology. The system is based on ZigBee and uses a CC3200 MCU as its core, implementing voltage (current) acquisition, temperature acquisition, status acquisition, and opening/closing control of the equipment. The system adopts the MDS-MAP localization algorithm, which offers higher positioning accuracy than other localization algorithms; for intelligent power-station switchgear, the optimal parameter settings of the algorithm are 100 nodes, a communication radius of 60 m, an anchor-node ratio of 40%, and a region size of 100×100. The system successfully implements current/voltage acquisition, temperature acquisition, opening/closing, relay control, and wireless transmission. The average acquisition errors for temperature and current are both below 0.1%, and the average temperature acquisition error is kept within ±0.5 °C. To further improve the monitoring accuracy of the system, a temperature-correction routine was embedded to calibrate the acquired temperature values over different temperature ranges.
Keywords: Wireless sensor network technology; Intelligent power-station switchgear; Monitoring system; ZigBee; Localization algorithm; Error
19. A Novel Negative Multinomial Distribution Based Energy-Efficient Reputation Modeling for Wireless Sensor Networks (WSNs) (Cited: 1)
Authors: 魏哲 王芳 《Journal of Donghua University (English Edition)》 EI CAS 2012, No. 2, pp. 153-156 (4 pages)
In wireless sensor networks (WSNs), nodes are usually powered by batteries. Since energy consumption directly impacts the network lifespan, energy saving is a vital issue in WSNs, especially in the design phase of cryptographic algorithms. As a complementary mechanism, reputation has been applied to WSNs. Unlike most reputation schemes, which are based on the beta distribution, a negative multinomial distribution was deduced and its feasibility for reputation modeling was proved. Through comparison tests with beta-distribution-based reputation in terms of the update computation, the results show that the proposed method is more energy-efficient for the reputation update and can thus better prolong the lifespan of WSNs.
Keywords: Wireless sensor networks (WSNs); Negative multinomial distribution; Reputation; Energy-efficient
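For context, the probability mass function of the negative multinomial distribution in its general form (not necessarily the exact reputation model derived in the paper) is

\[
P(X_1 = x_1,\dots,X_m = x_m) \;=\; \frac{\Gamma\!\bigl(k_0 + \sum_{i=1}^{m} x_i\bigr)}{\Gamma(k_0)\,\prod_{i=1}^{m} x_i!}\; p_0^{\,k_0} \prod_{i=1}^{m} p_i^{\,x_i},
\qquad p_0 + \sum_{i=1}^{m} p_i = 1,
\]

where, in a reputation setting, the counts \(x_i\) could plausibly correspond to observed behavior categories, so that a reputation update amounts to accumulating these counts.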
20. Hybrid Marine Predators Optimization and Improved Particle Swarm Optimization-Based Optimal Cluster Routing in Wireless Sensor Networks (WSNs)
Authors: A.Balamurugan Sengathir Janakiraman +1 co-author M.Deva Priya A.Christy Jeba Malar 《China Communications》 SCIE CSCD 2022, No. 6, pp. 219-247 (29 pages)
Wireless Sensor Networks (WSNs) play an indispensable role in the lives of human beings in fields such as environment monitoring, manufacturing, education, and agriculture. However, the batteries of sensor nodes deployed in unattended or remote areas cannot be replaced because of their wireless existence. In this context, several researchers have contributed a diversified number of cluster-based routing schemes that concentrate on extending node survival time. However, there is still room for improvement in Cluster Head (CH) selection based on the integration of critical parameters. Meta-heuristic methods that concentrate on guaranteeing both CH selection and data transmission for improving optimal network performance are predominant. In this paper, a hybrid Marine Predators Optimization and Improved Particle Swarm Optimization-based Optimal Cluster Routing (MPO-IPSO-OCR) scheme is proposed for ensuring both efficient CH selection and data transmission. The robust characteristic of the MPOA is used in optimized CH selection, while the improved PSO is used for determining the optimized route to ensure sink mobility. In particular, a position update strategy is included in the improved PSO to enhance the global searching efficiency of the MPOA. The high-speed ratio, unit speed rate, and low speed rate strategies inherited by the MPOA facilitate better exploitation by preventing the solution from becoming stuck at a local optimum. The simulation investigation and statistical results confirm that the proposed MPO-IPSO-OCR is capable of improving energy stability by 21.28%, prolonging network lifetime by 18.62%, and offering maximum throughput improvements of 16.79% when compared to the benchmarked cluster-based routing schemes.
Keywords: Marine Predators Optimization Algorithm (MPOA); Particle Swarm Optimization (PSO); Optimal Cluster-based Routing; Cluster Head (CH) selection; Wireless Sensor Networks (WSNs)
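For reference, the canonical particle swarm optimization updates underlying the improved PSO mentioned above are

\[
v_i^{t+1} = w\,v_i^{t} + c_1 r_1 \bigl(p_i^{\mathrm{best}} - x_i^{t}\bigr) + c_2 r_2 \bigl(g^{\mathrm{best}} - x_i^{t}\bigr),
\qquad
x_i^{t+1} = x_i^{t} + v_i^{t+1},
\]

where \(w\) is the inertia weight, \(c_1, c_2\) are acceleration coefficients, and \(r_1, r_2 \sim U(0,1)\); the paper's additional position-update strategy and the MPOA hybridization are not reproduced here.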