Journal Articles
249,724 articles found
1. Data-Driven Healthcare: The Role of Computational Methods in Medical Innovation
Authors: Hariharasakthisudhan Ponnarengan, Sivakumar Rajendran, Vikas Khalkar, Gunapriya Devarajan, Logesh Kamaraj. Computer Modeling in Engineering & Sciences, SCIE EI, 2025, Issue 1, pp. 1-48.
The purpose of this review is to explore the intersection of computational engineering and biomedical science, highlighting the transformative potential this convergence holds for innovation in healthcare and medical research. The review covers key topics such as computational modelling, bioinformatics, machine learning in medical diagnostics, and the integration of wearable technology for real-time health monitoring. Major findings indicate that computational models have significantly enhanced the understanding of complex biological systems, while machine learning algorithms have improved the accuracy of disease prediction and diagnosis. The synergy between bioinformatics and computational techniques has led to breakthroughs in personalized medicine, enabling more precise treatment strategies. Additionally, the integration of wearable devices with advanced computational methods has opened new avenues for continuous health monitoring and early disease detection. The review emphasizes the need for interdisciplinary collaboration to further advance this field. Future research should focus on developing more robust and scalable computational models, enhancing data integration techniques, and addressing ethical considerations related to data privacy and security. By fostering innovation at the intersection of these disciplines, the potential to revolutionize healthcare delivery and outcomes becomes increasingly attainable.
Keywords: computational models; biomedical engineering; bioinformatics; machine learning; wearable technology
2. Foundations of Holographic Quantum Computation
Authors: Logan Nye. Journal of Applied Mathematics and Physics, 2025, Issue 1, pp. 11-60.
We present a comprehensive mathematical framework establishing the foundations of holographic quantum computing, a novel paradigm that leverages holographic phenomena to achieve superior error correction and algorithmic efficiency. We rigorously demonstrate that quantum information can be encoded and processed using holographic principles, establishing fundamental theorems characterizing the error-correcting properties of holographic codes. We develop a complete set of universal quantum gates with explicit constructions and prove exponential speedups for specific classes of computational problems. Our framework demonstrates that holographic quantum codes achieve a code rate scaling as O(1/log n), superior to traditional quantum LDPC codes, while providing inherent protection against errors via geometric properties of the code structures. We prove a threshold theorem establishing that arbitrary quantum computations can be performed reliably when physical error rates fall below a constant threshold. Notably, our analysis suggests certain algorithms, including those involving high-dimensional state spaces and long-range interactions, achieve exponential speedups over both classical and conventional quantum approaches. This work establishes the theoretical foundations for a new approach to quantum computation that provides natural fault tolerance and scalability, directly addressing longstanding challenges of the field.
Keywords: holographic quantum computing; error correction; universal quantum gates; exponential speedups; fault tolerance
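For orientation, the rate scaling quoted in the abstract above can be written out explicitly. The display below is an illustrative restatement under my own notation (the constant c is not from the paper): a code mapping k logical qubits into n physical qubits has rate R = k/n, so

```latex
R(n) \;=\; \frac{k}{n} \;=\; \Theta\!\left(\frac{1}{\log n}\right)
\qquad\Longrightarrow\qquad
k \;\approx\; \frac{c\, n}{\log n},
```

i.e., the number of protected logical qubits grows almost linearly in the number of physical qubits, which is the sense in which the abstract claims superiority over traditional quantum LDPC constructions.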
3. DDPG-Based Intelligent Computation Offloading and Resource Allocation for LEO Satellite Edge Computing Network
Authors: Jia Min, Wu Jian, Zhang Liang, Wang Xinyu, Guo Qing. China Communications, 2025, Issue 3, pp. 1-15.
Low earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form the LEO satellite edge computing system, providing computing services for global ground users. In this paper, the computation offloading problem and resource allocation problem are formulated as a mixed integer nonlinear program (MINLP). This paper proposes a computation offloading algorithm based on deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and uses a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme. In addition, the expression of suboptimal user local CPU cycles is derived by the relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility value at a considerable time cost compared with other algorithms.
Keywords: computation offloading; deep deterministic policy gradient; low earth orbit satellite; mobile edge computing; resource allocation
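The Lagrange-multiplier step for server-side resource allocation has a well-known closed form in the simplest setting. The sketch below is a minimal illustration under my own simplifying assumptions, not the paper's exact scheme: the MEC server splits its total capacity F across offloaded tasks with cycle demands C_k so as to minimize the sum of processing delays C_k/f_k, and stationarity of the Lagrangian yields f_k* = F·sqrt(C_k) / sum_j sqrt(C_j).

```python
import numpy as np

def allocate_cpu(cycles, F):
    """Closed-form Lagrangian allocation: minimize sum_k C_k / f_k
    subject to sum_k f_k = F. Stationarity gives f_k proportional
    to sqrt(C_k)."""
    w = np.sqrt(np.asarray(cycles, dtype=float))
    return F * w / w.sum()

# toy usage: three offloaded tasks (cycle demands), 10 GHz total capacity
f = allocate_cpu([200e6, 800e6, 50e6], 10e9)
print(f, f.sum())  # allocations favor heavy tasks and sum to F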
4. A Study for Inter-Satellite Cooperative Computation Offloading in LEO Satellite Networks
Authors: Gang Yuanshuo, Zhang Yuexia, Wu Peng, Zheng Hui, Fan Guangteng. China Communications, 2025, Issue 2, pp. 12-25.
Low Earth orbit (LEO) satellite networks have the advantages of low transmission delay and low deployment cost, playing an important role in providing reliable services to ground users. This paper studies an efficient inter-satellite cooperative computation offloading (ICCO) algorithm for LEO satellite networks. Specifically, an ICCO system model is constructed, which considers using neighboring satellites in the LEO satellite network to collaboratively process tasks generated by ground user terminals, effectively improving resource utilization efficiency. Additionally, the optimization objective of minimizing the system task computation offloading delay and energy consumption is established, which is decoupled into two sub-problems. In terms of computational resource allocation, the convexity of the problem is proved through theoretical derivation, and the Lagrange multiplier method is used to obtain the optimal solution of computational resources. To deal with the task offloading decision, a dynamic sticky binary particle swarm optimization algorithm is designed to obtain the offloading decision by iteration. Simulation results show that the ICCO algorithm can effectively reduce the delay and energy consumption.
Keywords: computation offloading; inter-satellite cooperation; LEO satellite networks
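The paper's "dynamic sticky" binary PSO is a specialized variant; as a grounding sketch, the code below shows only the standard sigmoid-mapped binary PSO skeleton on which such variants build, with an invented delay-plus-congestion cost standing in for the real objective (all parameters illustrative).

```python
import numpy as np
rng = np.random.default_rng(1)

def cost(x):
    """Placeholder objective: x[i]=1 offloads task i to a neighbor satellite.
    Offloading is cheaper per task but incurs a quadratic congestion penalty."""
    return np.where(x == 1, 1.0, 2.5).sum() + 0.3 * x.sum() ** 2

def binary_pso(n_tasks=12, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    V = rng.normal(0, 1, (n_particles, n_tasks))            # velocities
    X = (rng.random((n_particles, n_tasks)) < 0.5).astype(int)
    P, pbest = X.copy(), np.array([cost(x) for x in X])     # personal bests
    g = P[np.argmin(pbest)].copy()                          # global best
    for _ in range(iters):
        r1, r2 = rng.random(V.shape), rng.random(V.shape)
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        X = (rng.random(V.shape) < 1 / (1 + np.exp(-V))).astype(int)  # sigmoid map
        f = np.array([cost(x) for x in X])
        better = f < pbest
        P[better], pbest[better] = X[better], f[better]
        g = P[np.argmin(pbest)].copy()
    return g, pbest.min()

decision, best = binary_pso()   # bit vector of offloading decisions
```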
5. Latency minimization for multiuser computation offloading in fog-radio access networks
Authors: Wei Zhang, Shafei Wang, Ye Pan, Qiang Li, Jingran Lin, Xiaoxiao Wu. Digital Communications and Networks, 2025, Issue 1, pp. 160-171.
Recently, the Fog-Radio Access Network (F-RAN) has gained considerable attention because of its flexible architecture that allows rapid response to user requirements. In this paper, computational offloading in F-RAN is considered, where multiple User Equipments (UEs) offload their computational tasks to the F-RAN through fog nodes. Each UE can select one of the fog nodes to offload its task, and each fog node may serve multiple UEs. The tasks are computed by the fog nodes or further offloaded to the cloud via a capacity-limited fronthaul link. In order to compute all UEs' tasks quickly, joint optimization of UE-Fog association, radio and computation resources of the F-RAN is proposed to minimize the maximum latency of all UEs. This min-max problem is formulated as a Mixed Integer Nonlinear Program (MINP). To tackle it, the MINP is first reformulated as a continuous optimization problem, and then the Majorization Minimization (MM) method is used to find a solution. The MM approach that we develop is unconventional in that each MM subproblem is solved inexactly with the same provable convergence guarantee as the exact MM, thereby reducing the complexity of the MM iteration. In addition, a cooperative offloading model is considered, where the fog nodes compress-and-forward their received signals to the cloud. Under this model, a similar min-max latency optimization problem is formulated and tackled by the inexact MM. Simulation results show that the proposed algorithms outperform some offloading strategies, and that cooperative offloading can exploit transmission diversity better than noncooperative offloading to achieve better latency performance.
Keywords: fog-radio access network; fog computing; majorization minimization; WMMSE
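In generic notation (mine, not reproduced from the paper), the min-max latency objective, its epigraph reformulation, and the MM iteration read:

```latex
\min_{\mathbf{x}\in\mathcal{X}}\;\max_{k} \, T_k(\mathbf{x})
\;\;\Longleftrightarrow\;\;
\min_{\mathbf{x}\in\mathcal{X},\, t}\; t
\quad \text{s.t.}\;\; T_k(\mathbf{x}) \le t,\;\; k = 1,\dots,K,
\qquad\qquad
\mathbf{x}^{(i+1)} \in \arg\min_{\mathbf{x}\in\mathcal{X}}\; g\big(\mathbf{x}\,\big|\,\mathbf{x}^{(i)}\big),
```

where the surrogate satisfies g(x | x^(i)) >= f(x) for all x, with equality at x^(i). The paper's contribution is showing that this inner argmin may be solved only inexactly while retaining the convergence guarantee of exact MM.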
6. Robust Transmission Design for Federated Learning Through Over-the-Air Computation
Authors: Hamideh Zamanpour Abyaneh, Saba Asaad, Amir Masoud Rabiei. China Communications, 2025, Issue 3, pp. 65-75.
Over-the-air computation (AirComp) enables federated learning (FL) to rapidly aggregate local models at the central server using the waveform superposition property of the wireless channel. In this paper, a robust transmission scheme for an AirComp-based FL system with imperfect channel state information (CSI) is proposed. To model CSI uncertainty, an expectation-based error model is utilized. The main objective is to maximize the number of selected devices that meet mean-squared error (MSE) requirements for model broadcast and model aggregation. The problem is formulated as a combinatorial optimization problem and is solved in two steps. First, the priority order of devices is determined by a sparsity-inducing procedure. Then, a feasibility detection scheme is used to select the maximum number of devices that guarantee the MSE requirements are met. An alternating optimization (AO) scheme is used to transform the resulting nonconvex problem into two convex subproblems. Numerical results illustrate the effectiveness and robustness of the proposed scheme.
Keywords: federated learning; imperfect CSI; optimization; over-the-air computing; robust design
7. Complex Adaptive Systems: Computational Modeling and Simulation in the Social Sciences
Authors: Qiang SUN. 《计算社会科学》 (Computational Social Science), 2025, Issue 1, pp. 17-36.
This paper develops a comprehensive computational modeling and simulation framework based on Complex Adaptive Systems (CAS) theory to unveil the underlying mechanisms of self-organization, nonlinear evolution, and emergence in social systems. By integrating mathematical models, agent-based modeling, network dynamic analysis, and hybrid modeling approaches, the study applies CAS theory to case studies in economic markets, political decision-making, and social interactions. The experimental results demonstrate that local interactions among individual agents can give rise to complex global phenomena, such as market fluctuations, opinion polarization, and sudden outbreaks of social movements. This framework not only provides a more robust explanation for the nonlinear dynamics and abrupt transitions that traditional models often fail to capture, but also offers valuable decision-support tools for public policy formulation, social governance, and risk management. Emphasizing the importance of interdisciplinary approaches, this work outlines future research directions in high-performance computing, artificial intelligence, and real-time data integration to further advance the theoretical and practical applications of CAS in the social sciences.
Keywords: complex adaptive systems; computational modeling; simulation experiments; agent-based modeling; network analysis; emergence; nonlinear dynamics; social systems
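As one concrete instance of local interactions producing a global pattern, the sketch below implements the classic Deffuant bounded-confidence opinion model (my choice of example, not code from the paper): agents influence each other only when their opinions lie within a threshold eps, and a small eps reproduces the opinion polarization the abstract mentions.

```python
import numpy as np
rng = np.random.default_rng(42)

def deffuant(n=200, eps=0.2, mu=0.5, steps=50_000):
    """Bounded-confidence opinion dynamics: random pairs compromise
    only if their opinions differ by less than eps."""
    x = rng.random(n)                     # initial opinions in [0, 1]
    for _ in range(steps):
        i, j = rng.integers(0, n, 2)
        if abs(x[i] - x[j]) < eps:        # interact only within confidence bound
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

opinions = deffuant(eps=0.2)   # small eps -> several stable opinion clusters
print(np.round(np.sort(opinions)[::40], 2))
```

Running it with eps near 0.5 instead yields consensus, illustrating how a single micro-level parameter flips the emergent macro outcome.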
8. Sesamin is an effective spleen tyrosine kinase inhibitor against IgE-mediated food allergy in computational, cell-based and animal studies
Authors: Yu Li, Xuerui Chen, Longhua Xu, Xintong Tan, Dapeng Li, Dongxiao Sun-Waterhouse, Feng Li. Food Science and Human Wellness, 2025, Issue 2, pp. 469-483.
Food allergy has become a global concern. Spleen tyrosine kinase (SYK) inhibitors are promising therapeutics against allergic disorders. In this study, a total of 300 natural phenolic compounds were first subjected to virtual screening. Sesamin and its metabolites, sesamin monocatechol (SC-1) and sesamin dicatechol (SC-2), were identified as potential SYK inhibitors, showing high binding affinity and inhibition efficiency towards SYK. Compared with R406 (a traditional SYK inhibitor), sesamin, SC-1, and SC-2 had lower binding energy and inhibition constant (Ki) during molecular docking, exhibited higher bioavailability, safety, metabolism/clearance rate, and distribution uniformity in ADMET predictions, and showed high stability in occupying the ATP-binding pocket of SYK during molecular dynamics simulations. In anti-dinitrophenyl-immunoglobulin E (anti-DNP-IgE)/dinitrophenyl-human serum albumin (DNP-HSA)-stimulated rat basophilic leukemia (RBL-2H3) cells, sesamin in the concentration range of 5-80 μmol/L significantly influenced degranulation and cytokine release, with 54.00% inhibition of β-hexosaminidase release and a 58.45% decrease in histamine. In BALB/c mice, sesamin could ameliorate anti-DNP-IgE/DNP-HSA-induced passive cutaneous anaphylaxis (PCA) and ovalbumin (OVA)-induced active systemic anaphylaxis (ASA) reactions, reduce the levels of allergic mediators (immunoglobulins and pro-inflammatory cytokines), partially correct the imbalance of T helper (Th) cell differentiation in the spleen, and inhibit the phosphorylation of SYK and its downstream signaling proteins, including p38 mitogen-activated protein kinase (p38 MAPK), extracellular signal-regulated kinase (ERK), and p65 nuclear factor-κB (p65 NF-κB) in the spleen. Thus, sesamin may be a safe and versatile SYK inhibitor that can alleviate IgE-mediated food allergies.
Keywords: food allergy; spleen tyrosine kinase; sesamin; computational tools; RBL-2H3 cells; BALB/c mice
9. Computational Methods in Quantum Social Science: Innovative Theoretical, Interdisciplinary, and Empirical Approaches
Authors: Changkui LI. 《计算社会科学》 (Computational Social Science), 2025, Issue 1, pp. 1-16.
This paper proposes an innovative approach to social science research based on quantum theory, integrating quantum probability, quantum game theory, and quantum statistical methods into a comprehensive interdisciplinary framework for both theoretical and empirical investigation. The study elaborates on how core quantum concepts such as superposition, interference, and measurement collapse can be applied to model social decision making, cognition, and interactions. Advanced quantum computational methods and algorithms are employed to transition from theoretical model development to simulation and experimental validation. Through case studies in international relations, economic games, and political decision making, the research demonstrates that quantum models possess significant advantages in explaining irrational and context-dependent behaviors that traditional methods often fail to capture. The paper also explores the potential applications of quantum social science in policy formulation and public decision making, addresses the ethical, privacy, and social equity challenges posed by quantum artificial intelligence, and outlines future research directions at the convergence of quantum AI, quantum machine learning, and big data analytics. The findings suggest that quantum social science not only offers a novel perspective for understanding complex social phenomena but also lays the foundation for more accurate and efficient systems in social forecasting and decision support.
Keywords: quantum social science; quantum probability; quantum game theory; quantum statistics; computational methods; interdisciplinary; empirical analysis; social decision making
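The interference mechanism the abstract invokes can be stated compactly in standard quantum-cognition notation (a textbook restatement, not an equation taken from the paper): when a judgment can resolve through two indistinguishable cognitive paths with amplitudes psi_A and psi_B, the resulting probability is

```latex
P \;=\; \left|\psi_A + \psi_B\right|^{2}
  \;=\; |\psi_A|^{2} + |\psi_B|^{2} + 2\,\mathrm{Re}\!\left(\psi_A^{*}\,\psi_B\right),
```

and the signed cross term, absent from classical additivity, is what lets quantum models absorb the context-dependent and seemingly irrational response patterns described above.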
10. Single Window for International Trade: Intelligent Optimization and Computational Social Science Methodological Exploration
Authors: Sophia LI. 《计算社会科学》 (Computational Social Science), 2025, Issue 1, pp. 68-76.
The rapid evolution of international trade necessitates the adoption of intelligent digital solutions to enhance trade facilitation. The Single Window System (SWS) has emerged as a key mechanism for streamlining trade documentation, customs clearance, and regulatory compliance. However, traditional SWS implementations face challenges such as data fragmentation, inefficient processing, and limited real-time intelligence. This study proposes a computational social science framework that integrates artificial intelligence (AI), machine learning, network analytics, and blockchain to optimize SWS operations. By employing predictive modeling, agent-based simulations, and algorithmic governance, this research demonstrates how computational methodologies improve trade efficiency, enhance regulatory compliance, and reduce transaction costs. Empirical case studies on AI-driven customs clearance, blockchain-enabled trade transparency, and network-based trade policy simulation illustrate the practical applications of these techniques. The study concludes that interdisciplinary collaboration and algorithmic governance are essential for advancing digital trade facilitation, ensuring resilience, transparency, and adaptability in global trade ecosystems.
Keywords: computational social science; Single Window System (SWS); trade facilitation; artificial intelligence; machine learning; blockchain; network analytics; algorithmic governance
11. Computational Experiments for Complex Social Systems: Experiment Design and Generative Explanation (Cited by 2)
Authors: Xiao Xue, Deyu Zhou, Xiangning Yu, Gang Wang, Juanjuan Li, Xia Xie, Lizhen Cui, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica, SCIE EI CSCD, 2024, Issue 4, pp. 1022-1038.
Powered by advanced information technology, more and more complex systems are exhibiting characteristics of cyber-physical-social systems (CPSS). In this context, the computational experiments method has emerged as a novel approach for the design, analysis, management, control, and integration of CPSS, which can realize causal analysis of complex systems by means of the "algorithmization" of "counterfactuals". However, because CPSS involve human and social factors (e.g., autonomy, initiative, and sociality), it is difficult for traditional design of experiment (DOE) methods to achieve a generative explanation of system emergence. To address this challenge, this paper proposes an integrated approach to the design of computational experiments, incorporating three key modules: 1) Descriptive module: determining the influencing factors and response variables of the system by means of the modeling of an artificial society; 2) Interpretative module: selecting a factorial experimental design solution to identify the relationship between influencing factors and macro phenomena; 3) Predictive module: building a meta-model that is equivalent to the artificial society to explore its operating laws. Finally, a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach, which can reveal the social impact of algorithmic behavior on the "rider race".
Keywords: agent-based modeling; computational experiments; cyber-physical-social systems (CPSS); generative deduction; generative experiments; meta-model
12. Computation Offloading in Edge Computing for Internet of Vehicles via Game Theory (Cited by 1)
Authors: Jianhua Liu, Jincheng Wei, Rongxin Luo, Guilin Yuan, Jiajia Liu, Xiaoguang Tu. Computers, Materials & Continua, SCIE EI, 2024, Issue 10, pp. 1337-1361.
With the rapid advancement of Internet of Vehicles (IoV) technology, the demands for real-time navigation, advanced driver-assistance systems (ADAS), vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications, and multimedia entertainment systems have made in-vehicle applications increasingly computing-intensive and delay-sensitive. These applications require significant computing resources, which can overwhelm the limited computing capabilities of vehicle terminals despite advancements in computing hardware, owing to the complexity of tasks, energy consumption, and cost constraints. To address this issue in IoV-based edge computing, particularly in scenarios where available computing resources in vehicles are scarce, a multi-master and multi-slave double-layer game model is proposed, based on task offloading and pricing strategies. The existence of a Nash equilibrium of the game is proven, and a distributed artificial bee colony algorithm is employed to reach the game equilibrium. Our proposed solution addresses these bottlenecks by leveraging a game-theoretic approach for task offloading and resource allocation in mobile edge computing (MEC)-enabled IoV environments. Simulation results demonstrate that the proposed scheme outperforms existing solutions in terms of convergence speed and system utility. Specifically, the total revenue achieved by our scheme surpasses other algorithms by at least 8.98%.
Keywords: edge computing; Internet of Vehicles; resource allocation; game theory; artificial bee colony algorithm
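The paper's model is a two-layer multi-master multi-slave game solved with an artificial bee colony; as a minimal grounding of the Nash-equilibrium idea it rests on, the sketch below runs plain best-response dynamics on a toy offloading congestion game (all costs invented): each vehicle picks local or edge execution, edge delay grows with the number of offloaders, and iteration stops when no vehicle wants to switch.

```python
import numpy as np

local_cost = np.array([4.0, 2.5, 6.0, 3.0, 5.0])   # per-vehicle local delay (toy)

def edge_cost(n_offloaders):
    return 1.0 + 1.2 * n_offloaders                 # congestion: delay grows with load

def best_response_dynamics(local_cost, max_rounds=100):
    offload = np.zeros(len(local_cost), dtype=bool)
    for _ in range(max_rounds):
        changed = False
        for i in range(len(local_cost)):
            others = offload.sum() - offload[i]
            want = edge_cost(others + 1) < local_cost[i]   # i's best response
            if want != offload[i]:
                offload[i], changed = want, True
        if not changed:            # fixed point = pure-strategy Nash equilibrium
            break
    return offload

print(best_response_dynamics(local_cost))   # which vehicles offload at equilibrium
```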
13. Hypergraph Computation
Authors: Yue Gao, Shuyi Ji, Xiangmin Han, Qionghai Dai. Engineering, SCIE EI CAS CSCD, 2024, Issue 9, pp. 188-201.
Practical real-world scenarios such as the Internet, social networks, and biological networks present the challenges of data scarcity and complex correlations, which limit the applications of artificial intelligence. The graph structure is a typical tool used to formulate such correlations, but it is incapable of modeling high-order correlations among different objects in systems; thus, the graph structure cannot fully convey the intricate correlations among objects. Confronted with these two challenges, hypergraph computation models high-order correlations among data, knowledge, and rules through hyperedges and leverages these high-order correlations to enhance the data. Additionally, hypergraph computation achieves collaborative computation using data and high-order correlations, thereby offering greater modeling flexibility. In particular, we introduce three types of hypergraph computation methods: ① hypergraph structure modeling, ② hypergraph semantic computing, and ③ efficient hypergraph computing. We then specify how to adopt hypergraph computation in practice by focusing on specific tasks such as three-dimensional (3D) object recognition, revealing that hypergraph computation can reduce the data requirement by 80% while achieving comparable performance, or improve the performance by 52% given the same data, compared with a traditional data-based method. A comprehensive overview of the applications of hypergraph computation in diverse domains, such as intelligent medicine and computer vision, is also provided. Finally, we introduce an open-source deep learning library, DeepHypergraph (DHG), which can serve as a tool for the practical usage of hypergraph computation.
Keywords: high-order correlation; hypergraph structure modeling; hypergraph semantic computing; efficient hypergraph computing; hypergraph computation framework
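To make "modeling high-order correlations through hyperedges" concrete, the sketch below builds a hypergraph incidence matrix and applies the HGNN-style spectral smoothing X' = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X, a standard construction in the hypergraph learning literature. It is a plain numpy illustration, not code from the paper or from the DHG library.

```python
import numpy as np

# 5 vertices, 3 hyperedges; H[v, e] = 1 if vertex v belongs to hyperedge e
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)
w = np.ones(3)                      # hyperedge weights
Dv_is = np.diag((H @ w) ** -0.5)    # D_v^{-1/2}: weighted vertex degrees
De_i = np.diag(1.0 / H.sum(0))      # D_e^{-1}: hyperedge degrees

# one round of hyperedge-mediated smoothing: vertices -> hyperedges -> vertices
theta = Dv_is @ H @ np.diag(w) @ De_i @ H.T @ Dv_is
X = np.random.default_rng(0).normal(size=(5, 4))    # vertex features
X_new = theta @ X
```

Because every vertex in a hyperedge exchanges information with all of its co-members in one step, a single hyperedge captures a group relation that an ordinary pairwise graph can only approximate with a clique.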
14. Numerical investigation of turbulent mass transfer processes in turbulent fluidized bed by computational mass transfer
Authors: Hailun Ren, Liang Zeng, Wenbin Li, Shuyong Chen, Zhongli Tang, Donghui Zhang. Chinese Journal of Chemical Engineering, SCIE EI CAS CSCD, 2024, Issue 12, pp. 64-74.
Turbulent fluidized beds possess a distinct advantage over bubbling fluidized beds in high solids contact efficiency and thus exert great potential in applications to many industrial processes. Simulation of the fluidization of fluid catalytic cracking (FCC) particles and the catalytic reaction of ozone decomposition in a turbulent fluidized bed is conducted using the Eulerian–Eulerian approach, where the recently developed two-equation turbulent (TET) model is introduced to describe turbulent mass diffusion. The energy minimization multi-scale (EMMS) drag model and the kinetic theory of granular flow (KTGF) are adopted to describe gas–particle interaction and particle–particle interaction, respectively. The TET model features a rigorous closure for the turbulent mass transfer equations and thus enables more reliable simulation. With this model, distributions of ozone concentration and gas–particle two-phase velocity as well as volume fraction are obtained and compared against experimental data. The average absolute relative deviation for the simulated ozone concentration is 9.67%, which confirms the validity of the proposed model. Moreover, it is found that the transition velocity from bubbling fluidization to turbulent fluidization for FCC particles is about 0.5 m·s⁻¹, which is consistent with experimental observation.
Keywords: turbulent fluidized bed; simulation; computational mass transfer; turbulence; computational fluid dynamics
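The backbone of computational mass transfer is the Reynolds-averaged species transport equation. In generic single-phase notation (the paper's Eulerian–Eulerian two-phase form additionally carries phase fractions and interphase exchange terms, omitted here):

```latex
\frac{\partial \bar{c}}{\partial t} + \bar{\mathbf{u}}\cdot\nabla\bar{c}
\;=\; \nabla\cdot\!\big[(D + D_t)\,\nabla\bar{c}\big] + S_c ,
```

where c̄ is the mean concentration, D the molecular diffusivity, and S_c the reaction source. The distinguishing feature of the two-equation (TET) closure is that the turbulent diffusivity D_t is not fixed by an assumed turbulent Schmidt number but is computed from two auxiliary transport equations, for the concentration variance and its dissipation rate.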
15. Joint computation offloading and parallel scheduling to maximize delay-guarantee in cooperative MEC systems
Authors: Mian Guo, Mithun Mukherjee, Jaime Lloret, Lei Li, Quansheng Guan, Fei Ji. Digital Communications and Networks, SCIE CSCD, 2024, Issue 3, pp. 693-705.
The growing development of the Internet of Things (IoT) is accelerating the emergence and growth of new IoT services and applications, which will result in massive amounts of data being generated, transmitted and processed in wireless communication networks. Mobile Edge Computing (MEC) is a desired paradigm for timely processing of IoT data for value maximization. In MEC, a number of computing-capable devices are deployed at the network edge near data sources to support edge computing, such that the long network transmission delay of the cloud computing paradigm can be avoided. Since an edge device might not always have sufficient resources to process the massive amount of data, computation offloading is significantly important considering the cooperation among edge devices. However, the dynamic traffic characteristics and heterogeneous computing capabilities of edge devices challenge the offloading. In addition, different scheduling schemes might provide different computation delays to the offloaded tasks. Thus, offloading in mobile nodes and scheduling in the MEC server are coupled in determining service delay. This paper seeks to guarantee low delay for computation-intensive applications by jointly optimizing the offloading and scheduling in such an MEC system. We propose a Delay-Greedy Computation Offloading (DGCO) algorithm to make offloading decisions for new tasks in distributed computing-enabled mobile devices. A Reinforcement Learning-based Parallel Scheduling (RLPS) algorithm is further designed to schedule offloaded tasks in the multi-core MEC server. With an offloading delay broadcast mechanism, the DGCO and RLPS cooperate to achieve the goal of delay-guarantee-ratio maximization. Finally, the simulation results show that our proposal can bound the end-to-end delay of various tasks. Even under slightly heavy task load, the delay-guarantee-ratio given by DGCO-RLPS can still approximate 95%, while that given by benchmarked algorithms is reduced to an intolerable value. The simulation results demonstrate the effectiveness of DGCO-RLPS for delay guarantee in MEC.
Keywords: edge computing; computation offloading; parallel scheduling; mobile-edge cooperation; delay guarantee
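The delay-greedy idea can be grounded in a few lines. The sketch below shows only the greedy rule with invented queue and rate estimates; it is not the DGCO/RLPS implementation. Each arriving task is sent to whichever executor, local CPU or a cooperating edge server, currently promises the earliest estimated completion.

```python
def greedy_offload(task_cycles, task_bits, local_rate, servers):
    """Pick the executor with the minimum estimated completion delay.
    servers: list of dicts with 'queue' (seconds of backlog),
    'cpu' (cycles/s), 'bw' (bits/s uplink). All parameters illustrative."""
    best = ("local", task_cycles / local_rate)
    for k, s in enumerate(servers):
        d = task_bits / s["bw"] + s["queue"] + task_cycles / s["cpu"]
        if d < best[1]:
            best = (k, d)
    if best[0] != "local":                   # book the chosen server's queue time
        servers[best[0]]["queue"] += task_cycles / servers[best[0]]["cpu"]
    return best

servers = [{"queue": 0.0, "cpu": 8e9, "bw": 20e6},
           {"queue": 0.5, "cpu": 16e9, "bw": 10e6}]
print(greedy_offload(4e9, 2e6, 1e9, servers))   # -> executor id and delay estimate
```

The queue-booking step is what the paper's delay broadcast mechanism maintains across devices, so that later decisions see the load created by earlier ones.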
16. Secure and Efficient Outsourced Computation in Cloud Computing Environments
Authors: Varun Dixit, Davinderjit Kaur. Journal of Software Engineering and Applications, 2024, Issue 9, pp. 750-762.
Secure and efficient outsourced computation in cloud computing environments is crucial for ensuring data confidentiality, integrity, and resource optimization. In this research, we propose novel algorithms and methodologies to address these challenges. Through a series of experiments, we evaluate the performance, security, and efficiency of the proposed algorithms in real-world cloud environments. Our results demonstrate the effectiveness of homomorphic encryption-based secure computation, secure multiparty computation, and trusted execution environment-based approaches in mitigating security threats while ensuring efficient resource utilization. Specifically, our homomorphic encryption-based algorithm exhibits encryption times ranging from 20 to 1000 milliseconds and decryption times ranging from 25 to 1250 milliseconds for payload sizes varying from 100 KB to 5000 KB. Furthermore, our comparative analysis against state-of-the-art solutions reveals the strengths of our proposed algorithms in terms of security guarantees, encryption overhead, and communication latency.
Keywords: secure computation; cloud computing; homomorphic encryption; secure multiparty computation; resource optimization
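The additive property that makes homomorphic outsourcing work is easy to demonstrate. The snippet below uses the open-source python-paillier (`phe`) package as one concrete stand-in for the paper's scheme; the key size and values are illustrative, and this is not the authors' implementation.

```python
from phe import paillier  # pip install phe

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

a, b = 1234, 5678
ca, cb = public_key.encrypt(a), public_key.encrypt(b)

# the cloud can add ciphertexts and scale by plaintext constants
# without ever seeing a or b
c_sum = ca + cb        # Enc(a + b)
c_scaled = ca * 3      # Enc(3a)

assert private_key.decrypt(c_sum) == a + b
assert private_key.decrypt(c_scaled) == 3 * a
```

The measured millisecond-scale encryption and decryption times quoted in the abstract reflect exactly these public-key operations, which dominate the cost of additively homomorphic outsourcing.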
17. EG-STC: An Efficient Secure Two-Party Computation Scheme Based on Embedded GPU for Artificial Intelligence Systems
Authors: Zhenjiang Dong, Xin Ge, Yuehua Huang, Jiankuo Dong, Jiang Xu. Computers, Materials & Continua, SCIE EI, 2024, Issue 6, pp. 4021-4044.
This paper presents a comprehensive exploration into the integration of Internet of Things (IoT), big data analysis, cloud computing, and Artificial Intelligence (AI), which has led to an unprecedented era of connectivity. We delve into the emerging trend of machine learning on embedded devices, enabling tasks in resource-limited environments. However, the widespread adoption of machine learning raises significant privacy concerns, necessitating the development of privacy-preserving techniques. One such technique, secure multi-party computation (MPC), allows collaborative computations without exposing private inputs. Despite its potential, complex protocols and communication interactions hinder performance, especially on resource-constrained devices. Efforts to enhance efficiency have been made, but scalability remains a challenge. Given the success of GPUs in deep learning, leveraging embedded GPUs, such as those offered by NVIDIA, emerges as a promising solution. Therefore, we propose an Embedded GPU-based Secure Two-party Computation (EG-STC) framework for Artificial Intelligence (AI) systems. To the best of our knowledge, this work represents the first endeavor to fully implement machine learning model training based on secure two-party computing on the embedded GPU platform. Our experimental results demonstrate the effectiveness of EG-STC. On an embedded GPU with a power draw of 5 W, our implementation achieved a secure two-party matrix multiplication throughput of 5881.5 kilo-operations per millisecond (kops/ms), with an energy efficiency ratio of 1176.3 kops/ms/W. Furthermore, leveraging our EG-STC framework, we achieved an overall time acceleration ratio of 5-6 times compared to solutions running on server-grade CPUs. Our solution also exhibited a reduced runtime, requiring only 60% to 70% of the runtime of previously best-known methods on the same platform. In summary, our research contributes to the advancement of secure and efficient machine learning implementations on resource-constrained embedded devices, paving the way for broader adoption of AI technologies in various applications.
Keywords: secure two-party computation; embedded GPU acceleration; privacy-preserving machine learning; edge computing
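Secure two-party matrix multiplication of the kind EG-STC accelerates is typically built from additive secret sharing plus Beaver triples. The numpy sketch below simulates both parties and a trusted dealer in one process, with a tiny modulus for readability; a real deployment works in a 64-bit ring with GPU kernels and actual network communication, and this is my generic reconstruction, not the paper's code.

```python
import numpy as np
rng = np.random.default_rng(0)
Q = 97  # toy prime modulus; real systems use a 2^64 ring on the GPU

def share(x):                       # additive 2-out-of-2 secret sharing
    r = rng.integers(0, Q, x.shape)
    return r, (x - r) % Q

def beaver_triple(n, m, k):         # trusted dealer (simulated)
    A = rng.integers(0, Q, (n, m)); B = rng.integers(0, Q, (m, k))
    return share(A), share(B), share((A @ B) % Q)

def secure_matmul(X_sh, Y_sh):
    (X0, X1), (Y0, Y1) = X_sh, Y_sh
    (A0, A1), (B0, B1), (C0, C1) = beaver_triple(X0.shape[0], X0.shape[1], Y0.shape[1])
    E = (X0 - A0 + X1 - A1) % Q     # opened masked value E = X - A
    F = (Y0 - B0 + Y1 - B1) % Q     # opened masked value F = Y - B
    Z0 = (C0 + E @ B0 + A0 @ F + E @ F) % Q   # party 0 adds the public E@F term
    Z1 = (C1 + E @ B1 + A1 @ F) % Q
    return Z0, Z1                   # Z0 + Z1 = X @ Y (mod Q)

X = rng.integers(0, Q, (2, 3)); Y = rng.integers(0, Q, (3, 2))
Z0, Z1 = secure_matmul(share(X), share(Y))
assert np.array_equal((Z0 + Z1) % Q, (X @ Y) % Q)
```

Since the opened values E and F are one-time-pad masked, neither party learns the other's input; the heavy lifting is ordinary modular matrix arithmetic, which is why an embedded GPU gives the throughput gains the abstract reports.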
18. Secure Computation Efficiency Resource Allocation for Massive MIMO-Enabled Mobile Edge Computing Networks
Authors: Sun Gangcan, Sun Jiwei, Hao Wanming, Zhu Zhengyu, Ji Xiang, Zhou Yiqing. China Communications, SCIE CSCD, 2024, Issue 11, pp. 150-162.
In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate based on the mMIMO under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the transmit power of the users and the base station. Because the formulated problem is difficult to solve directly, we first transform the fractional objective function into subtractive form via the Dinkelbach method. Next, the original problem is transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solutions. Finally, simulations are conducted to show that the performance of the proposed schemes is superior to that of the other schemes.
Keywords: eavesdropping; massive multiple-input multiple-output; mobile edge computing; partial offloading; secure computation efficiency
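The Dinkelbach transform the authors use converts a ratio objective N(x)/D(x) into a sequence of subtractive problems N(x) − λD(x). The sketch below shows the bare iteration on a toy secrecy-bits-per-Joule ratio; the channel gains and circuit power are invented for illustration, and the inner maximization is a simple grid search rather than the paper's convex-approximation step.

```python
import numpy as np

def dinkelbach(N, D, xs, tol=1e-9, iters=50):
    """Maximize N(x)/D(x) over the candidate set xs (requires D > 0)."""
    lam, x = 0.0, xs[0]
    for _ in range(iters):
        v = N(xs) - lam * D(xs)          # subtractive subproblem
        x = xs[int(np.argmax(v))]
        if v.max() < tol:                # F(lambda) = 0 at the optimum
            break
        lam = N(x) / D(x)                # update the ratio estimate
    return x, lam

# toy SCE-style ratio: secrecy rate / total power, over transmit powers
g, g_e, p_c = 8.0, 1.5, 0.2              # main/eavesdropper gains, circuit power
N = lambda p: np.log2(1 + g * p) - np.log2(1 + g_e * p)   # secrecy rate
D = lambda p: p + p_c                                      # power consumed
p_opt, sce = dinkelbach(N, D, np.linspace(1e-3, 2.0, 2000))
print(p_opt, sce)
```

Each outer iteration raises λ monotonically to the optimal ratio, which is why the method pairs naturally with an inner convex solver as in the paper.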
19. Computation Rate Maximization for Wireless-Powered and Multiple-User MEC System with Buffer Queue
Authors: ABDUL Rauf, ZHAO Ping. Journal of Donghua University (English Edition), CAS, 2024, Issue 6, pp. 689-701.
Mobile edge computing (MEC) has a vital role in various delay-sensitive applications. With the increasing popularity of low-computing-capability Internet of Things (IoT) devices in Industry 4.0 technology, MEC also facilitates wireless power transfer, enhancing efficiency and sustainability for these devices. The most closely related studies on the computation rate in MEC are based on the coordinate descent method, the alternating direction method of multipliers (ADMM) and Lyapunov optimization; nevertheless, these studies do not consider the buffer queue size. This work concerns computation rate maximization for wireless-powered and multiple-user MEC systems, specifically focusing on the computation rate of end devices and managing the task buffer queue before computation at the terminal devices. A deep reinforcement learning (RL)-based task offloading algorithm is proposed to maximize the computation rate of end devices and minimize the buffer queue size at the terminal devices. Precisely, the task offloading problem is formalized considering the channel gain, the buffer queue size and wireless power transfer. The mode selection for task offloading is based on the individual channel gain, the buffer queue size and wireless power transfer maximization in a particular time slot. The central idea of this work is to explore the optimal mode selection for IoT devices connected to the MEC system. The proposed algorithm optimizes computation delay by maximizing the computation rate of end devices and minimizing the buffer queue size before computation at the terminal devices. The study then presents a deep RL-based task offloading algorithm to solve this mixed-integer, non-convex optimization problem, aiming for a better trade-off between the buffer queue size and the computation rate. Extensive simulation results reveal that the presented algorithm is much more efficient than existing work at maintaining a small buffer queue for terminal devices while simultaneously achieving a high computation rate.
Keywords: computation rate; mobile edge computing (MEC); buffer queue; non-convex optimization; deep reinforcement learning
20. From the perspective of experimental practice: High-throughput computational screening in photocatalysis
Authors: Yunxuan Zhao, Junyu Gao, Xuanang Bian, Han Tang, Tierui Zhang. Green Energy & Environment, SCIE EI CAS CSCD, 2024, Issue 1, pp. 1-6.
Photocatalysis, a critical strategy for harvesting sunlight to address energy demand and environmental concerns, is underpinned by the discovery of high-performance photocatalysts; thus, how to design photocatalysts is now generating widespread interest in boosting the conversion efficiency of solar energy. In the past decade, computational technologies and theoretical simulations have led to a major leap in the development of high-throughput computational screening strategies for novel high-efficiency photocatalysts. In this viewpoint, we start by introducing the challenges of photocatalysis from the perspective of experimental practice, especially the inefficiency of the traditional "trial and error" method. Subsequently, a cross-sectional comparison between experimental and high-throughput computational screening for photocatalysis is presented and discussed in detail. On the basis of the current experimental progress in photocatalysis, we also exemplify the various challenges associated with high-throughput computational screening strategies. Finally, we offer a preferred high-throughput computational screening procedure for photocatalysts from an experimental practice perspective (model construction and screening, standardized experiments, assessment and revision), with the aim of better correlating high-throughput simulations with experimental practices and motivating the search for better descriptors.
Keywords: photocatalysis; high-throughput computational screening; photocatalyst; theoretical simulations; experiments