The purpose of this review is to explore the intersection of computational engineering and biomedical science, highlighting the transformative potential this convergence holds for innovation in healthcare and medical research. The review covers key topics such as computational modelling, bioinformatics, machine learning in medical diagnostics, and the integration of wearable technology for real-time health monitoring. Major findings indicate that computational models have significantly enhanced the understanding of complex biological systems, while machine learning algorithms have improved the accuracy of disease prediction and diagnosis. The synergy between bioinformatics and computational techniques has led to breakthroughs in personalized medicine, enabling more precise treatment strategies. Additionally, the integration of wearable devices with advanced computational methods has opened new avenues for continuous health monitoring and early disease detection. The review emphasizes the need for interdisciplinary collaboration to further advance this field. Future research should focus on developing more robust and scalable computational models, enhancing data integration techniques, and addressing ethical considerations related to data privacy and security. By fostering innovation at the intersection of these disciplines, the potential to revolutionize healthcare delivery and outcomes becomes increasingly attainable.
We present a comprehensive mathematical framework establishing the foundations of holographic quantum computing, a novel paradigm that leverages holographic phenomena to achieve superior error correction and algorithmic efficiency. We rigorously demonstrate that quantum information can be encoded and processed using holographic principles, establishing fundamental theorems characterizing the error-correcting properties of holographic codes. We develop a complete set of universal quantum gates with explicit constructions and prove exponential speedups for specific classes of computational problems. Our framework demonstrates that holographic quantum codes achieve a code rate scaling as O(1/log n), superior to traditional quantum LDPC codes, while providing inherent protection against errors via geometric properties of the code structures. We prove a threshold theorem establishing that arbitrary quantum computations can be performed reliably when physical error rates fall below a constant threshold. Notably, our analysis suggests certain algorithms, including those involving high-dimensional state spaces and long-range interactions, achieve exponential speedups over both classical and conventional quantum approaches. This work establishes the theoretical foundations for a new approach to quantum computation that provides natural fault tolerance and scalability, directly addressing longstanding challenges of the field.
Low Earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form an LEO satellite edge computing system, providing computing services for ground users worldwide. In this paper, the computation offloading problem and the resource allocation problem are formulated as a mixed integer nonlinear program (MINLP). A computation offloading algorithm based on deep deterministic policy gradient (DDPG) is proposed to obtain the user offloading decisions and user uplink transmission power, and a convex optimization algorithm based on the Lagrange multiplier method is used to obtain the optimal MEC server resource allocation scheme. In addition, the expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm converges well and significantly reduces the system utility value compared with other algorithms, at a considerable time cost.
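To make the Lagrange-multiplier step concrete, the sketch below solves a simplified stand-in for the server-side allocation: it assumes the MEC server splits a total CPU frequency budget F across offloaded tasks so as to minimize a weighted sum of processing delays Σᵢ wᵢcᵢ/fᵢ, for which the KKT conditions give the closed form fᵢ ∝ √(wᵢcᵢ). The function name, the objective, and the toy numbers are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def allocate_cpu(cycles, weights, F_total):
    """Split server CPU frequency F_total across offloaded tasks.

    Minimizes sum_i w_i * c_i / f_i subject to sum_i f_i <= F_total.
    Stationarity of the Lagrangian gives f_i = sqrt(w_i * c_i / lam),
    and the budget constraint fixes lam, yielding the closed form below.
    """
    a = np.sqrt(np.asarray(weights, dtype=float) * np.asarray(cycles, dtype=float))
    return F_total * a / a.sum()

# toy example: three offloaded tasks (cycles in Gcycles), equal priority weights
f = allocate_cpu(cycles=[1.0, 2.0, 0.5], weights=[1, 1, 1], F_total=10.0)
print(f, f.sum())  # allocations are proportional to sqrt(c_i) and sum to the 10 GHz budget
```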
Low Earth orbit (LEO) satellite networks have the advantages of low transmission delay and low deployment cost, playing an important role in providing reliable services to ground users. This paper studies an efficient inter-satellite cooperative computation offloading (ICCO) algorithm for LEO satellite networks. Specifically, an ICCO system model is constructed, which uses neighboring satellites in the LEO satellite network to collaboratively process tasks generated by ground user terminals, effectively improving resource utilization efficiency. Additionally, the optimization objective of minimizing the system task computation offloading delay and energy consumption is established and decoupled into two sub-problems. For computational resource allocation, the convexity of the problem is proved through theoretical derivation, and the Lagrange multiplier method is used to obtain the optimal allocation of computational resources. For the task offloading decision, a dynamic sticky binary particle swarm optimization algorithm is designed to obtain the offloading decision iteratively. Simulation results show that the ICCO algorithm can effectively reduce the delay and energy consumption.
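The offloading-decision step can be illustrated with a plain sigmoid-mapped binary particle swarm optimizer; the paper's dynamic sticky variant modifies how bits are updated and is not reproduced here. The cost function, congestion penalty, and parameters below are hypothetical placeholders for the delay-plus-energy objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def offload_cost(x):
    """Hypothetical objective: total cost of a 0/1 offloading vector x (1 = offload)."""
    local, remote = 5.0, 2.0          # toy per-task costs
    congestion = 0.3 * x.sum() ** 2   # penalty when many tasks share the neighboring satellite
    return ((1 - x) * local + x * remote).sum() + congestion

def binary_pso(n_tasks=8, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    v = rng.normal(size=(n_particles, n_tasks))
    x = (rng.random((n_particles, n_tasks)) < 0.5).astype(int)
    pbest, pbest_f = x.copy(), np.array([offload_cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(v.shape), rng.random(v.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = (rng.random(v.shape) < 1 / (1 + np.exp(-v))).astype(int)  # sigmoid bit mapping
        f = np.array([offload_cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

print(binary_pso())  # best offloading vector and its cost
```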
Recently, the Fog-Radio Access Network (F-RAN) has gained considerable attention because of its flexible architecture that allows rapid response to user requirements. In this paper, computational offloading in F-RAN is considered, where multiple User Equipments (UEs) offload their computational tasks to the F-RAN through fog nodes. Each UE can select one of the fog nodes to offload its task, and each fog node may serve multiple UEs. The tasks are computed by the fog nodes or further offloaded to the cloud via a capacity-limited fronthaul link. In order to compute all UEs' tasks quickly, joint optimization of the UE-Fog association and the radio and computation resources of the F-RAN is proposed to minimize the maximum latency over all UEs. This min-max problem is formulated as a Mixed Integer Nonlinear Program (MINP). To tackle it, the MINP is first reformulated as a continuous optimization problem, and then the Majorization Minimization (MM) method is used to find a solution. The MM approach that we develop is unconventional in that each MM subproblem is solved inexactly with the same provable convergence guarantee as exact MM, thereby reducing the complexity of each MM iteration. In addition, a cooperative offloading model is considered, where the fog nodes compress-and-forward their received signals to the cloud. Under this model, a similar min-max latency optimization problem is formulated and tackled by the inexact MM. Simulation results show that the proposed algorithms outperform several offloading strategies, and that cooperative offloading exploits transmission diversity better than noncooperative offloading to achieve better latency performance.
Over-the-air computation (AirComp) enables federated learning (FL) to rapidly aggregate local models at the central server by exploiting the waveform superposition property of the wireless channel. In this paper, a robust transmission scheme for an AirComp-based FL system with imperfect channel state information (CSI) is proposed. To model CSI uncertainty, an expectation-based error model is utilized. The main objective is to maximize the number of selected devices that meet mean-squared error (MSE) requirements for model broadcast and model aggregation. The problem is formulated as a combinatorial optimization problem and is solved in two steps. First, the priority order of devices is determined by a sparsity-inducing procedure. Then, a feasibility detection scheme is used to select the maximum number of devices while guaranteeing that the MSE requirements are met. An alternating optimization (AO) scheme is used to transform the resulting nonconvex problem into two convex subproblems. Numerical results illustrate the effectiveness and robustness of the proposed scheme.
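The two-step structure (priority order, then feasibility-checked selection) can be sketched as a greedy loop. The snippet below is only a rough stand-in: it ranks devices by channel gain instead of the paper's sparsity-inducing procedure, and it uses a crude uniform-forcing MSE proxy rather than the expectation-based CSI-error feasibility problem.

```python
import numpy as np

def aircomp_mse(selected, h, p_max, sigma2):
    """Crude uniform-forcing MSE proxy for aggregating K selected devices:
    MSE = sigma2 / (K^2 * p_max * min_k |h_k|^2). The paper's actual check solves
    a feasibility problem under an expectation-based CSI error model."""
    h2 = np.abs(h[list(selected)]) ** 2
    K = len(selected)
    return sigma2 / (K ** 2 * p_max * h2.min())

def greedy_select(h, p_max, sigma2, mse_target):
    order = np.argsort(-np.abs(h))       # stand-in priority order: strongest channels first
    selected = []
    for k in order:
        trial = selected + [int(k)]
        if aircomp_mse(trial, h, p_max, sigma2) <= mse_target:
            selected = trial              # keep the device only if the MSE budget still holds
    return selected

h = np.random.default_rng(3).rayleigh(1.0, 12)   # toy channel magnitudes for 12 devices
print(greedy_select(h, p_max=1.0, sigma2=0.1, mse_target=0.05))
```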
This paper develops a comprehensive computational modeling and simulation framework based on Complex Adaptive Systems (CAS) theory to unveil the underlying mechanisms of self-organization, nonlinear evolution, and emergence in social systems. By integrating mathematical models, agent-based modeling, network dynamic analysis, and hybrid modeling approaches, the study applies CAS theory to case studies in economic markets, political decision-making, and social interactions. The experimental results demonstrate that local interactions among individual agents can give rise to complex global phenomena, such as market fluctuations, opinion polarization, and sudden outbreaks of social movements. This framework not only provides a more robust explanation for the nonlinear dynamics and abrupt transitions that traditional models often fail to capture, but also offers valuable decision-support tools for public policy formulation, social governance, and risk management. Emphasizing the importance of interdisciplinary approaches, this work outlines future research directions in high-performance computing, artificial intelligence, and real-time data integration to further advance the theoretical and practical applications of CAS in the social sciences.
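As a minimal illustration of how local interaction rules produce emergent macro patterns such as opinion polarization, the sketch below runs a standard Hegselmann–Krause bounded-confidence model; it is a generic textbook example, not the paper's specific agent-based implementation.

```python
import numpy as np

def bounded_confidence(n=200, eps=0.15, steps=60, seed=1):
    """Hegselmann-Krause model: each agent averages the opinions of neighbours
    within confidence radius eps. Small eps yields several surviving opinion
    clusters (polarization); large eps yields consensus."""
    rng = np.random.default_rng(seed)
    x = rng.random(n)                                   # initial opinions in [0, 1]
    for _ in range(steps):
        dist = np.abs(x[:, None] - x[None, :])
        mask = dist <= eps                              # who listens to whom this step
        x = (mask * x[None, :]).sum(axis=1) / mask.sum(axis=1)
    return np.unique(np.round(x, 3))                    # surviving opinion clusters

print(bounded_confidence(eps=0.1))    # several clusters -> polarization
print(bounded_confidence(eps=0.4))    # a single cluster -> consensus
```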
Food allergy has become a global concern. Spleen tyrosine kinase (SYK) inhibitors are promising therapeutics against allergic disorders. In this study, a total of 300 natural phenolic compounds were first subjected to virtual screening. Sesamin and its metabolites, sesamin monocatechol (SC-1) and sesamin dicatechol (SC-2), were identified as potential SYK inhibitors, showing high binding affinity and inhibition efficiency towards SYK. Compared with R406 (a traditional SYK inhibitor), sesamin, SC-1, and SC-2 had lower binding energy and inhibition constant (Ki) in molecular docking, exhibited higher bioavailability, safety, metabolism/clearance rate, and distribution uniformity in ADMET predictions, and showed high stability in occupying the ATP-binding pocket of SYK during molecular dynamics simulations. In anti-dinitrophenyl-immunoglobulin E (anti-DNP-IgE)/dinitrophenyl-human serum albumin (DNP-HSA)-stimulated rat basophilic leukemia (RBL-2H3) cells, sesamin in the concentration range of 5-80 μmol/L significantly influenced degranulation and cytokine release, with 54.00% inhibition of β-hexosaminidase release and a 58.45% decrease in histamine. In BALB/c mice, sesamin ameliorated anti-DNP-IgE/DNP-HSA-induced passive cutaneous anaphylaxis (PCA) and ovalbumin (OVA)-induced active systemic anaphylaxis (ASA) reactions, reduced the levels of allergic mediators (immunoglobulins and pro-inflammatory cytokines), partially corrected the imbalance of T helper (Th) cell differentiation in the spleen, and inhibited the phosphorylation of SYK and its downstream signaling proteins, including p38 mitogen-activated protein kinase (p38 MAPK), extracellular signal-regulated kinase (ERK), and p65 nuclear factor-κB (p65 NF-κB), in the spleen. Thus, sesamin may be a safe and versatile SYK inhibitor that can alleviate IgE-mediated food allergies.
This paper proposes an innovative approach to social science research based on quantum theory, integrating quantum probability, quantum game theory, and quantum statistical methods into a comprehensive interdisciplinary framework for both theoretical and empirical investigation. The study elaborates on how core quantum concepts such as superposition, interference, and measurement collapse can be applied to model social decision making, cognition, and interactions. Advanced quantum computational methods and algorithms are employed to transition from theoretical model development to simulation and experimental validation. Through case studies in international relations, economic games, and political decision making, the research demonstrates that quantum models possess significant advantages in explaining irrational and context-dependent behaviors that traditional methods often fail to capture. The paper also explores the potential applications of quantum social science in policy formulation and public decision making, addresses the ethical, privacy, and social equity challenges posed by quantum artificial intelligence, and outlines future research directions at the convergence of quantum AI, quantum machine learning, and big data analytics. The findings suggest that quantum social science not only offers a novel perspective for understanding complex social phenomena but also lays the foundation for more accurate and efficient systems in social forecasting and decision support.
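A small numeric example shows the kind of interference effect quantum probability adds to a classical decision model: the total probability of a decision reached through two unresolved belief states picks up a cross term that a classical mixture cannot produce. The amplitudes and relative phase below are made-up illustrative values.

```python
import numpy as np

# Amplitudes for reaching decision D through two intermediate "belief" states B1, B2.
# Classically P(D) = P(B1)P(D|B1) + P(B2)P(D|B2); the quantum model adds an
# interference term 2*Re(a1*conj(a2)) that can push P(D) above or below that sum,
# which is how disjunction/context effects are modelled.
a1 = np.sqrt(0.5) * np.sqrt(0.6) * np.exp(1j * 0.0)   # path through B1
a2 = np.sqrt(0.5) * np.sqrt(0.4) * np.exp(1j * 2.0)   # path through B2, relative phase 2 rad
classical = abs(a1) ** 2 + abs(a2) ** 2
quantum = abs(a1 + a2) ** 2
print(f"classical total probability: {classical:.3f}")
print(f"quantum (with interference): {quantum:.3f}")
print(f"interference term:           {2 * (a1 * np.conj(a2)).real:.3f}")
```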
The rapid evolution of international trade necessitates the adoption of intelligent digital solutions to enhance trade facilitation. The Single Window System (SWS) has emerged as a key mechanism for streamlining trade documentation, customs clearance, and regulatory compliance. However, traditional SWS implementations face challenges such as data fragmentation, inefficient processing, and limited real-time intelligence. This study proposes a computational social science framework that integrates artificial intelligence (AI), machine learning, network analytics, and blockchain to optimize SWS operations. By employing predictive modeling, agent-based simulations, and algorithmic governance, this research demonstrates how computational methodologies improve trade efficiency, enhance regulatory compliance, and reduce transaction costs. Empirical case studies on AI-driven customs clearance, blockchain-enabled trade transparency, and network-based trade policy simulation illustrate the practical applications of these techniques. The study concludes that interdisciplinary collaboration and algorithmic governance are essential for advancing digital trade facilitation, ensuring resilience, transparency, and adaptability in global trade ecosystems.
Powered by advanced information technology, more and more complex systems are exhibiting characteristics of cyber-physical-social systems (CPSS). In this context, the computational experiments method has emerged as a novel approach for the design, analysis, management, control, and integration of CPSS, which can realize the causal analysis of complex systems by means of the “algorithmization” of “counterfactuals”. However, because CPSS involve human and social factors (e.g., autonomy, initiative, and sociality), it is difficult for traditional design of experiments (DOE) methods to achieve a generative explanation of system emergence. To address this challenge, this paper proposes an integrated approach to the design of computational experiments, incorporating three key modules: 1) a descriptive module, which determines the influencing factors and response variables of the system by modeling an artificial society; 2) an interpretative module, which selects a factorial experimental design to identify the relationship between influencing factors and macro phenomena; 3) a predictive module, which builds a meta-model equivalent to the artificial society to explore its operating laws. Finally, a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach, which can reveal the social impact of algorithmic behavior on the “rider race”.
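A minimal sketch of the interpretative and predictive modules, under simplifying assumptions: a two-level full factorial design drives a stand-in simulator, and an ordinary least-squares meta-model approximates the macro response. The factor names and the stand-in response function are hypothetical, not the crowd-sourcing case study's actual variables.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Two-level full factorial design over three hypothetical influencing factors
# of an artificial society (e.g., bonus rate, rider density, demand surge).
levels = [-1, 1]
design = np.array(list(itertools.product(levels, repeat=3)))   # 2^3 = 8 runs

def simulator(x):
    """Stand-in for the agent-based model; returns a macro response, e.g. mean delivery delay."""
    b, d, s = x
    return 10 + 2.0 * b - 1.5 * d + 0.8 * s + 1.2 * b * d + rng.normal(0, 0.1)

y = np.array([simulator(x) for x in design])
X = np.column_stack([np.ones(len(design)), design])            # intercept + main effects
coef, *_ = np.linalg.lstsq(X, y, rcond=None)                   # regression meta-model
print("meta-model coefficients (intercept, bonus, density, surge):", np.round(coef, 2))
```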
With the rapid advancement of Internet of Vehicles (IoV) technology, the demands of real-time navigation, advanced driver-assistance systems (ADAS), vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications, and multimedia entertainment systems have made in-vehicle applications increasingly computing-intensive and delay-sensitive. These applications require significant computing resources, which can overwhelm the limited computing capabilities of vehicle terminals, despite advancements in computing hardware, due to the complexity of tasks, energy consumption, and cost constraints. To address this issue in IoV-based edge computing, particularly in scenarios where the computing resources available in vehicles are scarce, a multi-master, multi-slave double-layer game model based on task offloading and pricing strategies is proposed. The existence of a Nash equilibrium of the game is proven, and a distributed artificial bee colony algorithm is employed to reach the game equilibrium. The proposed solution addresses these bottlenecks by leveraging a game-theoretic approach to task offloading and resource allocation in mobile edge computing (MEC)-enabled IoV environments. Simulation results demonstrate that the proposed scheme outperforms existing solutions in terms of convergence speed and system utility. Specifically, the total revenue achieved by our scheme surpasses the other algorithms by at least 8.98%.
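The sketch below shows a plain (non-distributed) artificial bee colony loop optimizing a hypothetical pricing vector; the revenue function and all parameters are illustrative, and the paper's double-layer Stackelberg interaction between pricing and offloading decisions is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def utility(price):
    """Hypothetical edge-operator revenue for a price vector: demand falls linearly with price."""
    demand = np.clip(10.0 - 2.0 * price, 0.0, None)
    return float((price * demand).sum())

def abc_optimize(dim=4, n_sources=10, iters=200, limit=20, lo=0.0, hi=5.0):
    """Plain artificial bee colony: employed, onlooker, and scout phases."""
    X = rng.uniform(lo, hi, (n_sources, dim))
    fit = np.array([utility(x) for x in X])
    trials = np.zeros(n_sources, dtype=int)
    for _ in range(iters):
        probs = fit / fit.sum() if fit.sum() > 0 else np.full(n_sources, 1 / n_sources)
        for phase in ("employed", "onlooker"):
            for i in range(n_sources):
                s = i if phase == "employed" else rng.choice(n_sources, p=probs)
                k = rng.choice([j for j in range(n_sources) if j != s])
                d = rng.integers(dim)
                cand = X[s].copy()
                cand[d] = np.clip(cand[d] + rng.uniform(-1, 1) * (X[s, d] - X[k, d]), lo, hi)
                f = utility(cand)
                if f > fit[s]:
                    X[s], fit[s], trials[s] = cand, f, 0
                else:
                    trials[s] += 1
        worn = trials > limit                     # scout phase: abandon exhausted sources
        X[worn] = rng.uniform(lo, hi, (int(worn.sum()), dim))
        fit[worn] = [utility(x) for x in X[worn]]
        trials[worn] = 0
    best = fit.argmax()
    return X[best], fit[best]

print(abc_optimize())   # best price vector found and its revenue
```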
Practical real-world scenarios such as the Internet, social networks, and biological networks present the challenges of data scarcity and complex correlations, which limit the applications of artificial intelligence. The graph structure is a typical tool used to formulate such correlations, but it is incapable of modeling high-order correlations among different objects in a system; thus, the graph structure cannot fully convey the intricate correlations among objects. Confronted with these two challenges, hypergraph computation models high-order correlations among data, knowledge, and rules through hyperedges and leverages these high-order correlations to enhance the data. Additionally, hypergraph computation achieves collaborative computation using data and high-order correlations, thereby offering greater modeling flexibility. In particular, we introduce three types of hypergraph computation methods: ① hypergraph structure modeling, ② hypergraph semantic computing, and ③ efficient hypergraph computing. We then specify how to adopt hypergraph computation in practice by focusing on specific tasks such as three-dimensional (3D) object recognition, revealing that hypergraph computation can reduce the data requirement by 80% while achieving comparable performance, or improve performance by 52% given the same data, compared with a traditional data-based method. A comprehensive overview of the applications of hypergraph computation in diverse domains, such as intelligent medicine and computer vision, is also provided. Finally, we introduce an open-source deep learning library, DeepHypergraph (DHG), which can serve as a tool for the practical use of hypergraph computation.
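A minimal numeric sketch of hypergraph structure modeling and one smoothing step: it builds the node-hyperedge incidence matrix H and applies the commonly used normalized operator Θ = Dv^{-1/2} H W De^{-1} Hᵀ Dv^{-1/2} to toy node features. This is a generic hypergraph-learning construction written in plain NumPy, not a call into DHG's API.

```python
import numpy as np

# Incidence matrix H (|V| x |E|): each hyperedge can join any number of nodes,
# which is how high-order (beyond pairwise) correlations are encoded.
H = np.array([[1, 0],        # node 0 belongs to hyperedge e0
              [1, 1],        # node 1 belongs to e0 and e1
              [1, 1],        # node 2 belongs to e0 and e1
              [0, 1]],       # node 3 belongs to e1
             dtype=float)
W = np.diag([1.0, 1.0])                        # hyperedge weights
Dv = np.diag(H @ np.diag(W))                   # weighted node degrees
De = np.diag(H.sum(axis=0))                    # hyperedge degrees
Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(Dv)))
Theta = Dv_inv_sqrt @ H @ W @ np.linalg.inv(De) @ H.T @ Dv_inv_sqrt

X = np.array([[1.0], [0.0], [0.0], [2.0]])     # toy 1-d node features
print(Theta @ X)                               # one step of hypergraph smoothing/propagation
```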
Turbulent fluidized beds possess a distinct advantage over bubbling fluidized beds in terms of high solids contact efficiency and thus have great potential for application in many industrial processes. Simulation of the fluidization of fluid catalytic cracking (FCC) particles and the catalytic reaction of ozone decomposition in a turbulent fluidized bed is conducted using the Eulerian–Eulerian approach, where the recently developed two-equation turbulent (TET) model is introduced to describe turbulent mass diffusion. The energy minimization multi-scale (EMMS) drag model and the kinetic theory of granular flow (KTGF) are adopted to describe the gas–particle interaction and the particle–particle interaction, respectively. The TET model features a rigorous closure for the turbulent mass transfer equations and thus enables more reliable simulation. With this model, distributions of ozone concentration, gas–particle two-phase velocity, and volume fraction are obtained and compared against experimental data. The average absolute relative deviation of the simulated ozone concentration is 9.67%, which confirms the validity of the proposed model. Moreover, it is found that the transition velocity from bubbling fluidization to turbulent fluidization for FCC particles is about 0.5 m·s⁻¹, which is consistent with experimental observation.
The growing development of the Internet of Things (IoT) is accelerating the emergence and growth of new IoT services and applications, which will result in massive amounts of data being generated, transmitted, and processed in wireless communication networks. Mobile Edge Computing (MEC) is a desirable paradigm for timely processing of the data from IoT for value maximization. In MEC, a number of computing-capable devices are deployed at the network edge near the data sources to support edge computing, so that the long network transmission delay of the cloud computing paradigm can be avoided. Since an edge device might not always have sufficient resources to process the massive amount of data, computation offloading with cooperation among edge devices is significantly important. However, the dynamic traffic characteristics and heterogeneous computing capabilities of edge devices challenge the offloading. In addition, different scheduling schemes may yield different computation delays for the offloaded tasks. Thus, offloading at the mobile nodes and scheduling at the MEC server are coupled in determining the service delay. This paper seeks to guarantee low delay for computation-intensive applications by jointly optimizing the offloading and scheduling in such an MEC system. We propose a Delay-Greedy Computation Offloading (DGCO) algorithm to make offloading decisions for new tasks in distributed computing-enabled mobile devices. A Reinforcement Learning-based Parallel Scheduling (RLPS) algorithm is further designed to schedule offloaded tasks in the multi-core MEC server. With an offloading-delay broadcast mechanism, DGCO and RLPS cooperate to achieve the goal of delay-guarantee-ratio maximization. Finally, simulation results show that our proposal can bound the end-to-end delay of various tasks. Even under a slightly heavy task load, the delay-guarantee-ratio given by DGCO-RLPS can still approach 95%, while that given by benchmark algorithms drops to an intolerable value. The simulation results demonstrate the effectiveness of DGCO-RLPS for delay guarantees in MEC.
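The delay-greedy decision rule can be sketched as a per-task comparison of estimated local and offloading delays, assuming the MEC server broadcasts its current delay estimate. The delay models, function name, and numbers below are illustrative stand-ins, not the DGCO formulation.

```python
def delay_greedy_decision(task_bits, task_cycles, f_local, rate_up, server_delay_estimate):
    """Pick local execution or offloading for one new task by comparing estimated delays."""
    local_delay = task_cycles / f_local                            # on-device processing time
    offload_delay = task_bits / rate_up + server_delay_estimate    # uplink + broadcast server estimate
    return ("offload", offload_delay) if offload_delay < local_delay else ("local", local_delay)

# example: a 2 Mbit task needing 1 Gcycles, 1 GHz local CPU, 10 Mbit/s uplink,
# and a server currently advertising a 0.4 s scheduling-plus-processing estimate
print(delay_greedy_decision(2e6, 1e9, 1e9, 10e6, 0.4))   # -> ('offload', 0.6)
```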
Secure and efficient outsourced computation in cloud computing environments is crucial for ensuring data confidentiality, integrity, and resource optimization. In this research, we propose novel algorithms and methodologies to address these challenges. Through a series of experiments, we evaluate the performance, security, and efficiency of the proposed algorithms in real-world cloud environments. Our results demonstrate the effectiveness of homomorphic encryption-based secure computation, secure multiparty computation, and trusted execution environment-based approaches in mitigating security threats while ensuring efficient resource utilization. Specifically, our homomorphic encryption-based algorithm exhibits encryption times ranging from 20 to 1000 milliseconds and decryption times ranging from 25 to 1250 milliseconds for payload sizes varying from 100 KB to 5000 KB. Furthermore, our comparative analysis against state-of-the-art solutions reveals the strengths of our proposed algorithms in terms of security guarantees, encryption overhead, and communication latency.
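As a hedged illustration of the homomorphic-encryption timing methodology (not the paper's scheme, parameters, or measurements), the snippet below implements textbook Paillier with deliberately tiny demo primes, times the encryption of two messages, and checks the additive homomorphism that makes outsourced computation on ciphertexts possible.

```python
import time, random
from math import gcd

# Textbook Paillier (additively homomorphic) with small known primes, for demonstration only;
# real deployments use >= 1024-bit primes, so absolute timings here are not comparable to the paper's.
p, q = 104729, 1299709                              # the 10,000th and 100,000th primes
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)        # lcm(p-1, q-1)
g = n + 1
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)         # inverse of L(g^lam mod n^2), L(x) = (x-1)//n

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

m1, m2 = 1234, 5678
t0 = time.perf_counter(); c1, c2 = encrypt(m1), encrypt(m2); t1 = time.perf_counter()
assert decrypt(c1 * c2 % n2) == m1 + m2             # homomorphic addition under encryption
print(f"encrypted two messages in {(t1 - t0) * 1e3:.3f} ms")
```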
This paper presents a comprehensive exploration of the integration of the Internet of Things (IoT), big data analysis, cloud computing, and Artificial Intelligence (AI), which has led to an unprecedented era of connectivity. We delve into the emerging trend of machine learning on embedded devices, enabling tasks in resource-limited environments. However, the widespread adoption of machine learning raises significant privacy concerns, necessitating the development of privacy-preserving techniques. One such technique, secure multi-party computation (MPC), allows collaborative computations without exposing private inputs. Despite its potential, complex protocols and communication interactions hinder performance, especially on resource-constrained devices. Efforts to enhance efficiency have been made, but scalability remains a challenge. Given the success of GPUs in deep learning, leveraging embedded GPUs, such as those offered by NVIDIA, emerges as a promising solution. Therefore, we propose an Embedded GPU-based Secure Two-party Computation (EG-STC) framework for Artificial Intelligence (AI) systems. To the best of our knowledge, this work represents the first endeavor to fully implement machine learning model training based on secure two-party computation on an embedded GPU platform. Our experimental results demonstrate the effectiveness of EG-STC. On an embedded GPU with a power draw of 5 W, our implementation achieved a secure two-party matrix multiplication throughput of 5881.5 kilo-operations per millisecond (kops/ms), with an energy efficiency ratio of 1176.3 kops/ms/W. Furthermore, leveraging our EG-STC framework, we achieved an overall time acceleration ratio of 5–6 times compared to solutions running on server-grade CPUs. Our solution also exhibited a reduced runtime, requiring only 60% to 70% of the runtime of the previously best-known methods on the same platform. In summary, our research contributes to the advancement of secure and efficient machine learning implementations on resource-constrained embedded devices, paving the way for broader adoption of AI technologies in various applications.
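A minimal CPU sketch of the kind of secret-shared matrix multiplication that secure two-party computation frameworks accelerate: additive shares over the 2⁶⁴ ring combined with a dealer-provided Beaver triple. This is a generic textbook construction in NumPy, not the EG-STC GPU implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def share(x):
    """Split x into two additive shares over Z_{2^64} (uint64 arithmetic wraps around)."""
    r = rng.integers(0, 2**63, size=x.shape, dtype=np.uint64)
    return r, x - r

def shared_matmul(x0, x1, y0, y1, a, b):
    """Two-party product of secret-shared X and Y using a Beaver triple (a, b, c = a @ b)."""
    c = a @ b
    a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)
    e = (x0 - a0) + (x1 - a1)           # parties open e = X - A (reveals nothing about X)
    f = (y0 - b0) + (y1 - b1)           # and f = Y - B
    z0 = c0 + e @ b0 + a0 @ f + e @ f   # party 0 also adds the public term e @ f
    z1 = c1 + e @ b1 + a1 @ f
    return z0, z1

n = 4
x = rng.integers(0, 100, (n, n), dtype=np.uint64)
y = rng.integers(0, 100, (n, n), dtype=np.uint64)
a = rng.integers(0, 2**63, (n, n), dtype=np.uint64)   # dealer's random triple inputs
b = rng.integers(0, 2**63, (n, n), dtype=np.uint64)
x0, x1 = share(x); y0, y1 = share(y)
z0, z1 = shared_matmul(x0, x1, y0, y1, a, b)
assert np.array_equal(z0 + z1, x @ y)                 # reconstruction equals the plaintext product
print("secret-shared matmul verified")
```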
In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate based on the mMIMO under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the transmit power of the users and the base station. Since it is difficult to solve the formulated problem directly, we first transform the fractional objective function into a subtractive form via the Dinkelbach method. Next, the original problem is transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solutions. Finally, simulations are conducted to show that the performance of the proposed schemes is superior to that of the other schemes.
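The Dinkelbach step can be illustrated on a one-dimensional toy fractional program: the ratio objective is replaced by the parametric subtractive form f(x) − λg(x), and λ is updated to f(x*)/g(x*) until the subtractive optimum reaches zero. The functions f and g below are made-up stand-ins for the secure-bits and energy terms, and the simple bounded solver replaces the paper's SCA-convexified multi-variable subproblem.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Dinkelbach's method for max_x f(x)/g(x) with g > 0:
# repeatedly solve max_x f(x) - lam*g(x), then set lam = f(x*)/g(x*).
f = lambda x: np.log2(1 + 4 * x)       # toy "secure bits" term (concave, increasing)
g = lambda x: 0.5 + x ** 2             # toy "energy" term (positive)

lam, x = 0.0, 1.0
for _ in range(30):
    res = minimize_scalar(lambda t: -(f(t) - lam * g(t)), bounds=(0.0, 5.0), method="bounded")
    x = res.x
    if abs(f(x) - lam * g(x)) < 1e-9:   # subtractive optimum ~ 0  =>  lam is the optimal ratio
        break
    lam = f(x) / g(x)

print(f"optimal ratio f/g ~= {lam:.4f} at x ~= {x:.4f}")
```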
Mobile edge computing (MEC) has a vital role in various delay-sensitive applications. With the increasing popularity of low-computing-capability Internet of Things (IoT) devices in Industry 4.0 technology, MEC also facilitates wireless power transfer, enhancing efficiency and sustainability for these devices. The most closely related studies on the computation rate in MEC are based on the coordinate descent method, the alternating direction method of multipliers (ADMM), and Lyapunov optimization. Nevertheless, these studies do not consider the buffer queue size. This work concerns computation rate maximization for wireless-powered, multi-user MEC systems, specifically focusing on the computation rate of end devices and on managing the task buffer queue before computation at the terminal devices. A deep reinforcement learning (RL)-based task offloading algorithm is proposed to maximize the computation rate of end devices and minimize the buffer queue size at the terminal devices. Precisely, considering the channel gain, the buffer queue size, and wireless power transfer, it further formalizes the task offloading problem. The mode selection for task offloading is based on the individual channel gain, the buffer queue size, and wireless power transfer maximization in a particular time slot. The central idea of this work is to explore the optimal mode selection for IoT devices connected to the MEC system. The proposed algorithm optimizes computation delay by maximizing the computation rate of end devices and minimizing the buffer queue size before computation at the terminal devices. The study then presents a deep RL-based task offloading algorithm to solve this mixed-integer, non-convex optimization problem, aiming at a better trade-off between the buffer queue size and the computation rate. Extensive simulation results reveal that the presented algorithm is much more efficient than existing work in maintaining a small buffer queue for terminal devices while simultaneously achieving a high computation rate.
Photocatalysis, a critical strategy for harvesting sunlight to address energy demand and environmental concerns, is underpinned by the discovery of high-performance photocatalysts; how to design photocatalysts is therefore generating widespread interest for boosting the conversion efficiency of solar energy. In the past decade, computational technologies and theoretical simulations have led to a major leap in the development of high-throughput computational screening strategies for novel high-efficiency photocatalysts. In this viewpoint, we start by introducing the challenges of photocatalysis from the perspective of experimental practice, especially the inefficiency of the traditional “trial and error” method. Subsequently, a cross-sectional comparison between experimental and high-throughput computational screening for photocatalysis is presented and discussed in detail. On the basis of current experimental progress in photocatalysis, we also exemplify the various challenges associated with high-throughput computational screening strategies. Finally, we offer a preferred high-throughput computational screening procedure for photocatalysts from an experimental practice perspective (model construction and screening, standardized experiments, assessment and revision), with the aim of better correlating high-throughput simulations with experimental practice and motivating the search for better descriptors.
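A minimal example of the screening stage of such a workflow, under the usual thermodynamic assumptions for overall water splitting: keep candidates whose band gap exceeds 1.23 eV (but stays small enough for visible-light absorption) and whose band edges straddle the H⁺/H₂ (≈ −4.44 eV vs. vacuum) and O₂/H₂O (≈ −5.67 eV vs. vacuum) levels. The candidate names and descriptor values are invented for illustration, not screened materials from the viewpoint.

```python
import pandas as pd

# Toy descriptor table: band gap, conduction band minimum (CBM), valence band maximum (VBM),
# all referenced to the vacuum level. In a real workflow these come from high-throughput DFT.
candidates = pd.DataFrame({
    "material": ["A", "B", "C", "D"],
    "band_gap_eV": [2.4, 1.1, 2.9, 3.6],
    "cbm_eV": [-4.1, -4.6, -4.2, -3.9],
    "vbm_eV": [-6.5, -5.7, -7.1, -7.5],
})

H2_LEVEL, O2_LEVEL = -4.44, -5.67
hits = candidates[
    candidates["band_gap_eV"].between(1.23, 3.0)     # enough driving force, still visible-light active
    & (candidates["cbm_eV"] > H2_LEVEL)              # CBM above the H+/H2 reduction level
    & (candidates["vbm_eV"] < O2_LEVEL)              # VBM below the O2/H2O oxidation level
]
print(hits["material"].tolist())                     # candidates passing the thermodynamic screen
```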
基金supported by National Natural Science Foundation of China No.62231012Natural Science Foundation for Outstanding Young Scholars of Heilongjiang Province under Grant YQ2020F001Heilongjiang Province Postdoctoral General Foundation under Grant AUGA4110004923.
文摘Low earth orbit(LEO)satellites with wide coverage can carry the mobile edge computing(MEC)servers with powerful computing capabilities to form the LEO satellite edge computing system,providing computing services for the global ground users.In this paper,the computation offloading problem and resource allocation problem are formulated as a mixed integer nonlinear program(MINLP)problem.This paper proposes a computation offloading algorithm based on deep deterministic policy gradient(DDPG)to obtain the user offloading decisions and user uplink transmission power.This paper uses the convex optimization algorithm based on Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme.In addition,the expression of suboptimal user local CPU cycles is derived by relaxation method.Simulation results show that the proposed algorithm can achieve excellent convergence effect,and the proposed algorithm significantly reduces the system utility values at considerable time cost compared with other algorithms.
基金supported in part by Sub Project of National Key Research and Development plan in 2020 NO.2020YFC1511704Beijing Information Science and Technology University NO.2020KYNH212,NO.2021CGZH302+1 种基金Beijing Science and Technology Project(Grant No.Z211100004421009)in part by the National Natural Science Foundation of China(Grant No.62301058).
文摘Low Earth orbit(LEO)satellite networks have the advantages of low transmission delay and low deployment cost,playing an important role in providing reliable services to ground users.This paper studies an efficient inter-satellite cooperative computation offloading(ICCO)algorithm for LEO satellite networks.Specifically,an ICCO system model is constructed,which considers using neighboring satellites in the LEO satellite networks to collaboratively process tasks generated by ground user terminals,effectively improving resource utilization efficiency.Additionally,the optimization objective of minimizing the system task computation offloading delay and energy consumption is established,which is decoupled into two sub-problems.In terms of computational resource allocation,the convexity of the problem is proved through theoretical derivation,and the Lagrange multiplier method is used to obtain the optimal solution of computational resources.To deal with the task offloading decision,a dynamic sticky binary particle swarm optimization algorithm is designed to obtain the offloading decision by iteration.Simulation results show that the ICCO algorithm can effectively reduce the delay and energy consumption.
基金supported in part by the Natural Science Foundation of China (62171110,U19B2028 and U20B2070)。
文摘Recently,the Fog-Radio Access Network(F-RAN)has gained considerable attention,because of its flexible architecture that allows rapid response to user requirements.In this paper,computational offloading in F-RAN is considered,where multiple User Equipments(UEs)offload their computational tasks to the F-RAN through fog nodes.Each UE can select one of the fog nodes to offload its task,and each fog node may serve multiple UEs.The tasks are computed by the fog nodes or further offloaded to the cloud via a capacity-limited fronhaul link.In order to compute all UEs'tasks quickly,joint optimization of UE-Fog association,radio and computation resources of F-RAN is proposed to minimize the maximum latency of all UEs.This min-max problem is formulated as a Mixed Integer Nonlinear Program(MINP).To tackle it,first,MINP is reformulated as a continuous optimization problem,and then the Majorization Minimization(MM)method is used to find a solution.The MM approach that we develop is unconventional in that each MM subproblem is solved inexactly with the same provable convergence guarantee as the exact MM,thereby reducing the complexity of MM iteration.In addition,a cooperative offloading model is considered,where the fog nodes compress-and-forward their received signals to the cloud.Under this model,a similar min-max latency optimization problem is formulated and tackled by the inexact MM.Simulation results show that the proposed algorithms outperform some offloading strategies,and that the cooperative offloading can exploit transmission diversity better than noncooperative offloading to achieve better latency performance.
文摘Over-the-air computation(AirComp)enables federated learning(FL)to rapidly aggregate local models at the central server using waveform superposition property of wireless channel.In this paper,a robust transmission scheme for an AirCompbased FL system with imperfect channel state information(CSI)is proposed.To model CSI uncertainty,an expectation-based error model is utilized.The main objective is to maximize the number of selected devices that meet mean-squared error(MSE)requirements for model broadcast and model aggregation.The problem is formulated as a combinatorial optimization problem and is solved in two steps.First,the priority order of devices is determined by a sparsity-inducing procedure.Then,a feasibility detection scheme is used to select the maximum number of devices to guarantee that the MSE requirements are met.An alternating optimization(AO)scheme is used to transform the resulting nonconvex problem into two convex subproblems.Numerical results illustrate the effectiveness and robustness of the proposed scheme.
文摘This paper develops a comprehensive computational modeling and simulation framework based on Complex Adaptive Systems(CAS)theory to unveil the underlying mechanisms of self-organization,nonlinear evolution,and emergence in social systems.By integrating mathematical models,agent-based modeling,network dynamic analysis,and hybrid modeling approaches,the study applies CAS theory to case studies in economic markets,political decision-making,and social interactions.The experimental results demonstrate that local interactions among individual agents can give rise to complex global phenomena,such as market fluctuations,opinion polarization,and sudden outbreaks of social movements.This framework not only provides a more robust explanation for the nonlinear dynamics and abrupt transitions that traditional models often fail to capture,but also offers valuable decision-support tools for public policy formulation,social governance,and risk management.Emphasizing the importance of interdisciplinary approaches,this work outlines future research directions in high-performance computing,artificial intelligence,and real-time data integration to further advance the theoretical and practical applications of CAS in the social sciences.
基金Incubation Program of Youth Innovation in Shandong ProvinceKey Research and Development Program of Shandong Province(2021TZXD007)。
文摘Food allergy has become a global concern.Spleen tyrosine kinase(SYK)inhibitors are promising therapeutics against allergic disorders.In this study,a total of 300 natural phenolic compounds were firstly subjected to virtual screening.Sesamin and its metabolites,sesamin monocatechol(SC-1)and sesamin dicatechol(SC-2),were identified as potential SYK inhibitors,showing high binding affinity and inhibition efficiency towards SYK.Compared with R406(a traditional SYK inhibitor),sesamin,SC-1,and SC-2 had lower binding energy and inhibition constant(Ki)during molecular docking,exhibited higher bioavailability,safety,metabolism/clearance rate,and distribution uniformity ADMET predictions,and showed high stability in occupying the ATP-binding pocket of SYK during molecular dynamics simulations.In anti-dinitrophenyl-immunoglobulin E(Anti-DNP-Ig E)/dinitrophenyl-human serum albumin(DNP-HSA)-stimulated rat basophilic leukemia(RBL-2H3)cells,sesamin in the concentration range of 5-80μmol/L influenced significantly the degranulation and cytokine release,with 54.00%inhibition againstβ-hexosaminidase release and 58.45%decrease in histamine.In BALB/c mice,sesamin could ameliorate Anti-DNP-Ig E/DNP-HSA-induced passive cutaneous anaphylaxis(PCA)and ovalbumin(OVA)-induced active systemic anaphylaxis(ASA)reactions,reduce the levels of allergic mediators(immunoglobulins and pro-inflammatory cytokines),partially correct the imbalance of T helper(Th)cells differentiation in the spleen,and inhibit the phosphorylation of SYK and its downstream signaling proteins,including p38 mitogen-activated protein kinases(p38 MAPK),extracellular signalregulated kinases(ERK),and p65 nuclear factor-κB(p65 NF-κB)in the spleen.Thus,sesamin may be a safe and versatile SYK inhibitor that can alleviate Ig E-mediated food allergies.
文摘This paper proposes an innovative approach to social science research based on quantum theory,integrating quantum probability,quantum game theory,and quantum statistical methods into a comprehensive interdisciplinary framework for both theoretical and empirical investigation.The study elaborates on how core quantum concepts such as superposition,interference,and measurement collapse can be applied to model social decision making,cognition,and interactions.Advanced quantum computational methods and algorithms are employed to transition from theoretical model development to simulation and experimental validation.Through case studies in international relations,economic games,and political decision making,the research demonstrates that quantum models possess significant advantages in explaining irrational and context-dependent behaviors that traditional methods often fail to capture.The paper also explores the potential applications of quantum social science in policy formulation and public decision making,addresses the ethical,privacy,and social equity challenges posed by quantum artificial intelligence,and outlines future research directions at the convergence of quantum AI,quantum machine learning,and big data analytics.The findings suggest that quantum social science not only offers a novel perspective for understanding complex social phenomena but also lays the foundation for more accurate and efficient systems in social forecasting and decision support.
文摘The rapid evolution of international trade necessitates the adoption of intelligent digital solutions to enhance trade facilitation.The Single Window System(SWS)has emerged as a key mechanism for streamlining trade documentation,customs clearance,and regulatory compliance.However,traditional SWS implementations face challenges such as data fragmentation,inefficient processing,and limited real-time intelligence.This study proposes a computational social science framework that integrates artificial intelligence(AI),machine learning,network analytics,and blockchain to optimize SWS operations.By employing predictive modeling,agentbased simulations,and algorithmic governance,this research demonstrates how computational methodologies improve trade efficiency,enhance regulatory compliance,and reduce transaction costs.Empirical case studies on AI-driven customs clearance,blockchain-enabled trade transparency,and network-based trade policy simulation illustrate the practical applications of these techniques.The study concludes that interdisciplinary collaboration and algorithmic governance are essential for advancing digital trade facilitation,ensuring resilience,transparency,and adaptability in global trade ecosystems.
基金the National Key Research and Development Program of China(2021YFF0900800)the National Natural Science Foundation of China(61972276,62206116,62032016)+2 种基金the New Liberal Arts Reform and Practice Project of National Ministry of Education(2021170002)the Open Research Fund of the State Key Laboratory for Management and Control of Complex Systems(20210101)Tianjin University Talent Innovation Reward Program for Literature and Science Graduate Student(C1-2022-010)。
文摘Powered by advanced information technology,more and more complex systems are exhibiting characteristics of the cyber-physical-social systems(CPSS).In this context,computational experiments method has emerged as a novel approach for the design,analysis,management,control,and integration of CPSS,which can realize the causal analysis of complex systems by means of“algorithmization”of“counterfactuals”.However,because CPSS involve human and social factors(e.g.,autonomy,initiative,and sociality),it is difficult for traditional design of experiment(DOE)methods to achieve the generative explanation of system emergence.To address this challenge,this paper proposes an integrated approach to the design of computational experiments,incorporating three key modules:1)Descriptive module:Determining the influencing factors and response variables of the system by means of the modeling of an artificial society;2)Interpretative module:Selecting factorial experimental design solution to identify the relationship between influencing factors and macro phenomena;3)Predictive module:Building a meta-model that is equivalent to artificial society to explore its operating laws.Finally,a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach,which can reveal the social impact of algorithmic behavior on“rider race”.
基金supported by the Central University Basic Research Business Fee Fund Project(J2023-027)China Postdoctoral Science Foundation(No.2022M722248).
文摘With the rapid advancement of Internet of Vehicles(IoV)technology,the demands for real-time navigation,advanced driver-assistance systems(ADAS),vehicle-to-vehicle(V2V)and vehicle-to-infrastructure(V2I)communications,and multimedia entertainment systems have made in-vehicle applications increasingly computingintensive and delay-sensitive.These applications require significant computing resources,which can overwhelm the limited computing capabilities of vehicle terminals despite advancements in computing hardware due to the complexity of tasks,energy consumption,and cost constraints.To address this issue in IoV-based edge computing,particularly in scenarios where available computing resources in vehicles are scarce,a multi-master and multi-slave double-layer game model is proposed,which is based on task offloading and pricing strategies.The establishment of Nash equilibrium of the game is proven,and a distributed artificial bee colonies algorithm is employed to achieve game equilibrium.Our proposed solution addresses these bottlenecks by leveraging a game-theoretic approach for task offloading and resource allocation in mobile edge computing(MEC)-enabled IoV environments.Simulation results demonstrate that the proposed scheme outperforms existing solutions in terms of convergence speed and system utility.Specifically,the total revenue achieved by our scheme surpasses other algorithms by at least 8.98%.
文摘Practical real-world scenarios such as the Internet,social networks,and biological networks present the challenges of data scarcity and complex correlations,which limit the applications of artificial intelligence.The graph structure is a typical tool used to formulate such correlations,it is incapable of modeling highorder correlations among different objects in systems;thus,the graph structure cannot fully convey the intricate correlations among objects.Confronted with the aforementioned two challenges,hypergraph computation models high-order correlations among data,knowledge,and rules through hyperedges and leverages these high-order correlations to enhance the data.Additionally,hypergraph computation achieves collaborative computation using data and high-order correlations,thereby offering greater modeling flexibility.In particular,we introduce three types of hypergraph computation methods:①hypergraph structure modeling,②hypergraph semantic computing,and③efficient hypergraph computing.We then specify how to adopt hypergraph computation in practice by focusing on specific tasks such as three-dimensional(3D)object recognition,revealing that hypergraph computation can reduce the data requirement by 80%while achieving comparable performance or improve the performance by 52%given the same data,compared with a traditional data-based method.A comprehensive overview of the applications of hypergraph computation in diverse domains,such as intelligent medicine and computer vision,is also provided.Finally,we introduce an open-source deep learning library,DeepHypergraph(DHG),which can serve as a tool for the practical usage of hypergraph computation.
基金financial support from the National Natural Science Foundation of China(22078230)the National Key Research and Development Program of China(2023YFB4103600)the State Key Laboratory of Heavy Oil Processing(SKLHOP202202008).
文摘Turbulent fluidized bed possesses a distinct advantage over bubbling fluidized bed in high solids contact efficiency and thus exerts great potential in applications to many industrial processes.Simulation for fluidization of fluid catalytic cracking(FCC)particles and the catalytic reaction of ozone decomposition in turbulent fluidized bed is conducted using the EulerianeEulerian approach,where the recently developed two-equation turbulent(TET)model is introduced to describe the turbulent mass diffusion.The energy minimization multi-scale(EMMS)drag model and the kinetic theory of granular flow(KTGF)are adopted to describe gaseparticles interaction and particleeparticle interaction respectively.The TET model features the rigorous closure for the turbulent mass transfer equations and thus enables more reliable simulation.With this model,distributions of ozone concentration and gaseparticles two-phase velocity as well as volume fraction are obtained and compared against experimental data.The average absolute relative deviation for the simulated ozone concentration is 9.67%which confirms the validity of the proposed model.Moreover,it is found that the transition velocity from bubbling fluidization to turbulent fluidization for FCC particles is about 0.5 m$se1 which is consistent with experimental observation.
基金supported in part by the National Natural Science Foundation of China under Grant 61901128,62273109the Natural Science Foundation of the Jiangsu Higher Education Institutions of China(21KJB510032).
文摘The growing development of the Internet of Things(IoT)is accelerating the emergence and growth of new IoT services and applications,which will result in massive amounts of data being generated,transmitted and pro-cessed in wireless communication networks.Mobile Edge Computing(MEC)is a desired paradigm to timely process the data from IoT for value maximization.In MEC,a number of computing-capable devices are deployed at the network edge near data sources to support edge computing,such that the long network transmission delay in cloud computing paradigm could be avoided.Since an edge device might not always have sufficient resources to process the massive amount of data,computation offloading is significantly important considering the coop-eration among edge devices.However,the dynamic traffic characteristics and heterogeneous computing capa-bilities of edge devices challenge the offloading.In addition,different scheduling schemes might provide different computation delays to the offloaded tasks.Thus,offloading in mobile nodes and scheduling in the MEC server are coupled to determine service delay.This paper seeks to guarantee low delay for computation intensive applica-tions by jointly optimizing the offloading and scheduling in such an MEC system.We propose a Delay-Greedy Computation Offloading(DGCO)algorithm to make offloading decisions for new tasks in distributed computing-enabled mobile devices.A Reinforcement Learning-based Parallel Scheduling(RLPS)algorithm is further designed to schedule offloaded tasks in the multi-core MEC server.With an offloading delay broadcast mechanism,the DGCO and RLPS cooperate to achieve the goal of delay-guarantee-ratio maximization.Finally,the simulation results show that our proposal can bound the end-to-end delay of various tasks.Even under slightly heavy task load,the delay-guarantee-ratio given by DGCO-RLPS can still approximate 95%,while that given by benchmarked algorithms is reduced to intolerable value.The simulation results are demonstrated the effective-ness of DGCO-RLPS for delay guarantee in MEC.
文摘Secure and efficient outsourced computation in cloud computing environments is crucial for ensuring data confidentiality, integrity, and resource optimization. In this research, we propose novel algorithms and methodologies to address these challenges. Through a series of experiments, we evaluate the performance, security, and efficiency of the proposed algorithms in real-world cloud environments. Our results demonstrate the effectiveness of homomorphic encryption-based secure computation, secure multiparty computation, and trusted execution environment-based approaches in mitigating security threats while ensuring efficient resource utilization. Specifically, our homomorphic encryption-based algorithm exhibits encryption times ranging from 20 to 1000 milliseconds and decryption times ranging from 25 to 1250 milliseconds for payload sizes varying from 100 KB to 5000 KB. Furthermore, our comparative analysis against state-of-the-art solutions reveals the strengths of our proposed algorithms in terms of security guarantees, encryption overhead, and communication latency.
Funding: Supported in part by the Major Science and Technology Demonstration Project of the Jiangsu Provincial Key R&D Program under Grant No. BE2023025, in part by the National Natural Science Foundation of China under Grant No. 62302238, in part by the Natural Science Foundation of Jiangsu Province under Grant No. BK20220388, in part by the Natural Science Research Project of Colleges and Universities in Jiangsu Province under Grant No. 22KJB520004, and in part by the China Postdoctoral Science Foundation under Grant No. 2022M711689.
Abstract: This paper presents a comprehensive exploration of the integration of the Internet of Things (IoT), big data analysis, cloud computing, and Artificial Intelligence (AI), which has led to an unprecedented era of connectivity. We delve into the emerging trend of machine learning on embedded devices, enabling tasks in resource-limited environments. However, the widespread adoption of machine learning raises significant privacy concerns, necessitating the development of privacy-preserving techniques. One such technique, secure multi-party computation (MPC), allows collaborative computations without exposing private inputs. Despite its potential, complex protocols and communication interactions hinder performance, especially on resource-constrained devices. Efforts to enhance efficiency have been made, but scalability remains a challenge. Given the success of GPUs in deep learning, leveraging embedded GPUs, such as those offered by NVIDIA, emerges as a promising solution. Therefore, we propose an Embedded GPU-based Secure Two-party Computation (EG-STC) framework for Artificial Intelligence (AI) systems. To the best of our knowledge, this work represents the first endeavor to fully implement machine learning model training based on secure two-party computation on an embedded GPU platform. Our experimental results demonstrate the effectiveness of EG-STC. On an embedded GPU with a power draw of 5 W, our implementation achieved a secure two-party matrix multiplication throughput of 5881.5 kilo-operations per millisecond (kops/ms), with an energy efficiency ratio of 1176.3 kops/ms/W. Furthermore, leveraging our EG-STC framework, we achieved an overall time acceleration ratio of 5-6 times compared with solutions running on server-grade CPUs. Our solution also exhibited a reduced runtime, requiring only 60% to 70% of the runtime of the previously best-known methods on the same platform. In summary, our research contributes to the advancement of secure and efficient machine learning implementations on resource-constrained embedded devices, paving the way for broader adoption of AI technologies in various applications.
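EG-STC's protocol is not reproducible from the abstract alone; the sketch below only shows the generic additive-secret-sharing pattern (Beaver triples) that commonly underlies secure two-party matrix multiplication, run on a CPU with NumPy rather than an embedded GPU. All names are hypothetical and the dealer, both parties and the reconstruction are collapsed into one process purely for illustration.

```python
import numpy as np

RING = 2**32                      # arithmetic in the ring Z_{2^32}
rng = np.random.default_rng(0)

def share(x):
    """Split x into two additive shares that sum to x modulo RING."""
    r = rng.integers(0, RING, size=x.shape, dtype=np.uint64)
    return r, (x - r) % RING

def beaver_matmul(x, y):
    """Beaver-triple style product: parties hold shares of x, y and of a random triple (a, b, c = a @ b)."""
    a = rng.integers(0, RING, size=x.shape, dtype=np.uint64)   # dealer's random mask for x
    b = rng.integers(0, RING, size=y.shape, dtype=np.uint64)   # dealer's random mask for y
    c = (a @ b) % RING
    x0, x1 = share(x)
    y0, y1 = share(y)
    a0, a1 = share(a)
    b0, b1 = share(b)
    c0, c1 = share(c)
    # Both parties open d = x - a and e = y - b; these reveal nothing about x or y.
    d = (x0 - a0 + x1 - a1) % RING
    e = (y0 - b0 + y1 - b1) % RING
    # Local share computation; party 0 additionally adds the public d @ e term.
    z0 = (c0 + d @ b0 + a0 @ e + d @ e) % RING
    z1 = (c1 + d @ b1 + a1 @ e) % RING
    return (z0 + z1) % RING       # reconstruction: z0 + z1 = x @ y (mod RING)

x = rng.integers(0, 100, size=(3, 4), dtype=np.uint64)
y = rng.integers(0, 100, size=(4, 2), dtype=np.uint64)
assert np.array_equal(beaver_matmul(x, y), (x @ y) % RING)
print("secure-style matmul matches plain matmul")
```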
Funding: The Natural Science Foundation of Henan Province (No. 232300421097), the Program for Science & Technology Innovation Talents in Universities of Henan Province (No. 23HASTIT019, 24HASTIT038), the China Postdoctoral Science Foundation (No. 2023T160596, 2023M733251), the Open Research Fund of the National Mobile Communications Research Laboratory, Southeast University (No. 2023D11), and the Song Shan Laboratory Foundation (No. YYJC022022003).
Abstract: In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate based on mMIMO under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the transmit powers of the users and the base station. Since the formulated problem is difficult to solve directly, we first transform the fractional objective function into a subtractive form via the Dinkelbach method. Next, the original problem is transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solutions. Finally, simulations are conducted to show that the performance of the proposed scheme is superior to that of the other schemes.
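The abstract names the Dinkelbach method for converting the fractional SCE objective into a subtractive one. Below is only a generic sketch of that iteration on a toy one-dimensional ratio N(x)/D(x), not the paper's actual joint optimization; the inner brute-force maximizer and all function names are placeholders.

```python
import numpy as np

def dinkelbach(maximize_subtractive, numerator, denominator, x0, tol=1e-6, max_iter=50):
    """Maximize N(x)/D(x) by repeatedly maximizing N(x) - q*D(x) and updating q."""
    x = x0
    q = numerator(x) / denominator(x)
    for _ in range(max_iter):
        x = maximize_subtractive(q)                 # inner solver: argmax_x N(x) - q*D(x)
        f = numerator(x) - q * denominator(x)
        if abs(f) < tol:                            # F(q) = 0  =>  q is the optimal ratio
            break
        q = numerator(x) / denominator(x)
    return x, q

# Toy example: maximize (1 + 2x - x^2) / (1 + x) over x in [0, 2].
grid = np.linspace(0.0, 2.0, 20001)
N = lambda x: 1 + 2 * x - x ** 2
D = lambda x: 1 + x
inner = lambda q: grid[np.argmax(N(grid) - q * D(grid))]   # brute-force inner maximization
x_opt, q_opt = dinkelbach(inner, N, D, x0=0.0)
print(f"x* = {x_opt:.4f}, optimal ratio = {q_opt:.4f}")
```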
Funding: National Natural Science Foundation of China (No. 61902060), Shanghai Sailing Program, China (No. 19YF1402100), Fundamental Research Funds for the Central Universities, China (No. 2232019D3-51), and the Open Foundation of the State Key Laboratory of Networking and Switching Technology (Beijing University of Posts and Telecommunications, China) (No. SKLNST-2021-1-06).
Abstract: Mobile edge computing (MEC) plays a vital role in various delay-sensitive applications. With the increasing popularity of low-computing-capability Internet of Things (IoT) devices in Industry 4.0 technology, MEC also facilitates wireless power transfer, enhancing efficiency and sustainability for these devices. Most related studies concerning the computation rate in MEC are based on the coordinate descent method, the alternating direction method of multipliers (ADMM) and Lyapunov optimization. Nevertheless, these studies do not consider the buffer queue size. This work concerns computation rate maximization for wireless-powered, multi-user MEC systems, focusing specifically on the computation rate of end devices and on managing the task buffer queue before computation at the terminal devices. A deep reinforcement learning (RL)-based task offloading algorithm is proposed to maximize the computation rate of end devices and minimize the buffer queue size at the terminal devices. Precisely, considering the channel gain, the buffer queue size and wireless power transfer, the task offloading problem is further formalized. The mode selection for task offloading is based on the individual channel gain, the buffer queue size and wireless power transfer maximization in a particular time slot. The central idea of this work is to explore the best mode selection for IoT devices connected to the MEC system. The proposed algorithm optimizes the computation delay by maximizing the computation rate of end devices and minimizing the buffer queue size before computation at the terminal devices. A deep RL-based task offloading algorithm is then presented to solve this mixed-integer, non-convex optimization problem, aiming at a better trade-off between the buffer queue size and the computation rate. Extensive simulation results reveal that the presented algorithm is much more efficient than existing work at maintaining a small buffer queue for terminal devices while simultaneously achieving a high computation rate.
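The paper's deep RL design is not given in the abstract; as a much simpler stand-in, the sketch below learns a per-slot mode selection (compute locally vs. offload) with tabular Q-learning over a discretized state of channel-gain and buffer-queue bins. The reward model, bin counts and state transitions are toy assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretized state: (channel-gain bin, buffer-queue bin); actions: 0 = compute locally, 1 = offload.
N_GAIN_BINS, N_QUEUE_BINS, N_ACTIONS = 4, 4, 2
Q = np.zeros((N_GAIN_BINS, N_QUEUE_BINS, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(gain_bin, queue_bin, action):
    """Toy reward: offloading pays off on good channels, local computing suffers as the queue grows."""
    if action == 1:
        return float(gain_bin + 1)          # crude proxy: offloaded rate scales with channel quality
    return 2.0 - 0.5 * queue_bin            # local: fixed rate, penalized by a growing buffer queue

for step in range(20000):
    g, b = rng.integers(N_GAIN_BINS), rng.integers(N_QUEUE_BINS)      # observed slot state
    a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(Q[g, b]))
    r = reward(g, b, a)
    g2, b2 = rng.integers(N_GAIN_BINS), rng.integers(N_QUEUE_BINS)    # toy next state (action-independent)
    Q[g, b, a] += alpha * (r + gamma * np.max(Q[g2, b2]) - Q[g, b, a])

print("learned mode per (gain, queue) bin:\n", np.argmax(Q, axis=-1))
```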
Funding: The authors are grateful for financial support from the National Key Projects for Fundamental Research and Development of China (2021YFA1500803), the National Natural Science Foundation of China (51825205, 52120105002, 22102202, 22088102, U22A20391), the DNL Cooperation Fund, CAS (DNL202016), and the CAS Project for Young Scientists in Basic Research (YSBR-004).
Abstract: Photocatalysis, a critical strategy for harvesting sunlight to address energy demand and environmental concerns, is underpinned by the discovery of high-performance photocatalysts, so the question of how to design photocatalysts is now generating widespread interest in boosting the conversion efficiency of solar energy. In the past decade, computational technologies and theoretical simulations have led to a major leap in the development of high-throughput computational screening strategies for novel high-efficiency photocatalysts. In this viewpoint, we start by introducing the challenges of photocatalysis from the perspective of experimental practice, especially the inefficiency of the traditional "trial and error" method. Subsequently, a cross-sectional comparison between experimental and high-throughput computational screening for photocatalysis is presented and discussed in detail. On the basis of current experimental progress in photocatalysis, we also exemplify the various challenges associated with high-throughput computational screening strategies. Finally, we offer a preferred high-throughput computational screening procedure for photocatalysts from an experimental-practice perspective (model construction and screening, standardized experiments, assessment and revision), with the aim of better correlating high-throughput simulations with experimental practice and motivating the search for better descriptors.
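As a purely illustrative sketch of the "model construction and screening" step mentioned above, the following applies simple descriptor filters (a visible-light band-gap window and band edges straddling the water redox levels) to a hypothetical candidate table. The thresholds, fields and candidate formulas are assumptions for illustration, not values or a workflow taken from the article.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    formula: str
    band_gap_ev: float      # computed band gap (eV)
    cbm_vs_nhe: float       # conduction-band minimum vs. NHE (V)
    vbm_vs_nhe: float       # valence-band maximum vs. NHE (V)

def passes_screen(c: Candidate) -> bool:
    """Keep candidates with a visible-light band gap whose band edges straddle the water redox potentials."""
    visible_gap = 1.23 <= c.band_gap_ev <= 3.0
    straddles_water = c.cbm_vs_nhe < 0.0 and c.vbm_vs_nhe > 1.23
    return visible_gap and straddles_water

# Hypothetical candidate pool (illustrative values only).
pool = [
    Candidate("A2BO4", 2.4, -0.3, 2.1),
    Candidate("CX2",   1.1, -0.1, 1.0),
    Candidate("DYO3",  2.8,  0.2, 3.0),
]
shortlist = [c.formula for c in pool if passes_screen(c)]
print("candidates passing the screen:", shortlist)
```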