Funding: Project (61170049) supported by the National Natural Science Foundation of China; Project (2012AA010903) supported by the National High Technology Research and Development Program of China.
Abstract: Peta-scale high-performance computing systems are increasingly built with heterogeneous CPU and GPU nodes to achieve higher power efficiency and computation throughput. While providing unprecedented capabilities to conduct computational experiments of historic significance, these systems are presently difficult to program. The users, who are domain experts rather than computer experts, prefer programming models closer to their domains (e.g., physics and biology) rather than MPI and OpenMP. This has led to the development of domain-specific programming models that provide domain-specific programming interfaces but abstract away some performance-critical architecture details. Based on experience in designing large-scale computing systems, a hybrid programming framework for scientific computing on heterogeneous architectures is proposed in this work. Its design philosophy is to provide a collaborative mechanism for domain experts and computer experts so that both domain-specific knowledge and performance-critical architecture details can be adequately exploited. Two real-world scientific applications have been evaluated on TH-1A, a peta-scale CPU-GPU heterogeneous system that is currently the 5th fastest supercomputer in the world. The experimental results show that the proposed framework is well suited for developing large-scale scientific computing applications on peta-scale heterogeneous CPU/GPU systems.
Funding: Supported by a project of the National Natural Science Foundation of China (No. 41874134).
Abstract: Processing large-scale 3-D gravity data is an important topic in the field of geophysics. Many existing inversion methods lack the capacity to handle massive data and are of limited practical applicability. This study applies GPU parallel processing technology to the focusing inversion method, aiming to improve inversion accuracy while speeding up calculation and reducing memory consumption, thus obtaining fast and reliable inversion results for large complex models. In this paper, equivalent storage of a geometric trellis is used to calculate the sensitivity matrix, and the inversion is based on GPU parallel computing technology. The parallel computing program, optimized by reducing data transfers, access restrictions, and instruction restrictions, as well as by latency hiding, greatly reduces memory usage, speeds up calculation, and makes fast inversion of large models possible. By comparing and analyzing the computing speed of the traditional single-threaded CPU method and CUDA-based GPU parallel technology, the excellent acceleration performance of GPU parallel computing is verified, which offers a path to practical application for theoretical inversion methods otherwise restricted by computing speed and computer memory. The model test verifies that the focusing inversion method can overcome the problems of severe skin effect and ambiguity of geological body boundaries. Moreover, increasing the number of model cells and inversion data can more clearly depict the boundary position of the anomalous body and delineate its specific shape.
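For orientation, focusing inversion typically minimizes a data misfit combined with a minimum-support (focusing) stabilizer; a generic form, in notation that is not necessarily the paper's, is

$$ P(m) \;=\; \| W_d (G m - d) \|_2^2 \;+\; \lambda \sum_{i=1}^{N} \frac{m_i^2}{m_i^2 + e^2}, $$

where $G$ is the sensitivity matrix discussed above, $d$ the observed gravity data, $W_d$ a data-weighting matrix, $\lambda$ the regularization weight, and $e$ a small focusing parameter that drives the recovered anomalous body towards compact, sharply bounded shapes.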
Abstract: Cloud computing is an advanced computing model through which applications, data, and countless IT services are provided over the Internet. Task scheduling plays a crucial role in cloud computing systems. The task scheduling problem can be viewed as finding an optimal mapping/assignment of the subtasks of different tasks onto the available set of resources so that the desired goals for the tasks are achieved. As the number of cloud users grows, so does the number of tasks to be scheduled, and the cloud's performance depends on the task scheduling algorithms used. Numerous algorithms have been proposed in the past to solve the task scheduling problem for heterogeneous networks of computers, and existing work proposes energy- and deadline-aware task scheduling methods for data-intensive applications. A scientific workflow is a combination of fine-grained and coarse-grained tasks, and every task scheduled to a VM incurs system overhead. When many fine-grained tasks execute in a scientific workflow, the scheduling overhead increases. To overcome this, multiple small tasks are combined into a larger task, which decreases the scheduling overhead and improves the execution time of the workflow. Horizontal clustering is used to cluster the fine-grained tasks, and a replication technique is combined with it. The proposed scheduling algorithm improves performance metrics such as execution time and cost. This research can further be extended with improved clustering techniques and replication methods.
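As a rough illustration of horizontal clustering, the sketch below merges independent fine-grained tasks at one workflow level into a fixed number of larger jobs. The function name, task names, runtimes, and the longest-task-first balancing heuristic are all hypothetical stand-ins, not the paper's exact method.

```python
def horizontal_cluster(tasks, runtimes, clusters_per_level):
    """Merge independent tasks from one workflow level into
    clusters_per_level jobs to cut per-task scheduling overhead."""
    merged = [[] for _ in range(clusters_per_level)]
    # Longest-task-first round-robin keeps merged jobs roughly balanced.
    for task in sorted(tasks, key=lambda t: runtimes[t], reverse=True):
        target = min(merged, key=lambda c: sum(runtimes[t] for t in c))
        target.append(task)
    return [c for c in merged if c]

level = ["t1", "t2", "t3", "t4", "t5"]
runtimes = {"t1": 4, "t2": 1, "t3": 3, "t4": 2, "t5": 1}
print(horizontal_cluster(level, runtimes, 2))
# [['t1', 't2', 't5'], ['t3', 't4']] -> two merged jobs instead of five tasks
```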
Abstract: The rise of scientific computing was one of the most important advances in S&T progress during the second half of the 20th century. In parallel with theoretical exploration and scientific experiments, scientific computing has become the 'third means' of scientific activity in the world today. The article gives a panoramic review of the subject during the past 50 years in China and lists the contributions made by Chinese scientists in this field. In addition, it reveals some key contents of related projects in the national research plan and looks into the development vista for the subject in China in the dawning years of the new century.
Funding: Supported in part by the National Key Research and Development Program of China (2018AAA0100100), the National Natural Science Foundation of China (61906001, 62136008, U21A20512), the Key Program of the Natural Science Project of the Educational Commission of Anhui Province (KJ2020A0036), and the Alexander von Humboldt Professorship for Artificial Intelligence funded by the Federal Ministry of Education and Research, Germany.
Abstract: Large-scale multi-objective optimization problems (LSMOPs) pose challenges to existing optimizers, since a set of well-converged and diverse solutions must be found in huge search spaces. While evolutionary algorithms are good at solving small-scale multi-objective optimization problems, they are criticized for low efficiency in converging to the optima of LSMOPs. By contrast, mathematical programming methods offer fast convergence on large-scale single-objective optimization problems, but they have difficulty finding diverse solutions for LSMOPs. Currently, how to integrate evolutionary algorithms with mathematical programming methods to solve LSMOPs remains unexplored. In this paper, a hybrid algorithm is tailored for LSMOPs by coupling differential evolution and a conjugate gradient method. On the one hand, conjugate gradients and differential evolution are used to update different decision variables of a set of solutions, where the former drives the solutions to converge quickly towards the Pareto front and the latter promotes the diversity of the solutions to cover the whole Pareto front. On the other hand, the objective decomposition strategy of evolutionary multi-objective optimization is used to differentiate the conjugate gradients of solutions, and the line search strategy of mathematical programming is used to ensure that each offspring is of higher quality than its parent. In comparison with state-of-the-art evolutionary algorithms, mathematical programming methods, and hybrid algorithms, the proposed algorithm exhibits better convergence and diversity performance on a variety of benchmark and real-world LSMOPs.
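A much-simplified sketch of such a coupling is given below: a plain gradient step stands in for the conjugate gradient move on one subset of decision variables, while DE/rand/1 mutation perturbs the rest. All names and the descent step are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def de_rand_1(pop, F=0.5):
    # DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), one trial per individual.
    n = len(pop)
    idx = np.array([np.random.choice(n, 3, replace=False) for _ in range(n)])
    return pop[idx[:, 0]] + F * (pop[idx[:, 1]] - pop[idx[:, 2]])

def hybrid_step(pop, grad_fn, conv_mask, step=1e-2):
    """Convergence variables (conv_mask) move along a gradient direction;
    the remaining (diversity) variables take the DE mutation."""
    trial = de_rand_1(pop)
    for i, x in enumerate(pop):
        d = -grad_fn(x)  # plain descent direction standing in for CG
        trial[i, conv_mask] = x[conv_mask] + step * d[conv_mask]
    return trial

# Toy usage on a sphere-like objective.
pop = np.random.rand(10, 5)
grad = lambda x: 2 * x
new_pop = hybrid_step(pop, grad, conv_mask=np.arange(2))
```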
Funding: This work was supported in part by the National Natural Science Foundation of China (61772493), the CAAI-Huawei MindSpore Open Fund (CAAIXSJLJJ-2020-004B), the Natural Science Foundation of Chongqing, China (cstc2019jcyjjqX0013), the Chongqing Research Program of Technology Innovation and Application (cstc2019jscx-fxydX0024, cstc2019jscx-fxydX0027, cstc2018jszx-cyzdX0041), the Guangdong Province Universities and Colleges Pearl River Scholar Funded Scheme (2019), the Pioneer Hundred Talents Program of the Chinese Academy of Sciences, and the Deanship of Scientific Research (DSR) at King Abdulaziz University (G-21-135-38).
Abstract: Protein-protein interactions are of great significance for understanding the functional mechanisms of proteins. With the rapid development of high-throughput genomic technologies, massive protein-protein interaction (PPI) data have been generated, making it very difficult to analyze them efficiently. To address this problem, this paper presents a distributed framework by reimplementing one of the state-of-the-art algorithms, i.e., CoFex, using MapReduce. To do so, an in-depth analysis of its limitations is conducted from the perspectives of efficiency and memory consumption when applying it to large-scale PPI data analysis and prediction. Respective solutions are then devised to overcome these limitations. In particular, we adopt a novel tree-based data structure to reduce the heavy memory consumption caused by the huge sequence information of proteins. After that, its procedure is modified to follow the MapReduce framework so that the prediction task is performed distributively. A series of extensive experiments has been conducted to evaluate the performance of our framework in terms of both efficiency and accuracy. The experimental results demonstrate that the proposed framework can improve computational efficiency by more than two orders of magnitude while retaining the same high accuracy.
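To make the MapReduce structure concrete, here is a minimal single-process sketch of the map/shuffle/reduce flow. The toy records, the 2-mer mapper, and the counting reducer are hypothetical stand-ins, not CoFex's actual feature extraction.

```python
from collections import defaultdict
from itertools import chain

def map_phase(records, mapper):
    # Map: apply the mapper to each record, flattening emitted (key, value) pairs.
    return chain.from_iterable(mapper(r) for r in records)

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    # Reduce: aggregate each key's values independently (hence parallelizable).
    return {key: reducer(key, values) for key, values in groups.items()}

# Toy usage: count how many protein pairs carry each 2-mer sequence feature.
records = [("P1-P2", "MKV"), ("P3-P4", "MKL"), ("P5-P6", "KVL")]
mapper = lambda r: [(r[1][i:i + 2], r[0]) for i in range(len(r[1]) - 1)]
reducer = lambda key, values: len(values)
print(reduce_phase(shuffle(map_phase(records, mapper)), reducer))
# {'MK': 2, 'KV': 2, 'KL': 1, 'VL': 1}
```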
Funding: Pawel Lula's participation in the research was carried out as part of a research initiative financed by the Ministry of Science and Higher Education within the "Regional Initiative of Excellence" Programme for 2019-2022 (Project No. 021/RID/2018/19; total financing 11,897,131.40 PLN). The other authors received no specific funding for this study.
Abstract: Our primary research hypothesis rests on a simple idea: the evolution of top-rated publications on a particular theme depends heavily on the progress and maturity of related topics. This holds even when there are no clear relations, or when some concepts appear to cease to exist and give way to newer ones that emerged many years earlier. We implemented our model based on the Computer Science Ontology (CSO) and analyzed 44 years of publications. We then derived the most important concepts related to Cloud Computing (CC) from the scientific collection offered by Clarivate Analytics. Our methodology includes data extraction using advanced web crawling techniques, data preparation, statistical data analysis, and graphical representations. We obtained related concepts after aggregating the scores using the Jaccard coefficient and the CSO Ontology. Our article reveals the contribution of Cloud Computing topics in research papers in leading scientific journals and the relationships between the field of Cloud Computing and the interdependent subdivisions identified in the broader framework of Computer Science.
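For reference, the Jaccard coefficient used for score aggregation measures the overlap between two concept sets; a minimal sketch with made-up concept sets follows.

```python
def jaccard(a, b):
    # |A ∩ B| / |A ∪ B| over two sets of ontology concepts.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

cloud = {"cloud computing", "virtualization", "iaas"}
grid = {"grid computing", "virtualization", "scheduling"}
print(jaccard(cloud, grid))  # 1 shared concept of 5 total -> 0.2
```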
Abstract: Cloud computing is considered to facilitate a more cost-effective way to deploy scientific workflows. The individual tasks of a scientific workflow require a variety of large datasets that are spatially distributed across different datacenters, resulting in huge delays during data transmission. Edge computing minimizes these delays and supports a fixed storage strategy for the private datasets of scientific workflows. However, this fixed storage strategy creates a severe bottleneck in storage capacity. At this juncture, integrating the merits of cloud computing and edge computing while rationalizing the data placement of scientific workflows, and optimizing the energy and time incurred in data transmission across different datacenters, remains a challenge. In this paper, the Adaptive Cooperative Foraging and Dispersed Foraging Strategies-Improved Harris Hawks Optimization Algorithm (ACF-DFS-HHOA) is proposed for optimizing the energy and data transmission time involved in placing the data of a specific scientific workflow. ACF-DFS-HHOA takes the factors influencing the transmission delay and energy consumption of datacenters into account while rationalizing the data placement of scientific workflows. The adaptive cooperative and dispersed foraging strategies are included in HHOA to guide the position updates, which improves population diversity and effectively prevents the algorithm from being trapped in local optima. The experimental results of ACF-DFS-HHOA confirm its predominance in minimizing the energy and data transmission time incurred during workflow execution.
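The objective such a placement optimizer searches over can be sketched as a weighted sum of transmission time and energy. The cost model below, including all names, the bandwidth matrix, and the per-datacenter power figures, is a hypothetical illustration rather than the paper's formulation.

```python
# placement[d] = index of the datacenter chosen to hold dataset d.
def placement_cost(placement, transfers, bandwidth, power,
                   w_time=0.5, w_energy=0.5):
    """Weighted transmission-time + energy cost of one candidate placement.
    transfers: {(src_dataset, dst_dataset): volume_in_GB}
    bandwidth: bandwidth[i][j] in GB/s between datacenters i and j
    power:     transmission power draw of each datacenter, in kW"""
    time = energy = 0.0
    for (a, b), volume in transfers.items():
        i, j = placement[a], placement[b]
        if i != j:  # co-located datasets cost nothing to move
            t = volume / bandwidth[i][j]
            time += t
            energy += t * power[i]
    return w_time * time + w_energy * energy

# Toy instance: two datasets spread across two datacenters.
bw = [[None, 1.0], [1.0, None]]
print(placement_cost({"d1": 0, "d2": 1}, {("d1", "d2"): 10.0}, bw, [2.0, 2.5]))
# 0.5 * 10.0 s + 0.5 * 20.0 kJ-equivalent -> 15.0
```

A population-based optimizer such as ACF-DFS-HHOA would repeatedly propose candidate placements and keep those minimizing a cost of this general kind.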
Abstract: The article introduces the main practices and achievements of the Environment and Plant Protection Institute of the Chinese Academy of Tropical Agricultural Sciences in promoting the sharing of large-scale instruments and equipment in recent years. It analyzes the existing problems in the management system, management team, assessment incentives, and maintenance guarantees, and proposes improvement measures and suggestions concerning the sharing management system, management team building, sharing assessment and incentives, maintenance capabilities, and external publicity, so as to further improve the sharing management of large-scale instruments and equipment.
Funding: Supported by the China Scholarship Council (Grant No. 202006230071) and the Deutsche Forschungsgemeinschaft (DFG) (Grant No. DFG HU1527/12-4).
Abstract: Scientific computing libraries, whether in-house or open-source, have witnessed enormous progress in both engineering and scientific research. Therefore, it is important to ensure that modifications to the source code, prompted by bug fixing or new feature development, do not compromise the accuracy and functionality that have already been validated and verified. This paper introduces a method for establishing and implementing an automatic regression test environment, using the open-source multi-physics library SPHinXsys as an illustrative example. Initially, a reference database for each benchmark test is generated from data observed across multiple executions. This comprehensive database encapsulates the maximum variation range of metrics for different strategies, including the time-averaged, ensemble-averaged, and dynamic time warping methods, and accounts for uncertainties arising from parallel computing, particle relaxation, physical instabilities, and more. Subsequently, new results obtained after source code modifications are tested through a curve-similarity comparison against the reference database. Whenever the source code is updated, the regression test is automatically executed for all test cases, providing a comprehensive assessment of the validity of the current results. This regression test environment has been successfully implemented in all dynamic test cases within SPHinXsys, including fluid dynamics, solid mechanics, fluid-structure interaction, thermal and mass diffusion, reaction-diffusion, and their multi-physics couplings, and demonstrates robust capabilities in testing different problems. While the current test environment is built and implemented for a particular scientific computing library, its underlying principles are generic and can easily be adapted for use with other libraries, achieving equal effectiveness.
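Of the comparison strategies named above, dynamic time warping is the easiest to show compactly. The sketch below computes a textbook DTW distance and uses it as a pass/fail regression check against a stored tolerance; the curves and the tolerance are synthetic, not SPHinXsys data.

```python
import numpy as np

def dtw_distance(a, b):
    # Textbook O(n*m) dynamic time warping distance between two 1-D curves.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Regression check: fail the run if the new curve drifts past the tolerance.
reference = np.sin(np.linspace(0.0, 6.0, 50))
new_run = reference + np.random.normal(0.0, 0.005, 50)
assert dtw_distance(new_run, reference) <= 0.5, "regression detected"
```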
Abstract: Objective: To provide a reference for practitioners by analyzing the competitive and collaborative landscape of global brain-computer interface (BCI) research. Methods: BCI-related papers were retrieved from the Web of Science Core Collection, and bibliometric analysis and social network analysis were used to analyze the global competitive and collaborative landscape of BCI research and to identify the leading countries, institutions, and researchers in the field. Results: From 1990 to 2023, a total of 9,037 SCI papers were published in the global BCI field, of which 78.9% were produced during 2014-2023, a ten-year compound growth rate of 13.7%. China and the United States form the global first tier in publication output; by comparison, China started later but has grown faster, and its 2023 output reached three times that of the United States. In terms of collaboration, the United States exceeds China in both the number of collaborating partners (64) and the number of collaborations (1,507). Among the other countries, Germany (968 papers) and the United Kingdom (725 papers) stand out. At the level of institutions and researchers, the University of Tübingen in Germany and the University of Graz in Austria were early leaders in BCI research, but their paper output has declined markedly in recent years. By contrast, researchers such as Gao Xiaorong of Tsinghua University, Wang Yijun of the Chinese Academy of Sciences, Ming Dong of Tianjin University, Jin Jing of East China University of Science and Technology, and Li Yuanqing of South China University of Technology have produced many papers in recent years. Conclusion: The BCI field has developed rapidly over the past decade or so, with China and the United States leading global competition and collaboration. The United States started earlier but its growth has slowed in recent years, while China started later but is growing faster, with the Chinese Academy of Sciences, Tsinghua University, and Tianjin University performing most prominently.
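For readers unfamiliar with the metric, a ten-year compound growth rate such as the 13.7% quoted above follows the standard formula

$$ \mathrm{CAGR} \;=\; \left( \frac{N_{2023}}{N_{2013}} \right)^{1/10} - 1, $$

where $N_t$ is the number of papers published in year $t$; the year indices here are an illustrative assumption, not taken from the paper.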
Funding: Financial support from the Fundamental Research Funds for the Central Universities (No. buctrc201727) and the Natural Science Foundation of China (Nos. 21536001, 21722602, and 21322603).
Abstract: Cost-effective separation of acetylene (C_2H_2) and ethylene (C_2H_4) is of key importance for obtaining essential chemical raw materials for the polymer industry. Due to the low compression limit of C_2H_2, there is an urgent demand for suitable materials that can efficiently separate the two gases under ambient conditions. In this paper, we provide a high-throughput screening strategy to study porous metal-organic frameworks (MOFs) containing open metal sites (OMS) for C_2H_2/C_2H_4 separation, followed by a rational in-silico design of novel MOFs. A set of accurate force fields was established from ab initio calculations to describe the critical role of OMS towards guest molecules. From a large-scale computational screening of 916 experimental Cu-paddlewheel-based MOFs, three materials were identified with excellent separation performance. The structure-performance relationships revealed that the optimal materials should have a largest cavity diameter of around 5-10 Å and a pore volume between 0.3 and 1.0 cm^3 g^(-1). Based on the systematic screening results, three novel MOFs were further designed with the incorporation of fluorine functional groups. The results showed that Cu-OMS and the -F groups on the aromatic rings close to the Cu sites could generate a synergistic effect on the preferential adsorption of C_2H_2 over C_2H_4, leading to a remarkable improvement in the C_2H_2 separation performance of the materials. These findings could provide insight for the future experimental design and synthesis of high-performance nanostructured materials for C_2H_2/C_2H_4 separation.
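As a toy illustration of applying the reported screening window, the snippet below filters a hypothetical MOF table by the largest-cavity-diameter (5-10 Å) and pore-volume (0.3-1.0 cm^3/g) criteria; the records and property values are invented.

```python
# Hypothetical records: (name, largest_cavity_diameter_in_angstrom, pore_volume_cm3_per_g)
mofs = [
    ("MOF-A", 6.2, 0.45),
    ("MOF-B", 12.5, 1.40),
    ("MOF-C", 8.9, 0.82),
    ("MOF-D", 4.1, 0.35),
]

# Screening window reported above: LCD ~5-10 angstrom, pore volume 0.3-1.0 cm3/g.
candidates = [name for name, lcd, pv in mofs
              if 5.0 <= lcd <= 10.0 and 0.3 <= pv <= 1.0]
print(candidates)  # ['MOF-A', 'MOF-C']
```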
Funding: The corresponding authors extend their appreciation to the Deanship of Scientific Research, University of Hafr Al Batin, for funding this work through research group project No. G-108-2020.
Abstract: This paper presents a mathematical analysis of a dynamical system for avian influenza. The proposed model is a nonlinear dynamical model of birds and humans, with a half-saturated incidence rate used for the transmission of avian influenza infection. Rigorous mathematical results are presented for the proposed models. The local and global dynamics of each model are analyzed, and it is proven that when R0 < 1 the disease-free equilibrium of each model is both locally and globally stable, while when R0 > 1 the endemic equilibrium is both locally and globally stable. The numerical results obtained for the proposed model show that influenza could be eliminated from the community if the threshold does not exceed unity.
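For context, a half-saturated incidence rate commonly takes the form

$$ \lambda(S, I) \;=\; \frac{\beta S I}{H + I}, $$

where $\beta$ is the transmission coefficient, $S$ and $I$ are the susceptible and infected populations, and $H$ is the half-saturation constant, i.e., the infective level at which the incidence reaches half of its saturated value $\beta S$. The symbols here are the generic textbook ones and not necessarily the paper's notation.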
Abstract: Computational psychiatry is an emerging field that not only explores the biological basis of mental illness but also informs diagnosis and identifies the underlying mechanisms. One of its key strengths is that it can identify patterns in large datasets that are not otherwise easily identifiable, which may help researchers develop more effective treatments and interventions for mental health problems. This paper is a narrative review of the literature that develops an artificial intelligence ecosystem for computational psychiatry, comprising data acquisition, preparation, modeling, application, and evaluation. This approach allows researchers to integrate data from a variety of sources, such as brain imaging, genetics, and behavioral experiments, to obtain a more complete understanding of mental health conditions. Through data preprocessing, training, and testing, the data required for model building can be prepared. By using machine learning, neural networks, and other artificial intelligence methods, researchers have been able to develop diagnostic tools that can accurately identify mental health conditions based on a patient's symptoms and other factors. Despite the continuous development of and breakthroughs in computational psychiatry, it has not yet influenced routine clinical practice and still faces many challenges, such as data availability and quality, biological risks, equity, and data protection. As we make progress in this field, it is vital to ensure that computational psychiatry remains accessible and inclusive so that all researchers may contribute to this significant and exciting field.
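To ground the preprocessing/training/testing loop described above, here is a minimal sketch using synthetic stand-in features instead of real clinical data; the scikit-learn components are generic choices, not those of any particular study.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for multimodal features (imaging, genetics, behavior).
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preparation (scaling) + modeling (classifier) in one pipeline.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)                  # training
y_pred = model.predict(X_test)               # application
print(accuracy_score(y_test, y_pred))        # evaluation
```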