Sentiment analysis, a cornerstone of natural language processing, has witnessed remarkable advancements driven by deep learning models, which have demonstrated impressive accuracy in discerning sentiment from text across various domains. However, the deployment of such models in resource-constrained environments presents a unique set of challenges that require innovative solutions. Resource-constrained environments encompass scenarios where computing resources, memory, and energy availability are restricted. To empower sentiment analysis in such environments, we address this need by leveraging lightweight pre-trained models. These models, derived from popular architectures such as DistilBERT, MobileBERT, ALBERT, TinyBERT, ELECTRA, and SqueezeBERT, offer a promising solution to the resource limitations imposed by these environments. By distilling the knowledge from larger models into smaller ones and employing various optimization techniques, these lightweight models aim to strike a balance between performance and resource efficiency. This paper explores the performance of multiple lightweight pre-trained models on sentiment analysis tasks specific to such environments and provides insights into their viability for practical deployment.
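To make the kind of lightweight model the abstract surveys concrete, the following is a minimal sketch, assuming the Hugging Face transformers package and the publicly available distilbert-base-uncased-finetuned-sst-2-english checkpoint (neither is prescribed by the paper): it scores the sentiment of a few short texts with a distilled model suitable for constrained hardware.

```python
# Minimal sentiment-scoring sketch with a distilled (lightweight) checkpoint.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # assumed checkpoint
)

texts = [
    "The battery life on this device is excellent.",
    "The app keeps crashing and support never replies.",
]
for text, result in zip(texts, classifier(texts)):
    # Each result is a dict such as {"label": "POSITIVE", "score": 0.99}.
    print(f"{result['label']:8s} {result['score']:.3f}  {text}")
```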
The Internet of Things (IoT) has orchestrated various domains in numerous applications, contributing significantly to the growth of the smart world, even in regions with low literacy rates, and boosting socio-economic development. This study provides valuable insights into optimizing wireless communication, paving the way for a more connected and productive future in the mining industry. The IoT revolution is advancing across industries, but harsh geometric environments, including open-pit mines, pose unique challenges for reliable communication. The advent of IoT in the mining industry has significantly improved communication for critical operations through the use of Radio Frequency (RF) protocols such as Bluetooth, Wi-Fi, GSM/GPRS, Narrow Band (NB)-IoT, SigFox, ZigBee, and Long Range Wide Area Network (LoRaWAN). This study addresses the optimization of network implementations by comparing two leading free-spreading IoT-based RF protocols, ZigBee and LoRaWAN. Intensive field tests are conducted in various opencast mines to investigate coverage potential and signal attenuation. ZigBee is tested in the Tadicherla open-cast coal mine in India. Similarly, LoRaWAN field tests are conducted at an Associated Cement Companies (ACC) limestone mine in Bargarh, India, covering both Indoor-to-Outdoor (I2O) and Outdoor-to-Outdoor (O2O) environments. A robust framework of path-loss models, namely the Free Space, Egli, Okumura-Hata, Cost 231-Hata, and Ericsson models, combined with key performance metrics, is employed to evaluate the patterns of signal attenuation. Extensive field testing and careful data analysis revealed that the Egli model is the most consistent path-loss model for the ZigBee protocol in an I2O environment, with a coefficient of determination (R²) of 0.907 and balanced error metrics: a Normalized Root Mean Square Error (NRMSE) of 0.030, Mean Square Error (MSE) of 4.950, Mean Absolute Percentage Error (MAPE) of 0.249, and Scatter Index (SI) of 2.723. In the O2O scenario, the Ericsson model showed superior performance, with the highest R² value of 0.959, supported by strong correlation metrics: NRMSE of 0.026, MSE of 8.685, MAPE of 0.685, Mean Absolute Deviation (MAD) of 20.839, and SI of 2.194. For the LoRaWAN protocol, the Cost 231 model achieved the highest R² value of 0.921 in the I2O scenario, complemented by the lowest error metrics: NRMSE of 0.018, MSE of 1.324, MAPE of 0.217, MAD of 9.218, and SI of 1.238. In the O2O environment, the Okumura-Hata model achieved the highest R² value of 0.978, indicating a strong fit, with an NRMSE of 0.047, MSE of 27.807, MAPE of 27.494, MAD of 37.287, and SI of 3.927. This advancement in reliable communication networks promises to transform the opencast landscape into a networked environment despite pervasive signal attenuation. These results support decision-making for mining needs and ensure reliable communications even in the face of formidable obstacles.
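As an illustration of how one of the listed path-loss models is scored against field measurements, here is a minimal sketch; the distances, measured losses, and frequency are hypothetical stand-ins, and only the free-space model and a subset of the reported metrics (R², NRMSE, MSE, MAPE) are shown. One common NRMSE normalization (by the measurement range) is used.

```python
# Free-space path loss compared against hypothetical field measurements.
import numpy as np

def free_space_path_loss(d_km, f_mhz):
    """Free-space path loss in dB for distance d_km (km) and frequency f_mhz (MHz)."""
    return 20 * np.log10(d_km) + 20 * np.log10(f_mhz) + 32.44

# Hypothetical measurements: distances (km) and measured path loss (dB).
d = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
measured = np.array([74.0, 81.5, 86.0, 93.5, 99.0])

predicted = free_space_path_loss(d, f_mhz=2400.0)  # e.g. the 2.4 GHz ZigBee band

mse = np.mean((measured - predicted) ** 2)
nrmse = np.sqrt(mse) / (measured.max() - measured.min())
mape = np.mean(np.abs((measured - predicted) / measured))
ss_res = np.sum((measured - predicted) ** 2)
ss_tot = np.sum((measured - measured.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"R^2={r2:.3f}  NRMSE={nrmse:.3f}  MSE={mse:.3f}  MAPE={mape:.3f}")
```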
The spread of an advantageous mutation through a population is of fundamental interest in population genetics. While the classical Moran model is formulated for a well-mixed population, it has long been recognized that in real-world applications, the population usually has an explicit spatial structure which can significantly influence the dynamics. In the context of cancer initiation in epithelial tissue, several recent works have analyzed the dynamics of advantageous mutant spread on integer lattices, using the biased voter model from particle systems theory. In this spatial version of the Moran model, individuals first reproduce according to their fitness and then replace a neighboring individual. From a biological standpoint, the opposite dynamics, where individuals first die and are then replaced by a neighboring individual according to its fitness, are equally relevant. Here, we investigate this death-birth analogue of the biased voter model. We construct the process mathematically, derive the associated dual process, establish bounds on the survival probability of a single mutant, and prove that the process has an asymptotic shape. We also briefly discuss alternative birth-death and death-birth dynamics, depending on how the mutant fitness advantage affects the dynamics. We show that birth-death and death-birth formulations of the biased voter model are equivalent when fitness affects the former event of each update of the model, whereas the birth-death model is fundamentally different from the death-birth model when fitness affects the latter event.
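The death-birth dynamics described above can be illustrated with a small Monte Carlo sketch on a one-dimensional periodic lattice (the lattice size, fitness value, and run counts are illustrative, not taken from the paper): a uniformly chosen site dies and is replaced by a neighbour selected with probability proportional to its fitness, and the fixation probability of a single mutant is estimated empirically.

```python
# Death-birth biased voter model on a 1-D ring: empirical fixation probability.
import random

def death_birth_fixation(n_sites=30, fitness=1.5, n_runs=300, max_steps=30_000):
    """Estimate the fixation probability of a single mutant with fitness > 1."""
    fixations = 0
    for _ in range(n_runs):
        state = [0] * n_sites                      # 0 = wild type, 1 = mutant
        state[random.randrange(n_sites)] = 1
        mutants = 1
        for _ in range(max_steps):                 # runs are truncated for speed
            if mutants == 0 or mutants == n_sites:
                break
            i = random.randrange(n_sites)          # the site that dies
            left = state[(i - 1) % n_sites]
            right = state[(i + 1) % n_sites]
            # Replacement: each neighbour competes with weight equal to its fitness.
            w_left = fitness if left == 1 else 1.0
            w_right = fitness if right == 1 else 1.0
            new = left if random.random() < w_left / (w_left + w_right) else right
            mutants += new - state[i]
            state[i] = new
        fixations += mutants == n_sites
    return fixations / n_runs

print(f"estimated fixation probability: {death_birth_fixation():.3f}")
```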
The significant threat of wildfires to forest ecology and biodiversity, particularly in tropical and subtropical regions, underscores the necessity for advanced predictive models amidst shifting climate patterns. There is a need to evaluate and enhance wildfire prediction methods, focusing on their application during extended periods of intense heat and drought. This study reviews various wildfire modelling approaches, including traditional physical, semi-empirical, numerical, and emerging machine learning (ML)-based models. We critically assess these models' capabilities in predicting fire susceptibility and post-ignition spread, highlighting their strengths and limitations. Our findings indicate that while traditional models provide foundational insights, they often fall short in dynamically estimating parameters and predicting ignition events. Cellular automata models, despite their potential, face challenges in data integration and computational demands. Conversely, ML models demonstrate superior efficiency and accuracy by leveraging diverse datasets, though they encounter interpretability issues. This review recommends hybrid modelling approaches that integrate multiple methods to harness their combined strengths. By incorporating data assimilation techniques with dynamic forecasting models, the predictive capabilities of ML-based predictions can be significantly enhanced. This review underscores the necessity for continued refinement of these models to ensure their reliability in real-world applications, ultimately contributing to more effective wildfire mitigation and management strategies. Future research should focus on improving hybrid models and exploring new data integration methods to advance predictive capabilities.
Foundation models (FMs) have rapidly evolved and have achieved significant accomplishments in computer vision tasks. Specifically, the prompt mechanism conveniently allows users to integrate image prior information into the model, making it possible to apply models without any training. Therefore, we proposed a workflow based on foundation models and zero training to solve the tasks of photoacoustic (PA) image processing. We employed the Segment Anything Model (SAM) by setting simple prompts and integrating the model's outputs with prior knowledge of the imaged objects to accomplish various tasks, including: (1) removing the skin signal in three-dimensional PA image rendering; (2) dual speed-of-sound reconstruction; and (3) segmentation of finger blood vessels. Through these demonstrations, we have concluded that FMs can be directly applied in PA imaging without the requirement for network design and training. This potentially allows for a hands-on, convenient approach to achieving efficient and accurate segmentation of PA images. This paper serves as a comprehensive tutorial, facilitating mastery of the technique through the provision of code and sample datasets.
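A minimal sketch of the prompt-based, zero-training workflow the paper describes, assuming Meta's segment-anything package, a locally downloaded ViT-B checkpoint, and a hypothetical 2-D PA projection image file; the point coordinates are illustrative and would in practice come from prior knowledge of the imaged object.

```python
# Prompting SAM with a single positive point on a (hypothetical) PA projection image.
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # assumed local path
predictor = SamPredictor(sam)

image = np.array(Image.open("pa_projection.png").convert("RGB"))      # hypothetical input
predictor.set_image(image)

# One positive click on the structure of interest (pixel coordinates are illustrative).
point = np.array([[256, 180]])
label = np.array([1])                       # 1 = foreground, 0 = background
masks, scores, _ = predictor.predict(point_coords=point, point_labels=label,
                                     multimask_output=True)
best = masks[scores.argmax()]               # boolean mask of the selected structure
print(f"best mask covers {best.sum()} pixels (score {scores.max():.2f})")
```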
Purpose: Evaluating the quality of academic journal articles is a time-consuming but critical task for national research evaluation exercises, appointments, and promotion. It is therefore important to investigate whether Large Language Models (LLMs) can play a role in this process. Design/methodology/approach: This article assesses which ChatGPT inputs (full text without tables, figures, and references; title and abstract; title only) produce better quality score estimates, and the extent to which scores are affected by ChatGPT models and system prompts. Findings: The optimal input is the article title and abstract, with average ChatGPT scores based on these (30 iterations on a dataset of 51 papers) correlating at 0.67 with human scores, the highest ever reported. ChatGPT 4o is slightly better than 3.5-turbo (0.66) and 4o-mini (0.66). Research limitations: The data is a convenience sample of the work of a single author; it only includes one field, and the scores are self-evaluations. Practical implications: The results suggest that article full texts might confuse LLM research quality evaluations, even though complex system instructions for the task are more effective than simple ones. Thus, whilst abstracts contain insufficient information for a thorough assessment of rigour, they may contain strong pointers about originality and significance. Finally, linear regression can be used to convert the model scores into human-scale scores, which is 31% more accurate than guessing. Originality/value: This is the first systematic comparison of the impact of different prompts, parameters, and inputs for ChatGPT research quality evaluations.
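The score-averaging and linear rescaling step mentioned in the practical implications can be sketched with synthetic scores (not the paper's data): repeated model scores are averaged per article, correlated with human scores, and mapped onto the human scale by least squares, then compared against simply guessing the mean.

```python
# Averaging repeated LLM quality scores and converting them to the human scale.
import numpy as np

rng = np.random.default_rng(0)
n_papers, n_iters = 51, 30

human = rng.integers(1, 5, size=n_papers).astype(float)        # synthetic 1-4 human scores
# Hypothetical model scores: correlated with the human scores plus noise.
chatgpt_runs = human[:, None] * 0.6 + 1.0 + rng.normal(0, 0.8, (n_papers, n_iters))
chatgpt_mean = chatgpt_runs.mean(axis=1)                        # average of 30 iterations

r = np.corrcoef(chatgpt_mean, human)[0, 1]

# Least-squares line converting mean model scores onto the human scale.
slope, intercept = np.polyfit(chatgpt_mean, human, deg=1)
converted = slope * chatgpt_mean + intercept

mae_converted = np.mean(np.abs(converted - human))
mae_guess = np.mean(np.abs(human.mean() - human))               # "guessing" the mean score
print(f"r={r:.2f}  MAE converted={mae_converted:.2f}  MAE guessing={mae_guess:.2f}")
```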
AIM: To assess the feasibility of using different large language models (LLMs) for ocular surface diseases by selecting five LLMs and testing their accuracy in answering specialized questions related to ocular surface diseases: ChatGPT-4, ChatGPT-3.5, Claude 2, PaLM2, and SenseNova. METHODS: A group of experienced ophthalmology professors was asked to develop a 100-question single-choice examination on ocular surface diseases designed to assess the performance of LLMs and human participants in answering ophthalmology specialty exam questions. The exam includes questions on the following topics: keratitis (20 questions); keratoconus, keratomalacia, corneal dystrophy, corneal degeneration, erosive corneal ulcers, and corneal lesions associated with systemic diseases (20 questions); conjunctivitis (20 questions); trachoma, pterygium, and conjunctival tumor diseases (20 questions); and dry eye disease (20 questions). The total score of each LLM was then calculated, and their mean scores, mean correlations, variances, and confidence were compared. RESULTS: GPT-4 exhibited the highest performance among the LLMs. Comparing the average scores of the LLM group with the four human groups (chief physicians, attending physicians, regular trainees, and graduate students) showed that, except for ChatGPT-4, the total scores of the remaining LLMs were lower than that of the graduate student group, which had the lowest score among the human groups. Both ChatGPT-4 and PaLM2 were more likely to give exact and correct answers, with very little chance of an incorrect answer. ChatGPT-4 showed higher credibility when answering questions, with a success rate of 59%, but gave wrong answers 28% of the time. CONCLUSION: The GPT-4 model exhibits excellent performance in both answer relevance and confidence. PaLM2 shows a positive correlation (up to 0.8) in terms of answer accuracy during the exam. In terms of answer confidence, PaLM2 is second only to GPT-4 and surpasses Claude 2, SenseNova, and GPT-3.5. Even though ocular surface disease is a highly specialized discipline, GPT-4 still exhibits superior performance, suggesting that its potential and ability to be applied in this field are enormous, perhaps with the potential to be a valuable resource for medical students and clinicians in the future.
Deterministic compartment models (CMs) and stochastic models, including stochastic CMs and agent-based models, are widely utilized in epidemic modeling. However, the relationship between CMs and their corresponding stochastic models is not well understood. The present study aimed to address this gap by conducting a comparative study using the susceptible, exposed, infectious, and recovered (SEIR) model and its extended CMs from the coronavirus disease 2019 modeling literature. We demonstrated the equivalence of the numerical solution of CMs using the Euler scheme and their stochastic counterparts through theoretical analysis and simulations. Based on this equivalence, we proposed an efficient model calibration method that could replicate the exact solution of CMs in the corresponding stochastic models through parameter adjustment. The advancement in calibration techniques enhanced the accuracy of stochastic modeling in capturing the dynamics of epidemics. However, it should be noted that discrete-time stochastic models cannot perfectly reproduce the exact solution of continuous-time CMs. Additionally, we proposed a new stochastic compartment and agent mixed model as an alternative to agent-based models for large-scale population simulations with a limited number of agents. This model offered a balance between computational efficiency and accuracy. The results of this research contributed to the comparison and unification of deterministic CMs and stochastic models in epidemic modeling. Furthermore, the results had implications for the development of hybrid models that integrate the strengths of both frameworks. Overall, the present study has provided valuable epidemic modeling techniques and their practical applications for understanding and controlling the spread of infectious diseases.
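The deterministic/stochastic pairing studied here can be sketched as an Euler-discretized SEIR model next to a chain-binomial stochastic counterpart; the parameter values below are illustrative, not taken from the paper.

```python
# Euler-discretized SEIR model and a chain-binomial stochastic counterpart.
import numpy as np

beta, sigma, gamma = 0.4, 1 / 5, 1 / 7     # transmission, incubation, recovery rates
N, dt, steps = 100_000, 1.0, 160

def seir_euler():
    S, E, I, R = N - 10.0, 0.0, 10.0, 0.0
    traj = []
    for _ in range(steps):
        new_e = beta * S * I / N * dt
        new_i = sigma * E * dt
        new_r = gamma * I * dt
        S, E, I, R = S - new_e, E + new_e - new_i, I + new_i - new_r, R + new_r
        traj.append(I)
    return np.array(traj)

def seir_stochastic(rng):
    S, E, I, R = N - 10, 0, 10, 0
    traj = []
    for _ in range(steps):
        new_e = rng.binomial(S, 1 - np.exp(-beta * I / N * dt))
        new_i = rng.binomial(E, 1 - np.exp(-sigma * dt))
        new_r = rng.binomial(I, 1 - np.exp(-gamma * dt))
        S, E, I, R = S - new_e, E + new_e - new_i, I + new_i - new_r, R + new_r
        traj.append(I)
    return np.array(traj)

rng = np.random.default_rng(1)
det = seir_euler()
sto = np.mean([seir_stochastic(rng) for _ in range(50)], axis=0)
print(f"peak infectious  deterministic: {det.max():.0f}  stochastic mean: {sto.max():.0f}")
```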
Since the 1950s, when the Turing Test was introduced, there has been notable progress in machine language intelligence. Language modeling, crucial for AI development, has evolved from statistical to neural models over the last two decades. Recently, transformer-based Pre-trained Language Models (PLMs) have excelled in Natural Language Processing (NLP) tasks by leveraging large-scale training corpora. Increasing the scale of these models enhances performance significantly, introducing abilities such as in-context learning that smaller models lack. The advancement in Large Language Models, exemplified by the development of ChatGPT, has made significant impacts both academically and industrially, capturing widespread societal interest. This survey provides an overview of the development and prospects from Large Language Models (LLMs) to Large Multimodal Models (LMMs). It first discusses the contributions and technological advancements of LLMs in the field of natural language processing, especially in text generation and language understanding. It then turns to LMMs, which integrate various data modalities such as text, images, and sound, demonstrating advanced capabilities in understanding and generating cross-modal content and paving new pathways for the adaptability and flexibility of AI systems. Finally, the survey highlights the prospects of LMMs in terms of technological development and application potential, while also pointing out challenges in data integration and cross-modal understanding accuracy, providing a comprehensive perspective on the latest developments in this field.
Sporadic E (Es) layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km. Because they can significantly influence radio communications and navigation systems, accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems. In this study, we present Es predictions made by an empirical model and by a deep learning model, and analyze their differences comprehensively by comparing the model predictions to satellite radio occultation (RO) measurements and ground-based ionosonde observations. The deep learning model exhibited significantly better performance than the empirical model, as indicated by the higher correlation coefficient between its predictions and the RO observations (r = 0.87, versus r = 0.53 for the empirical model). This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling generally, and into predicting Es layer occurrences and characteristics in particular.
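The comparison metric reported above is a plain Pearson correlation between each model's predictions and the reference observations; a minimal sketch with synthetic values (not the study's data) is shown below.

```python
# Comparing two prediction series against reference observations by Pearson r.
import numpy as np

rng = np.random.default_rng(3)
observed = rng.normal(size=200)                        # stand-in for RO-derived Es values
deep_pred = observed * 0.9 + rng.normal(0, 0.4, 200)   # model tracking the observations closely
empirical_pred = observed * 0.5 + rng.normal(0, 0.9, 200)

for name, pred in [("deep learning", deep_pred), ("empirical", empirical_pred)]:
    r = np.corrcoef(observed, pred)[0, 1]
    print(f"{name:14s} r = {r:.2f}")
```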
Neuromyelitis optica spectrum disorders are neuroinflammatory demyelinating disorders that lead to permanent visual loss and motor dysfunction. To date, no effective treatment exists, as the exact causative mechanism remains unknown. Therefore, experimental models of neuromyelitis optica spectrum disorders are essential for exploring their pathogenesis and for screening therapeutic targets. Since most patients with neuromyelitis optica spectrum disorders are seropositive for IgG autoantibodies against aquaporin-4, which is highly expressed on the membrane of astrocyte endfeet, most current experimental models are based on aquaporin-4-IgG that initially targets astrocytes. These experimental models have successfully simulated many pathological features of neuromyelitis optica spectrum disorders, such as aquaporin-4 loss, astrocytopathy, granulocyte and macrophage infiltration, complement activation, demyelination, and neuronal loss; however, they do not fully capture the pathological process of human neuromyelitis optica spectrum disorders. In this review, we summarize the currently known pathogenic mechanisms and the development of associated experimental models in vitro, ex vivo, and in vivo for neuromyelitis optica spectrum disorders, suggest potential pathogenic mechanisms for further investigation, and provide guidance on experimental model choices. In addition, this review summarizes the latest information on pathologies and therapies for neuromyelitis optica spectrum disorders based on experimental models of aquaporin-4-IgG-seropositive neuromyelitis optica spectrum disorders, offering further therapeutic targets and a theoretical basis for clinical trials.
Rare neurological diseases, while individually rare, collectively impact millions globally, leading to diverse and often severe neurological symptoms. Often attributed to genetic mutations that disrupt protein function or structure, understanding their genetic basis is crucial for accurate diagnosis and targeted therapies. To investigate the underlying pathogenesis of these conditions, researchers often use non-mammalian model organisms, such as Drosophila (fruit flies), which are valued for their genetic manipulability, cost-efficiency, and preservation of genes and biological functions across evolutionary time. Genetic tools available in Drosophila, including CRISPR-Cas9, offer a means to manipulate gene expression, allowing for a deep exploration of the genetic underpinnings of rare neurological diseases. Drosophila boasts a versatile genetic toolkit, rapid generation turnover, and ease of large-scale experimentation, making it an invaluable resource for identifying potential drug candidates. Researchers can expose flies carrying disease-associated mutations to various compounds, rapidly pinpointing promising therapeutic agents for further investigation in mammalian models and, ultimately, clinical trials. In this comprehensive review, we explore rare neurological diseases where fly research has significantly contributed to our understanding of their genetic basis, pathophysiology, and potential therapeutic implications. We discuss rare diseases associated with both neuron-expressed and glial-expressed genes. Specific cases include mutations in CDK19 resulting in epilepsy and developmental delay, mutations in TIAM1 leading to a neurodevelopmental disorder with seizures and language delay, and mutations in IRF2BPL causing seizures, a neurodevelopmental disorder with regression, loss of speech, and abnormal movements. We also explore mutations in EMC1 related to cerebellar atrophy, visual impairment, and psychomotor retardation, and gain-of-function mutations in ACOX1 causing Mitchell syndrome. Loss-of-function mutations in ACOX1 result in ACOX1 deficiency, characterized by very-long-chain fatty acid accumulation and glial degeneration. Notably, this review highlights how modeling these diseases in Drosophila has provided valuable insights into their pathophysiology, offering a platform for the rapid identification of potential therapeutic interventions. Rare neurological diseases involve a wide range of expression systems, and sometimes common phenotypes can be found among different genes that cause abnormalities in neurons or glia. Furthermore, mutations within the same gene may result in varying functional outcomes, such as complete loss of function, partial loss of function, or gain of function. The phenotypes observed in patients can differ significantly, underscoring the complexity of these conditions. In conclusion, Drosophila represents an indispensable and cost-effective tool for investigating rare neurological diseases. By facilitating the modeling of these conditions, Drosophila contributes to a deeper understanding of their genetic basis, pathophysiology, and potential therapies. This approach accelerates the discovery of promising drug candidates, ultimately benefiting patients affected by these complex and understudied diseases.
Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to address these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, reducing the trainable parameters in a larger layer is more effective at preserving fine-tuning accuracy than doing so in a smaller layer. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
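One way to act on the layer-type finding is to assign different adapter ranks to MLP and self-attention projections. The sketch below assumes the Hugging Face transformers and peft packages, a LLaMA-style module naming scheme, a 4-bit quantized base model in the QLoRA style, and a peft release that supports rank_pattern; the model name and rank values are illustrative, not the paper's configuration.

```python
# LoRA adapters with a smaller rank on the (larger) MLP projections than on attention.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",                                   # hypothetical base model
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),    # QLoRA-style 4-bit base
)

lora_config = LoraConfig(
    r=16,                                             # default adapter rank (attention layers)
    lora_alpha=32,                                    # balancing factor discussed in the study
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    # Per-module overrides: the larger MLP projections tolerate a smaller rank.
    rank_pattern={"gate_proj": 4, "up_proj": 4, "down_proj": 4},
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()               # inspect the resulting parameter budget
```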
This study aimed to prepare landslide susceptibility maps for the Pithoragarh district in Uttarakhand, India, using advanced ensemble models that combined Radial Basis Function Networks (RBFN) with three ensemble learning techniques: DAGGING (DG), MULTIBOOST (MB), and ADABOOST (AB). This combination resulted in three distinct ensemble models: DG-RBFN, MB-RBFN, and AB-RBFN. Additionally, a traditional weighted method, Information Value (IV), and a benchmark machine learning (ML) model, Multilayer Perceptron Neural Network (MLP), were employed for comparison and validation. The models were developed using ten landslide conditioning factors, which included slope, aspect, elevation, curvature, land cover, geomorphology, overburden depth, lithology, distance to rivers, and distance to roads. These factors were used to predict the output variable, the probability of landslide occurrence. Statistical analysis of the models' performance indicated that the DG-RBFN model, with an Area Under the ROC Curve (AUC) of 0.931, outperformed the other models. The AB-RBFN model achieved an AUC of 0.929, the MB-RBFN model had an AUC of 0.913, and the MLP model recorded an AUC of 0.926. These results suggest that the advanced ensemble ML model DG-RBFN was more accurate than the traditional statistical model, the single MLP model, and the other ensemble models in preparing trustworthy landslide susceptibility maps, thereby enhancing land use planning and decision-making.
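As a rough analogue of the ensemble-versus-single-model comparison, the sketch below uses synthetic data in place of the ten conditioning factors and scikit-learn's MLP with bagging in place of the RBFN ensembles, scoring both by AUC; it assumes a recent scikit-learn release (the estimator keyword of BaggingClassifier), and none of the numbers correspond to the study's results.

```python
# Single neural network vs. a bagged ensemble, compared by AUC on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic "landslide inventory": 10 conditioning factors, binary occurrence label.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

base = make_pipeline(StandardScaler(),
                     MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0))
single = make_pipeline(StandardScaler(),
                       MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0))
ensemble = BaggingClassifier(estimator=base, n_estimators=10, random_state=0)

for name, model in [("single MLP", single), ("bagged ensemble", ensemble)]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:16s} AUC = {auc:.3f}")
```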
The kinetic characteristics of plasma-assisted oxidative pyrolysis of ammonia are studied using a hybrid global/fluid model solution method. Firstly, the stable products of plasma-assisted oxidative pyrolysis of ammonia are measured. The results show that the consumption of NH₃/O₂ and the production of N₂/H₂ change linearly with increasing voltage, which indicates the decoupling of non-equilibrium molecular excitation and oxidative pyrolysis of ammonia at low temperatures. Secondly, a detailed reaction kinetics mechanism of ammonia oxidative pyrolysis stimulated by a nanosecond pulse voltage at low pressure and room temperature is established. Based on reaction path analysis, a simplified mechanism is obtained. The detailed and simplified mechanism simulation results are compared with experimental data to verify the accuracy of the simplified mechanism. Finally, based on the simplified mechanism, a fluid model of ammonia oxidative pyrolysis stimulated by the nanosecond pulse plasma is established to study the pre-sheath/sheath behavior and the resulting consumption and formation of key species. The results show that the generation, development, and propagation of the pre-sheath have a great influence on the formation and consumption of species. The consumption of NH₃ by the cathode pre-sheath is greater than that by the anode pre-sheath, but the opposite is true for OH and O(¹S). Within the sheath, however, almost no reactions occur. Further, by varying the parameters of the nanosecond pulse power supply, it is found that the electron number density, electron current density, and applied peak voltages are not the direct reasons for the structural changes of the sheath and pre-sheath. Furthermore, the discharge interval has little effect on the sheath structure and gas mixture breakdown. The results of this paper not only help to understand the kinetic promotion of non-equilibrium excitation in the process of oxidative pyrolysis but also help to explore the influence of transport and chemical reaction kinetics on the oxidative pyrolysis of ammonia.
Frozen shoulder (FS), also known as adhesive capsulitis, is a condition that causes contraction and stiffness of the shoulder joint capsule. The main symptoms are persistent shoulder pain and a limited range of motion in all directions. These symptoms and the poor prognosis affect people's physical health and quality of life. Currently, the specific mechanisms of FS remain unclear, and there is variability in treatment methods and their efficacy. Additionally, the early symptoms of FS are difficult to distinguish from those of other shoulder diseases, complicating early diagnosis and treatment. Therefore, it is necessary to develop and utilize animal models to understand the pathogenesis of FS and to explore treatment strategies, providing insights into the prevention and treatment of human FS. This paper reviews the rat models available for FS research, including external immobilization models, surgical internal immobilization models, injection-based models, and endocrine-based models. It introduces the basic procedures for these models and compares and analyzes the advantages, disadvantages, and applicability of each modeling method. Finally, our paper summarizes the common methods for evaluating FS rat models.
This paper presents a comparative study of ARIMA and Neural Network AutoRegressive (NNAR) models for time series forecasting. The study focuses on simulated data generated using ARIMA(1, 1, 0) and applies both models for training and forecasting. Model performance is evaluated using MSE, AIC, and BIC. The models are further applied to neonatal mortality data from Saudi Arabia to assess their predictive capabilities. The results indicate that the NNAR model outperforms ARIMA in both training and forecasting.
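A minimal sketch of this kind of comparison, using simulated ARIMA(1,1,0) data as in the study: statsmodels' ARIMA is fitted directly, while a small autoregressive MLP stands in for R's NNAR (the paper's NNAR implementation is not specified here), and both are compared by out-of-sample MSE.

```python
# ARIMA vs. an NNAR-style (lagged-input neural network) forecast on simulated data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Simulate ARIMA(1,1,0): the differenced series is AR(1) with phi = 0.6.
n, phi = 300, 0.6
diff = np.zeros(n)
for t in range(1, n):
    diff[t] = phi * diff[t - 1] + rng.normal(scale=1.0)
series = np.cumsum(diff)

train, test = series[:250], series[250:]

# ARIMA(1, 1, 0) fit and multi-step forecast.
arima_fc = ARIMA(train, order=(1, 1, 0)).fit().forecast(steps=len(test))

# NNAR-style model: feed-forward net on p lagged values, forecasting recursively.
p = 3
X = np.column_stack([train[i:len(train) - p + i] for i in range(p)])  # lags, oldest first
y = train[p:]
nnar = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0).fit(X, y)

history = list(train[-p:])
nnar_fc = []
for _ in range(len(test)):
    pred = nnar.predict(np.array(history[-p:]).reshape(1, -1))[0]
    nnar_fc.append(pred)
    history.append(pred)                       # feed the prediction back in

print(f"ARIMA MSE: {np.mean((test - arima_fc) ** 2):.3f}")
print(f"NNAR  MSE: {np.mean((test - np.array(nnar_fc)) ** 2):.3f}")
```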
Modeling HIV/AIDS progression is critical for understanding disease dynamics and improving patient care. This study compares the Exponential and Weibull survival models, focusing on their ability to capture state-specific failure rates in HIV/AIDS progression. While the Exponential model offers simplicity with a constant hazard rate, it often fails to accommodate the complexities of dynamic disease progression. In contrast, the Weibull model provides flexibility by allowing hazard rates to vary over time. Both models are evaluated within the frameworks of the Cox Proportional Hazards (Cox PH) and Accelerated Failure Time (AFT) models, incorporating critical covariates such as age, gender, CD4 count, and ART status. Statistical evaluation metrics, including Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), log-likelihood, and Pseudo-R2, were employed to assess model performance across diverse patient subgroups. Results indicate that the Weibull model consistently outperforms the Exponential model in dynamic scenarios, such as younger patients and those with co-infections, while maintaining robustness in stable contexts. This study highlights the trade-off between flexibility and simplicity in survival modeling, advocating for tailored model selection to balance interpretability and predictive accuracy. These findings provide valuable insights for optimizing HIV/AIDS management strategies and advancing survival analysis methodologies.
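The Exponential-versus-Weibull comparison can be sketched with the lifelines package on synthetic right-censored durations (not the study's data), comparing the two fits by log-likelihood and AIC; when the underlying hazard varies over time, the Weibull fit is expected to score better, which mirrors the trade-off the abstract describes.

```python
# Exponential vs. Weibull survival fits compared by log-likelihood and AIC.
import numpy as np
from lifelines import ExponentialFitter, WeibullFitter

rng = np.random.default_rng(0)

# Hypothetical progression times (months) with a time-varying hazard
# (Weibull shape > 1), right-censored at 60 months.
true_times = rng.weibull(1.5, size=500) * 40
observed = true_times <= 60                    # event indicator (False = censored)
durations = np.minimum(true_times, 60)

for fitter in (ExponentialFitter(), WeibullFitter()):
    fitter.fit(durations, event_observed=observed)
    name = fitter.__class__.__name__
    print(f"{name:18s} log-lik = {fitter.log_likelihood_:9.1f}   AIC = {fitter.AIC_:9.1f}")
```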
The UK’s economic growth has witnessed instability in recent years. While some sectors recorded positive performances and others negative, these unstable economic performances led to a technical recession in the third and fourth quarters of 2023. This study assessed the efficacy of the Generalised Additive Model for Location, Scale and Shape (GAMLSS), a flexible distributional regression with smoothing additive terms, in forecasting UK economic growth in-sample and out-of-sample, relative to the conventional Autoregressive Distributed Lag (ARDL) and Error Correction Model (ECM). The aim was to investigate the effectiveness and efficiency of GAMLSS models, within a machine learning framework using a rolling window, over the conventional time series econometric models. This quantitative study adopts a dataset obtained from the Office for National Statistics, covering 105 monthly observations of major economic indicators in the UK from January 2015 to September 2023. It consists of eleven variables: economic growth (Econ), consumer price index (CPI), inflation (Infl), manufacturing (Manuf), electricity and gas (ElGas), construction (Const), industries (Ind), wholesale and retail (WRet), real estate (REst), education (Edu), and health (Health). All computations and graphics in this study were obtained using R software version 4.4.1. The study revealed that GAMLSS models demonstrate superior forecast accuracy over the ARDL and ECM models. Unlike other models used in the literature, the GAMLSS models were able to forecast both future economic growth and the future distribution of that growth, thereby contributing to the empirical literature. The study identified manufacturing, electricity and gas, construction, industries, wholesale and retail, real estate, education, and health as key drivers of UK economic growth.
Influenced by complex external factors, the displacement-time curve of reservoir landslides demonstrates both short-term and long-term diversity and dynamic complexity. It is difficult for existing methods, including regression models and neural network models, to perform multi-characteristic coupled displacement prediction because they fail to consider landslide creep characteristics. This paper integrates the creep characteristics of landslides with non-linear intelligent algorithms and proposes a dynamic intelligent landslide displacement prediction method based on a combination of the Biological Growth model (BG), Convolutional Neural Network (CNN), and Long Short-Term Memory network (LSTM). This prediction approach improves three different biological growth models, thereby effectively extracting landslide creep characteristic parameters. Simultaneously, it integrates external factors (rainfall and reservoir water level) to construct a comprehensive internal and external dataset for data augmentation, which is input into the improved CNN-LSTM model. Thereafter, harnessing the robust feature extraction capabilities and spatial translation invariance of the CNN, the model autonomously captures short-term local fluctuation characteristics of landslide displacement, and combines the LSTM's efficient handling of long-term nonlinear temporal data to improve prediction performance. An evaluation of the Liangshuijing landslide in the Three Gorges Reservoir Area indicates that BG-CNN-LSTM exhibits high prediction accuracy and excellent generalization capabilities when dealing with various types of landslides. The research provides an innovative approach to achieving whole-process, real-time, high-precision displacement prediction for multi-characteristic coupled landslides.
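A minimal sketch of the CNN-LSTM backbone of the kind the paper couples with biological-growth features, written in PyTorch with random stand-in data: a 1-D convolution captures short-term local fluctuations in the input window (displacement, rainfall, reservoir level), and an LSTM models the longer-term temporal dependence; all layer sizes and window lengths are illustrative, not the paper's configuration.

```python
# CNN-LSTM backbone for next-step displacement prediction (random stand-in data).
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1),  # short-term local patterns
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=16, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)           # next-step displacement

    def forward(self, x):                           # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2))            # Conv1d expects (batch, features, time)
        out, _ = self.lstm(z.transpose(1, 2))       # back to (batch, time, channels)
        return self.head(out[:, -1])                # predict from the last time step

# Random stand-in for windows of (displacement, rainfall, reservoir level).
x = torch.randn(8, 12, 3)       # 8 samples, 12 monthly steps, 3 features
y = torch.randn(8, 1)

model = CNNLSTM()
loss = nn.MSELoss()(model(x), y)
loss.backward()
print(f"untrained MSE on random data: {loss.item():.3f}")
```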
文摘Sentiment analysis,a cornerstone of natural language processing,has witnessed remarkable advancements driven by deep learning models which demonstrated impressive accuracy in discerning sentiment from text across various domains.However,the deployment of such models in resource-constrained environments presents a unique set of challenges that require innovative solutions.Resource-constrained environments encompass scenarios where computing resources,memory,and energy availability are restricted.To empower sentiment analysis in resource-constrained environments,we address the crucial need by leveraging lightweight pre-trained models.These models,derived from popular architectures such as DistilBERT,MobileBERT,ALBERT,TinyBERT,ELECTRA,and SqueezeBERT,offer a promising solution to the resource limitations imposed by these environments.By distilling the knowledge from larger models into smaller ones and employing various optimization techniques,these lightweight models aim to strike a balance between performance and resource efficiency.This paper endeavors to explore the performance of multiple lightweight pre-trained models in sentiment analysis tasks specific to such environments and provide insights into their viability for practical deployment.
文摘The Internet of Things(IoT)has orchestrated various domains in numerous applications,contributing significantly to the growth of the smart world,even in regions with low literacy rates,boosting socio-economic development.This study provides valuable insights into optimizing wireless communication,paving the way for a more connected and productive future in the mining industry.The IoT revolution is advancing across industries,but harsh geometric environments,including open-pit mines,pose unique challenges for reliable communication.The advent of IoT in the mining industry has significantly improved communication for critical operations through the use of Radio Frequency(RF)protocols such as Bluetooth,Wi-Fi,GSM/GPRS,Narrow Band(NB)-IoT,SigFox,ZigBee,and Long Range Wireless Area Network(LoRaWAN).This study addresses the optimization of network implementations by comparing two leading free-spreading IoT-based RF protocols such as ZigBee and LoRaWAN.Intensive field tests are conducted in various opencast mines to investigate coverage potential and signal attenuation.ZigBee is tested in the Tadicherla open-cast coal mine in India.Similarly,LoRaWAN field tests are conducted at one of the associated cement companies(ACC)in the limestone mine in Bargarh,India,covering both Indoor-toOutdoor(I2O)and Outdoor-to-Outdoor(O2O)environments.A robust framework of path-loss models,referred to as Free space,Egli,Okumura-Hata,Cost231-Hata and Ericsson models,combined with key performance metrics,is employed to evaluate the patterns of signal attenuation.Extensive field testing and careful data analysis revealed that the Egli model is the most consistent path-loss model for the ZigBee protocol in an I2O environment,with a coefficient of determination(R^(2))of 0.907,balanced error metrics such as Normalized Root Mean Square Error(NRMSE)of 0.030,Mean Square Error(MSE)of 4.950,Mean Absolute Percentage Error(MAPE)of 0.249 and Scatter Index(SI)of 2.723.In the O2O scenario,the Ericsson model showed superior performance,with the highest R^(2)value of 0.959,supported by strong correlation metrics:NRMSE of 0.026,MSE of 8.685,MAPE of 0.685,Mean Absolute Deviation(MAD)of 20.839 and SI of 2.194.For the LoRaWAN protocol,the Cost-231 model achieved the highest R^(2)value of 0.921 in the I2O scenario,complemented by the lowest metrics:NRMSE of 0.018,MSE of 1.324,MAPE of 0.217,MAD of 9.218 and SI of 1.238.In the O2O environment,the Okumura-Hata model achieved the highest R^(2)value of 0.978,indicating a strong fit with metrics NRMSE of 0.047,MSE of 27.807,MAPE of 27.494,MAD of 37.287 and SI of 3.927.This advancement in reliable communication networks promises to transform the opencast landscape into networked signal attenuation.These results support decision-making for mining needs and ensure reliable communications even in the face of formidable obstacles.
基金supported in part by the NIH grant R01CA241134supported in part by the NSF grant CMMI-1552764+3 种基金supported in part by the NSF grants DMS-1349724 and DMS-2052465supported in part by the NSF grant CCF-1740761supported in part by the U.S.-Norway Fulbright Foundation and the Research Council of Norway R&D Grant 309273supported in part by the Norwegian Centennial Chair grant and the Doctoral Dissertation Fellowship from the University of Minnesota.
文摘The spread of an advantageous mutation through a population is of fundamental interest in population genetics. While the classical Moran model is formulated for a well-mixed population, it has long been recognized that in real-world applications, the population usually has an explicit spatial structure which can significantly influence the dynamics. In the context of cancer initiation in epithelial tissue, several recent works have analyzed the dynamics of advantageous mutant spread on integer lattices, using the biased voter model from particle systems theory. In this spatial version of the Moran model, individuals first reproduce according to their fitness and then replace a neighboring individual. From a biological standpoint, the opposite dynamics, where individuals first die and are then replaced by a neighboring individual according to its fitness, are equally relevant. Here, we investigate this death-birth analogue of the biased voter model. We construct the process mathematically, derive the associated dual process, establish bounds on the survival probability of a single mutant, and prove that the process has an asymptotic shape. We also briefly discuss alternative birth-death and death-birth dynamics, depending on how the mutant fitness advantage affects the dynamics. We show that birth-death and death-birth formulations of the biased voter model are equivalent when fitness affects the former event of each update of the model, whereas the birth-death model is fundamentally different from the death-birth model when fitness affects the latter event.
基金funding enabled and organized by CAUL and its Member Institutions.
文摘The significant threat of wildfires to forest ecology and biodiversity,particularly in tropical and subtropical regions,underscores the necessity for advanced predictive models amidst shifting climate patterns.There is a need to evaluate and enhance wildfire prediction methods,focusing on their application during extended periods of intense heat and drought.This study reviews various wildfire modelling approaches,including traditional physical,semi-empirical,numerical,and emerging machine learning(ML)-based models.We critically assess these models’capabilities in predicting fire susceptibility and post-ignition spread,highlighting their strengths and limitations.Our findings indicate that while traditional models provide foundational insights,they often fall short in dynamically estimating parameters and predicting ignition events.Cellular automata models,despite their potential,face challenges in data integration and computational demands.Conversely,ML models demonstrate superior efficiency and accuracy by leveraging diverse datasets,though they encounter interpretability issues.This review recommends hybrid modelling approaches that integrate multiple methods to harness their combined strengths.By incorporating data assimilation techniques with dynamic forecasting models,the predictive capabilities of ML-based predictions can be significantly enhanced.This review underscores the necessity for continued refinement of these models to ensure their reliability in real-world applications,ultimately contributing to more effective wildfire mitigation and management strategies.Future research should focus on improving hybrid models and exploring new data integration methods to advance predictive capabilities.
基金support from Strategic Project of Precision Surgery,Tsinghua UniversityInitiative Scientific Research Program,Institute for Intelligent Healthcare,Tsinghua University+5 种基金Tsinghua-Foshan Institute of Advanced ManufacturingNational Natural Science Foundation of China(61735016)Beijing Nova Program(20230484308)Young Elite Scientists Sponsorship Program by CAST(2023QNRC001)Youth Elite Program of Beijing Friendship Hospital(YYQCJH2022-9)Science and Technology Program of Beijing Tongzhou District(KJ2023CX012).
文摘Foundation models(FMs)have rapidly evolved and have achieved signicant accomplishments in computer vision tasks.Specically,the prompt mechanism conveniently allows users to integrate image prior information into the model,making it possible to apply models without any training.Therefore,we proposed a workflow based on foundation models and zero training to solve the tasks of photoacoustic(PA)image processing.We employed the Segment Anything Model(SAM)by setting simple prompts and integrating the model's outputs with prior knowledge of the imaged objects to accomplish various tasks,including:(1)removing the skin signal in three-dimensional PA image rendering;(2)dual speed-of-sound reconstruction,and(3)segmentation ofnger blood vessels.Through these demonstrations,we have concluded that FMs can be directly applied in PA imaging without the requirement for network design and training.This potentially allows for a hands-on,convenient approach to achieving efficient and accurate segmentation of PA images.This paper serves as a comprehensive tutorial,facilitating the mastery of the technique through the provision of code and sample datasets.
文摘Purpose:Evaluating the quality of academic journal articles is a time consuming but critical task for national research evaluation exercises,appointments and promotion.It is therefore important to investigate whether Large Language Models(LLMs)can play a role in this process.Design/methodology/approach:This article assesses which ChatGPT inputs(full text without tables,figures,and references;title and abstract;title only)produce better quality score estimates,and the extent to which scores are affected by ChatGPT models and system prompts.Findings:The optimal input is the article title and abstract,with average ChatGPT scores based on these(30 iterations on a dataset of 51 papers)correlating at 0.67 with human scores,the highest ever reported.ChatGPT 4o is slightly better than 3.5-turbo(0.66),and 4o-mini(0.66).Research limitations:The data is a convenience sample of the work of a single author,it only includes one field,and the scores are self-evaluations.Practical implications:The results suggest that article full texts might confuse LLM research quality evaluations,even though complex system instructions for the task are more effective than simple ones.Thus,whilst abstracts contain insufficient information for a thorough assessment of rigour,they may contain strong pointers about originality and significance.Finally,linear regression can be used to convert the model scores into the human scale scores,which is 31%more accurate than guessing.Originality/value:This is the first systematic comparison of the impact of different prompts,parameters and inputs for ChatGPT research quality evaluations.
基金Supported by National Natural Science Foundation of China(No.82160195,No.82460203)Degree and Postgraduate Education Teaching Reform Project of Jiangxi Province(No.JXYJG-2020-026).
文摘AIM:To assess the possibility of using different large language models(LLMs)in ocular surface diseases by selecting five different LLMS to test their accuracy in answering specialized questions related to ocular surface diseases:ChatGPT-4,ChatGPT-3.5,Claude 2,PaLM2,and SenseNova.METHODS:A group of experienced ophthalmology professors were asked to develop a 100-question singlechoice question on ocular surface diseases designed to assess the performance of LLMs and human participants in answering ophthalmology specialty exam questions.The exam includes questions on the following topics:keratitis disease(20 questions),keratoconus,keratomalaciac,corneal dystrophy,corneal degeneration,erosive corneal ulcers,and corneal lesions associated with systemic diseases(20 questions),conjunctivitis disease(20 questions),trachoma,pterygoid and conjunctival tumor diseases(20 questions),and dry eye disease(20 questions).Then the total score of each LLMs and compared their mean score,mean correlation,variance,and confidence were calculated.RESULTS:GPT-4 exhibited the highest performance in terms of LLMs.Comparing the average scores of the LLMs group with the four human groups,chief physician,attending physician,regular trainee,and graduate student,it was found that except for ChatGPT-4,the total score of the rest of the LLMs is lower than that of the graduate student group,which had the lowest score in the human group.Both ChatGPT-4 and PaLM2 were more likely to give exact and correct answers,giving very little chance of an incorrect answer.ChatGPT-4 showed higher credibility when answering questions,with a success rate of 59%,but gave the wrong answer to the question 28% of the time.CONCLUSION:GPT-4 model exhibits excellent performance in both answer relevance and confidence.PaLM2 shows a positive correlation(up to 0.8)in terms of answer accuracy during the exam.In terms of answer confidence,PaLM2 is second only to GPT4 and surpasses Claude 2,SenseNova,and GPT-3.5.Despite the fact that ocular surface disease is a highly specialized discipline,GPT-4 still exhibits superior performance,suggesting that its potential and ability to be applied in this field is enormous,perhaps with the potential to be a valuable resource for medical students and clinicians in the future.
基金supported by the National Natural Science Foundation of China(Grant Nos.82173620 to Yang Zhao and 82041024 to Feng Chen)partially supported by the Bill&Melinda Gates Foundation(Grant No.INV-006371 to Feng Chen)Priority Academic Program Development of Jiangsu Higher Education Institutions.
文摘Deterministic compartment models(CMs)and stochastic models,including stochastic CMs and agent-based models,are widely utilized in epidemic modeling.However,the relationship between CMs and their corresponding stochastic models is not well understood.The present study aimed to address this gap by conducting a comparative study using the susceptible,exposed,infectious,and recovered(SEIR)model and its extended CMs from the coronavirus disease 2019 modeling literature.We demonstrated the equivalence of the numerical solution of CMs using the Euler scheme and their stochastic counterparts through theoretical analysis and simulations.Based on this equivalence,we proposed an efficient model calibration method that could replicate the exact solution of CMs in the corresponding stochastic models through parameter adjustment.The advancement in calibration techniques enhanced the accuracy of stochastic modeling in capturing the dynamics of epidemics.However,it should be noted that discrete-time stochastic models cannot perfectly reproduce the exact solution of continuous-time CMs.Additionally,we proposed a new stochastic compartment and agent mixed model as an alternative to agent-based models for large-scale population simulations with a limited number of agents.This model offered a balance between computational efficiency and accuracy.The results of this research contributed to the comparison and unification of deterministic CMs and stochastic models in epidemic modeling.Furthermore,the results had implications for the development of hybrid models that integrated the strengths of both frameworks.Overall,the present study has provided valuable epidemic modeling techniques and their practical applications for understanding and controlling the spread of infectious diseases.
基金We acknowledge funding from NSFC Grant 62306283.
文摘Since the 1950s,when the Turing Test was introduced,there has been notable progress in machine language intelligence.Language modeling,crucial for AI development,has evolved from statistical to neural models over the last two decades.Recently,transformer-based Pre-trained Language Models(PLM)have excelled in Natural Language Processing(NLP)tasks by leveraging large-scale training corpora.Increasing the scale of these models enhances performance significantly,introducing abilities like context learning that smaller models lack.The advancement in Large Language Models,exemplified by the development of ChatGPT,has made significant impacts both academically and industrially,capturing widespread societal interest.This survey provides an overview of the development and prospects from Large Language Models(LLM)to Large Multimodal Models(LMM).It first discusses the contributions and technological advancements of LLMs in the field of natural language processing,especially in text generation and language understanding.Then,it turns to the discussion of LMMs,which integrates various data modalities such as text,images,and sound,demonstrating advanced capabilities in understanding and generating cross-modal content,paving new pathways for the adaptability and flexibility of AI systems.Finally,the survey highlights the prospects of LMMs in terms of technological development and application potential,while also pointing out challenges in data integration,cross-modal understanding accuracy,providing a comprehensive perspective on the latest developments in this field.
基金supported by the Project of Stable Support for Youth Team in Basic Research Field,CAS(grant No.YSBR-018)the National Natural Science Foundation of China(grant Nos.42188101,42130204)+4 种基金the B-type Strategic Priority Program of CAS(grant no.XDB41000000)the National Natural Science Foundation of China(NSFC)Distinguished Overseas Young Talents Program,Innovation Program for Quantum Science and Technology(2021ZD0300301)the Open Research Project of Large Research Infrastructures of CAS-“Study on the interaction between low/mid-latitude atmosphere and ionosphere based on the Chinese Meridian Project”.The project was supported also by the National Key Laboratory of Deep Space Exploration(Grant No.NKLDSE2023A002)the Open Fund of Anhui Provincial Key Laboratory of Intelligent Underground Detection(Grant No.APKLIUD23KF01)the China National Space Administration(CNSA)pre-research Project on Civil Aerospace Technologies No.D010305,D010301.
Abstract: Sporadic E (Es) layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km. Because they can significantly influence radio communications and navigation systems, accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems. In this study, we present Es predictions made by an empirical model and by a deep learning model, and analyze their differences comprehensively by comparing the model predictions to satellite radio occultation (RO) measurements and ground-based ionosonde observations. The deep learning model exhibited significantly better performance, as indicated by the high correlation coefficient (r = 0.87) between its predictions and the RO observations, than did the empirical model (r = 0.53). This study highlights the importance of integrating artificial intelligence technology into ionosphere modeling in general, and into predicting Es layer occurrences and characteristics in particular.
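As a minimal illustration of the evaluation metric quoted above, the sketch below computes the Pearson correlation coefficient between model predictions and RO observations; the arrays are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical placeholder arrays; in the study these would be co-located
# Es-layer predictions and radio occultation (RO) observations.
ro_obs   = np.array([0.20, 0.50, 0.90, 0.40, 0.70, 0.10, 0.80])
dl_pred  = np.array([0.25, 0.45, 0.85, 0.50, 0.65, 0.15, 0.75])
emp_pred = np.array([0.40, 0.30, 0.60, 0.60, 0.40, 0.30, 0.50])

# Pearson correlation coefficient r of each model against the observations.
r_dl  = np.corrcoef(ro_obs, dl_pred)[0, 1]
r_emp = np.corrcoef(ro_obs, emp_pred)[0, 1]
print(f"deep learning r = {r_dl:.2f}, empirical r = {r_emp:.2f}")
```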
Abstract: Neuromyelitis optica spectrum disorders are neuroinflammatory demyelinating disorders that lead to permanent visual loss and motor dysfunction. To date, no effective treatment exists, as the exact causative mechanism remains unknown. Therefore, experimental models of neuromyelitis optica spectrum disorders are essential for exploring their pathogenesis and screening for therapeutic targets. Since most patients with neuromyelitis optica spectrum disorders are seropositive for IgG autoantibodies against aquaporin-4, which is highly expressed on the membrane of astrocyte endfeet, most current experimental models are based on aquaporin-4-IgG that initially targets astrocytes. These experimental models have successfully simulated many pathological features of neuromyelitis optica spectrum disorders, such as aquaporin-4 loss, astrocytopathy, granulocyte and macrophage infiltration, complement activation, demyelination, and neuronal loss; however, they do not fully capture the pathological process of human neuromyelitis optica spectrum disorders. In this review, we summarize the currently known pathogenic mechanisms and the development of associated experimental models in vitro, ex vivo, and in vivo for neuromyelitis optica spectrum disorders, suggest potential pathogenic mechanisms for further investigation, and provide guidance on the choice of experimental models. In addition, this review summarizes the latest information on pathologies and therapies for neuromyelitis optica spectrum disorders based on experimental models of aquaporin-4-IgG-seropositive neuromyelitis optica spectrum disorders, offering further therapeutic targets and a theoretical basis for clinical trials.
Funding: Supported by the Warren Alpert Foundation and the Houston Methodist Academic Institute Laboratory Operating Fund (to HLC).
Abstract: Rare neurological diseases, while individually rare, collectively impact millions globally, leading to diverse and often severe neurological symptoms. Because they are often attributed to genetic mutations that disrupt protein function or structure, understanding their genetic basis is crucial for accurate diagnosis and targeted therapies. To investigate the underlying pathogenesis of these conditions, researchers often use non-mammalian model organisms such as Drosophila (fruit flies), which are valued for their genetic manipulability, cost-efficiency, and the preservation of genes and biological functions across evolutionary time. Genetic tools available in Drosophila, including CRISPR-Cas9, offer a means to manipulate gene expression, allowing for a deep exploration of the genetic underpinnings of rare neurological diseases. Drosophila boasts a versatile genetic toolkit, rapid generation turnover, and ease of large-scale experimentation, making it an invaluable resource for identifying potential drug candidates. Researchers can expose flies carrying disease-associated mutations to various compounds, rapidly pinpointing promising therapeutic agents for further investigation in mammalian models and, ultimately, clinical trials. In this comprehensive review, we explore rare neurological diseases where fly research has significantly contributed to our understanding of their genetic basis, pathophysiology, and potential therapeutic implications. We discuss rare diseases associated with both neuron-expressed and glial-expressed genes. Specific cases include mutations in CDK19 resulting in epilepsy and developmental delay; mutations in TIAM1 leading to a neurodevelopmental disorder with seizures and language delay; and mutations in IRF2BPL causing seizures and a neurodevelopmental disorder with regression, loss of speech, and abnormal movements. We also explore mutations in EMC1 related to cerebellar atrophy, visual impairment, and psychomotor retardation, as well as gain-of-function mutations in ACOX1 causing Mitchell syndrome. Loss-of-function mutations in ACOX1 result in ACOX1 deficiency, characterized by very-long-chain fatty acid accumulation and glial degeneration. Notably, this review highlights how modeling these diseases in Drosophila has provided valuable insights into their pathophysiology, offering a platform for the rapid identification of potential therapeutic interventions. Rare neurological diseases involve a wide range of expression systems, and common phenotypes can sometimes be found among different genes that cause abnormalities in neurons or glia. Furthermore, mutations within the same gene may result in varying functional outcomes, such as complete loss of function, partial loss of function, or gain of function. The phenotypes observed in patients can differ significantly, underscoring the complexity of these conditions. In conclusion, Drosophila represents an indispensable and cost-effective tool for investigating rare neurological diseases. By facilitating the modeling of these conditions, Drosophila contributes to a deeper understanding of their genetic basis, pathophysiology, and potential therapies. This approach accelerates the discovery of promising drug candidates, ultimately benefiting patients affected by these complex and understudied diseases.
Funding: Supported by the National Key R&D Program of China (No. 2021YFB0301200) and the National Natural Science Foundation of China (No. 62025208).
Abstract: Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions that address these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the trainable parameters of a larger layer preserves fine-tuning accuracy more effectively than doing so in a smaller layer. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
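To make the rank-versus-layer-size trade-off concrete, here is a minimal PyTorch sketch of a LoRA-style adapter on a frozen linear layer; the layer dimensions, ranks, and scaling values are illustrative assumptions rather than the configurations evaluated in the study. It shows why a large MLP projection can absorb a small adapter rank while a smaller attention projection devotes a proportionally larger share of its parameters to the same kind of adaptation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update: W x + (alpha/r) B A x."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)            # base weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r                           # the "balancing factor"

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

def trainable_fraction(layer):
    total = sum(p.numel() for p in layer.parameters())
    train = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    return train / total

# Illustrative sizes: a "large" MLP up-projection vs. a "small" attention projection.
mlp_proj  = LoRALinear(4096, 11008, r=4)   # large layer with a small adapter rank
attn_proj = LoRALinear(4096, 4096,  r=16)  # smaller layer given a larger rank
print(f"MLP adapter fraction of parameters:  {trainable_fraction(mlp_proj):.4%}")
print(f"Attn adapter fraction of parameters: {trainable_fraction(attn_proj):.4%}")
```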
Funding: Supported by the University of Transport Technology under the project entitled "Application of Machine Learning Algorithms in Landslide Susceptibility Mapping in Mountainous Areas", grant number DTTD2022-16.
Abstract: This study aimed to prepare landslide susceptibility maps for the Pithoragarh district in Uttarakhand, India, using advanced ensemble models that combined Radial Basis Function Networks (RBFN) with three ensemble learning techniques: DAGGING (DG), MULTIBOOST (MB), and ADABOOST (AB). This combination resulted in three distinct ensemble models: DG-RBFN, MB-RBFN, and AB-RBFN. Additionally, a traditional weighted method, Information Value (IV), and a benchmark machine learning (ML) model, the Multilayer Perceptron Neural Network (MLP), were employed for comparison and validation. The models were developed using ten landslide conditioning factors: slope, aspect, elevation, curvature, land cover, geomorphology, overburden depth, lithology, distance to rivers, and distance to roads. These factors were used to predict the output variable, the probability of landslide occurrence. Statistical analysis of the models' performance indicated that the DG-RBFN model, with an Area Under the ROC Curve (AUC) of 0.931, outperformed the other models. The AB-RBFN model achieved an AUC of 0.929, the MB-RBFN model an AUC of 0.913, and the MLP model an AUC of 0.926. These results suggest that the advanced ensemble ML model DG-RBFN was more accurate than the traditional statistical model, the single MLP model, and the other ensemble models in preparing trustworthy landslide susceptibility maps, thereby enhancing land use planning and decision-making.
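The AUC-based comparison can be outlined with scikit-learn as in the sketch below; the synthetic features stand in for the ten conditioning factors, and a bagged multilayer perceptron is used as a generic stand-in for the RBFN-based ensembles, so the models and scores are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the ten landslide conditioning factors (slope, aspect, ...).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# A single neural network (benchmark) vs. a bagged ensemble of the same learner,
# loosely analogous to wrapping a base model with DAGGING/MULTIBOOST/ADABOOST.
single = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
ensemble = BaggingClassifier(
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
    n_estimators=10, random_state=0)

for name, model in [("single MLP", single), ("bagged MLP ensemble", ensemble)]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```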
Funding: Fundamental Research Funds for the Central Universities (M23JBZY00050) and the National Natural Science Foundation of China (22278032).
Abstract: The kinetic characteristics of plasma-assisted oxidative pyrolysis of ammonia are studied using a hybrid global/fluid model solution method. First, the stable products of plasma-assisted oxidative pyrolysis of ammonia are measured. The results show that the consumption of NH₃/O₂ and the production of N₂/H₂ change linearly with increasing voltage, which indicates the decoupling of non-equilibrium molecular excitation and oxidative pyrolysis of ammonia at low temperatures. Second, a detailed reaction kinetics mechanism for ammonia oxidative pyrolysis stimulated by a nanosecond pulse voltage at low pressure and room temperature is established, and a simplified mechanism is obtained from reaction path analysis. The simulation results of the detailed and simplified mechanisms are compared with experimental data to verify the accuracy of the simplified mechanism. Finally, based on the simplified mechanism, a fluid model of ammonia oxidative pyrolysis stimulated by the nanosecond pulse plasma is established to study the pre-sheath/sheath behavior and the resulting consumption and formation of key species. The results show that the generation, development, and propagation of the pre-sheath strongly influence the formation and consumption of species. The consumption of NH₃ by the cathode pre-sheath is greater than that by the anode pre-sheath, whereas the opposite holds for OH and O(¹S). Within the sheath, however, almost no reactions occur. Furthermore, by varying the parameters of the nanosecond pulse power supply, it is found that the electron number density, electron current density, and applied peak voltage are not the direct causes of the structural changes in the sheath and pre-sheath, and the discharge interval has little effect on the sheath structure and gas mixture breakdown. These results help not only in understanding the kinetic promotion of non-equilibrium excitation during oxidative pyrolysis but also in exploring the influence of transport and chemical reaction kinetics on the oxidative pyrolysis of ammonia.
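As a schematic illustration of how a zero-dimensional global kinetics model is integrated in practice, the sketch below solves a toy two-reaction lumped mechanism with SciPy; the reactions, rate constants, and initial densities are invented placeholders and do not correspond to the detailed or simplified mechanisms developed in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Purely illustrative lumped reactions for a 0-D global kinetics toy model:
#   R1: NH3 + O   -> NH2 + OH      (plasma-produced O attacks ammonia; OH not tracked)
#   R2: NH2 + NH2 -> N2 + 2 H2     (lumped recombination to stable products)
# k1, k2 are hypothetical rate constants, not values from the study.
k1, k2 = 2.0e-17, 1.0e-17          # cm^3 s^-1

def rhs(t, y):
    nh3, o, nh2, n2, h2 = y
    r1 = k1 * nh3 * o
    r2 = k2 * nh2 * nh2
    return [-r1,             # NH3
            -r1,             # O
            r1 - 2.0 * r2,   # NH2
            r2,              # N2
            2.0 * r2]        # H2

# Hypothetical initial number densities (cm^-3).
y0 = [1.0e16, 5.0e14, 0.0, 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 1.0e-3), y0, method="LSODA", rtol=1e-6, atol=1e6)
print("final NH3, N2, H2 densities:", sol.y[[0, 3, 4], -1])
```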
Funding: National Key R&D Program of China, Grant/Award Numbers: 2021YFC2502100, 2023YFC3603404 and 2019YFA0111900; the National Natural Science Foundation of China, Grant/Award Numbers: 82072506, 82272611 and 92268115; the Hunan Provincial Science Fund for Distinguished Young Scholars, Grant/Award Number: 2024JJ2089; the Hunan Young Talents of Science and Technology, Grant/Award Number: 2021RC3025; the Provincial Clinical Medical Technology Innovation Project of Hunan, Grant/Award Numbers: 2023SK2024 and 2020SK53709; the Provincial Natural Science Foundation of Hunan, Grant/Award Number: 2020JJ3060; the National Natural Science Foundation of Hunan Province, Grant/Award Number: 2023JJ30949; the National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Grant/Award Numbers: 2021KFJJ02 and 2021LNJJ05; the Hunan Provincial Innovation Foundation for Postgraduate, Grant/Award Numbers: CX20230308 and CX20230312; and the Independent Exploration and Innovation Project for Postgraduate Students of Central South University, Grant/Award Number: 2024ZZTS0163.
Abstract: Frozen shoulder (FS), also known as adhesive capsulitis, is a condition that causes contraction and stiffness of the shoulder joint capsule. The main symptoms are persistent shoulder pain and a limited range of motion in all directions. These symptoms and the poor prognosis affect people's physical health and quality of life. Currently, the specific mechanisms of FS remain unclear, and treatment methods and their efficacy vary. Additionally, the early symptoms of FS are difficult to distinguish from those of other shoulder diseases, complicating early diagnosis and treatment. Therefore, it is necessary to develop and utilize animal models to understand the pathogenesis of FS and to explore treatment strategies, providing insights into the prevention and treatment of human FS. This paper reviews the rat models available for FS research, including external immobilization models, surgical internal immobilization models, injection-based models, and endocrine-based models. It introduces the basic procedures for these models and compares and analyzes the advantages, disadvantages, and applicability of each modeling method. Finally, our paper summarizes the common methods for evaluating FS rat models.
Abstract: This paper presents a comparative study of ARIMA and Neural Network AutoRegressive (NNAR) models for time series forecasting. The study focuses on simulated data generated from an ARIMA(1, 1, 0) process and applies both models for training and forecasting. Model performance is evaluated using the mean squared error (MSE), Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC). The models are further applied to neonatal mortality data from Saudi Arabia to assess their predictive capabilities. The results indicate that the NNAR model outperforms ARIMA in both training and forecasting.
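A rough Python analogue of this comparison is sketched below; the NNAR model in the study comes from a dedicated time series framework, so a small multilayer perceptron fitted to lagged values is used here as a stand-in, and the simulated series, lag order, and network size are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)

# Simulate an ARIMA(1,1,0) series: the differenced series is AR(1).
n = 220
diff = np.zeros(n)
for t in range(1, n):
    diff[t] = 0.6 * diff[t - 1] + rng.normal(scale=1.0)
series = np.cumsum(diff)
train, test = series[:200], series[200:]

# ARIMA(1,1,0) fit and a 20-step forecast.
arima_fc = ARIMA(train, order=(1, 1, 0)).fit().forecast(steps=len(test))

# A simple neural autoregression on p lagged values (a rough NNAR analogue).
p = 5
X = np.array([train[i - p:i] for i in range(p, len(train))])
y = train[p:]
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

history, nn_fc = list(train[-p:]), []
for _ in range(len(test)):                     # iterated one-step-ahead forecasts
    pred = nn.predict(np.array(history[-p:]).reshape(1, -1))[0]
    nn_fc.append(pred)
    history.append(pred)

print("ARIMA forecast MSE:     ", mean_squared_error(test, arima_fc))
print("NNAR-style forecast MSE:", mean_squared_error(test, nn_fc))
```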
Abstract: Modeling HIV/AIDS progression is critical for understanding disease dynamics and improving patient care. This study compares the Exponential and Weibull survival models, focusing on their ability to capture state-specific failure rates in HIV/AIDS progression. While the Exponential model offers simplicity with a constant hazard rate, it often fails to accommodate the complexities of dynamic disease progression. In contrast, the Weibull model provides flexibility by allowing hazard rates to vary over time. Both models are evaluated within the frameworks of the Cox Proportional Hazards (Cox PH) and Accelerated Failure Time (AFT) models, incorporating critical covariates such as age, gender, CD4 count, and ART status. Statistical evaluation metrics, including the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), log-likelihood, and pseudo-R², were employed to assess model performance across diverse patient subgroups. Results indicate that the Weibull model consistently outperforms the Exponential model in dynamic scenarios, such as younger patients and those with co-infections, while maintaining robustness in stable contexts. This study highlights the trade-off between flexibility and simplicity in survival modeling, advocating for tailored model selection to balance interpretability and predictive accuracy. These findings provide valuable insights for optimizing HIV/AIDS management strategies and advancing survival analysis methodologies.
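The AIC-based comparison of the two parametric families can be sketched directly from their right-censored log-likelihoods; the synthetic survival times below are placeholders and the fit ignores covariates, so this illustrates only the model-selection step, not the study's Cox PH/AFT analyses.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Synthetic right-censored survival data (shape > 1, so the hazard rises with time).
n = 500
true_times = rng.weibull(1.5, size=n) * 10.0        # Weibull(shape=1.5, scale=10)
censor = rng.exponential(scale=15.0, size=n)
t = np.minimum(true_times, censor)                   # observed time
event = (true_times <= censor).astype(float)         # 1 = event, 0 = censored

def neg_loglik_exponential(params):
    lam = np.exp(params[0])                           # log-parameterised for positivity
    return -np.sum(event * np.log(lam) - lam * t)

def neg_loglik_weibull(params):
    k, lam = np.exp(params)                           # shape k, scale lam
    log_h = np.log(k) - np.log(lam) + (k - 1) * (np.log(t) - np.log(lam))
    cum_h = (t / lam) ** k
    return -np.sum(event * log_h - cum_h)             # events use the hazard, all use -H(t)

fit_exp = minimize(neg_loglik_exponential, x0=[0.0])
fit_wei = minimize(neg_loglik_weibull, x0=[0.0, 0.0])

aic_exp = 2 * 1 + 2 * fit_exp.fun                     # AIC = 2k - 2 log L
aic_wei = 2 * 2 + 2 * fit_wei.fun
print(f"Exponential AIC: {aic_exp:.1f}   Weibull AIC: {aic_wei:.1f}")
print("estimated Weibull shape:", np.exp(fit_wei.x[0]))
```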
Abstract: The UK's economic growth has been unstable in recent years. While some sectors recorded positive performances and others negative, these unstable economic performances led to a technical recession in the third and fourth quarters of 2023. This study assessed the efficacy of the Generalised Additive Model for Location, Scale and Shape (GAMLSS), a flexible distributional regression with smooth additive terms, in forecasting UK economic growth in-sample and out-of-sample, relative to the conventional Autoregressive Distributed Lag (ARDL) and Error Correction Model (ECM). The aim was to investigate the effectiveness and efficiency of GAMLSS models within a machine learning framework against conventional time series econometric models using a rolling window. This quantitative study adopts a dataset obtained from the Office for National Statistics, covering 105 monthly observations of major economic indicators in the UK from January 2015 to September 2023. It consists of eleven variables: economic growth (Econ), consumer price index (CPI), inflation (Infl), manufacturing (Manuf), electricity and gas (ElGas), construction (Const), industries (Ind), wholesale and retail (WRet), real estate (REst), education (Edu), and health (Health). All computations and graphics in this study were produced using R software version 4.4.1. The study revealed that GAMLSS models deliver superior forecast accuracy over the ARDL and ECM models. Unlike other models used in the literature, the GAMLSS models were able to forecast both future economic growth and the future distribution of that growth, thereby contributing to the empirical literature. The study identified manufacturing, electricity and gas, construction, industries, wholesale and retail, real estate, education, and health as key drivers of UK economic growth.
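The rolling-window out-of-sample evaluation underlying this comparison can be outlined generically as below; the study itself used R with GAMLSS, ARDL, and ECM models, whereas this Python sketch uses synthetic indicators and a plain linear regression purely to show the evaluation loop.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Synthetic monthly "growth" series driven by two synthetic indicators,
# stand-ins for sector indicators such as manufacturing or construction.
n = 105
X = rng.normal(size=(n, 2))
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)

def rolling_window_mse(model, X, y, window=60):
    """Re-fit the model on each trailing window and score the one-step-ahead forecast."""
    errors = []
    for t in range(window, len(y)):
        model.fit(X[t - window:t], y[t - window:t])
        pred = model.predict(X[t:t + 1])[0]
        errors.append((y[t] - pred) ** 2)
    return float(np.mean(errors))

print("rolling one-step-ahead MSE:", rolling_window_mse(LinearRegression(), X, y))
```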
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52308340); the Chongqing Talent Innovation and Entrepreneurship Demonstration Team Project (Grant No. cstc2024ycjh-bgzxm0012); and the Science and Technology Projects supported by China Coal Technology and Engineering Chongqing Design and Research Institute (Group) Co., Ltd. (Grant No. H20230317).
Abstract: Influenced by complex external factors, the displacement-time curve of reservoir landslides exhibits both short-term and long-term diversity and dynamic complexity. Existing methods, including regression models and neural network models, struggle to perform multi-characteristic coupled displacement prediction because they fail to consider landslide creep characteristics. This paper integrates the creep characteristics of landslides with non-linear intelligent algorithms and proposes a dynamic intelligent landslide displacement prediction method based on a combination of the Biological Growth model (BG), Convolutional Neural Network (CNN), and Long Short-Term Memory network (LSTM). This prediction approach improves three different biological growth models, thereby effectively extracting landslide creep characteristic parameters. It also integrates external factors (rainfall and reservoir water level) to construct a comprehensive internal and external dataset for data augmentation, which is input into the improved CNN-LSTM model. Harnessing the robust feature extraction capabilities and spatial translation invariance of the CNN, the model autonomously captures short-term local fluctuation characteristics of landslide displacement, and combines the LSTM's efficient handling of long-term non-linear temporal data to improve prediction performance. An evaluation on the Liangshuijing landslide in the Three Gorges Reservoir Area indicates that BG-CNN-LSTM achieves high prediction accuracy and excellent generalization across various types of landslides. The research provides an innovative approach to whole-process, real-time, high-precision displacement prediction for multi-characteristic coupled landslides.
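A minimal PyTorch sketch of the CNN-LSTM backbone is given below; it omits the biological-growth feature extraction and uses assumed layer sizes, sequence length, and input features (displacement plus rainfall and reservoir water level), so it illustrates the architecture pattern rather than the model trained in the study.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """1-D CNN for local (short-term) patterns followed by an LSTM for long-term dynamics."""
    def __init__(self, n_features=3, conv_channels=16, hidden=32):
        super().__init__()
        # n_features: e.g. displacement plus external factors (rainfall, reservoir level).
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)        # next-step displacement

    def forward(self, x):                       # x: (batch, seq_len, n_features)
        z = self.conv(x.transpose(1, 2))        # -> (batch, channels, seq_len)
        z, _ = self.lstm(z.transpose(1, 2))     # -> (batch, seq_len, hidden)
        return self.head(z[:, -1])              # predict from the last time step

model = CNNLSTM()
dummy = torch.randn(8, 24, 3)                   # 8 windows, 24 time steps, 3 features
print(model(dummy).shape)                       # torch.Size([8, 1])
```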