In computer vision and artificial intelligence, automatic facial expression-based emotion identification of humans has become a popular research and industry problem. Recent demonstrations and applications in several fields, including computer games, smart homes, expression analysis, gesture recognition, surveillance films, depression therapy, patient monitoring, anxiety, and others, have brought attention to its significant academic and commercial importance. This study emphasizes research that has employed only facial images for facial expression recognition (FER), because facial expressions are a basic way that people communicate meaning to each other. The immense success of deep learning has resulted in a growing use of its many architectures to enhance efficiency. This review covers the use of preprocessing, augmentation techniques, and feature extraction for temporal properties of successive frames of data in machine learning, deep learning, and hybrid methods. The following section gives a brief summary of publicly accessible assessment criteria and then compares them against benchmark results, the most trustworthy way to assess FER-related research topics statistically. The brief synopsis of the subject matter in this review may be beneficial for novices in the field of FER as well as seasoned scholars seeking fruitful avenues for further investigation. The information conveys fundamental knowledge and provides a comprehensive understanding of the most recent state-of-the-art research.
Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration is dependent on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multi-modal datasets, with six prior models that achieved good action classification performance, including the I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had a higher clinical value compared with the other approaches. Moreover, the multi-modal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
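The benefit of combining action video and speech audio that this abstract reports can be illustrated with a simple late-fusion scheme, where per-modality classifier probabilities are averaged. This is a generic sketch with made-up probabilities, not the paper's FAST-based network; all names and numbers are illustrative.

```python
import numpy as np

def late_fusion(modality_probs, weights=None):
    """Average class probabilities from several modality-specific
    classifiers (e.g. limb video, face video, speech audio)."""
    probs = np.stack(modality_probs)          # (n_modalities, n_classes)
    if weights is None:
        weights = np.ones(len(probs)) / len(probs)
    fused = np.average(probs, axis=0, weights=weights)
    return fused / fused.sum()                # renormalize

# Hypothetical stroke-vs-normal probabilities from three modalities
video = np.array([0.70, 0.30])
face  = np.array([0.55, 0.45])
audio = np.array([0.80, 0.20])
fused = late_fusion([video, face, audio])
```

A weighted variant (passing `weights`) would let a more reliable modality dominate the fused decision.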
Liver cancer remains a leading cause of mortality worldwide, and precise diagnostic tools are essential for effective treatment planning. Liver Tumors (LTs) vary significantly in size, shape, and location, and can present with tissues of similar intensities, making automatically segmenting and classifying LTs from abdominal tomography images crucial and challenging. This review examines recent advancements in Liver Segmentation (LS) and Tumor Segmentation (TS) algorithms, highlighting their strengths and limitations regarding precision, automation, and resilience. Performance metrics are utilized to assess key detection algorithms and analytical methods, emphasizing their effectiveness and relevance in clinical contexts. The review also addresses ongoing challenges in liver tumor segmentation and identification, such as managing high variability in patient data and ensuring robustness across different imaging conditions. It suggests directions for future research, with insights into technological advancements that can enhance surgical planning and diagnostic accuracy by comparing popular methods. This paper contributes to a comprehensive understanding of current liver tumor detection techniques, provides a roadmap for future innovations, and improves diagnostic and therapeutic outcomes for liver cancer by integrating recent progress with remaining challenges.
App reviews are crucial in influencing user decisions and providing essential feedback for developers to improve their products. Automating the analysis of these reviews is vital for efficient review management. While traditional machine learning (ML) models rely on basic word-based feature extraction, deep learning (DL) methods, enhanced with advanced word embeddings, have shown superior performance. This research introduces a novel aspect-based sentiment analysis (ABSA) framework to classify app reviews based on key non-functional requirements, focusing on usability factors: effectiveness, efficiency, and satisfaction. We propose a hybrid DL model, combining BERT (Bidirectional Encoder Representations from Transformers) with BiLSTM (Bidirectional Long Short-Term Memory) and CNN (Convolutional Neural Network) layers, to enhance classification accuracy. Comparative analysis against state-of-the-art models demonstrates that our BERT-BiLSTM-CNN model achieves exceptional performance, with precision, recall, F1-score, and accuracy of 96%, 87%, 91%, and 94%, respectively. The significant contributions of this work include a refined ABSA-based relabeling framework, the development of a high-performance classifier, and the comprehensive relabeling of the Instagram App Reviews dataset. These advancements provide valuable insights for software developers to enhance usability and drive user-centric application development.
As more and more devices in Cyber-Physical Systems (CPS) are connected to the Internet, physical components such as programmable logic controllers (PLCs), sensors, and actuators are facing greater risks of network attacks, and fast, accurate attack detection techniques are crucial. The key problem in distinguishing between normal and abnormal sequences is to model sequential changes in a large and diverse field of time series. To address this issue, we propose an anomaly detection method based on distributed deep learning. Our method uses a bilateral filtering algorithm to remove noise from sequential time series while maintaining the edges of discrete features. We use a distributed linear deep learning model to establish a sequential prediction model and adjust the threshold for anomaly detection based on the prediction error of the validation set. Our method can not only detect abnormal attacks but also locate the sensors that cause anomalies. We conducted experiments on the Secure Water Treatment (SWAT) and Water Distribution (WADI) public datasets. The experimental results show that our method is superior to the baseline methods in identifying attack types and in detection efficiency.
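The preprocessing idea described here, smoothing noise while preserving the edges of discrete features, can be sketched with a one-dimensional bilateral filter plus the error-threshold detection step. The kernel widths, the example signal, and the threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def bilateral_filter_1d(x, radius=2, sigma_s=1.0, sigma_r=0.5):
    """Edge-preserving smoothing for a 1D time series: each sample is
    replaced by a weighted mean of its neighbours, with weights that
    decay with both temporal distance and value difference."""
    out = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        window = x[lo:hi]
        d = np.arange(lo, hi) - i
        w = np.exp(-d**2 / (2 * sigma_s**2)) \
          * np.exp(-(window - x[i])**2 / (2 * sigma_r**2))
        out[i] = np.sum(w * window) / np.sum(w)
    return out

def detect_anomalies(actual, predicted, threshold):
    """Flag time steps whose absolute prediction error exceeds a
    threshold chosen from the validation-set error distribution."""
    return np.abs(actual - predicted) > threshold

# A noisy step signal: the filter smooths the noise but keeps the step.
x = np.array([0.0, 0.1, -0.1, 0.05, 5.0, 5.1, 4.9, 5.05])
smoothed = bilateral_filter_1d(x)
```

Because the range kernel down-weights neighbours with very different values, the jump between samples 3 and 4 survives filtering, which is exactly the edge-preservation property the method relies on.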
Lung cancer continues to be a leading cause of cancer-related deaths worldwide, emphasizing the critical need for improved diagnostic techniques. Early detection of lung tumors significantly increases the chances of successful treatment and survival. However, current diagnostic methods often fail to detect tumors at an early stage or to accurately pinpoint their location within the lung tissue. Single-model deep learning technologies for lung cancer detection, while beneficial, cannot capture the full range of features present in medical imaging data, leading to incomplete or inaccurate detection. Furthermore, they may not be robust enough to handle the wide variability in medical images due to different imaging conditions, patient anatomy, and tumor characteristics. To overcome these disadvantages, dual-model or multi-model approaches can be employed. This research focuses on enhancing the detection of lung cancer by utilizing a combination of two learning models: a Convolutional Neural Network (CNN) for categorization and the You Only Look Once (YOLOv8) architecture for real-time identification and pinpointing of tumors. CNNs automatically learn to extract hierarchical features from raw image data, capturing patterns such as edges, textures, and complex structures that are crucial for identifying lung cancer. YOLOv8 incorporates multiscale feature extraction, enabling the detection of tumors of varying sizes and scales within a single image. This is particularly beneficial for identifying small or irregularly shaped tumors that may be challenging to detect. Furthermore, through the utilization of cutting-edge data augmentation methods, such as Deep Convolutional Generative Adversarial Networks (DCGAN), the suggested approach can handle the issue of limited data and boost the
models’ ability to learn from diverse and comprehensive datasets. The combined method not only improved accuracy and localization but also ensured efficient real-time processing, which is crucial for practical clinical applications. The CNN achieved an accuracy of 97.67% in classifying lung tissues into healthy and cancerous categories. The YOLOv8 model achieved an Intersection over Union (IoU) score of 0.85 for tumor localization, reflecting high precision in detecting and marking tumor boundaries within the images. Finally, the incorporation of synthetic images generated by DCGAN led to a 10% improvement in both the CNN classification accuracy and YOLOv8 detection performance.
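The IoU score used above to assess tumor localization is the standard ratio of intersection area to union area between predicted and ground-truth boxes. A minimal sketch with hypothetical box coordinates given as (x1, y1, x2, y2):

```python
def box_iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # overlap area
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes sharing half their width: IoU = 50 / 150 = 1/3
iou = box_iou((0, 0, 10, 10), (5, 0, 15, 10))
```

Detection benchmarks typically count a prediction as correct when its IoU with the ground truth exceeds a fixed threshold such as 0.5.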
Sporadic E (Es) layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km. Because they can significantly influence radio communications and navigation systems, accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems. In this study, we present Es predictions made by an empirical model and by a deep learning model, and analyze their differences comprehensively by comparing the model predictions to satellite radio occultation (RO) measurements and ground-based ionosonde observations. The deep learning model exhibited significantly better performance, as indicated by the high correlation coefficient (r = 0.87) between its predictions and RO observations, than did the empirical model (r = 0.53). This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling generally, and into predicting Es layer occurrences and characteristics in particular.
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, this method is implemented to solve the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
In the article “Deep Learning-Enhanced Brain Tumor Prediction via Entropy-Coded BPSO in CIELAB Color Space” by Mudassir Khalil, Muhammad Imran Sharif, Ahmed Naeem, Muhammad Umar Chaudhry, Hafiz Tayyab Rauf, and Adham E. Ragab, Computers, Materials & Continua, 2023, Vol. 77, No. 2, pp. 2031–2047, DOI: 10.32604/cmc.2023.043687, URL: https://www.techscience.com/cmc/v77n2/54831, there was an error regarding the affiliation of the author Hafiz Tayyab Rauf. Instead of “Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent, ST4 2DE, UK”, the affiliation should be “Independent Researcher, Bradford, BD8 0HS, UK”.
The fast growth of online communities has brought about an increase in cyber threats, including cyberbullying, hate speech, misinformation, and online harassment, making content moderation a pressing necessity. Traditional single-modal AI-based detection systems, which analyze text, images, or videos in isolation, have proven ineffective at capturing multi-modal threats, in which malicious actors spread harmful content across multiple formats. To cope with these challenges, we propose a multi-modal deep learning framework that integrates Natural Language Processing (NLP), Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM) networks to identify and mitigate online threats effectively. Our proposed model combines BERT for text classification, ResNet50 for image processing, and a hybrid LSTM-3D CNN network for video content analysis. We constructed a large-scale dataset comprising 500,000 textual posts, 200,000 offensive images, and 50,000 annotated videos from multiple platforms, including Twitter, Reddit, YouTube, and online gaming forums. The system was carefully evaluated using standard machine learning metrics, including accuracy, precision, recall, F1-score, and ROC-AUC curves. Experimental results demonstrate that our multi-modal approach significantly outperforms single-modal AI classifiers, achieving an accuracy of 92.3%, precision of 91.2%, recall of 90.1%, and an AUC score of 0.95. The findings validate the necessity of integrating multi-modal AI for real-time, high-accuracy online threat detection and moderation. Future work will focus on improving adversarial robustness, enhancing scalability for real-world deployment, and addressing ethical concerns associated with AI-driven content moderation.
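The ROC-AUC metric used in this evaluation equals the probability that a randomly chosen positive sample is scored above a randomly chosen negative one, which yields a simple rank-based computation. The scores and labels below are made up for illustration, not taken from the paper:

```python
def roc_auc(scores, labels):
    """AUC as the normalized count of (positive, negative) score pairs
    ranked correctly, with ties counting half: the Mann-Whitney U statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores; one positive (0.4) is ranked below a negative (0.5)
auc = roc_auc([0.9, 0.4, 0.5, 0.6, 0.2], [1, 1, 0, 1, 0])
```

Unlike accuracy, this quantity is independent of any single decision threshold, which is why it complements the precision/recall figures reported above.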
Deep learning algorithms have been rapidly incorporated into many different applications due to the increase in computational power and the availability of massive amounts of data. Recently, both deep learning and ensemble learning have been used to recognize underlying structures and patterns from high-level features to make predictions/decisions. With the growth in popularity of deep learning and ensemble learning algorithms, they have received significant attention from both scientists and the industrial community due to their superior ability to learn features from big data. Ensemble deep learning has exhibited significant performance in enhancing learning generalization through the use of multiple deep learning algorithms. Although ensemble deep learning has large quantities of training parameters, which results in time and space overheads, it performs much better than traditional ensemble learning. Ensemble deep learning has been successfully used in several areas, such as bioinformatics, finance, and health care. In this paper, we review and investigate recent ensemble deep learning algorithms and techniques in health care domains: medical imaging, health care data analytics, genomics, diagnosis, disease prevention, and drug discovery. We cover several widely used deep learning algorithms along with their architectures, including deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs). Common healthcare tasks, such as medical imaging, electronic health records, and genomics, are also demonstrated. Furthermore, in this review, the challenges inherent in reducing the burden on the healthcare system are discussed and explored. Finally, future directions and opportunities for enhancing healthcare
model performance are discussed.
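The core mechanism this review surveys, aggregating several base learners' predictions to improve generalization, can be illustrated with plain majority voting over hypothetical model outputs. This is a minimal sketch, not any specific method from the review:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine class predictions from several models (one list per model);
    the ensemble label for each sample is the most common vote."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*predictions)]

# Three hypothetical base models; they disagree on samples 2 and 3
model_a = ["healthy", "disease", "disease"]
model_b = ["healthy", "disease", "healthy"]
model_c = ["healthy", "healthy", "disease"]
ensemble = majority_vote([model_a, model_b, model_c])
```

Voting helps when the base models make partly independent errors; ensemble deep learning replaces these toy classifiers with full DNNs, CNNs, or RNNs and often weights their votes.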
This article is devoted to developing a deep learning method for the numerical solution of partial differential equations (PDEs). A graph kernel neural network (GKNN) approach to embedding graphs into a computationally tractable numerical format has been used. In particular, mathematical models of the dynamical system of cancer cell invasion in inhomogeneous areas of human tissues have been considered for investigation. Neural operators were initially proposed to model the differential operator of PDEs. The GKNN mapping features between input data to the PDEs and their solutions have been constructed. The boundary integral method, in combination with Green's functions for a large number of boundary conditions, is used. The tools applied in this development are based on Fourier neural operators (FNOs), graph theory, elasticity theory, and singular integral equations.
Deep learning has significantly transformed personalized education by enabling intelligent adaptation to individual learning needs. This study explores deep learning-based modeling methods that enhance personalized learning experiences, optimize instructional content, and predict student progress. We examine key techniques, including recurrent neural networks (RNNs), transformers, reinforcement learning, and multimodal learning analytics, to demonstrate their roles in personalized learning path recommendations and adaptive content generation. Case studies of AI-driven tutoring systems and learning management platforms illustrate real-world applications. Additionally, we address challenges related to data privacy, algorithmic bias, and model interpretability. The paper concludes with future directions for deep learning in education, emphasizing its potential for enhancing immersive and intelligent learning environments.
The precise identification of quartz minerals is crucial in mineralogy and geology due to their widespread occurrence and industrial significance. Traditional methods of quartz identification in thin sections are labor-intensive and require significant expertise, often complicated by the coexistence of other minerals. This study presents a novel approach leveraging deep learning techniques combined with hyperspectral imaging to automate the identification process of quartz minerals. The four advanced deep learning models utilized, PSPNet, U-Net, FPN, and LinkNet, offer significant advancements in efficiency and accuracy. Among these models, PSPNet exhibited superior performance, achieving the highest intersection over union (IoU) scores and demonstrating exceptional reliability in segmenting quartz minerals, even in complex scenarios. The study involved a comprehensive dataset of 120 thin sections, encompassing 2470 hyperspectral images prepared from 20 rock samples. Expert-reviewed masks were used for model training, ensuring robust segmentation results. This automated approach not only expedites the recognition process but also enhances reliability, providing a valuable tool for geologists and advancing the field of mineralogical analysis.
The field of biometric identification has seen significant advancements over the years, with research focusing on enhancing the accuracy and security of these systems. One of the key developments is the integration of deep learning techniques in biometric systems. However, despite these advancements, certain challenges persist. One of the most significant challenges is scalability over growing complexity. Traditional methods either require maintaining and securing a growing database, introducing serious security challenges, or rely on retraining the entire model when new data is introduced, a process that can be computationally expensive and complex. This challenge underscores the need for more efficient methods to scale securely. To this end, we introduce a novel approach that addresses these challenges by integrating multimodal biometrics, cancelable biometrics, and incremental learning techniques. This work is among the first attempts to seamlessly incorporate deep cancelable biometrics with dynamic architectural updates, applied incrementally to the deep learning model as new users are enrolled, achieving high performance with minimal catastrophic forgetting. By leveraging a One-Dimensional Convolutional Neural Network (1D-CNN) architecture combined with a hybrid incremental learning approach, our system achieves high recognition accuracy, averaging 98.98% over incrementing datasets, while ensuring user privacy through cancelable templates generated via a pre-trained CNN model and random projection. The approach demonstrates remarkable adaptability, utilizing the least intrusive biometric traits, like facial features and fingerprints, ensuring not only robust performance but also long-term serviceability.
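The cancelable-template idea mentioned here, projecting a biometric feature vector through a user-specific random matrix so a compromised template can be revoked and reissued, can be sketched as follows. Dimensions, seeds, and the stand-in feature vector are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def cancelable_template(features, user_seed, out_dim=32):
    """Project a biometric feature vector with a seeded random matrix.
    Revoking a template just means issuing a new seed; the raw
    biometric features themselves are never stored."""
    rng = np.random.default_rng(user_seed)
    proj = rng.standard_normal((out_dim, features.shape[0]))
    proj /= np.sqrt(out_dim)   # keep pairwise distances roughly stable
    return proj @ features

rng = np.random.default_rng(0)
face_features = rng.standard_normal(128)   # stand-in for CNN embeddings
t1 = cancelable_template(face_features, user_seed=42)
t2 = cancelable_template(face_features, user_seed=43)  # reissued template
```

Matching is then performed in the projected space; because the projection approximately preserves distances, genuine and impostor comparisons remain separable while the stored template reveals little about the original features.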
Automatic detection of leukemia or blood cancer is one of the most challenging tasks that need to be addressed in the healthcare system. Analysis of white blood cells (WBCs) in blood or bone marrow microscopic slide images plays a crucial part in early identification to facilitate medical experts. For Acute Lymphocytic Leukemia (ALL), the most preferred part of the blood or marrow is to be analyzed by the experts before the disease spreads in the whole body and the condition becomes worse. Researchers have done a lot of work in this field, and to demonstrate a comprehensive analysis, a few literature reviews have been published focusing on various artificial intelligence-based techniques, such as machine and deep learning, for the detection of ALL. The systematic review in this article has been conducted under the PRISMA guidelines and presents the most recent advancements in the field. Different image segmentation techniques were broadly studied from various online databases, like Google Scholar, Science Direct, and PubMed, and categorized as image processing-based, traditional machine and deep learning-based, and advanced deep learning-based models. Convolutional Neural Networks (CNNs) based on traditional models, and then the recent advancements in CNNs used for the classification of ALL into its subtypes, are presented. A critical analysis of the existing methods is provided to offer clarity on the current state of the field. Finally, the paper concludes with insights and suggestions for future research, aiming to guide new researchers in the development of advanced automated systems for detecting life-threatening diseases.
The current deep learning models for braced excavation cannot predict deformation from the beginning of excavation due to the need for a substantial corpus of sufficient historical data for training purposes. To address this issue, this study proposes a transfer learning model based on a sequence-to-sequence two-dimensional (2D) convolutional long short-term memory neural network (S2SCL2D). The model can use existing data from other adjacent similar excavations to achieve wall deflection prediction once a limited amount of monitoring data from the target excavation has been recorded. In the absence of adjacent excavation data, numerical simulation data from the target project can be employed instead. A weight update strategy is proposed to improve the prediction accuracy by integrating stochastic gradient masking with an early stopping mechanism. To illustrate the proposed methodology, an excavation project in Hangzhou, China is adopted. The proposed deep transfer learning model, which uses either adjacent excavation data or numerical simulation data as the source domain, shows a significant improvement in performance when compared to the non-transfer learning model. Using the simulation data from the target project even leads to better prediction performance than using the actual monitoring data from other adjacent excavations. The results demonstrate that the proposed model can reasonably predict the deformation with limited data from the target project.
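The transfer-learning recipe described here (fit a model on plentiful source-domain data, then adapt it to scarce target monitoring data) can be shown with a deliberately tiny linear stand-in. The actual model is an S2SCL2D network with gradient masking; everything below, including the data and the fine-tuning loop, is an illustrative simplification.

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares fit standing in for 'training on the source domain'."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def fine_tune(w, X, y, lr=0.01, steps=200):
    """Adapt source-domain weights to limited target data by gradient
    descent on the mean squared error."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)

# Abundant source-domain data (e.g. an adjacent excavation)
X_src = rng.standard_normal((200, 3))
w_true_src = np.array([1.0, -2.0, 0.5])
y_src = X_src @ w_true_src

# Target domain: a similar but shifted relationship, very few samples
X_tgt = rng.standard_normal((10, 3))
y_tgt = X_tgt @ np.array([1.2, -1.8, 0.4])

w_src = fit_linear(X_src, y_src)            # pretrain on source
w_ft = fine_tune(w_src, X_tgt, y_tgt)       # adapt to target
err_before = np.mean((X_tgt @ w_src - y_tgt) ** 2)
err_after = np.mean((X_tgt @ w_ft - y_tgt) ** 2)
```

The source-initialized model starts close to the target relationship, so a few gradient steps on ten samples reduce the target error, which is the essence of transferring from adjacent excavations or simulations.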
Segmenting a breast ultrasound image is still challenging due to the presence of speckle noise, dependency on the operator, and variation in image quality. This paper presents the UltraSegNet architecture, which addresses these challenges through three key technical innovations: (1) a modified ResNet-50 backbone with sequential 3×3 convolutions to preserve the fine anatomical details needed for finding lesion boundaries; (2) a computationally efficient regional attention mechanism that operates on high-resolution features without a transformer's extra memory; and (3) an adaptive feature fusion strategy that adjusts local and global features based on how the image is being used. Extensive evaluation on two distinct datasets demonstrates UltraSegNet's superior performance: on the BUSI dataset, it obtains a precision of 0.915, a recall of 0.908, and an F1 score of 0.911. On the UDAIT dataset, it achieves robust performance across the board, with a precision of 0.901 and a recall of 0.894. Importantly, these improvements are achieved at clinically feasible computation times, taking 235 ms per image on standard GPU hardware. Notably, UltraSegNet performs remarkably well on difficult small lesions (less than 10 mm), achieving a detection accuracy of 0.891. This is a substantial improvement over traditional methods, which struggle with small-scale features; standard models achieve only 0.63-0.71 accuracy. This improvement in small lesion detection is particularly crucial for early-stage breast cancer identification. Results from this work demonstrate that UltraSegNet can be practically deployed in clinical workflows to improve breast cancer screening accuracy.
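For reference, the precision, recall, and F1 figures quoted above follow from true-positive, false-positive, and false-negative counts in the standard way. The counts here are hypothetical, not UltraSegNet's confusion matrix:

```python
def prf1(tp, fp, fn):
    """Precision, recall, and their harmonic mean (F1) from counts of
    true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical lesion-detection counts
precision, recall, f1 = prf1(tp=90, fp=10, fn=12)
```

Because F1 is a harmonic mean, it sits closer to the lower of precision and recall, penalizing models that trade one heavily for the other.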
As the trend to use the latest machine learning models to automate requirements engineering processes continues, security requirements classification is turning into the most researched field in the software engineering community. Previous literature studies have proposed numerous models for the classification of security requirements. However, adopting those models is constrained due to the lack of essential datasets permitting the repetition and generalization of studies employing more advanced machine learning algorithms. Moreover, most researchers focus only on the classification of requirements with security keywords. They did not consider other non-functional requirements (NFRs) directly or indirectly related to security. This has been identified as a significant research gap in security requirements engineering. The major objective of this study is to propose a security requirements classification model that categorizes security and other relevant security requirements. We use PROMISE_exp and DOSSPRE, the two most commonly used datasets in the software engineering community. The proposed methodology consists of two steps. In the first step, we analyze all the non-functional requirements and their relation with security requirements. We found 10 NFRs that have a strong relationship with security requirements. In the second step, we categorize those NFRs in the security requirements category. Our proposed methodology is a hybrid model based on the Convolutional Neural Network (CNN) and Extreme Gradient Boosting (XGBoost) models. Moreover, we evaluate the model by updating the requirement type column with a binary classification column in the dataset to classify the requirements into security and non-security categories. The performance is evaluated using four
metrics: recall, precision, accuracy, and F1 score, with 20 and 28 epochs and a batch size of 32 for the PROMISE_exp and DOSSPRE datasets, achieving 87.3% and 85.3% accuracy, respectively. The proposed study shows an enhancement in metric values compared to previous literature studies. This is a proof of concept for systematizing the evaluation of security recognition in software systems during the early phases of software development.
Heart disease prediction is a critical issue in healthcare, where accurate early diagnosis can save lives and reduce healthcare costs. The problem is inherently complex due to the high dimensionality of medical data, irrelevant or redundant features, and the variability in risk factors such as age, lifestyle, and medical history. These challenges often lead to inefficient and less accurate models. Traditional prediction methodologies face limitations in effectively handling large feature sets and optimizing classification performance, which can result in overfitting, poor generalization, and high computational cost. This work proposes a novel classification model for heart disease prediction that addresses these challenges by integrating feature selection through a Genetic Algorithm (GA) with an ensemble deep learning approach optimized using the Tunicate Swarm Algorithm (TSA). The GA selects the most relevant features, reducing dimensionality and improving model efficiency. The selected features are then used to train an ensemble of deep learning models, where the TSA optimizes the weight of each model in the ensemble to enhance prediction accuracy. This hybrid approach addresses key challenges in the field, such as high dimensionality, redundant features, and classification performance, by introducing an efficient feature-selection mechanism and optimizing the weighting of the deep learning models in the ensemble. These enhancements result in a model that achieves superior accuracy, generalization, and efficiency compared with traditional methods. The proposed model demonstrated notable advancements in both prediction accuracy and computational efficiency over traditional models; specifically, it achieved an accuracy of 97.5%, a sensitivity of 97.2%, and a specificity of 97.8%. Additionally, with a 60-40 data split and 5-fold cross-validation, the model showed a significant reduction in training time (90 s), memory consumption (950 MB), and CPU usage (80%), highlighting its effectiveness in processing large, complex medical datasets for heart disease prediction.
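The GA feature-selection step described above can be sketched with a toy, pure-Python genetic algorithm. The fitness function, feature weights, and GA hyperparameters here are illustrative assumptions, not the paper's actual objective (which scores features by downstream ensemble accuracy):

```python
import random

random.seed(0)

# Toy fitness: features 0-2 are "informative", the rest add little value;
# each selected feature also incurs a small dimensionality cost, so the GA
# is pushed toward a small, relevant subset (the paper's stated goal).
WEIGHTS = [1.0, 1.0, 1.0] + [0.1] * 7
COST = 0.5

def fitness(mask):
    return sum(w for w, m in zip(WEIGHTS, mask) if m) - COST * sum(mask)

def ga_select(n_feat=10, pop_size=20, generations=30, p_mut=0.1):
    # Each individual is a 0/1 mask over the feature set.
    pop = [[random.randint(0, 1) for _ in range(n_feat)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # Tournament selection of parents.
        parents = [max(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        children = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            cut = random.randrange(1, n_feat)            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g
                     for g in child]                     # bit-flip mutation
            children.append(child)
        pop = children + parents[:pop_size - len(children)]
        best = max(pop + [best], key=fitness)            # elitism
    return best

best_mask = ga_select()
print(best_mask, fitness(best_mask))
```

In the paper's setting, `fitness` would instead train the TSA-weighted ensemble on the masked feature set and return its validation accuracy.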
Abstract: In computer vision and artificial intelligence, automatic facial-expression-based emotion identification of humans has become a popular research and industry problem. Recent demonstrations and applications in several fields, including computer games, smart homes, expression analysis, gesture recognition, surveillance video, depression therapy, patient monitoring, anxiety, and others, have brought attention to its significant academic and commercial importance. This study emphasizes research that has employed only facial images for facial expression recognition (FER), because facial expressions are a basic way that people communicate meaning to each other. The immense achievement of deep learning has resulted in a growing use of its many architectures to enhance efficiency. This review covers machine learning, deep learning, and hybrid methods' use of preprocessing, augmentation techniques, and feature extraction for the temporal properties of successive frames of data. The following section gives a brief summary of publicly accessible assessment criteria and then compares them with benchmark results, the most trustworthy way to statistically assess FER-related research topics. The brief synopsis of the subject matter in this review may be beneficial both for novices in the field of FER and for seasoned scholars seeking fruitful avenues for further investigation. The information conveys fundamental knowledge and provides a comprehensive understanding of the most recent state-of-the-art research.
Funding: supported by the Ministry of Science and Technology of China, No. 2020AAA0109605 (to XL), and the Meizhou Major Scientific and Technological Innovation Platforms Projects of the Guangdong Provincial Science & Technology Plan Projects, No. 2019A0102005 (to HW).
Abstract: Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration depends on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multi-modal datasets, with six prior models that achieved good action-classification performance: I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had higher clinical value compared with the other approaches. Moreover, the multi-modal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
Funding: supported by the "Intelligent Recognition Industry Service Center" as part of the Featured Areas Research Center Program under the Higher Education Sprout Project of the Ministry of Education (MOE) in Taiwan, and by the National Science and Technology Council, Taiwan, under grants 113-2221-E-224-041 and 113-2622-E-224-002. Additionally, partial support was provided by Isuzu Optics Corporation.
Abstract: Liver cancer remains a leading cause of mortality worldwide, and precise diagnostic tools are essential for effective treatment planning. Liver Tumors (LTs) vary significantly in size, shape, and location, and can present with tissues of similar intensities, making automatically segmenting and classifying LTs from abdominal tomography images crucial and challenging. This review examines recent advancements in Liver Segmentation (LS) and Tumor Segmentation (TS) algorithms, highlighting their strengths and limitations regarding precision, automation, and resilience. Performance metrics are utilized to assess key detection algorithms and analytical methods, emphasizing their effectiveness and relevance in clinical contexts. The review also addresses ongoing challenges in liver tumor segmentation and identification, such as managing high variability in patient data and ensuring robustness across different imaging conditions. It suggests directions for future research, with insights into technological advancements that can enhance surgical planning and diagnostic accuracy by comparing popular methods. This paper contributes to a comprehensive understanding of current liver tumor detection techniques, provides a roadmap for future innovations, and improves diagnostic and therapeutic outcomes for liver cancer by integrating recent progress with remaining challenges.
Funding: supported by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under grant no. GPIP: 13-612-2024.
Abstract: App reviews are crucial in influencing user decisions and providing essential feedback for developers to improve their products. Automating the analysis of these reviews is vital for efficient review management. While traditional machine learning (ML) models rely on basic word-based feature extraction, deep learning (DL) methods, enhanced with advanced word embeddings, have shown superior performance. This research introduces a novel aspect-based sentiment analysis (ABSA) framework to classify app reviews based on key non-functional requirements, focusing on usability factors: effectiveness, efficiency, and satisfaction. We propose a hybrid DL model, combining BERT (Bidirectional Encoder Representations from Transformers) with BiLSTM (Bidirectional Long Short-Term Memory) and CNN (Convolutional Neural Network) layers, to enhance classification accuracy. Comparative analysis against state-of-the-art models demonstrates that our BERT-BiLSTM-CNN model achieves exceptional performance, with precision, recall, F1-score, and accuracy of 96%, 87%, 91%, and 94%, respectively. The significant contributions of this work include a refined ABSA-based relabeling framework, the development of a high-performance classifier, and the comprehensive relabeling of the Instagram App Reviews dataset. These advancements provide valuable insights for software developers to enhance usability and drive user-centric application development.
Funding: supported in part by the Guangxi Science and Technology Major Program under grant AA22068067, the Guangxi Natural Science Foundation under grants 2023GXNSFAA026236 and 2024GXNSFDA010064, and the National Natural Science Foundation of China under project 62172119.
Abstract: As more and more devices in Cyber-Physical Systems (CPS) are connected to the Internet, physical components such as programmable logic controllers (PLCs), sensors, and actuators are facing greater risks of network attack, and fast and accurate attack-detection techniques are crucial. The key problem in distinguishing between normal and abnormal sequences is to model sequential changes in a large and diverse field of time series. To address this issue, we propose an anomaly detection method based on distributed deep learning. Our method uses a bilateral filtering algorithm on sequential data to remove noise in the time series while maintaining the edges of discrete features. We use a distributed linear deep learning model to establish a sequential prediction model and adjust the threshold for anomaly detection based on the prediction error on the validation set. Our method can not only detect abnormal attacks but also locate the sensors that cause the anomalies. We conducted experiments on the Secure Water Treatment (SWaT) and Water Distribution (WADI) public datasets. The experimental results show that our method is superior to the baseline method in identifying the types of attacks and in detection efficiency.
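The two-stage recipe described above (edge-preserving smoothing, then flagging time steps whose prediction error exceeds a threshold) can be sketched as follows. The 1-D bilateral filter and the flat "predictor" are simplified stand-ins for the paper's distributed model, and the parameter values are illustrative:

```python
import math

def bilateral_1d(x, radius=2, sigma_s=1.0, sigma_r=0.5):
    """Edge-preserving smoothing of a 1-D sequence: each neighbour is
    weighted by both temporal distance and value difference, so sharp
    level shifts (discrete-feature edges) survive while small noise
    is averaged out."""
    out = []
    for i in range(len(x)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(x), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((x[i] - x[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * x[j]
            den += w
        out.append(num / den)
    return out

def detect_anomalies(actual, predicted, threshold):
    """Flag time steps whose prediction error exceeds a threshold
    (in the paper, the threshold is tuned on the validation set)."""
    return [i for i, (a, p) in enumerate(zip(actual, predicted))
            if abs(a - p) > threshold]

signal = [0.0, 0.1, -0.1, 0.05, 5.0, 0.0, -0.05, 0.1]
smooth = bilateral_1d(signal)
# Pretend the sequential predictor expected a flat signal near zero.
flags = detect_anomalies(signal, [0.0] * len(signal), threshold=1.0)
print(flags)
```

Note how the outlier at index 4 is preserved by the filter (its neighbours differ too much in value to pull it down) and is then the only flagged step.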
Abstract: Lung cancer continues to be a leading cause of cancer-related deaths worldwide, emphasizing the critical need for improved diagnostic techniques. Early detection of lung tumors significantly increases the chances of successful treatment and survival. However, current diagnostic methods often fail to detect tumors at an early stage or to accurately pinpoint their location within the lung tissue. Single-model deep learning technologies for lung cancer detection, while beneficial, cannot capture the full range of features present in medical imaging data, leading to incomplete or inaccurate detection. Furthermore, they may not be robust enough to handle the wide variability in medical images due to different imaging conditions, patient anatomy, and tumor characteristics. To overcome these disadvantages, dual-model or multi-model approaches can be employed. This research focuses on enhancing the detection of lung cancer by utilizing a combination of two learning models: a Convolutional Neural Network (CNN) for categorization and the You Only Look Once (YOLOv8) architecture for real-time identification and localization of tumors. CNNs automatically learn to extract hierarchical features from raw image data, capturing patterns such as edges, textures, and complex structures that are crucial for identifying lung cancer. YOLOv8 incorporates multiscale feature extraction, enabling the detection of tumors of varying sizes and scales within a single image. This is particularly beneficial for identifying small or irregularly shaped tumors that may be challenging to detect. Furthermore, through the utilization of cutting-edge data augmentation methods, such as Deep Convolutional Generative Adversarial Networks (DCGANs), the suggested approach can handle the issue of limited data and boost the models' ability to learn from diverse and comprehensive datasets. The combined method not only improved accuracy and localization but also ensured efficient real-time processing, which is crucial for practical clinical applications. The CNN achieved an accuracy of 97.67% in classifying lung tissues into healthy and cancerous categories. The YOLOv8 model achieved an Intersection over Union (IoU) score of 0.85 for tumor localization, reflecting high precision in detecting and marking tumor boundaries within the images. Finally, the incorporation of synthetic images generated by the DCGAN led to a 10% improvement in both CNN classification accuracy and YOLOv8 detection performance.
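The Intersection over Union score used above to grade tumor localization has a simple definition for axis-aligned boxes; a minimal illustration (the coordinates are made up, not from the paper's data):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as
    (x1, y1, x2, y2): area of overlap divided by area of union.
    An IoU of 1.0 means a perfect match; 0.0 means no overlap."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Hypothetical predicted tumor box vs. ground-truth annotation (pixels).
print(iou((10, 10, 50, 50), (20, 20, 60, 60)))
```

A mean IoU of 0.85, as reported for YOLOv8 above, thus indicates that predicted boxes overlap ground truth far more than they miss it.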
Funding: supported by the Project of Stable Support for Youth Team in Basic Research Field, CAS (grant No. YSBR-018); the National Natural Science Foundation of China (grant Nos. 42188101, 42130204); the B-type Strategic Priority Program of CAS (grant No. XDB41000000); the National Natural Science Foundation of China (NSFC) Distinguished Overseas Young Talents Program; the Innovation Program for Quantum Science and Technology (2021ZD0300301); the Open Research Project of Large Research Infrastructures of CAS, "Study on the interaction between low/mid-latitude atmosphere and ionosphere based on the Chinese Meridian Project"; the National Key Laboratory of Deep Space Exploration (Grant No. NKLDSE2023A002); the Open Fund of Anhui Provincial Key Laboratory of Intelligent Underground Detection (Grant No. APKLIUD23KF01); and the China National Space Administration (CNSA) pre-research Project on Civil Aerospace Technologies Nos. D010305 and D010301.
Abstract: Sporadic E (Es) layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km. Because they can significantly influence radio communications and navigation systems, accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems. In this study, we present Es predictions made by an empirical model and by a deep learning model, and analyze their differences comprehensively by comparing the model predictions to satellite radio occultation (RO) measurements and ground-based ionosonde observations. The deep learning model exhibited significantly better performance, as indicated by the high coefficient of correlation (r = 0.87) between its predictions and RO observations, than did the empirical model (r = 0.53). This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling generally, and into predicting Es layer occurrences and characteristics in particular.
Abstract: Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that incorporates multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare their testing results. Finally, the method is applied to the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
Abstract: In the article "Deep Learning-Enhanced Brain Tumor Prediction via Entropy-Coded BPSO in CIELAB Color Space" by Mudassir Khalil, Muhammad Imran Sharif, Ahmed Naeem, Muhammad Umar Chaudhry, Hafiz Tayyab Rauf, and Adham E. Ragab (Computers, Materials & Continua, 2023, Vol. 77, No. 2, pp. 2031-2047. DOI: 10.32604/cmc.2023.043687, URL: https://www.techscience.com/cmc/v77n2/54831), there was an error regarding the affiliation of the author Hafiz Tayyab Rauf. Instead of "Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent, ST4 2DE, UK", the affiliation should be "Independent Researcher, Bradford, BD8 0HS, UK".
Abstract: The rapid growth of online communities has brought about an increase in cyber threats, including cyberbullying, hate speech, misinformation, and online harassment, making content moderation a pressing necessity. Traditional single-modal AI-based detection systems, which analyze text, images, or videos in isolation, have proven ineffective at capturing multi-modal threats, in which malicious actors spread harmful content across multiple formats. To address these challenges, we propose a multi-modal deep learning framework that integrates Natural Language Processing (NLP), Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM) networks to identify and mitigate online threats effectively. Our proposed model combines BERT for text classification, ResNet50 for image processing, and a hybrid LSTM-3D CNN network for video content analysis. We constructed a large-scale dataset comprising 500,000 textual posts, 200,000 offensive images, and 50,000 annotated videos from multiple platforms, including Twitter, Reddit, YouTube, and online gaming forums. The system was carefully evaluated using standard machine learning metrics, including accuracy, precision, recall, F1-score, and ROC-AUC curves. Experimental results demonstrate that our multi-modal approach significantly outperforms single-modal AI classifiers, achieving an accuracy of 92.3%, precision of 91.2%, recall of 90.1%, and an AUC score of 0.95. The findings validate the necessity of integrating multi-modal AI for real-time, high-accuracy online threat detection and moderation. Future work will focus on improving adversarial robustness, enhancing scalability for real-world deployment, and addressing ethical concerns associated with AI-driven content moderation.
Funding: funded by Taif University, Saudi Arabia, project No. TU-DSPP-2024-263.
Abstract: Deep learning algorithms have been rapidly incorporated into many different applications due to the increase in computational power and the availability of massive amounts of data. Recently, both deep learning and ensemble learning have been used to recognize underlying structures and patterns from high-level features to make predictions/decisions. With the growth in popularity of deep learning and ensemble learning algorithms, they have received significant attention from both scientists and the industrial community due to their superior ability to learn features from big data. Ensemble deep learning has exhibited significant performance in enhancing learning generalization through the use of multiple deep learning algorithms. Although ensemble deep learning has large quantities of training parameters, which result in time and space overheads, it performs much better than traditional ensemble learning. Ensemble deep learning has been successfully used in several areas, such as bioinformatics, finance, and health care. In this paper, we review and investigate recent ensemble deep learning algorithms and techniques in health care domains: medical imaging, health care data analytics, genomics, diagnosis, disease prevention, and drug discovery. We cover several widely used deep learning algorithms along with their architectures, including deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs). Common healthcare tasks, such as medical imaging, electronic health records, and genomics, are also demonstrated. Furthermore, the challenges inherent in reducing the burden on the healthcare system are discussed and explored. Finally, future directions and opportunities for enhancing healthcare model performance are discussed.
Abstract: This article is devoted to developing a deep learning method for the numerical solution of partial differential equations (PDEs). The graph kernel neural network (GKNN) approach to embedding graphs into a computationally numerical format has been used. In particular, mathematical models of the dynamics of cancer cell invasion in inhomogeneous areas of human tissue have been investigated. Neural operators were initially proposed to model the differential operator of PDEs. The GKNN mapping features between input data to the PDEs and their solutions have been constructed. The boundary integral method, in combination with Green's functions for a large number of boundary conditions, is used. The tools applied in this development are based on Fourier neural operators (FNOs), graph theory, elasticity theory, and singular integral equations.
Abstract: Deep learning has significantly transformed personalized education by enabling intelligent adaptation to individual learning needs. This study explores deep learning-based modeling methods that enhance personalized learning experiences, optimize instructional content, and predict student progress. We examine key techniques, including recurrent neural networks (RNNs), transformers, reinforcement learning, and multimodal learning analytics, to demonstrate their roles in personalized learning-path recommendations and adaptive content generation. Case studies of AI-driven tutoring systems and learning management platforms illustrate real-world applications. Additionally, we address challenges related to data privacy, algorithmic bias, and model interpretability. The paper concludes with future directions for deep learning in education, emphasizing its potential for enhancing immersive and intelligent learning environments.
Abstract: The precise identification of quartz minerals is crucial in mineralogy and geology due to their widespread occurrence and industrial significance. Traditional methods of quartz identification in thin sections are labor-intensive and require significant expertise, often complicated by the coexistence of other minerals. This study presents a novel approach leveraging deep learning techniques combined with hyperspectral imaging to automate the identification of quartz minerals. The four advanced deep learning models utilized—PSPNet, U-Net, FPN, and LinkNet—deliver significant advancements in efficiency and accuracy. Among these models, PSPNet exhibited superior performance, achieving the highest intersection-over-union (IoU) scores and demonstrating exceptional reliability in segmenting quartz minerals, even in complex scenarios. The study involved a comprehensive dataset of 120 thin sections, encompassing 2470 hyperspectral images prepared from 20 rock samples. Expert-reviewed masks were used for model training, ensuring robust segmentation results. This automated approach not only expedites the recognition process but also enhances reliability, providing a valuable tool for geologists and advancing the field of mineralogical analysis.
Funding: the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, funded this research work through project number RI-44-0833.
Abstract: The field of biometric identification has seen significant advancements over the years, with research focusing on enhancing the accuracy and security of these systems. One of the key developments is the integration of deep learning techniques in biometric systems. However, despite these advancements, certain challenges persist. One of the most significant is scalability over growing complexity. Traditional methods either require maintaining and securing a growing database, introducing serious security challenges, or rely on retraining the entire model when new data is introduced, a process that can be computationally expensive and complex. This challenge underscores the need for more efficient methods to scale securely. To this end, we introduce a novel approach that addresses these challenges by integrating multimodal biometrics, cancelable biometrics, and incremental learning techniques. This work is among the first attempts to seamlessly incorporate deep cancelable biometrics with dynamic architectural updates, applied incrementally to the deep learning model as new users are enrolled, achieving high performance with minimal catastrophic forgetting. By leveraging a One-Dimensional Convolutional Neural Network (1D-CNN) architecture combined with a hybrid incremental learning approach, our system achieves high recognition accuracy, averaging 98.98% over incrementing datasets, while ensuring user privacy through cancelable templates generated via a pre-trained CNN model and random projection. The approach demonstrates remarkable adaptability, utilizing the least intrusive biometric traits, such as facial features and fingerprints, ensuring not only robust performance but also long-term serviceability.
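The cancelable-template idea mentioned above (random projection of deep features under a revocable key) can be sketched in a few lines. This is a simplified stand-in for the paper's pipeline: the feature vector here is a made-up placeholder for pre-trained CNN features, and a key-seeded Gaussian matrix plays the role of the projection:

```python
import random

def cancelable_template(feature_vec, user_key, out_dim=8):
    """Project biometric features through a matrix derived from a
    revocable user key. If the template is ever compromised, issuing
    a new key yields a new, unlinkable template from the same
    underlying biometric, without exposing the raw features."""
    rng = random.Random(user_key)          # key-seeded, hence reproducible
    proj = [[rng.gauss(0, 1) for _ in feature_vec] for _ in range(out_dim)]
    # Matrix-vector product: one projected value per output dimension.
    return [sum(p * f for p, f in zip(row, feature_vec)) for row in proj]

features = [0.2, -1.1, 0.7, 0.4, 0.0, 0.9]   # placeholder deep features
t1 = cancelable_template(features, user_key=42)
t2 = cancelable_template(features, user_key=42)   # same key -> same template
t3 = cancelable_template(features, user_key=99)   # revoked key -> new template
print(t1 == t2, t1 == t3)
```

Matching is then performed in the projected space, so the stored templates reveal neither the original features nor each other's identity across key changes.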
Funding: supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (RS-2024-00460621, Developing BCI-Based Digital Health Technologies for Mental Illness and Pain Management).
Abstract: Automatic detection of leukemia, or blood cancer, is one of the most challenging tasks that need to be addressed in the healthcare system. Analysis of white blood cells (WBCs) in blood or bone marrow microscopic slide images plays a crucial part in early identification, facilitating medical experts. For Acute Lymphocytic Leukemia (ALL), the most relevant part of the blood or marrow must be analyzed by experts before the disease spreads through the whole body and the condition worsens. Researchers have done a lot of work in this field; to demonstrate a comprehensive analysis, a few literature reviews have been published focusing on various artificial intelligence-based techniques, such as machine and deep learning detection of ALL. The systematic review in this article has been done under the PRISMA guidelines and presents the most recent advancements in this field. Different image segmentation techniques were broadly studied from various online databases, such as Google Scholar, Science Direct, and PubMed, and categorized as image-processing-based, traditional machine and deep learning-based, and advanced deep learning-based models. Convolutional Neural Networks (CNNs) based on traditional models, and then the recent advancements in CNNs used for the classification of ALL into its subtypes, are presented. A critical analysis of the existing methods is provided to offer clarity on the current state of the field. Finally, the paper concludes with insights and suggestions for future research, aiming to guide new researchers in the development of advanced automated systems for detecting life-threatening diseases.
Funding: supported by the National Key Research and Development Program of China (Grant No. 2023YFC3009400) and the National Natural Science Foundation of China (Grant Nos. 42307218 and U2239251).
Abstract: Current deep learning models for braced excavation cannot predict deformation from the beginning of excavation, owing to the need for a substantial corpus of historical data for training purposes. To address this issue, this study proposes a transfer learning model based on a sequence-to-sequence two-dimensional (2D) convolutional long short-term memory neural network (S2SCL2D). The model can use existing data from other adjacent, similar excavations to achieve wall-deflection prediction once a limited amount of monitoring data from the target excavation has been recorded. In the absence of adjacent excavation data, numerical simulation data from the target project can be employed instead. A weight-update strategy is proposed to improve the prediction accuracy by integrating stochastic gradient masking with an early stopping mechanism. To illustrate the proposed methodology, an excavation project in Hangzhou, China is adopted. The proposed deep transfer learning model, which uses either adjacent excavation data or numerical simulation data as the source domain, shows a significant improvement in performance when compared to the non-transfer-learning model. Using the simulation data from the target project even leads to better prediction performance than using the actual monitoring data from other adjacent excavations. The results demonstrate that the proposed model can reasonably predict the deformation with limited data from the target project.
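The weight-update strategy above combines stochastic gradient masking with early stopping. As a heavily simplified sketch (the masking mechanism shown here, zeroing a random subset of gradient components each step, is an assumption about the idea, and a toy quadratic loss stands in for the network):

```python
import random

random.seed(1)

TARGET = [1.0, -2.0, 0.5, 3.0]   # hypothetical "ideal" weights

def loss(w):
    # Toy quadratic objective standing in for the network's loss.
    return sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

def fine_tune(w, lr=0.1, mask_p=0.5, patience=5, max_steps=200):
    best, best_loss, wait = list(w), loss(w), 0
    for _ in range(max_steps):
        grad = [2 * (wi - ti) for wi, ti in zip(w, TARGET)]
        # Stochastic gradient masking: freeze a random subset of weights
        # at each step, so pretrained knowledge is only partially overwritten.
        mask = [1 if random.random() > mask_p else 0 for _ in grad]
        w = [wi - lr * g * m for wi, g, m in zip(w, grad, mask)]
        cur = loss(w)
        if cur < best_loss - 1e-6:
            best, best_loss, wait = list(w), cur, 0
        else:
            wait += 1
            if wait >= patience:   # early stopping on stalled improvement
                break
    return best, best_loss

w0 = [0.0, 0.0, 0.0, 0.0]
w_final, l_final = fine_tune(w0)
print(l_final)
```

In the actual model the masked update would be applied to the S2SCL2D weights during fine-tuning on the limited target-excavation data, with the early-stopping criterion evaluated on held-out monitoring records.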
Funding: funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R435), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Segmenting a breast ultrasound image is still challenging due to the presence of speckle noise, dependency on the operator, and variation in image quality. This paper presents the UltraSegNet architecture, which addresses these challenges through three key technical innovations: (1) a modified ResNet-50 backbone with sequential 3×3 convolutions to preserve the fine anatomical details needed for finding lesion boundaries; (2) a computationally efficient regional attention mechanism that operates on high-resolution features without a transformer's extra memory; and (3) an adaptive feature fusion strategy that adjusts local and global features based on how the image is being used. Extensive evaluation on two distinct datasets demonstrates UltraSegNet's superior performance: on the BUSI dataset, it obtains a precision of 0.915, a recall of 0.908, and an F1 score of 0.911. On the UDAIT dataset, it achieves robust performance across the board, with a precision of 0.901 and a recall of 0.894. Importantly, these improvements are achieved at clinically feasible computation times, taking 235 ms per image on standard GPU hardware. Notably, UltraSegNet performs remarkably well on difficult small lesions (less than 10 mm), achieving a detection accuracy of 0.891, a substantial improvement over traditional methods, which struggle with small-scale features and achieve only 0.63-0.71 accuracy. This improvement in small-lesion detection is particularly crucial for early-stage breast cancer identification. Results from this work demonstrate that UltraSegNet can be practically deployed in clinical workflows to improve breast cancer screening accuracy.
Funding: the authors of this study extend their appreciation to the Researchers Supporting Project number (RSPD2025R544), King Saud University, Riyadh, Saudi Arabia.
Abstract: As the trend of using the latest machine learning models to automate requirements engineering processes continues, security requirements classification is turning into the most researched field in the software engineering community. Previous literature studies have proposed numerous models for the classification of security requirements. However, adopting those models is constrained by the lack of essential datasets that would permit the repetition and generalization of studies employing more advanced machine learning algorithms. Moreover, most researchers focus only on the classification of requirements containing security keywords; they do not consider other non-functional requirements (NFRs) that are directly or indirectly related to security. This has been identified as a significant research gap in security requirements engineering. The major objective of this study is to propose a security requirements classification model that categorizes security and other security-relevant requirements. We use PROMISE_exp and DOSSPRE, the two most commonly used datasets in the software engineering community. The proposed methodology consists of two steps. In the first step, we analyze all the non-functional requirements and their relation to security requirements, finding 10 NFRs that have a strong relationship with security requirements. In the second step, we place those NFRs in the security requirements category. Our proposed methodology is a hybrid model based on the Convolutional Neural Network (CNN) and Extreme Gradient Boosting (XGBoost) models. Moreover, we evaluate the model by replacing the requirement-type column in the dataset with a binary classification column, classifying requirements into security and non-security categories. Performance is evaluated using four metrics: recall, precision, accuracy, and F1 score. Trained for 20 and 28 epochs with a batch size of 32 on the PROMISE_exp and DOSSPRE datasets, the model achieved 87.3% and 85.3% accuracy, respectively. The proposed study shows improved metric values compared with previous literature studies and serves as a proof of concept for systematizing the evaluation of security recognition in software systems during the early phases of software development.
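The abstract above does not specify the internals of the CNN-XGBoost hybrid, but the usual pattern is that convolutional filters slide over token embeddings of a requirement sentence and global max pooling turns each filter's response into one feature, after which the pooled feature vector is handed to a gradient-boosting classifier. The following is a minimal dependency-free sketch of that feature-extraction step; the function name `conv1d_features`, the toy embeddings, and the hand-set filter weights are all illustrative assumptions, not the paper's actual architecture.

```python
def conv1d_features(embeddings, filters):
    """Hypothetical sketch of CNN-style text feature extraction.

    embeddings: list of token-embedding vectors for one requirement sentence.
    filters: list of 1-D conv kernels; each kernel is a list of weight rows,
             one row per token position it covers.
    Returns one pooled feature per filter (ReLU + global max pooling).
    """
    feats = []
    for f in filters:
        k = len(f)  # kernel width in tokens
        best = 0.0  # ReLU floor: negative responses pool to 0
        for i in range(len(embeddings) - k + 1):
            # dot product of the kernel with a window of k token embeddings
            s = sum(w * x
                    for row, emb in zip(f, embeddings[i:i + k])
                    for w, x in zip(row, emb))
            best = max(best, s)  # global max pooling over all windows
        feats.append(best)
    return feats

# Toy usage: 3 tokens with 2-dim embeddings, one width-2 filter.
emb = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
filt = [[[1.0, 1.0], [1.0, 1.0]]]
features = conv1d_features(emb, filt)  # -> [3.0]
```

In the hybrid setup described, vectors like `features` (computed for every requirement) would form the training matrix for an XGBoost binary classifier separating security from non-security requirements.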
Abstract: Heart disease prediction is a critical issue in healthcare, where accurate early diagnosis can save lives and reduce healthcare costs. The problem is inherently complex due to the high dimensionality of medical data, irrelevant or redundant features, and the variability of risk factors such as age, lifestyle, and medical history. These challenges often lead to inefficient and less accurate models. Traditional prediction methodologies face limitations in handling large feature sets and optimizing classification performance, which can result in overfitting, poor generalization, and high computational cost. This work proposes a novel classification model for heart disease prediction that addresses these challenges by integrating feature selection through a Genetic Algorithm (GA) with an ensemble deep learning approach optimized using the Tunicate Swarm Algorithm (TSA). The GA selects the most relevant features, reducing dimensionality and improving model efficiency. The selected features are then used to train an ensemble of deep learning models, in which the TSA optimizes the weight of each model to enhance prediction accuracy. This hybrid approach addresses key challenges in the field, such as high dimensionality, redundant features, and classification performance, by introducing an efficient feature selection mechanism and optimizing the weighting of the deep learning models in the ensemble. These enhancements yield a model with superior accuracy, generalization, and efficiency compared to traditional methods. Specifically, it achieved an accuracy of 97.5%, a sensitivity of 97.2%, and a specificity of 97.8%. Additionally, with a 60-40 data split and 5-fold cross-validation, the model showed a significant reduction in training time (90 s), memory consumption (950 MB), and CPU usage (80%), highlighting its effectiveness in processing large, complex medical datasets for heart disease prediction.
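The abstract describes GA-based feature selection only at a high level. The standard formulation encodes each candidate feature subset as a binary mask and evolves a population of masks via selection, crossover, and mutation against a fitness function (in practice, validation accuracy of a model trained on the selected features, minus a penalty on subset size). The sketch below shows that generic loop with a toy fitness function; `ga_select`, the operator choices (truncation selection, one-point crossover, 10% bit-flip mutation), and all hyperparameter values are illustrative assumptions, not the paper's configuration.

```python
import random

def ga_select(n_features, fitness, pop_size=20, generations=30, seed=0):
    """Hypothetical GA feature selection over binary masks.

    fitness(mask) -> float, higher is better (e.g., validation accuracy
    minus a penalty for the number of selected features).
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]          # truncation selection: keep top half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_features)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:              # bit-flip mutation
                i = rng.randrange(n_features)
                child[i] ^= 1
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy fitness: features 0 and 2 are "relevant"; each extra feature costs 0.1.
def toy_fitness(mask):
    return (mask[0] + mask[2]) - 0.1 * sum(mask)

best = ga_select(5, toy_fitness)  # converges to a mask selecting features 0 and 2
```

In the proposed pipeline, the surviving mask would determine the inputs to the deep learning ensemble, and the TSA would then tune the per-model combination weights in an analogous population-based search over continuous weight vectors.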