Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is a well-established diagnostic tool that relies on image classification to assess coronary artery disease (CAD). Automatic classification of SPECT images has reached near-optimal accuracy with convolutional neural networks (CNNs). This paper uses a SPECT classification framework with three steps: 1) image denoising, 2) attenuation correction, and 3) image classification. Denoising is performed by a U-Net architecture that ensures effective noise removal. Attenuation correction is implemented by a convolutional neural network that removes the attenuation which would otherwise degrade feature extraction during classification. Finally, a novel multi-scale dilated convolution (MSDC) network is proposed; it merges features extracted at different scales so the model learns them more efficiently. Three scales of 3×3 filters are used to extract features. All three steps are compared with state-of-the-art methods. The proposed denoising architecture yields high-quality images with a peak signal-to-noise ratio (PSNR) of 39.7. The proposed classification method is compared with five different CNN models and achieves better classification, with an accuracy of 96%, precision of 87%, sensitivity of 87%, specificity of 89%, and an F1-score of 87%. To demonstrate the importance of preprocessing, the classification model was also analyzed without denoising and attenuation correction.
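As a rough illustration of the MSDC idea, the sketch below (assuming PyTorch) runs three parallel 3×3 branches with different dilation rates and fuses them with a 1×1 convolution; the dilation rates, channel counts, and fusion choice are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MSDCBlock(nn.Module):
    """Multi-scale dilated convolution sketch: three parallel 3x3 branches
    with different dilation rates, concatenated and fused by a 1x1 conv."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        # Each branch sees a different receptive field; the 1x1 conv merges the scales.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Example: a hypothetical 64-channel SPECT feature map of size 64x64
x = torch.randn(1, 64, 64, 64)
print(MSDCBlock(64, 64)(x).shape)  # torch.Size([1, 64, 64, 64])
```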
In radiology, magnetic resonance imaging (MRI) is an essential diagnostic tool that provides detailed images of a patient's anatomical and physiological structures, and it is particularly effective for detecting soft-tissue anomalies. Traditionally, radiologists interpret these images manually, which can be labor-intensive and time-consuming given the vast amount of data. To address this challenge, machine learning and deep learning approaches can be used to improve the accuracy and efficiency of anomaly detection in MRI scans. This manuscript presents the Deep AlexNet50 model for MRI classification with discriminative learning methods. Learning proceeds in three stages: in the first stage, the whole dataset is used to learn the features; in the second stage, some layers of AlexNet50 are frozen and trained on an augmented dataset; and in the third stage, AlexNet50 is trained on the augmented dataset. The method used three publicly available MRI classification datasets for analysis: the Harvard whole brain atlas (HWBA-dataset), the School of Biomedical Engineering of Southern Medical University dataset (SMU-dataset), and the National Institute of Neuroscience and Hospitals brain MRI dataset (NINS-dataset). Several hyperparameter optimizers, including Adam, stochastic gradient descent (SGD), root mean square propagation (RMSProp), Adamax, and AdamW, were used to compare the performance of the learning process. The HWBA-dataset registers the best classification performance. We evaluated the proposed classification model using several quantitative metrics, achieving an average accuracy of 98%.
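One plausible reading of this staged scheme, sketched below with PyTorch/torchvision, is progressive freezing and unfreezing of a backbone; torchvision's AlexNet stands in for the paper's "AlexNet50", and the layer choices, optimizers, and class count are illustrative assumptions rather than the paper's setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stage-wise discriminative fine-tuning (sketch). Pretrained ImageNet weights
# could be loaded via the weights argument; omitted here to keep the sketch offline.
model = models.alexnet(weights=None)
model.classifier[6] = nn.Linear(4096, 4)  # e.g. 4 MRI classes (hypothetical)

def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad = flag

# Stage 1: train all layers on the original dataset.
set_trainable(model, True)
opt1 = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stage 2: freeze the convolutional feature extractor, train the head on augmented data.
set_trainable(model.features, False)
opt2 = torch.optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)

# Stage 3 (assumed here to unfreeze everything): fine-tune on the augmented dataset at a low rate.
set_trainable(model, True)
opt3 = torch.optim.AdamW(model.parameters(), lr=1e-5)
```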
Many fields, such as neuroscience, are experiencing a vast proliferation of cellular data, underscoring the need for organizing and interpreting large datasets. A popular approach partitions data into manageable subsets via hierarchical clustering, but objective methods to determine the appropriate classification granularity are missing. We recently introduced a technique to systematically identify when to stop subdividing clusters based on the fundamental principle that cells must differ more between than within clusters. Here we present the corresponding protocol to classify cellular datasets by combining data-driven unsupervised hierarchical clustering with statistical testing. These general-purpose functions are applicable to any cellular dataset that can be organized as two-dimensional matrices of numerical values, including molecular, physiological, and anatomical datasets. We demonstrate the protocol using cellular data from the Janelia MouseLight project to characterize morphological aspects of neurons.
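The stopping principle can be illustrated with a generic sketch (assuming NumPy/SciPy): subdivide a cluster only if pairwise distances between the two candidate subclusters are significantly larger than distances within them. The linkage method, test, and threshold below are assumptions, not the authors' published protocol.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform
from scipy.stats import mannwhitneyu

def should_split(X, labels, alpha=0.05):
    # Accept a split only if between-cluster distances exceed within-cluster distances.
    D = squareform(pdist(X))
    a, b = (labels == 1), (labels == 2)
    within = np.concatenate([D[a][:, a][np.triu_indices(a.sum(), 1)],
                             D[b][:, b][np.triu_indices(b.sum(), 1)]])
    between = D[a][:, b].ravel()
    stat, p = mannwhitneyu(between, within, alternative="greater")
    return p < alpha

X = np.random.rand(60, 10)                      # 60 cells x 10 features (synthetic)
labels = fcluster(linkage(X, "ward"), 2, "maxclust")
print("subdivide" if should_split(X, labels) else "stop")
```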
Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws or diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification; many extracted image features are irrelevant and increase computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method that selects the most relevant image features. The process trains the learning paradigm using similarity- and correlation-based features over different textural intensities and pixel distributions. Pixel similarity across the various distribution patterns with high indexes is recommended for disease diagnosis. The correlation based on intensity and distribution is then analyzed to improve feature selection congruency, and the most congruent pixels are sorted in descending order of selection, which identifies better regions than the raw distribution. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection, improving the probability of feature selection regardless of texture and medical image pattern. This process enhances the performance of ML applications for different medical image processing tasks. The proposed method improves accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared with other models on the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared with the same models and dataset.
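As a loose illustration of ranking features by combining label correlation with within-class intensity similarity (a generic sketch assuming NumPy, not the paper's Congruent Feature Selection Method):

```python
import numpy as np

def rank_features(X, y, top_k=5):
    # Correlation of each feature with the label (absolute Pearson coefficient).
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    # Similarity proxy: inverse of within-class variance averaged over classes.
    sim = np.array([1.0 / (1e-6 + np.mean([X[y == c, j].var() for c in np.unique(y)]))
                    for j in range(X.shape[1])])
    score = corr * (sim / sim.max())      # combined "congruency"-style score
    return np.argsort(score)[::-1][:top_k]

X = np.random.rand(200, 20)               # synthetic feature matrix
y = (X[:, 3] + 0.1 * np.random.rand(200) > 0.5).astype(int)
print(rank_features(X, y))                # feature 3 should rank near the top
```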
In response to the scarcity of infrared aircraft samples and the tendency of traditional deep learning to overfit, a few-shot infrared aircraft classification method based on cross-correlation networks is proposed. The method combines two core modules: a simple parameter-free self-attention module and a cross-attention module. By analyzing the self-correlation and cross-correlation between support images and query images, it achieves effective classification of infrared aircraft under few-shot conditions. The proposed cross-correlation network integrates these two modules and is trained end to end. The parameter-free self-attention extracts the internal structure of each image, while the cross-attention computes the cross-correlation between images, further extracting and fusing inter-image features. Compared with existing few-shot infrared target classification models, this model focuses on the geometric structure and thermal texture information of infrared images by modeling the semantic relevance between the features of the support set and the query set, thereby better attending to the target objects. Experimental results show that this method outperforms existing infrared aircraft classification methods on various classification tasks, with the highest accuracy improvement exceeding 3%. In addition, ablation and comparative experiments confirm the effectiveness of the method.
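Cross-attention between support and query features can be sketched as follows (assuming PyTorch; the feature-map size, embedding dimension, and similarity readout are illustrative assumptions, not the paper's network):

```python
import torch
import torch.nn as nn

# Query-image features attend to support-image features so that shared structure
# (e.g., aircraft geometry and thermal texture) is emphasized before comparing classes.
cross_attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)

support = torch.randn(1, 49, 64)   # flattened 7x7 feature map of one support image
query = torch.randn(1, 49, 64)     # same for the query image

fused, weights = cross_attn(query, support, support)   # query attends to support
similarity = torch.cosine_similarity(fused.mean(dim=1), support.mean(dim=1))
print(similarity)                  # higher value -> query more likely in this class
```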
The World Wide Web provides a wealth of information about everything, including contemporary audio and visual art events, which are discussed on media outlets, blogs, and specialized websites alike. This information may become a robust source of real-world data, which may form the basis of an objective data-driven analysis. In this study, a methodology for collecting information about audio and visual art events in an automated manner from a large array of websites is presented in detail. This process uses cutting-edge Semantic Web, Web Search, and Generative AI technologies to convert website documents into a collection of structured data. The value of the methodology is demonstrated by creating a large dataset concerning audiovisual events in Greece. The collected information includes event characteristics, estimated metrics based on their text descriptions, outreach metrics based on the media that reported them, and a multi-layered classification of these events based on their type, subjects, and methods used. This dataset is openly provided to the general and academic public through a Web application. Moreover, each event's outreach is evaluated using these quantitative metrics, the results are analyzed with an emphasis on classification popularity, and useful conclusions are drawn concerning the importance of artistic subjects, methods, and media.
Flood disasters can seriously disrupt people's lives and production and cause huge losses of life and property. Based on multi-source remote sensing data, this study established decision-tree classification rules through multi-source and multi-temporal feature fusion, classified ground objects before the disaster, and extracted flood information in the disaster area from optical images acquired during the disaster, enabling rapid assessment of the situation of each disaster-bearing object. In the case of Qianliang Lake, which suffered flooding in 2020, the results show that decision-tree classification algorithms based on multi-temporal features can effectively integrate multi-temporal and multispectral information, overcoming the shortcomings of single-temporal image classification and achieving accurate classification of ground objects.
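A toy version of such threshold-based decision-tree rules, using common water and vegetation indices, is sketched below; the indices, thresholds, and band values are hypothetical and are not the study's calibrated rules.

```python
# Illustrative decision-tree style rules for pre-disaster land cover and flood extraction.
def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-6)

def ndwi(green, nir):
    return (green - nir) / (green + nir + 1e-6)

def classify_pixel(green, red, nir, green_flood, nir_flood):
    # Pre-disaster cover from the pre-event image, flood status from the during-event image.
    if ndwi(green, nir) > 0.2:
        pre = "water"
    elif ndvi(nir, red) > 0.4:
        pre = "vegetation"
    else:
        pre = "built-up/bare"
    flooded = ndwi(green_flood, nir_flood) > 0.2 and pre != "water"
    return pre, flooded

print(classify_pixel(0.10, 0.08, 0.45, 0.30, 0.05))  # ('vegetation', True)
```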
The risk factors for type 2 diabetes mellitus (T2DM) have been increasingly researched, but the lack of systematic identification and categorization makes it difficult for clinicians to quickly and accurately access and understand them all. In this narrative review, the risk factors are grouped into five categories: social determinants, lifestyle, checkable/testable risk factors, history of illness and medication, and other factors. The paper also points out problems in current research, helps improve the systematic categorization and practicality of T2DM risk factors, and provides a professional research basis for clinical practice and industry decision-making.
This paper studies a sentiment classification method based on a multimodal adversarial autoencoder. It introduces the multimodal adversarial autoencoder approach to emotion classification and reports experiments on the encoder-based classification method. The experimental analysis shows that the proposed encoder achieves higher precision in emotion classification than other encoders. It is hoped that this analysis can serve as a reference for emotion classification under current intelligent-algorithm paradigms.
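For context, the core of an adversarial autoencoder couples a reconstruction loss with an adversarial loss that matches the latent code to a prior; the minimal PyTorch sketch below illustrates only this core, omitting the multimodal fusion and sentiment classifier discussed in the paper, with all dimensions chosen arbitrarily.

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(128, 32))                   # encoder: fused features -> latent
dec = nn.Sequential(nn.Linear(32, 128))                   # decoder: latent -> reconstruction
disc = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
bce, mse = nn.BCELoss(), nn.MSELoss()

x = torch.randn(64, 128)                                   # a batch of (hypothetical) fused features
z = enc(x)
recon_loss = mse(dec(z), x)                                # reconstruction term
prior = torch.randn_like(z)                                # Gaussian prior samples
d_loss = bce(disc(prior), torch.ones(64, 1)) + bce(disc(z.detach()), torch.zeros(64, 1))
g_loss = bce(disc(z), torch.ones(64, 1))                   # encoder tries to fool the discriminator
print(recon_loss.item(), d_loss.item(), g_loss.item())
```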
This study first analyzes four distinct forms of Fujian folk dance, highlighting the notable differences in their cultural characteristics and dance qualities. It then categorizes these dance forms to align with textbook construction, discussing in depth the principles guiding the development of textbooks that correspond to these classifications.
With the widespread use of upper gastrointestinal endoscopy, more and more gastric polyps (GPs) are being detected. Traditional management strategies often rely on histopathologic examination, which can be time-consuming and may not guide immediate clinical decisions. This paper aims to introduce a novel classification system for GPs based on their potential risk of malignant transformation, categorizing them as "good", "bad", and "ugly". A review of the literature and clinical case analysis were conducted to explore the clinical implications, management strategies, and the system's application in endoscopic practice. Good polyps, mainly including fundic gland polyps and inflammatory fibrous polyps, have a low risk of malignancy and typically require minimal or no intervention. Bad polyps, mainly including hyperplastic polyps and adenomas, pose an intermediate risk of malignancy, necessitating closer monitoring or removal. Ugly polyps, mainly including type 3 neuroendocrine tumors and early gastric cancer, indicate a high potential for malignancy and require urgent and comprehensive treatment. The new classification system provides a simplified and practical framework for diagnosing and managing GPs, improving diagnostic accuracy, guiding individualized treatment, and promoting advancements in endoscopic techniques. Despite some challenges, such as the risk of misclassification due to similar endoscopic appearances, this system is essential for the standardized management of GPs. It also lays the foundation for future research into biomarkers and the development of personalized medicine.
The integration of image analysis through deep learning (DL) into rock classification represents a significant leap forward in geological research. While traditional methods remain invaluable for their expertise and historical context, DL offers a powerful complement by enhancing the speed, objectivity, and precision of the classification process. This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks (CNNs) for geological image analysis, particularly the classification of igneous, metamorphic, and sedimentary rock types from rock thin section (RTS) images. The study focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision. Results demonstrate that augmentation techniques such as Equalize significantly enhance the model's classification capability, achieving F1-scores of 0.9869 for igneous rocks, 0.9884 for metamorphic rocks, and 0.9929 for sedimentary rocks, improvements over the original baseline results; the weighted-average F1-score across all classes and techniques is 0.9886. Conversely, methods such as Distort reduce accuracy and F1-score, with F1-scores of 0.949 for igneous rocks, 0.954 for metamorphic rocks, and 0.9416 for sedimentary rocks, degrading performance relative to the baseline. The study underscores the practicality of image data augmentation in geological image classification and advocates the adoption of DL methods in this domain for automation and improved results. The findings can benefit various fields, including remote sensing, mineral exploration, and environmental monitoring, by enhancing the accuracy of geological image analysis for both scientific research and industrial applications.
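A minimal augmentation pipeline in this spirit (assuming torchvision; the exact transforms and parameters used in the study are not reproduced here) might look like:

```python
from torchvision import transforms

# Augmentation sketch for rock thin-section images: histogram equalization
# ("Equalize") helped in the study, while heavy geometric distortion was
# reported to hurt; probabilities and sizes here are illustrative.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomEqualize(p=1.0),      # apply histogram equalization to every image
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
# Usage (hypothetical path): img_t = train_tf(Image.open("thin_section.png"))
```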
In network traffic classification, it is important to understand the correlation between network traffic and its causal application, protocol, or service group, for example to facilitate lawful interception, ensure quality of service, prevent application choke points, and identify malicious behavior. In this paper, we review existing network classification techniques, such as port-based identification and approaches based on deep packet inspection, statistical features combined with machine learning, and deep learning algorithms. We also explain the implementations, advantages, and limitations associated with these techniques. Our review further covers publicly available datasets used in the literature. Finally, we discuss existing and emerging challenges, as well as future research directions.
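The simplest of the reviewed techniques, port-based identification, can be sketched in a few lines; the port mapping below is a tiny illustrative subset, and the ease with which applications evade fixed ports is exactly what motivates the DPI and ML-based methods the review covers.

```python
# Minimal port-based identification sketch.
WELL_KNOWN_PORTS = {80: "HTTP", 443: "HTTPS", 53: "DNS", 22: "SSH", 25: "SMTP"}

def classify_flow(dst_port: int) -> str:
    return WELL_KNOWN_PORTS.get(dst_port, "unknown")

print(classify_flow(443))   # HTTPS
print(classify_flow(8081))  # unknown
```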
When building a classification model, the scenario where the samples of one class significantly outnumber those of the other class is called data imbalance. Data imbalance biases the trained classification model toward the majority class (usually defined as the negative class), which can harm the accuracy of the minority class (usually defined as the positive class) and lead to poor overall performance. This article proposes a method called MSHR-FCSSVM for imbalanced data classification, based on a new hybrid resampling approach (MSHR) and a new fine cost-sensitive support vector machine (CS-SVM) classifier (FCSSVM). The MSHR measures the separability of each negative sample through its Silhouette value, calculated from the Mahalanobis distance between samples; on this basis, so-called pseudo-negative samples are screened out to generate new positive samples through linear interpolation (over-sampling step) and are finally deleted (under-sampling step). This approach replaces pseudo-negative samples with generated positive samples one by one to clear the inter-class overlap on the borderline, without changing the overall scale of the dataset. The FCSSVM is an improved version of the traditional CS-SVM: it simultaneously considers the influence of both sample-number imbalance and class distribution on classification, and it finely tunes the class cost weights using an efficient optimization algorithm based on the physical phenomenon of rime ice (the RIME algorithm), with cross-validation accuracy as the fitness function, to accurately adjust the classification borderline. To verify the effectiveness of the proposed method, a series of experiments was carried out on 20 imbalanced datasets, including both mildly and extremely imbalanced ones. The experimental results show that MSHR-FCSSVM outperforms the comparison methods in most cases, and that both the MSHR and the FCSSVM play significant roles.
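The cost-sensitive side of the idea, giving the minority class a larger misclassification cost so the SVM borderline shifts toward it, can be illustrated with scikit-learn; the weights below are fixed by hand purely for illustration, whereas the paper tunes them with the RIME optimizer and pairs the classifier with MSHR resampling.

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic 90/10 imbalanced dataset; sweep the positive-class cost weight.
X, y = make_classification(n_samples=600, weights=[0.9, 0.1], random_state=0)
for w in (1, 3, 9):
    clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: float(w)})
    score = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()
    print(f"positive-class weight {w}: F1 = {score:.3f}")
```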
Recently, there have been some attempts to apply Transformers to 3D point cloud classification. To reduce computation, most existing methods focus on local spatial attention, but they ignore point content and fail to establish relationships between distant but relevant points. To overcome this limitation of local spatial attention, we propose a point content-based Transformer architecture, called PointConT for short. It exploits the locality of points in the feature space (content-based), clustering sampled points with similar features into the same class and computing self-attention within each class, thus enabling an effective trade-off between capturing long-range dependencies and computational complexity. We further introduce an inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in each branch separately. Extensive experiments show that our PointConT model achieves remarkable performance on point cloud shape classification. In particular, our method exhibits 90.3% Top-1 accuracy on the hardest setting of ScanObjectNN. Source code is available at https://github.com/yahuiliu99/PointConT.
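The content-based grouping idea, grouping points by feature similarity rather than spatial proximity and attending only within each group, can be sketched as follows (assuming PyTorch; the grouping rule, group count, and dimensions are simplifications, not the PointConT implementation):

```python
import torch
import torch.nn as nn

class ContentGroupedAttention(nn.Module):
    """Assign each point to its nearest learned centroid in feature space,
    then run self-attention only within each group."""
    def __init__(self, dim=64, groups=4, heads=4):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(groups, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats):                                 # feats: (N, dim) point features
        assign = (feats @ self.centroids.t()).argmax(dim=1)   # nearest centroid per point
        out = feats.clone()
        for g in range(self.centroids.shape[0]):
            idx = (assign == g).nonzero(as_tuple=True)[0]
            if idx.numel() == 0:
                continue
            x = feats[idx].unsqueeze(0)                       # (1, n_g, dim)
            out[idx] = self.attn(x, x, x)[0].squeeze(0)       # attention within the group only
        return out

feats = torch.randn(1024, 64)                                 # 1024 sampled points, 64-d features
print(ContentGroupedAttention()(feats).shape)                 # torch.Size([1024, 64])
```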
The network of Himalayan roadways and highways connects some remote regions of valleys and hill slopes, which is vital for India's socio-economic growth. Due to natural and artificial factors, the frequency of slope instabilities along these networks has been increasing over the last few decades. Assessing the stability of the natural and artificial slopes created by the construction of these connecting road networks is essential for operating the roads safely throughout the year. Several rock mass classification methods are generally used to assess the strength and deformability of a rock mass. This study assesses slope stability along NH-1A in the Ramban district of the North Western Himalayas. Various structurally and non-structurally controlled rock mass classification systems have been applied to assess the stability conditions of 14 slopes. Kinematic analysis was performed along with the geological strength index (GSI), rock mass rating (RMR), continuous slope mass rating (CoSMR), slope mass rating (SMR), and Q-slope. The SMR identifies three slopes as completely unstable, while CoSMR suggests four slopes are completely unstable. The stability of all slopes was also analyzed using a design chart under dynamic and static conditions with slope stability rating (SSR) for factors of safety (FoS) of 1.2 and 1.0, respectively. Q-slope with a probability of failure (PoF) of 1% gives two slopes as stable. Stable slope angles were determined based on the Q-slope safe-angle equation and the SSR design chart based on the FoS. The value ranges given by the different empirical classifications were RMR (37-74), GSI (27.3-58.5), SMR (11-59), and CoSMR (3.39-74.56). Good relationships were found between RMR and SSR and between RMR and GSI, with correlation coefficients (R²) of 0.815 and 0.6866, respectively. Finally, a comparative stability assessment of all these slopes based on the above classifications was performed to identify the most critical slope along this road.
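For reference, the slope mass rating used in assessments of this kind is conventionally computed (in Romana's scheme, which SMR studies typically follow) as the basic RMR adjusted by discontinuity-slope geometry factors and an excavation-method factor:

```latex
\mathrm{SMR} = \mathrm{RMR}_{\mathrm{basic}} + (F_1 \times F_2 \times F_3) + F_4
```

where F_1, F_2, and F_3 rate the geometric relationship between the discontinuities and the slope face (relative strike, discontinuity dip, and relative dip, respectively) and F_4 accounts for the excavation method.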