Journal Articles
342,630 articles found
AI-Enhanced Secure Data Aggregation for Smart Grids with Privacy Preservation
1
Authors: Congcong Wang, Chen Wang, Wenying Zheng, Wei Gu. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 799-816 (18 pages)
As smart grid technology rapidly advances, the vast amount of user data collected by smart meters presents significant challenges in data security and privacy protection. Current research emphasizes data security and user privacy concerns within smart grids. However, existing methods struggle with efficiency and security when processing large-scale data. Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge. This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities. The approach optimizes data preprocessing, integrates Long Short-Term Memory (LSTM) networks for handling time-series data, and employs homomorphic encryption to safeguard user privacy. It also explores the application of Boneh-Lynn-Shacham (BLS) signatures for user authentication. The proposed scheme's efficiency, security, and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
Keywords: Smart grid; data security; privacy protection; artificial intelligence; data aggregation
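The abstract does not name a specific homomorphic scheme; the sketch below assumes an additively homomorphic Paillier cryptosystem (via the third-party `phe` package) to show how an aggregator could sum encrypted meter readings without seeing any individual value. The LSTM preprocessing and BLS authentication steps are omitted, and the readings are invented.

```python
# Illustrative sketch only: additively homomorphic aggregation of meter readings,
# assuming the Paillier cryptosystem via the `phe` library (the paper does not
# say which homomorphic scheme it uses).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each smart meter encrypts its own reading locally (values are hypothetical).
meter_readings = [3.2, 1.7, 4.5, 2.9]                 # kWh for four households
ciphertexts = [public_key.encrypt(r) for r in meter_readings]

# The aggregator sums ciphertexts without ever seeing an individual reading.
encrypted_total = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_total = encrypted_total + c

# Only the authority holding the private key can recover the aggregate.
total = private_key.decrypt(encrypted_total)
print(total)                                          # ≈ 12.3
```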
A novel method for clustering cellular data to improve classification
2
Authors: Diek W. Wheeler, Giorgio A. Ascoli. Neural Regeneration Research (SCIE, CAS), 2025, Issue 9, pp. 2697-2705 (9 pages)
Many fields, such as neuroscience, are experiencing the vast proliferation of cellular data, underscoring the need for organizing and interpreting large datasets. A popular approach partitions data into manageable subsets via hierarchical clustering, but objective methods to determine the appropriate classification granularity are missing. We recently introduced a technique to systematically identify when to stop subdividing clusters based on the fundamental principle that cells must differ more between than within clusters. Here we present the corresponding protocol to classify cellular datasets by combining data-driven unsupervised hierarchical clustering with statistical testing. These general-purpose functions are applicable to any cellular dataset that can be organized as two-dimensional matrices of numerical values, including molecular, physiological, and anatomical datasets. We demonstrate the protocol using cellular data from the Janelia MouseLight project to characterize morphological aspects of neurons.
Keywords: cellular data; clustering; dendrogram; data classification; Levene's one-tailed statistical test; unsupervised hierarchical clustering
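As a rough illustration of the stopping principle described above (cells must differ more between than within clusters), the sketch below splits a toy dataset with Ward linkage and accepts the split only if between-subcluster distances significantly exceed within-subcluster distances. A generic one-tailed Mann-Whitney test stands in for the paper's Levene-based procedure, and the data are synthetic.

```python
# Minimal sketch of "subdivide only if cells differ more between than within
# the candidate subclusters"; the published protocol uses a one-tailed Levene
# test, for which a generic rank test is substituted here.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
cells = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(3, 1, (30, 5))])  # toy data

def split_is_justified(data, alpha=0.05):
    """Split into two subclusters and test whether between-pair distances
    exceed within-pair distances (one-tailed)."""
    labels = fcluster(linkage(data, method="ward"), t=2, criterion="maxclust")
    d = pdist(data)
    same = pdist(labels.reshape(-1, 1),
                 metric=lambda a, b: float(a[0] == b[0])).astype(bool)
    within, between = d[same], d[~same]
    _, p = mannwhitneyu(between, within, alternative="greater")
    return p < alpha

print(split_is_justified(cells))   # True: the two toy groups are genuinely distinct
```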
A Support Vector Machine (SVM) Model for Privacy Recommending Data Processing Model (PRDPM) in Internet of Vehicles
3
Author: Ali Alqarni. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 389-406 (18 pages)
Open networks and heterogeneous services in the Internet of Vehicles (IoV) can lead to security and privacy challenges. One key requirement for such systems is the preservation of user privacy, ensuring a seamless experience in driving, navigation, and communication. These privacy needs are influenced by various factors, such as data collected at different intervals, trip durations, and user interactions. To address this, the paper proposes a Support Vector Machine (SVM) model designed to process large amounts of aggregated data and recommend privacy-preserving measures. The model analyzes data based on user demands and interactions with service providers or neighboring infrastructure. It aims to minimize privacy risks while ensuring service continuity and sustainability. The SVM model helps validate the system's reliability by creating a hyperplane that distinguishes between maximum and minimum privacy recommendations. The results demonstrate the effectiveness of the proposed SVM model in enhancing both privacy and service performance.
Keywords: Support vector machine; big data; IoV; privacy-preserving
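A minimal, hypothetical illustration of the recommendation step: an SVM trained on synthetic aggregated IoV features, whose decision boundary separates trips into maximum- vs. minimum-privacy recommendations. The feature names and data are invented for the sketch and are not the paper's.

```python
# Toy sketch, not the paper's model: an SVM that maps aggregated IoV features
# (assumed columns: sampling interval, trip duration, interaction count) to a
# binary recommendation of maximum (1) vs. minimum (0) privacy protection.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                          # [interval, duration, interactions]
y = (X @ np.array([0.8, 1.2, -0.5]) > 0).astype(int)   # synthetic labels

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X, y)
print(model.predict([[0.2, 1.5, -0.3]]))               # recommendation for a new trip profile
```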
IoT Empowered Early Warning of Transmission Line Galloping Based on Integrated Optical Fiber Sensing and Weather Forecast Time Series Data
4
Authors: Zhe Li, Yun Liang, Jinyu Wang, Yang Gao. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 1171-1192 (22 pages)
Iced transmission line galloping poses a significant threat to the safety and reliability of power systems, leading directly to line tripping, disconnections, and power outages. Existing early warning methods for iced transmission line galloping suffer from issues such as reliance on a single data source, neglect of irregular time series, and lack of attention-based closed-loop feedback, resulting in high rates of missed and false alarms. To address these challenges, we propose an Internet of Things (IoT) empowered early warning method for transmission line galloping that integrates time series data from optical fiber sensing and weather forecasts. Initially, the method applies a primary adaptive weighted fusion to the IoT-empowered optical fiber real-time sensing data and weather forecast data, followed by a secondary fusion based on a Back Propagation (BP) neural network, and uses the K-medoids algorithm for clustering the fused data. Furthermore, an adaptive irregular time series perception adjustment module is introduced into the traditional Gated Recurrent Unit (GRU) network, and closed-loop feedback based on an attention mechanism is employed to update network parameters through gradient feedback of the loss function, enabling closed-loop training and time series data prediction of the GRU network model. Subsequently, considering various types of prediction data and the duration of icing, an iced transmission line galloping risk coefficient is established, and warnings are categorized based on this coefficient. Finally, using an IoT-driven realistic dataset of iced transmission line galloping, the effectiveness of the proposed method is validated through multi-dimensional simulation scenarios.
Keywords: Optical fiber sensing; multi-source data fusion; early warning of galloping; time series data; IoT; adaptive weighted learning; irregular time series perception; closed-loop attention mechanism
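The abstract does not specify the weighting rule used in the "primary adaptive weighted fusion"; the sketch below shows one common adaptive choice (inverse-variance weighting) for fusing aligned fiber-sensing and forecast-derived series, with the BP-network secondary fusion, K-medoids clustering, and GRU prediction omitted. All values are illustrative.

```python
# Sketch of a primary adaptive weighted fusion step for two aligned time series.
# Inverse-variance weighting is assumed here as one plausible "adaptive" rule;
# it is not taken from the paper.
import numpy as np

fiber = np.array([4.1, 4.3, 4.0, 4.6, 4.2])      # hypothetical fiber-sensing signal
forecast = np.array([3.8, 4.5, 4.1, 4.4, 4.0])   # hypothetical forecast-derived estimate

def adaptive_weighted_fusion(a, b):
    # Weight each source by the inverse of its variance, so the steadier source dominates.
    wa, wb = 1.0 / np.var(a), 1.0 / np.var(b)
    return (wa * a + wb * b) / (wa + wb)

print(adaptive_weighted_fusion(fiber, forecast))
```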
A New Encryption Mechanism Supporting the Update of Encrypted Data for Secure and Efficient Collaboration in the Cloud Environment
5
Authors: Chanhyeong Cho, Byeori Kim, Haehyun Cho, Taek-Young Youn. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, Issue 1, pp. 813-834 (22 pages)
With the rise of remote collaboration, the demand for advanced storage and collaboration tools has rapidly increased. However, traditional collaboration tools primarily rely on access control, leaving data stored on cloud servers vulnerable due to insufficient encryption. This paper introduces a novel mechanism that encrypts data in 'bundle' units, designed to meet the dual requirements of efficiency and security for frequently updated collaborative data. Each bundle includes updated information, allowing only the updated portions to be re-encrypted when changes occur. The encryption method proposed in this paper addresses the inefficiencies of traditional encryption modes, such as Cipher Block Chaining (CBC) and Counter (CTR), which require decrypting and re-encrypting the entire dataset whenever updates occur. The proposed method leverages update-specific information embedded within data bundles and metadata that maps the relationship between these bundles and the plaintext data. By utilizing this information, the method accurately identifies the modified portions and applies algorithms to selectively re-encrypt only those sections. This approach significantly enhances the efficiency of data updates while maintaining high performance, particularly in large-scale data environments. To validate this approach, we conducted experiments measuring execution time as both the size of the modified data and the total dataset size varied. Results show that the proposed method significantly outperforms CBC and CTR modes in execution speed, with greater performance gains as data size increases. Additionally, our security evaluation confirms that this method provides robust protection against both passive and active attacks.
Keywords: Cloud collaboration; mode of operation; data update; efficiency
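A minimal sketch of the bundle idea under assumed parameters: the plaintext is split into fixed-size bundles, each encrypted independently, so an update re-encrypts only the touched bundle rather than the whole file. AES-GCM from the `cryptography` package stands in for the paper's scheme; the bundle size and metadata layout are assumptions.

```python
# Illustrative bundle-level re-encryption, not the paper's construction.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

BUNDLE = 1024                                  # bytes per bundle (assumed)
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

def encrypt_bundles(data: bytes):
    """Encrypt each fixed-size bundle separately as (nonce, ciphertext)."""
    out = []
    for i in range(0, len(data), BUNDLE):
        nonce = os.urandom(12)
        out.append((nonce, aead.encrypt(nonce, data[i:i + BUNDLE], None)))
    return out

def update_bundle(bundles, index, new_plaintext: bytes):
    """Re-encrypt only the modified bundle; all others stay untouched."""
    nonce = os.urandom(12)
    bundles[index] = (nonce, aead.encrypt(nonce, new_plaintext, None))

doc = encrypt_bundles(b"A" * 5000)             # 5 bundles
update_bundle(doc, 2, b"B" * 1024)             # an edit re-encrypts bundle 2 only
print(len(doc), aead.decrypt(*doc[2], None)[:4])
```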
A Generative Model-Based Network Framework for Ecological Data Reconstruction
6
Authors: Shuqiao Liu, Zhao Zhang, Hongyan Zhou, Xuebo Chen. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 929-948 (20 pages)
This study examines the effectiveness of artificial intelligence techniques in generating high-quality environmental data for species introduction site selection systems. Combining Strengths, Weaknesses, Opportunities, Threats (SWOT) analysis data with a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN), the network framework model (SAE-GAN) is proposed for environmental data reconstruction. The model combines two popular generative models, GAN and VAE, to generate features conditional on categorical data embedding after SWOT analysis. The model is capable of generating features that resemble real feature distributions and adding sample factors to more accurately track individual sample data. Reconstructed data is used to retain more semantic information to generate features. The model was applied to species in Southern California, USA, citing SWOT analysis data to train the model. Experiments show that the model is capable of integrating data from more comprehensive analyses than traditional methods and generating high-quality reconstructed data from them, effectively solving the problem of insufficient data collection in development environments. The model is further validated by the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) classification assessment commonly used in the environmental data domain. This study provides a reliable and rich source of training data for species introduction site selection systems and makes a significant contribution to ecological and sustainable development.
Keywords: Convolutional Neural Network (CNN); VAE; GAN; TOPSIS; data reconstruction
Optimization of an Artificial Intelligence Database and Camera Installation for Recognition of Risky Passenger Behavior in Railway Vehicles
7
Authors: Min-kyeong Kim, Yeong Geol Lee, Won-Hee Park, Su-hwan Yun, Tae-Soon Kwon, Duckhee Lee. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 1277-1293 (17 pages)
Urban railways are vital means of public transportation in Korea. More than 30% of metropolitan residents use the railways, and this proportion is expected to increase. To enhance safety, the government has mandated the installation of closed-circuit televisions in all carriages by 2024. However, the footage still has to be monitored by humans. To address this limitation, we developed a dataset of risk factors and a smart detection system that enables an immediate response to any abnormal behavior and intensive monitoring thereof. We created an innovative learning dataset that takes into account seven unique risk factors specific to Korean railway passengers. Detailed data collection was conducted across the Shinbundang Line of the Incheon Transportation Corporation and the Ui-Shinseol Line. We observed several behavioral characteristics and assigned unique annotations to them. We also considered carriage congestion. Recognition performance was evaluated by camera placement and number, and the camera installation plan was then optimized. The dataset will find immediate applications in domestic railway operations. The artificial intelligence algorithms will be verified shortly.
Keywords: AI; railway vehicle; risk factor; smart detection; AI training data
Impact of ocean data assimilation on the seasonal forecast of the 2014/15 marine heatwave in the Northeast Pacific Ocean
8
Authors: Tiantian Tang, Jiaying He, Huihang Sun, Jingjia Luo. Atmospheric and Oceanic Science Letters, 2025, Issue 1, pp. 24-31 (8 pages)
A remarkable marine heatwave, known as the "Blob", occurred in the Northeast Pacific Ocean from late 2013 to early 2016 and displayed strong warm anomalies extending from the surface to a depth of 300 m. This study employed two assimilation schemes based on the global Climate Forecast System of Nanjing University of Information Science and Technology (NUIST-CFS 1.0) to investigate the impact of ocean data assimilation on the seasonal prediction of this extreme marine heatwave. The sea surface temperature (SST) nudging scheme assimilates SST only, while the deterministic ensemble Kalman filter (EnKF) scheme assimilates observations from the surface to the deep ocean. The latter notably improves the forecasting skill for subsurface temperature anomalies, especially at the depth of 100-300 m (the lower layer), outperforming the SST nudging scheme. It excels in predicting both horizontal and vertical heat transport in the lower layer, contributing to improved forecasts of the lower-layer warming during the Blob. These improvements stem from the assimilation of subsurface observational data, which are important in predicting the upper-ocean conditions. The results suggest that assimilating ocean data with the EnKF scheme significantly enhances the accuracy in predicting subsurface temperature anomalies during the Blob and offers better understanding of its underlying mechanisms.
Keywords: Seasonal forecast; Ocean data assimilation; Marine heatwave; Subsurface temperature
A Latency-Aware and Fault-Tolerant Framework for Resource Scheduling and Data Management in Fog-Enabled Smart City Transportation Systems
9
Authors: Ibrar Afzal, Noor ul Amin, Zulfiqar Ahmad, Abdulmohsen Algarni. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 1377-1399 (23 pages)
The deployment of the Internet of Things (IoT) with smart sensors has facilitated the emergence of fog computing as an important technology for delivering services to smart environments such as campuses, smart cities, and smart transportation systems. Fog computing tackles a range of challenges, including processing, storage, bandwidth, latency, and reliability, by locally distributing secure information through end nodes. Consisting of endpoints, fog nodes, and back-end cloud infrastructure, it provides advanced capabilities beyond traditional cloud computing. In smart environments, particularly within smart city transportation systems, the abundance of devices and nodes poses significant challenges related to power consumption and system reliability. To address the challenges of latency, energy consumption, and fault tolerance in these environments, this paper proposes a latency-aware, fault-tolerant framework for resource scheduling and data management, referred to as the FORD framework, for smart cities in fog environments. This framework is designed to meet the demands of time-sensitive applications, such as those in smart transportation systems. The FORD framework incorporates latency-aware resource scheduling to optimize task execution in smart city environments, leveraging resources from both fog and cloud environments. Through simulation-based executions, tasks are allocated to the nearest available nodes with minimum latency. In the event of execution failure, a fault-tolerant mechanism is employed to ensure the successful completion of tasks. Upon successful execution, data is efficiently stored in the cloud data center, ensuring data integrity and reliability within the smart city ecosystem.
Keywords: Fog computing; smart cities; smart transportation; data management; fault tolerance; resource scheduling
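A toy sketch of the scheduling behavior described above: each task is sent to the available node with the lowest latency, and a failed execution is retried on the next-best node. Node names, latencies, and the retry policy are illustrative and are not part of the FORD framework itself.

```python
# Simplified latency-aware, fault-tolerant task placement (illustrative only).
import random

nodes = {"fog-1": 5, "fog-2": 8, "cloud-1": 40}    # node -> latency in ms (assumed)

def run_on(node: str) -> bool:
    """Stand-in for task execution; fails randomly to exercise the retry path."""
    return random.random() > 0.2

def schedule(task: str) -> str:
    for node in sorted(nodes, key=nodes.get):      # nearest (lowest-latency) node first
        if run_on(node):
            return f"{task} completed on {node} ({nodes[node]} ms)"
    raise RuntimeError(f"{task} failed on all nodes")

print(schedule("traffic-signal-update"))
```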
Tailored Partitioning for Healthcare Big Data: A Novel Technique for Efficient Data Management and Hash Retrieval in RDBMS Relational Architectures
10
Authors: Ehsan Soltanmohammadi, Neset Hikmet, Dilek Akgun. Journal of Data Analysis and Information Processing, 2025, Issue 1, pp. 46-65 (20 pages)
Efficient data management in healthcare is essential for providing timely and accurate patient care, yet traditional partitioning methods in relational databases often struggle with the high volume, heterogeneity, and regulatory complexity of healthcare data. This research introduces a tailored partitioning strategy leveraging the MD5 hashing algorithm to enhance data insertion, query performance, and load balancing in healthcare systems. By applying a consistent hash function to patient IDs, our approach achieves uniform distribution of records across partitions, optimizing retrieval paths and reducing access latency while ensuring data integrity and compliance. We evaluated the method through experiments focusing on partitioning efficiency, scalability, and fault tolerance. The partitioning efficiency analysis compared our MD5-based approach with standard round-robin methods, measuring insertion times, query latency, and data distribution balance. Scalability tests assessed system performance across increasing dataset sizes and varying partition counts, while fault tolerance experiments examined data integrity and retrieval performance under simulated partition failures. The experimental results demonstrate that the MD5-based partitioning strategy significantly reduces query retrieval times by optimizing data access patterns, achieving up to X% better performance compared to round-robin methods. It also scales effectively with larger datasets, maintaining low latency and ensuring robust resilience under failure scenarios. This novel approach offers a scalable, efficient, and fault-tolerant solution for healthcare systems, facilitating faster clinical decision-making and improved patient care in complex data environments.
Keywords: Healthcare data partitioning; Relational Database Management Systems (RDBMS); Big data management; Load balance; Query performance improvement; Data integrity and fault tolerance; Efficient big data in healthcare; Dynamic data distribution; Healthcare information systems; Partitioning algorithms; Performance evaluation in databases
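A minimal sketch of the hash-based placement step: a patient ID is hashed with MD5 and the digest is reduced modulo the partition count, giving a deterministic and roughly uniform assignment. The partition count and ID format below are assumptions for illustration.

```python
# Deterministic MD5-based partition assignment (illustrative parameters).
import hashlib

NUM_PARTITIONS = 8  # assumed partition count

def partition_for(patient_id: str) -> int:
    """Map a patient ID to a partition via the MD5 digest modulo the partition count."""
    digest = hashlib.md5(patient_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

for pid in ["P-000123", "P-000124", "P-000125"]:   # hypothetical IDs
    print(pid, "->", partition_for(pid))
```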
Designing a Comprehensive Data Governance Maturity Model for Kenya Ministry of Defence
11
Authors: Gilly Gitahi Gathogo, Simon Maina Karume, Josphat Karani. Journal of Information Security, 2025, Issue 1, pp. 44-69 (26 pages)
The study aimed to develop a customized Data Governance Maturity Model (DGMM) for the Ministry of Defence (MoD) in Kenya to address data governance challenges in military settings. Current frameworks lack specific requirements for the defence industry. The model uses Key Performance Indicators (KPIs) to enhance data governance procedures. Design Science Research guided the study, using qualitative and quantitative methods to gather data from MoD personnel. Major deficiencies were found in data integration, quality control, and adherence to data security regulations. The DGMM helps the MoD improve personnel, procedures, technology, and organizational elements related to data management. The model was tested against ISO/IEC 38500 and recommended for use in other government sectors with similar data governance issues. The DGMM has the potential to enhance data management efficiency, security, and compliance in the MoD and guide further research in military data governance.
Keywords: Data Governance Maturity Model; Maturity Index; Kenya Ministry of Defence; Key Performance Indicators; Data Security Regulations
Synthetic data as an investigative tool in hypertension and renal diseases research
12
Authors: Aleena Jamal, Som Singh, Fawad Qureshi. World Journal of Methodology, 2025, Issue 1, pp. 9-13 (5 pages)
There is a growing body of clinical research on the utility of synthetic data derivatives, an emerging research tool in medicine. In nephrology, clinicians can use machine learning and artificial intelligence as powerful aids in their clinical decision-making while also preserving patient privacy. This is especially important given the epidemiology of chronic kidney disease, renal oncology, and hypertension worldwide. However, there remains a need to create a framework for guidance regarding how to better utilize synthetic data as a practical application in this research.
Keywords: Synthetic data; Artificial intelligence; Nephrology; Blood pressure; Research; Editorial
An Immediate Mortality Prediction Score That is Robust to Missing Data
13
Authors: Tara M. Westover, Marta B. Fernandes, M. Brandon Westover, Sahar F. Zafar. Open Journal of Statistics, 2025, Issue 1, pp. 73-80 (8 pages)
Objective: To develop an illness severity score that predicts short-term mortality, based on a small number of readily available measurements, and overcomes limitations of the SOFA score, for use in research involving large-scale electronic health records. Design: Retrospective analysis of electronic records for 37,739 adult inpatients. Setting: A single tertiary care hospital system from 2016-2022. Patients: 37,739 adult ICU patients. Interventions: The Immediate Mortality Prediction Score (IMPS) was developed using logistic regression with the 6 SOFA components, age, sex, and missingness indicators as predictors, and 10-day mortality as the outcome. This was compared with SOFA with median imputation. Measurements and Main Results: Discrimination was evaluated by AUROC, and calibration by comparing predicted and observed mortality. IMPS showed excellent discrimination (AUROC 0.80) and calibration. It outperformed SOFA alone (AUROC 0.70) and SOFA with age/sex (0.74). Conclusions: By retaining continuous data, adding age, allowing for missingness, and optimizing weights based on empirical mortality association, IMPS achieved substantially better mortality prediction than the original SOFA.
Keywords: Critical care; Missing data; Electronic health records; Illness severity; Mortality
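A rough sketch of the modeling recipe (logistic regression over SOFA components, age, sex, and per-feature missingness indicators, with missing values filled by a constant so the indicator carries the information), run on synthetic data. The features, coefficients, and outputs here are not the published IMPS.

```python
# Illustrative IMPS-style model on fabricated data; not the published score.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "sofa_resp": rng.integers(0, 5, n).astype(float),
    "sofa_renal": rng.integers(0, 5, n).astype(float),
    "age": rng.normal(65, 15, n),
    "sex": rng.integers(0, 2, n),
})
df.loc[rng.random(n) < 0.3, "sofa_renal"] = np.nan               # simulate missing labs
y = (rng.random(n) < 0.1 + 0.02 * df["sofa_resp"]).astype(int)   # toy 10-day mortality

X = df.copy()
for col in ["sofa_resp", "sofa_renal"]:
    X[col + "_missing"] = X[col].isna().astype(int)              # missingness indicator
    X[col] = X[col].fillna(0.0)                                  # constant fill

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict_proba(X.iloc[:3])[:, 1])                     # predicted mortality risk
```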
Audiovisual Art Event Classification and Outreach Based on Web Extracted Data
14
Authors: Andreas Giannakoulopoulos, Minas Pergantis, Aristeidis Lamprogeorgos, Stella Lampoura. Journal of Software Engineering and Applications, 2025, Issue 1, pp. 24-43 (20 pages)
The World Wide Web provides a wealth of information about everything, including contemporary audio and visual art events, which are discussed on media outlets, blogs, and specialized websites alike. This information may become a robust source of real-world data, which may form the basis of an objective data-driven analysis. In this study, a methodology for collecting information about audio and visual art events in an automated manner from a large array of websites is presented in detail. This process uses cutting-edge Semantic Web, Web Search, and Generative AI technologies to convert website documents into a collection of structured data. The value of the methodology is demonstrated by creating a large dataset concerning audiovisual events in Greece. The collected information includes event characteristics, estimated metrics based on their text descriptions, outreach metrics based on the media that reported them, and a multi-layered classification of these events based on their type, subjects, and methods used. This dataset is openly provided to the general and academic public through a Web application. Moreover, each event's outreach is evaluated using these quantitative metrics, the results are analyzed with an emphasis on classification popularity, and useful conclusions are drawn concerning the importance of artistic subjects, methods, and media.
Keywords: Web data extraction; Art events; Classification; Artistic outreach; Online media
Intelligent ETL for Enterprise Software Applications Using Unstructured Data
15
Authors: Manthan Joshi, Vijay K. Madisetti. Journal of Software Engineering and Applications, 2025, Issue 1, pp. 44-65 (22 pages)
Enterprise applications utilize relational databases and structured business processes, requiring slow and expensive conversion of inputs and outputs, from business documents such as invoices, purchase orders, and receipts, into known templates and schemas before processing. We propose a new LLM agent-based intelligent data extraction, transformation, and load (IntelligentETL) pipeline that not only ingests PDFs and detects inputs within them but also addresses the extraction of structured and unstructured data by developing tools that most efficiently and securely deal with the respective data types. We study the efficiency of our proposed pipeline and compare it with enterprise solutions that also utilize LLMs. We establish the superiority of our approach in timely and accurate data extraction and transformation for analyzing data from varied sources based on nested and/or interlinked input constraints.
Keywords: Structured data; Relational model; LLM-powered agents; Field-level extraction; Knowledge graph
Gene Expression Data Analysis Based on Mixed Effects Model
16
Author: Yuanbo Dai. Journal of Computer and Communications, 2025, Issue 2, pp. 223-235 (13 pages)
DNA microarray technology is an extremely effective technique for studying gene expression patterns in cells, and the main challenge currently faced by this technology is how to analyze the large amount of gene expression data generated. To address this, this paper employs a mixed-effects model to analyze gene expression data. In terms of data selection, 1176 genes from the white mouse gene expression dataset under two experimental conditions were chosen, setting up two conditions: pneumococcal infection and no infection, and constructing a mixed-effects model. After preprocessing the gene chip information, the data were imported into the model, preliminary results were calculated, and permutation tests were performed to biologically validate the preliminary results using GSEA. The final dataset consists of 20 groups of gene expression data from pneumococcal infection, which categorizes functionally related genes based on the similarity of their expression profiles, facilitating the study of genes with unknown functions.
Keywords: Mixed effects model; Gene expression data analysis; Gene analysis; Gene chip
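One way to express the described analysis in code is a mixed-effects model with a fixed effect for infection condition and a random intercept per gene, sketched below with statsmodels on toy data. Column names, replicate counts, and the simulated effect size are assumptions; this is not the paper's 1176-gene dataset.

```python
# Minimal mixed-effects sketch: fixed effect for condition, random intercept per gene.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
rows = []
for g in (f"g{i}" for i in range(50)):
    base = rng.normal(5, 1)                       # gene-specific baseline expression
    for cond in ("infected", "control"):
        for _ in range(3):                        # 3 replicate chips per condition (assumed)
            shift = 0.8 if cond == "infected" else 0.0
            rows.append({"gene": g, "condition": cond,
                         "expression": base + shift + rng.normal(0, 0.3)})
data = pd.DataFrame(rows)

model = smf.mixedlm("expression ~ condition", data, groups=data["gene"]).fit()
print(model.summary())                            # fixed-effect estimate for infection
```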
Analysis of the Impact of Legal Digital Currencies on Bank Big Data Practices
17
Author: Zhengkun Xiu. Journal of Electronic Research and Application, 2025, Issue 1, pp. 23-27 (5 pages)
This paper analyzes the advantages of legal digital currencies and explores their impact on bank big data practices. By examining bank big data collection and processing, it shows that legal digital currencies can enhance the efficiency of bank data processing, enrich data types, and strengthen data analysis and application capabilities. To meet future development needs, banks should strengthen data collection management, enhance data processing capabilities, and innovate big data application models. The paper provides references for bank big data practices and supports the transformation and upgrading of the banking industry in the context of legal digital currencies.
Keywords: Legal digital currency; Bank big data; Data processing efficiency; Data analysis and application; Countermeasures and suggestions
Social Media Data Analysis: A Causal Inference Based Study of User Behavior Patterns
18
Author: Liangkeyi SUN. Computational Social Science (计算社会科学), 2025, Issue 1, pp. 37-53 (17 pages)
This study aims to conduct an in-depth analysis of social media data using causal inference methods to explore the underlying mechanisms driving user behavior patterns. By leveraging large-scale social media datasets, this research develops a systematic analytical framework that integrates techniques such as propensity score matching, regression analysis, and regression discontinuity design to identify the causal effects of content characteristics, user attributes, and social network structures on user interactions, including clicks, shares, comments, and likes. The empirical findings indicate that factors such as sentiment, topical relevance, and network centrality have significant causal impacts on user behavior, with notable differences observed among various user groups. This study not only enriches the theoretical understanding of social media data analysis but also provides data-driven decision support and practical guidance for fields such as digital marketing, public opinion management, and digital governance.
Keywords: Social media data; Causal inference; User behavior patterns; Propensity score matching; Regression discontinuity; Data preprocessing
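A compact sketch of the propensity-score-matching component only: estimate the probability of a "treatment" (for example, a positive-sentiment post) from covariates, match each treated observation to the nearest untreated one on that score, and compare engagement. The data, variable names, and effect size are hypothetical, not the study's dataset.

```python
# Propensity score matching on synthetic social-media-style data (illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
n = 500
X = rng.normal(size=(n, 3))                                  # covariates: followers, activity, centrality
treated = (X[:, 0] + rng.normal(size=n) > 0).astype(int)     # e.g. positive-sentiment post
clicks = 2.0 * treated + X[:, 1] + rng.normal(size=n)        # outcome with true effect ~2

# 1) Estimate propensity scores, 2) match treated to nearest control, 3) compare outcomes.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
_, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))
matched_controls = c_idx[match.ravel()]

att = clicks[t_idx].mean() - clicks[matched_controls].mean()
print(f"Estimated effect on clicks (ATT): {att:.2f}")
```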
Attributes of Family Spirituality and Influencing Factors of Its Decline: Data Triangulation of Literature and Family Interviews
19
Authors: Naohiro Hohashi, Mikio Watanabe. Open Journal of Nursing, 2025, Issue 2, pp. 93-110 (18 pages)
Background and Purpose: In recent years, individual spirituality has been attracting attention, but little research has been conducted on family spirituality, which applies this concept to the family and concerns the meaning of the family's existence as a whole. The purpose of this study was to clarify the attributes of family spirituality and the factors influencing its decline. Methods: Regarding family spirituality, 1) a literature search was conducted using PubMed, with reviews of 20 English-language articles; and 2) semi-structured interviews were conducted with 12 Japanese families having elderly members in the household. Data triangulation was performed for both, and a directed content analysis was conducted using Hohashi's Concentric Sphere Family Environment Theory as the framework. Results: Attributes of family spirituality included 21 categories, such as "I think that my family exists for my children and grandchildren." Factors influencing the decline in family spirituality included 20 categories in total: 6 categories of risk/causal/promoting factors such as "lack of caring for family members"; 11 categories of preventive/inhibitory/suppression factors such as "healthcare professionals not being close to the family"; and 3 categories of context-sensitive factors such as "death of a family member." Conclusions/Implications for Practice: Family intervention requires nurses to understand the attributes of family spirituality and to control the factors influencing a decline in family spirituality. Through such efforts, families will be able to discover the meaning of the family's existence and maintain and improve their well-being.
Keywords: Family spirituality; Influencing factor; Concentric Sphere Family Environment Theory; Literature review; Family interview; Data triangulation
The Role of Big Data Analysis in Digital Currency Systems
20
Author: Zhengkun Xiu. Proceedings of Business and Economic Studies, 2025, Issue 1, pp. 1-5 (5 pages)
In the contemporary era, characterized by the Internet and digitalization as fundamental features, the operation and application of digital currency have gradually developed into a comprehensive structural system. This system restores the essential characteristics of currency while providing auxiliary services related to the formation, circulation, storage, application, and promotion of digital currency. Compared to traditional currency management technologies, big data analysis technology, which is primarily embedded in digital currency systems, enables the rapid acquisition of information. This facilitates the identification of standard associations within currency data and provides technical support for the operational framework of digital currency.
Keywords: Big data; Digital currency; Computational methods; Transaction speed