Federated Graph Neural Networks (FedGNNs) have achieved significant success in representation learning for graph data, enabling collaborative training among multiple parties without sharing their raw graph data and solving the data isolation problem faced by centralized GNNs in data-sensitive scenarios. Despite the plethora of prior work on inference attacks against centralized GNNs, the vulnerability of FedGNNs to inference attacks has not yet been widely explored. It is still unclear whether the privacy leakage risks of centralized GNNs will also be introduced in FedGNNs. To bridge this gap, we present PIAFGNN, the first property inference attack (PIA) against FedGNNs. Compared with prior works on centralized GNNs, in PIAFGNN, the attacker can only obtain the global embedding gradient distributed by the central server. The attacker converts the task of stealing the target user's local embeddings into a regression problem, using a regression model to generate the target graph node embeddings. By training shadow models and property classifiers, the attacker can infer the basic property information within the target graph that is of interest. Experiments on three benchmark graph datasets demonstrate that PIAFGNN achieves attack accuracy of over 70% in most cases, even approaching the attack accuracy of inference attacks against centralized GNNs in some instances, which is much higher than the attack accuracy of the random guessing method. Furthermore, we observe that common defense mechanisms cannot mitigate our attack without also degrading the model's performance on its main classification tasks.
This study aims to conduct an in-depth analysis of social media data using causal inference methods to explore the underlying mechanisms driving user behavior patterns. By leveraging large-scale social media datasets, this research develops a systematic analytical framework that integrates techniques such as propensity score matching, regression analysis, and regression discontinuity design to identify the causal effects of content characteristics, user attributes, and social network structures on user interactions, including clicks, shares, comments, and likes. The empirical findings indicate that factors such as sentiment, topical relevance, and network centrality have significant causal impacts on user behavior, with notable differences observed among various user groups. This study not only enriches the theoretical understanding of social media data analysis but also provides data-driven decision support and practical guidance for fields such as digital marketing, public opinion management, and digital governance.
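The propensity score matching step mentioned above can be sketched minimally as greedy nearest-neighbour matching on a precomputed score, followed by an average treatment effect on the treated (ATT). The scores, outcomes, and `nearest_neighbour_match` helper below are hypothetical illustrations, not the study's pipeline; a real analysis would first fit a propensity model (e.g. logistic regression of treatment on covariates) and check balance.

```python
# Minimal propensity score matching (PSM) sketch: each treated unit is
# matched to the control unit with the closest propensity score, and the
# ATT is the mean outcome difference over matched pairs.

def nearest_neighbour_match(treated, control):
    """treated/control: lists of (propensity_score, observed_outcome) pairs.
    Returns the average treatment effect on the treated (ATT)."""
    effects = []
    for score_t, outcome_t in treated:
        # greedy 1-nearest-neighbour match on the propensity score
        score_c, outcome_c = min(control, key=lambda c: abs(c[0] - score_t))
        effects.append(outcome_t - outcome_c)
    return sum(effects) / len(effects)

# toy numbers: e.g. outcome = engagement count of a post
treated = [(0.8, 120), (0.6, 90), (0.7, 100)]
control = [(0.79, 100), (0.61, 80), (0.72, 85), (0.3, 40)]

att = nearest_neighbour_match(treated, control)  # (20 + 10 + 15) / 3 = 15.0
```

Greedy matching with replacement like this is the simplest variant; caliper constraints or optimal matching are common refinements.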
Artificial Intelligence (AI) has revolutionized education by enabling personalized learning experiences through adaptive platforms. However, traditional AI-driven systems primarily rely on correlation-based analytics, limiting their ability to uncover the causal mechanisms behind learning outcomes. This study explores the integration of Knowledge Graphs (KGs) and Causal Inference (CI) as a novel approach to enhance AI-driven educational systems. KGs provide a structured representation of educational knowledge, facilitating intelligent content recommendations and adaptive learning pathways, while CI enables AI systems to move beyond pattern recognition to identify cause-and-effect relationships in student learning. By combining these methods, this research aims to optimize personalized learning path recommendations, improve educational decision-making, and ensure AI-driven interventions are both data-informed and causally validated. Case studies from real-world applications, including intelligent tutoring systems and MOOC platforms, illustrate the practical impact of this approach. The findings contribute to advancing AI-driven education by fostering a balance between knowledge modeling, adaptability, and empirical rigor.
Robustness against measurement uncertainties is crucial for gas turbine engine diagnosis. While current research focuses mainly on measurement noise, measurement bias remains challenging. This study proposes a novel performance-based fault detection and identification (FDI) strategy for twin-shaft turbofan gas turbine engines and addresses these uncertainties through a first-order Takagi-Sugeno-Kang (TSK) fuzzy inference system. To handle ambient condition changes, we use parameter correction to preprocess the raw measurement data, which reduces the FDI system's complexity. Additionally, the power-level angle is set as a scheduling parameter to reduce the number of rules in the TSK-based FDI system. The data for designing, training, and testing the proposed FDI strategy are generated using a component-level turbofan engine model. The antecedent and consequent parameters of the TSK-based FDI system are optimized using the particle swarm optimization algorithm and ridge regression. A robust structure combining a specialized fuzzy inference system with the TSK-based FDI system is proposed to handle measurement biases. The performance of the first-order TSK-based FDI system and the robust FDI structure is evaluated through comprehensive simulation studies. Comparative studies confirm the superior accuracy of the first-order TSK-based FDI system in fault detection, isolation, and identification. The robust structure demonstrates a 2%-8% improvement in the success rate index under relatively large measurement bias conditions, indicating excellent robustness. Accuracy against significant bias values and computation time are also evaluated, suggesting that the proposed robust structure has desirable online performance. In summary, this study proposes a novel FDI strategy that effectively addresses measurement uncertainties.
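As a rough illustration of how a first-order TSK system produces an output (the paper's actual rule base, membership functions, and scheduling are far more elaborate), each rule fires with a membership weight and contributes a linear consequent; the crisp output is the weight-normalized sum. The membership functions and coefficients below are invented for demonstration.

```python
def tsk_infer(x, rules):
    """First-order TSK inference for a scalar input x.
    rules: list of (membership_fn, (a, b)) pairs, where each rule's
    consequent is the linear function y = a*x + b."""
    weights = [mf(x) for mf, _ in rules]
    outputs = [a * x + b for _, (a, b) in rules]
    # weighted average of linear consequents (defuzzification)
    return sum(w * y for w, y in zip(weights, outputs)) / sum(weights)

# two toy rules with triangular-style memberships on [0, 1]
rules = [
    (lambda x: max(0.0, 1.0 - x), (1.0, 0.0)),  # "low" region:  y = x
    (lambda x: max(0.0, x),       (2.0, 1.0)),  # "high" region: y = 2x + 1
]
y = tsk_infer(0.25, rules)  # 0.75*0.25 + 0.25*1.5 = 0.5625
```

In the paper's setting, the antecedent (membership) and consequent (a, b) parameters are what the particle swarm optimizer and ridge regression would tune.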
A distributed bearing-only target tracking algorithm based on variational Bayesian inference (VBI) under random measurement anomalies is proposed to address the adverse effect of random measurement anomalies on the state-estimation accuracy of moving targets in bearing-only tracking scenarios. First, the measurement information of each sensor is complemented using triangulation under the distributed framework. Second, the Student-t distribution is selected to model the measurement likelihood probability density function, and the joint posterior probability density function of the estimated variables is approximately decoupled by VBI. Finally, the estimation results of each local filter are sent to the fusion center and fed back to each local filter. Simulation results show that, in the presence of abnormal measurement noise, the proposed algorithm comprehensively accounts for system nonlinearity and random measurement anomalies, achieving higher estimation accuracy and robustness than existing algorithms in such scenarios.
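The choice of a Student-t measurement likelihood matters because its heavy tails assign far more probability to outliers than a Gaussian does, so anomalous bearings are downweighted rather than dominating the state update. A quick comparison of the two log-densities (standard textbook formulas, not the paper's filter equations) illustrates this:

```python
import math

def gaussian_logpdf(x, mu, sigma):
    """Log-density of a Gaussian with mean mu and std sigma."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def student_t_logpdf(x, mu, sigma, nu):
    """Log-density of a location-scale Student-t with nu degrees of freedom."""
    c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
         - 0.5 * math.log(nu * math.pi * sigma**2))
    return c - (nu + 1) / 2 * math.log(1 + (x - mu)**2 / (nu * sigma**2))

# a 5-sigma "anomalous" measurement is far less surprising under Student-t,
# so it exerts less pull on the posterior
g = gaussian_logpdf(5.0, 0.0, 1.0)        # about -13.4
t = student_t_logpdf(5.0, 0.0, 1.0, 3.0)  # about -5.5
```

Near the mode the two likelihoods behave similarly; only in the tails does the robustness advantage appear.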
Traditional global sensitivity analysis (GSA) neglects the epistemic uncertainties associated with the probabilistic characteristics (i.e., distribution type and its parameters) of input rock properties, which arise from the small size of datasets, while mapping the relative importance of properties to the model response. This paper proposes an augmented Bayesian multi-model inference (BMMI) coupled with GSA methodology (BMMI-GSA) to address this issue by estimating the imprecision in the moment-independent sensitivity indices of rock structures arising from the small size of input data. The methodology employs BMMI to quantify the epistemic uncertainties associated with the model type and parameters of input properties. The estimated uncertainties are propagated into the moment-independent Borgonovo's indices by employing a reweighting approach on candidate probabilistic models. The proposed methodology is showcased for a rock slope prone to stress-controlled failure in the Himalayan region of India. It was superior to conventional GSA (which neglects all epistemic uncertainties) and Bayesian coupled GSA (B-GSA) (which neglects model uncertainty) due to its capability to incorporate the uncertainties in both the model type and the parameters of properties. Imprecise Borgonovo's indices estimated via the proposed methodology provide confidence intervals of the sensitivity indices instead of fixed-point estimates, which makes the user better informed in data collection efforts. Analyses performed with varying sample sizes suggested that the uncertainties in sensitivity indices reduce significantly as sample size increases, and accurate importance ranking of properties was only possible with large samples. Further, the impact of prior knowledge in terms of prior ranges and distributions was significant; hence, any related assumption should be made carefully.
Modern industrial processes are typically characterized by large scale and intricate internal relationships. Therefore, distributed modeling process monitoring methods are effective. A novel distributed monitoring scheme utilizing the Kantorovich distance-multiblock variational autoencoder (KD-MBVAE) is introduced. Firstly, given the high consistency of relevant variables within each sub-block during the change process, the variables exhibiting analogous statistical features are grouped into identical segments according to the optimal quality transfer theory. Subsequently, a variational autoencoder (VAE) model is established separately for each block, and the corresponding T^2 statistics are calculated. To further improve fault sensitivity, a novel statistic derived from the Kantorovich distance is introduced by analyzing model residuals from the perspective of probability distribution. The thresholds of both statistics are determined by kernel density estimation. Finally, the monitoring results for both types of statistics within all blocks are amalgamated using Bayesian inference. Additionally, a novel approach for fault diagnosis is introduced. The feasibility and efficiency of the introduced scheme are verified through two case studies.
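Threshold setting via kernel density estimation, as used for both statistics above, can be sketched with a plain Gaussian KDE: estimate the density of the monitoring statistic under normal operation, then take the control limit as the point below which a chosen fraction (e.g. 99%) of the estimated mass lies. The sample values and bandwidth below are illustrative only, not process data.

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a function estimating the pdf of `samples` with a Gaussian kernel."""
    n = len(samples)
    norm = n * bandwidth * math.sqrt(2 * math.pi)
    return lambda x: sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                         for s in samples) / norm

def kde_threshold(samples, bandwidth, alpha=0.01, grid=2000):
    """Control limit: smallest grid point with >= (1 - alpha) estimated mass below."""
    pdf = gaussian_kde(samples, bandwidth)
    lo = min(samples) - 5 * bandwidth
    hi = max(samples) + 5 * bandwidth
    step = (hi - lo) / grid
    mass, x = 0.0, lo
    while x < hi:
        mass += pdf(x) * step  # rectangle-rule CDF accumulation
        if mass >= 1 - alpha:
            return x
        x += step
    return hi

# normal-operation values of a monitoring statistic (toy numbers)
stats = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05, 1.2, 0.85]
limit = kde_threshold(stats, bandwidth=0.1)  # alarm when statistic > limit
```

Because KDE is nonparametric, this avoids assuming a chi-squared form for the T^2 statistic, which is why it is favored in VAE-based monitoring.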
The development of the Internet of Things (IoT) has brought great convenience to people. However, information security problems such as privacy leakage arise from communicating with risky users. It is a challenge to choose reliable users with which to interact in the IoT. Trust therefore plays a crucial role in the IoT, because trust can help avoid such risks. Agents usually choose reliable users with high trust to maximize their own interests based on reinforcement learning. However, trust propagation is time-consuming, and trust changes with the interaction process in social networks. To track the dynamic changes in trust values, a dynamic trust inference algorithm named Dynamic Double DQN Trust (Dy-DDQNTrust) is proposed to predict the indirect trust values of two users without direct contact with each other. The proposed algorithm simulates the interactions among users by double DQN. Firstly, CurrentNet and TargetNet networks are used to select users for interaction; users with high trust are chosen to interact in future iterations. Secondly, the trust value is updated dynamically until a reliable trust path is found according to the result of the interaction. Finally, the trust value between indirect users is inferred by aggregating the opinions from multiple users through a Modified Collaborative Filtering Average-based Similarity (SMCFAvg) aggregation strategy. Experiments are carried out on the FilmTrust and Epinions datasets. Compared with TidalTrust, MoleTrust, DDQNTrust, DyTrust, and the Dynamic Weighted Heuristic trust path Search algorithm (DWHS), our dynamic trust inference algorithm has higher prediction accuracy and better scalability.
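The final aggregation step — combining opinions from several intermediate users into one indirect trust value — can be illustrated as a similarity-weighted average. This is a generic sketch, not the paper's exact SMCFAvg formula; the opinion values and weights are invented.

```python
def aggregate_trust(opinions):
    """opinions: (trust_value, similarity_weight) pairs collected along trust paths.
    Returns the similarity-weighted average trust value."""
    total_weight = sum(w for _, w in opinions)
    if total_weight == 0:
        raise ValueError("no usable opinions")
    return sum(t * w for t, w in opinions) / total_weight

# three intermediate users report trust in the target, weighted by their
# similarity to the requesting user (hypothetical numbers)
indirect = aggregate_trust([(0.9, 0.5), (0.5, 0.25), (0.7, 0.25)])  # 0.75
```

Weighting by similarity means an opinion from a user who rates items much like the requester counts for more than one from a dissimilar user, which is the collaborative-filtering intuition the strategy's name points to.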
An accurate plasma current profile has irreplaceable value for the steady-state operation of the plasma. In this study, plasma current tomography based on Bayesian inference is applied to an HL-2A device and used to reconstruct the plasma current profile. Two different Bayesian probability priors are tried, namely the Conditional Auto Regressive (CAR) prior and the Advanced Squared Exponential (ASE) kernel prior. Compared to the CAR prior, the ASE kernel prior adopts nonstationary hyperparameters and introduces the current profile of the reference discharge into the hyperparameters, which can make the shape of the current profile more flexible in space. The results indicate that the ASE prior couples more information, reduces the probability of unreasonable solutions, and achieves higher reconstruction accuracy.
Media convergence works by processing information from different modalities and applying it to different domains. It is difficult for a conventional knowledge graph to utilise multi-media features because the introduction of a large amount of information from other modalities reduces the effectiveness of representation learning and makes knowledge graph inference less effective. To address this issue, an inference method based on the Media Convergence and Rule-guided Joint Inference model (MCRJI) has been proposed. The authors not only converge multi-media features of entities but also introduce logic rules to improve the accuracy and interpretability of link prediction. First, a multi-headed self-attention approach is used to obtain the attention of different media features of entities during semantic synthesis. Second, logic rules of different lengths are mined from the knowledge graph to learn new entity representations. Finally, knowledge graph inference is performed based on entity representations that converge multi-media features. Numerous experimental results show that MCRJI outperforms other advanced baselines in using multi-media features and knowledge graph inference, demonstrating that MCRJI provides an excellent approach for knowledge graph inference with converged multi-media features.
Recently, deep learning-based semantic communication has garnered widespread attention, with numerous systems designed for transmitting diverse data sources, including text, image, and speech. While efforts have been directed toward improving system performance, many studies have concentrated on enhancing the structure of the encoder and decoder. However, this often overlooks the resulting increase in model complexity, imposing additional storage and computational burdens on smart devices. Furthermore, existing work tends to prioritize explicit semantics, neglecting the potential of implicit semantics. This paper aims to easily and effectively enhance the receiver's decoding capability without modifying the encoder and decoder structures. We propose a novel semantic communication system with variational neural inference for text transmission. Specifically, we introduce a simple but effective variational neural inferer at the receiver to infer the latent semantic information within the received text. This information is then utilized to assist in the decoding process. Simulation results show a significant enhancement in system performance and improved robustness.
The Stokes production coefficient (E_6) constitutes a critical parameter within Mellor-Yamada type (MY-type) Langmuir turbulence (LT) parameterization schemes, significantly affecting the simulation of turbulent kinetic energy, turbulent length scale, and the vertical diffusivity coefficient for turbulent kinetic energy in the upper ocean. However, the accurate determination of its value remains a pressing scientific challenge. This study adopted an innovative approach by leveraging deep learning technology to address the challenge of inferring E_6. Through the integration of information from the turbulent length scale equation into a physics-informed neural network (PINN), we achieved an accurate and physically meaningful inference of E_6. Multiple cases were examined to assess the feasibility of PINN for this task, revealing that under optimal settings, the average mean squared error of the E_6 inference was only 0.01, attesting to the effectiveness of PINN. The optimal hyperparameter combination was identified using the Tanh activation function, along with a spatiotemporal sampling interval of 1 s and 0.1 m. This resulted in a substantial reduction in the average bias of the E_6 inference, by a factor of O(10^1) to O(10^2) compared with other combinations. This study underscores the potential application of PINN in intricate marine environments, offering a novel and efficient method for optimizing MY-type LT parameterization schemes.
Recently, weak supervision has received growing attention in the field of salient object detection due to the convenience of labelling. However, there is a large performance gap between weakly supervised and fully supervised salient object detectors because scribble annotations can only provide very limited foreground/background information. Therefore, an intuitive idea is to infer annotations that cover more complete object and background regions for training. To this end, a label inference strategy is proposed based on the assumption that pixels with similar colours and close positions should have consistent labels. Specifically, the k-means clustering algorithm is first performed on both the colours and coordinates of the original annotations, and the same labels are then assigned to points that have colours similar to the colour cluster centres and lie near the coordinate cluster centres. Next, the same annotations are further set for pixels with similar colours within each kernel neighbourhood. Extensive experiments on six benchmarks demonstrate that our method can significantly improve performance and achieve state-of-the-art results.
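A minimal version of the colour-and-position consistency assumption can be written as nearest-centroid label propagation in a joint (colour, coordinate) feature space. The feature scaling, distance threshold, and toy pixels below are assumptions for illustration; the paper additionally clusters with k-means and refines labels within kernel neighbourhoods.

```python
from collections import defaultdict

def infer_labels(annotated, unlabelled, max_dist):
    """annotated: {(r, g, b, x, y): label}. Propagate each label to the
    unlabelled pixels whose feature vector lies within max_dist of that
    label's centroid in joint colour-coordinate space."""
    groups = defaultdict(list)
    for feat, lab in annotated.items():
        groups[lab].append(feat)
    # per-label centroid across all five feature dimensions
    centroids = {lab: tuple(sum(dim) / len(dim) for dim in zip(*feats))
                 for lab, feats in groups.items()}
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    inferred = {}
    for feat in unlabelled:
        lab, d = min(((l, dist(feat, c)) for l, c in centroids.items()),
                     key=lambda t: t[1])
        if d <= max_dist:  # leave ambiguous pixels unlabelled
            inferred[feat] = lab
    return inferred

# toy scribble annotations: one red foreground pixel, one blue background pixel
annotated = {(255, 0, 0, 10, 10): 'fg', (0, 0, 255, 50, 50): 'bg'}
unlabelled = [(250, 5, 0, 12, 11), (3, 0, 250, 48, 52), (120, 120, 120, 30, 30)]
inferred = infer_labels(annotated, unlabelled, max_dist=20)
```

The grey mid-image pixel stays unlabelled because it is far from both centroids, mirroring how the strategy only expands annotations where the evidence is strong.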
Various intelligent applications based on non-chain DNN models are widely used in Internet of Things (IoT) scenarios. However, resource-constrained IoT devices usually cannot afford the heavy computation burden and cannot guarantee the strict inference latency requirements of non-chain DNN models. Multi-device collaboration has become a promising paradigm for achieving inference acceleration. However, existing works neglect the possibility of inter-layer parallel execution, which fails to exploit the parallelism of collaborating devices and inevitably prolongs the overall completion latency. Thus, there is an urgent need to address non-chain DNN inference acceleration with multi-device collaboration based on inter-layer parallelism. Three major challenges in this problem are exponential computational complexity, complicated layer dependencies, and intractable execution location selection. To this end, we propose a Topological Sorting Based Bidirectional Search (TSBS) algorithm that can adaptively partition non-chain DNN models and select suitable execution locations at layer granularity. More specifically, the TSBS algorithm consists of a topological sorting subalgorithm to realize parallel execution with low computational complexity under complicated layer-parallel constraints, and a bidirectional search subalgorithm to quickly find suitable execution locations for non-parallel layers. Extensive experiments show that the TSBS algorithm significantly outperforms state-of-the-art methods in the completion latency of non-chain DNN inference, with a reduction of up to 22.69%.
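The topological-sorting idea behind TSBS — grouping layers with no unresolved dependencies so they can run concurrently on different devices — can be sketched with Kahn's algorithm over the layer dependency graph. This is a generic sketch of the sorting step only, not the full TSBS partitioning or bidirectional location search.

```python
def parallel_stages(deps):
    """deps: {layer: set of prerequisite layers}. Returns a list of stages;
    layers within one stage have no mutual dependency and may run in parallel."""
    indegree = {layer: len(parents) for layer, parents in deps.items()}
    children = {layer: [] for layer in deps}
    for layer, parents in deps.items():
        for p in parents:
            children[p].append(layer)
    ready = [layer for layer, d in indegree.items() if d == 0]
    stages = []
    while ready:
        stages.append(sorted(ready))  # sorted only for deterministic output
        nxt = []
        for layer in ready:
            for child in children[layer]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    nxt.append(child)
        ready = nxt
    return stages

# a non-chain model: two parallel branches between 'in' and 'merge'
stages = parallel_stages({'in': set(), 'b1': {'in'}, 'b2': {'in'},
                          'merge': {'b1', 'b2'}})
```

Here `b1` and `b2` land in the same stage, which is exactly the inter-layer parallelism a chain-only partitioner would miss.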
Hyperparameter tuning is a key step in developing high-performing machine learning models, but searching large hyperparameter spaces requires extensive computation using standard sequential methods. This work analyzes the performance gains from parallel versus sequential hyperparameter optimization. Using scikit-learn's RandomizedSearchCV, this project tuned a Random Forest classifier for fake news detection via randomized grid search. Setting n_jobs to -1 enabled full parallelization across CPU cores. Results show the parallel implementation achieved over 5× faster CPU times and 3× faster total run times compared to sequential tuning. However, test accuracy slightly dropped from 99.26% sequentially to 99.15% with parallelism, indicating a trade-off between evaluation efficiency and model performance. Still, the significant computational gains allow more extensive hyperparameter exploration within reasonable timeframes, outweighing the small accuracy decrease. Further analysis could better quantify this trade-off across different models, tuning techniques, tasks, and hardware.
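The parallel-versus-sequential structure can be reproduced outside scikit-learn by mapping a scoring function over sampled candidates with a pool (the analogue of `n_jobs=-1`; real CPU-bound model fits would use processes rather than threads). The toy objective and parameter grid below are stand-ins for actual cross-validated Random Forest scores, not the study's setup.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def score(params):
    """Stand-in for a cross-validated model score; deterministic toy objective
    peaking at n_estimators=200, max_depth=8."""
    n_estimators, max_depth = params
    return -abs(n_estimators - 200) - 5 * abs(max_depth - 8)

def random_search(candidates, parallel=True):
    """Evaluate all sampled candidates and return the best parameter pair."""
    if parallel:
        with ThreadPoolExecutor() as pool:
            scores = list(pool.map(score, candidates))
    else:
        scores = [score(p) for p in candidates]
    return max(zip(scores, candidates))[1]

random.seed(0)  # fixed seed: both modes see the same sampled candidates
candidates = [(random.choice([50, 100, 200, 400]), random.choice([4, 8, 16]))
              for _ in range(20)]
best = random_search(candidates)
```

Because every candidate is evaluated regardless of order, parallel and sequential search return the same best parameters; only wall-clock time differs, which matches the accuracy observations in the abstract being a matter of run-to-run variance rather than the parallelism itself.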
Image classification algorithms are commonly based on the Independent and Identically Distributed (i.i.d.) assumption, but in practice, the Out-Of-Distribution (OOD) problem widely exists: the contexts of images at prediction time are usually unseen during training. In this case, existing models trained under the i.i.d. assumption show limited generalisation. Causal inference is an important method for learning the causal associations that are invariant across different environments, thus improving the generalisation ability of a model. However, existing methods usually require partitioning the environment to learn invariant features, and these partitions mostly suffer from imbalance problems due to the lack of constraints. In this paper, we propose a balanced causal learning framework (BCL), addressing both how to divide the dataset in a balanced way and how to balance training after the division; it automatically generates fine-grained balanced data partitions in an unsupervised manner and balances the training difficulty of different classes, thereby enhancing the generalisation ability of models in different environments. Experiments on the OOD datasets NICO and NICO++ demonstrate that BCL achieves stable predictions on OOD data, and we also find that models using BCL focus more accurately on the foreground of images compared with the existing causal inference method, which effectively improves generalisation ability.
This study's main purpose is to use Bayesian structural time-series models to investigate the causal effect of an earthquake on the Borsa Istanbul Stock Index. The results reveal a significant negative impact on stock market value during the post-treatment period. The results indicate rapid divergence from counterfactual predictions: the actual stock index is lower than would have been expected in the absence of the earthquake. The curves of the actual stock value and the counterfactual prediction after the earthquake suggest a recovery pattern once the stock market resumes its activities. The cumulative impact shows a negative effect in relative terms, as evidenced by a decrease of roughly 30% in the BIST-100 index. These results have significant implications for investors and policymakers, emphasizing the need to prepare for natural disasters to minimize their adverse effects on stock market valuations.
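The post-period comparison underlying these results reduces to differencing the observed series against the model's counterfactual forecast: pointwise effects, their cumulative sum, and the relative effect. A sketch with made-up numbers (not BIST-100 data, and with the counterfactual assumed already fitted):

```python
def causal_impact(actual, counterfactual):
    """Pointwise, cumulative, and relative effects of an intervention, given
    the observed post-period series and its counterfactual forecast."""
    pointwise = [a - c for a, c in zip(actual, counterfactual)]
    cumulative = sum(pointwise)
    relative = cumulative / sum(counterfactual)  # e.g. -0.30 for a 30% drop
    return pointwise, cumulative, relative

# hypothetical post-earthquake index values vs. model forecast
actual = [80.0, 75.0, 70.0]
forecast = [100.0, 102.0, 104.0]
pointwise, cumulative, relative = causal_impact(actual, forecast)
```

In a full Bayesian structural time-series analysis the forecast comes with posterior uncertainty, so each of these effects would be reported as a credible interval rather than a point value.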
Due to the increasing demand for goods movement, externalities from freight mobility have attracted much concern among local citizens and policymakers. Freight truck-related crashes are one of these externalities and impact urban freight transportation most drastically. Previous studies have mainly focused on correlation analyses of influencing factors based on crash density/count data, but have paid little attention to the inherent uncertainties of freight truck-related crashes (FTCs) from a spatial perspective. Establishing an interpretable analysis model for freight truck-related crashes that considers uncertainties is of great significance for promoting the robust development of urban freight transportation systems. Hence, this study proposes the concept of FTC hazard (FTCH) and employs a Bayesian neural network (BNN) model based on stochastic variational inference to model uncertainty. Considering the difficulty of interpreting deep learning-based models, this study introduces the local interpretable model-agnostic explanation (LIME) method into the analysis framework to explain the results of the neural network model. This study then verifies the feasibility of the proposed analysis framework using data from California from 2011 to 2020. Results show that FTCHs can be effectively modeled by predicting confidence intervals for the effects of built environment factors, in particular demographics, land use, and road network structure. Results based on LIME values indicate spatial heterogeneity in the influence mechanisms on FTCHs between areas within metropolitan regions and alongside freeways. These findings may help transport planners and logistics managers develop more effective measures to avoid the potential negative effects brought by FTCHs in local communities.
Funding: supported by the National Natural Science Foundation of China (Nos. 62176122 and 62061146002).
文摘Federated Graph Neural Networks (FedGNNs) have achieved significant success in representation learning for graph data, enabling collaborative training among multiple parties without sharing their raw graph data and solving the data isolation problem faced by centralized GNNs in data-sensitive scenarios. Despite the plethora of prior work on inference attacks against centralized GNNs, the vulnerability of FedGNNs to inference attacks has not yet been widely explored. It is still unclear whether the privacy leakage risks of centralized GNNs will also be introduced in FedGNNs. To bridge this gap, we present PIAFGNN, the first property inference attack (PIA) against FedGNNs. Compared with prior works on centralized GNNs, in PIAFGNN, the attacker can only obtain the global embedding gradient distributed by the central server. The attacker converts the task of stealing the target user’s local embeddings into a regression problem, using a regression model to generate the target graph node embeddings. By training shadow models and property classifiers, the attacker can infer the basic property information within the target graph that is of interest. Experiments on three benchmark graph datasets demonstrate that PIAFGNN achieves attack accuracy of over 70% in most cases, even approaching the attack accuracy of inference attacks against centralized GNNs in some instances, which is much higher than the attack accuracy of the random guessing method. Furthermore, we observe that common defense mechanisms cannot mitigate our attack without affecting the model’s performance on mainly classification tasks.
文摘This study aims to conduct an in-depth analysis of social media data using causal inference methods to explore the underlying mechanisms driving user behavior patterns.By leveraging large-scale social media datasets,this research develops a systematic analytical framework that integrates techniques such as propensity score matching,regression analysis,and regression discontinuity design to identify the causal effects of content characteristics,user attributes,and social network structures on user interactions,including clicks,shares,comments,and likes.The empirical findings indicate that factors such as sentiment,topical relevance,and network centrality have significant causal impacts on user behavior,with notable differences observed among various user groups.This study not only enriches the theoretical understanding of social media data analysis but also provides data-driven decision support and practical guidance for fields such as digital marketing,public opinion management,and digital governance.
文摘Artificial Intelligence(AI)has revolutionized education by enabling personalized learning experiences through adaptive platforms.However,traditional AI-driven systems primarily rely on correlation-based analytics,lim-iting their ability to uncover the causal mechanisms behind learning outcomes.This study explores the in-tegration of Knowledge Graphs(KGs)and Causal Inference(CI)as a novel approach to enhance AI-driven educational systems.KGs provide a structured representation of educational knowledge,facilitating intelligent content recommendations and adaptive learning pathways,while CI enables AI systems to move beyond pattern recognition to identify cause-and-effect relationships in student learning.By combining these methods,this research aims to optimize personalized learning path recommendations,improve educational decision-making,and ensure AI-driven interventions are both data-informed and causally validated.Case studies from real-world applications,including intelligent tutoring systems and MOOC platforms,illustrate the practical impact of this approach.The findings contribute to advancing AI-driven education by fostering a balance between knowledge modeling,adaptability,and empirical rigor.
Abstract: Robustness against measurement uncertainties is crucial for gas turbine engine diagnosis. While current research focuses mainly on measurement noise, measurement bias remains challenging. This study proposes a novel performance-based fault detection and identification (FDI) strategy for twin-shaft turbofan gas turbine engines and addresses these uncertainties through a first-order Takagi-Sugeno-Kang (TSK) fuzzy inference system. To handle ambient condition changes, we use parameter correction to preprocess the raw measurement data, which reduces the FDI system's complexity. Additionally, the power-level angle is set as a scheduling parameter to reduce the number of rules in the TSK-based FDI system. The data for designing, training, and testing the proposed FDI strategy are generated using a component-level turbofan engine model. The antecedent and consequent parameters of the TSK-based FDI system are optimized using the particle swarm optimization algorithm and ridge regression. A robust structure combining a specialized fuzzy inference system with the TSK-based FDI system is proposed to handle measurement biases. The performance of the first-order TSK-based FDI system and the robust FDI structure is evaluated through comprehensive simulation studies. Comparative studies confirm the superior accuracy of the first-order TSK-based FDI system in fault detection, isolation, and identification. The robust structure demonstrates a 2%-8% improvement in the success rate index under relatively large measurement bias conditions, thereby indicating excellent robustness. Accuracy against significant bias values and computation time are also evaluated, suggesting that the proposed robust structure has desirable online performance. This study thus proposes a novel FDI strategy that effectively addresses measurement uncertainties.
Funding: Supported by the Science and Technology Key Project of the Science and Technology Department of Henan Province (No. 252102211041) and the Key Research and Development Projects of Henan Province (No. 231111212500).
Abstract: A distributed bearing-only target tracking algorithm based on variational Bayesian inference (VBI) under random measurement anomalies is proposed to address the adverse effect of such anomalies on the state estimation accuracy of moving targets in bearing-only tracking scenarios. Firstly, the measurement information of each sensor is complemented by using triangulation under the distributed framework. Secondly, the Student-t distribution is selected to model the measurement likelihood probability density function, and the joint posterior probability density function of the estimated variables is approximately decoupled by VBI. Finally, the estimation results of each local filter are sent to the fusion center and fed back to each local filter. The simulation results show that, in the presence of abnormal measurement noise, the proposed algorithm comprehensively accounts for both system nonlinearity and random measurement anomalies, and achieves higher estimation accuracy and robustness than other existing algorithms in these scenarios.
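The triangulation step in the first stage admits a minimal sketch: each bearing constrains the target to a line through the observing sensor, and stacking the line equations gives a small least-squares problem. The noise-free two-sensor example below is an illustration under those assumptions, not the paper's implementation.

```python
import numpy as np

def triangulate(sensors, bearings):
    """Least-squares intersection of bearing lines.

    Each bearing b from sensor position p constrains the target x to the
    line with normal n = (-sin b, cos b), i.e. n . x = n . p. Stacking
    one such equation per sensor gives an overdetermined linear system.
    Bearings are measured from the x-axis, in radians.
    """
    sensors = np.asarray(sensors, dtype=float)
    b = np.asarray(bearings, dtype=float)
    A = np.column_stack([-np.sin(b), np.cos(b)])
    rhs = np.einsum("ij,ij->i", A, sensors)
    target, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return target

# Two sensors at known positions observe a target at (3, 4), noise-free.
true_target = np.array([3.0, 4.0])
sensors = np.array([[0.0, 0.0], [10.0, 0.0]])
bearings = np.arctan2(true_target[1] - sensors[:, 1],
                      true_target[0] - sensors[:, 0])
print(triangulate(sensors, bearings))  # recovers [3. 4.]
```

With noisy or anomalous bearings the least-squares solution degrades, which is exactly the situation the paper's Student-t likelihood and VBI decoupling are designed to handle.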
Abstract: Traditional global sensitivity analysis (GSA) neglects the epistemic uncertainties associated with the probabilistic characteristics (i.e., distribution type and its parameters) of input rock properties that arise from small datasets while mapping the relative importance of properties to the model response. This paper proposes an augmented Bayesian multi-model inference (BMMI) coupled with GSA methodology (BMMI-GSA) to address this issue by estimating the imprecision in the moment-independent sensitivity indices of rock structures arising from the small size of input data. The methodology employs BMMI to quantify the epistemic uncertainties associated with the model type and parameters of input properties. The estimated uncertainties are propagated into the imprecision of moment-independent Borgonovo's indices by employing a reweighting approach on candidate probabilistic models. The proposed methodology is showcased for a rock slope prone to stress-controlled failure in the Himalayan region of India. It proved superior to conventional GSA (which neglects all epistemic uncertainties) and Bayesian coupled GSA (B-GSA) (which neglects model uncertainty) owing to its capability to incorporate uncertainties in both the model type and the parameters of properties. The imprecise Borgonovo's indices estimated via the proposed methodology provide confidence intervals for the sensitivity indices instead of fixed-point estimates, which better informs data collection efforts. Analyses performed with varying sample sizes suggested that the uncertainties in the sensitivity indices reduce significantly with increasing sample size, and accurate importance ranking of properties was only possible with large samples. Further, the impact of prior knowledge, in terms of prior ranges and distributions, was significant; hence, any related assumption should be made carefully.
Funding: Supported by the National Key Research & Development Program of China (2021YFC2101100) and the National Natural Science Foundation of China (62322309 and 61973119).
Abstract: Modern industrial processes are typically characterized by large scale and intricate internal relationships, which makes distributed modeling an effective approach to process monitoring. A novel distributed monitoring scheme utilizing the Kantorovich distance-multiblock variational autoencoder (KD-MBVAE) is introduced. Firstly, given the high consistency of relevant variables within each sub-block during the change process, variables exhibiting analogous statistical features are grouped into identical segments according to optimal quality transfer theory. Subsequently, a variational autoencoder (VAE) model is separately established for each block, and the corresponding T^(2) statistics are calculated. To further improve fault sensitivity, a novel statistic derived from the Kantorovich distance is introduced by analyzing model residuals from the perspective of probability distribution. The thresholds of both statistics are determined by kernel density estimation. Finally, the monitoring results for both types of statistics within all blocks are amalgamated using Bayesian inference. Additionally, a novel approach for fault diagnosis is introduced. The feasibility and efficiency of the introduced scheme are verified through two cases.
Funding: Supported by the National Natural Science Foundation of China (62072392 and 61972360) and the Major Scientific and Technological Innovation Projects of Shandong Province (2019522Y020131).
Abstract: The development of the Internet of Things (IoT) has brought great convenience to people. However, information security problems such as privacy leakage arise from communicating with risky users, and choosing reliable users to interact with in the IoT is a challenge. Trust therefore plays a crucial role in the IoT, because trust can avoid some of these risks. Agents usually choose reliable users with high trust to maximize their own interests based on reinforcement learning. However, trust propagation is time-consuming, and trust changes with the interaction process in social networks. To track the dynamic changes in trust values, a dynamic trust inference algorithm named Dynamic Double DQN Trust (Dy-DDQNTrust) is proposed to predict the indirect trust values of two users without direct contact with each other. The proposed algorithm simulates the interactions among users by double DQN. Firstly, CurrentNet and TargetNet networks are used to select users for interaction; users with high trust are chosen to interact in future iterations. Secondly, the trust value is updated dynamically until a reliable trust path is found according to the result of the interaction. Finally, the trust value between indirect users is inferred by aggregating the opinions from multiple users through a Modified Collaborative Filtering Average-based Similarity (SMCFAvg) aggregation strategy. Experiments are carried out on the FilmTrust and Epinions datasets. Compared with TidalTrust, MoleTrust, DDQNTrust, DyTrust and the Dynamic Weighted Heuristic trust path Search algorithm (DWHS), our dynamic trust inference algorithm has higher prediction accuracy and better scalability.
Funding: Supported by the National MCF Energy R&D Program of China (Nos. 2018YFE0301105, 2022YFE03010002 and 2018YFE0302100), the National Key R&D Program of China (Nos. 2022YFE03070004 and 2022YFE03070000), and the National Natural Science Foundation of China (Nos. 12205195, 12075155 and 11975277).
Abstract: An accurate plasma current profile has irreplaceable value for the steady-state operation of the plasma. In this study, plasma current tomography based on Bayesian inference is applied to the HL-2A device and used to reconstruct the plasma current profile. Two different Bayesian probability priors are tried, namely the Conditional Auto-Regressive (CAR) prior and the Advanced Squared Exponential (ASE) kernel prior. Compared to the CAR prior, the ASE kernel prior adopts nonstationary hyperparameters and introduces the current profile of a reference discharge into the hyperparameters, which makes the shape of the current profile more flexible in space. The results indicate that the ASE prior couples more information, reduces the probability of unreasonable solutions, and achieves higher reconstruction accuracy.
Funding: National College Students' Training Programs of Innovation and Entrepreneurship (Grant No. S202210022060), the CACMS Innovation Fund (Grant No. CI2021A00512), and the National Natural Science Foundation of China (Grant No. 62206021).
Abstract: Media convergence works by processing information from different modalities and applying it to different domains. It is difficult for a conventional knowledge graph to utilise multi-media features because introducing a large amount of information from other modalities reduces the effectiveness of representation learning and makes knowledge graph inference less effective. To address this issue, an inference method based on the Media Convergence and Rule-guided Joint Inference model (MCRJI) is proposed. The authors not only converge multi-media features of entities but also introduce logic rules to improve the accuracy and interpretability of link prediction. First, a multi-headed self-attention approach is used to obtain the attention of different media features of entities during semantic synthesis. Second, logic rules of different lengths are mined from the knowledge graph to learn new entity representations. Finally, knowledge graph inference is performed based on the entity representations that converge multi-media features. Numerous experimental results show that MCRJI outperforms other advanced baselines in using multi-media features and knowledge graph inference, demonstrating that MCRJI provides an excellent approach for knowledge graph inference with converged multi-media features.
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under grant No. 62271514, in part by the Science, Technology and Innovation Commission of Shenzhen Municipality under grants No. JCYJ20210324120002007 and ZDSYS20210623091807023, and in part by the State Key Laboratory of Public Big Data under grant No. PBD2023-01.
Abstract: Recently, deep learning-based semantic communication has garnered widespread attention, with numerous systems designed for transmitting diverse data sources, including text, image, and speech. While efforts have been directed toward improving system performance, many studies have concentrated on enhancing the structure of the encoder and decoder. However, this often overlooks the resulting increase in model complexity, imposing additional storage and computational burdens on smart devices. Furthermore, existing work tends to prioritize explicit semantics, neglecting the potential of implicit semantics. This paper aims to easily and effectively enhance the receiver's decoding capability without modifying the encoder and decoder structures. We propose a novel semantic communication system with variational neural inference for text transmission. Specifically, we introduce a simple but effective variational neural inferer at the receiver to infer the latent semantic information within the received text. This information is then utilized to assist in the decoding process. The simulation results show a significant enhancement in system performance and improved robustness.
Funding: The National Key Research and Development Program of China under contract No. 2022YFC3105002, the National Natural Science Foundation of China under contract No. 42176020, and the project from the Key Laboratory of Marine Environmental Information Technology, Ministry of Natural Resources, under contract No. 2023GFW-1047.
Abstract: The Stokes production coefficient (E_(6)) constitutes a critical parameter within Mellor-Yamada type (MY-type) Langmuir turbulence (LT) parameterization schemes, significantly affecting the simulation of turbulent kinetic energy, turbulent length scale, and the vertical diffusivity coefficient for turbulent kinetic energy in the upper ocean. However, the accurate determination of its value remains a pressing scientific challenge. This study adopted an innovative approach by leveraging deep learning technology to infer E_(6). Through the integration of the turbulent length scale equation into a physics-informed neural network (PINN), we achieved an accurate and physically meaningful inference of E_(6). Multiple cases were examined to assess the feasibility of PINN for this task, revealing that under optimal settings the average mean squared error of the E_(6) inference was only 0.01, attesting to the effectiveness of PINN. The optimal hyperparameter combination was identified using the Tanh activation function, along with a spatiotemporal sampling interval of 1 s and 0.1 m. This resulted in a substantial reduction in the average bias of the E_(6) inference, by a factor of O(10^(1)) to O(10^(2)) compared with other combinations. This study underscores the potential application of PINN in intricate marine environments, offering a novel and efficient method for optimizing MY-type LT parameterization schemes.
Abstract: Recently, weak supervision has received growing attention in the field of salient object detection due to the convenience of labelling. However, there is a large performance gap between weakly supervised and fully supervised salient object detectors because scribble annotations can only provide very limited foreground/background information. Therefore, an intuitive idea is to infer annotations that cover more complete object and background regions for training. To this end, a label inference strategy is proposed based on the assumption that pixels with similar colours and close positions should have consistent labels. Specifically, the k-means clustering algorithm is first performed on both the colours and the coordinates of the original annotations; the same labels are then assigned to points whose colours are similar to a colour cluster centre and which lie near a coordinate cluster centre. Next, identical annotations are further assigned to pixels with similar colours within each kernel neighbourhood. Extensive experiments on six benchmarks demonstrate that our method can significantly improve performance and achieve state-of-the-art results.
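A toy version of this label inference strategy can be sketched as follows. Everything here is illustrative (a synthetic two-region "image", hand-placed scribbles, an arbitrary colour/coordinate weighting), not the paper's code:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic toy "image": left half foreground-coloured, right half
# background-coloured, with a handful of scribble-annotated pixels.
h, w = 32, 32
img = np.zeros((h, w, 3))
img[:, :16] = [0.9, 0.1, 0.1]
img[:, 16:] = [0.1, 0.1, 0.9]
img += rng.normal(scale=0.02, size=img.shape)

# Scribble annotations as (row, col, label): 1 = foreground, 0 = background.
scribbles = [(16, 4, 1), (16, 8, 1), (16, 24, 0), (16, 28, 0)]

# Feature = colour plus scaled coordinates (one of many possible weightings).
def features(rows, cols):
    return np.column_stack([img[rows, cols],
                            np.asarray(rows) / h, np.asarray(cols) / w])

rows, cols, labels = zip(*scribbles)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features(rows, cols))

# Each cluster inherits the majority label of its annotated members.
labels = np.asarray(labels)
cluster_label = [int(round(labels[km.labels_ == c].mean())) for c in range(2)]

# Propagate labels to every pixel via its nearest cluster centre.
ys, xs = np.mgrid[0:h, 0:w]
pseudo = np.take(cluster_label,
                 km.predict(features(ys.ravel(), xs.ravel()))).reshape(h, w)
print(pseudo[16, 2], pseudo[16, 30])  # left pixel -> 1, right pixel -> 0
```

The paper additionally refines these pseudo-labels by propagating annotations within kernel neighbourhoods, a step omitted here for brevity.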
Funding: Supported by the National Key Research and Development Program of China (2021YFB2900102) and the National Natural Science Foundation of China (Nos. 62072436 and 62202449).
Abstract: Various intelligent applications based on non-chain DNN models are widely used in Internet of Things (IoT) scenarios. However, resource-constrained IoT devices usually cannot afford the heavy computation burden and cannot guarantee the strict inference latency requirements of non-chain DNN models. Multi-device collaboration has become a promising paradigm for achieving inference acceleration. However, existing works neglect the possibility of inter-layer parallel execution, which fails to exploit the parallelism of collaborating devices and inevitably prolongs the overall completion latency. Thus, there is an urgent need to address non-chain DNN inference acceleration with multi-device collaboration based on inter-layer parallelism. Three major challenges must be overcome: exponential computational complexity, complicated layer dependencies, and intractable execution location selection. To this end, we propose a Topological Sorting Based Bidirectional Search (TSBS) algorithm that can adaptively partition non-chain DNN models and select suitable execution locations at layer granularity. More specifically, the TSBS algorithm consists of a topological sorting subalgorithm that realizes parallel execution with low computational complexity under complicated layer parallel constraints, and a bidirectional search subalgorithm that quickly finds suitable execution locations for non-parallel layers. Extensive experiments show that the TSBS algorithm significantly outperforms state-of-the-art methods in the completion latency of non-chain DNN inference, with a reduction of up to 22.69%.
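The topological-sorting idea behind inter-layer parallelism can be illustrated independently of the full TSBS algorithm: layers whose dependencies are all satisfied at the same step lie on no common path and can therefore run on different devices concurrently. A minimal sketch over a small branching (non-chain) DAG, with illustrative layer names only:

```python
from collections import defaultdict

def parallel_levels(nodes, edges):
    """Group the layers of a DAG into levels via topological sorting.

    Layers in the same level have no dependency path between them, so
    they can be dispatched to different devices and executed in
    parallel. Assumes the graph is acyclic.
    """
    indeg = {n: 0 for n in nodes}
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    frontier = [n for n in nodes if indeg[n] == 0]
    levels = []
    while frontier:
        levels.append(sorted(frontier))
        nxt = []
        for u in frontier:
            for v in succ[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        frontier = nxt
    return levels

# A small non-chain model: layer A feeds B and C, which join at D.
print(parallel_levels(["A", "B", "C", "D"],
                      [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]))
# [['A'], ['B', 'C'], ['D']] -> B and C can run on two devices at once
```

Level grouping alone says nothing about where each layer should run; in the paper, execution location selection is handled by the separate bidirectional search subalgorithm.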
Abstract: Hyperparameter tuning is a key step in developing high-performing machine learning models, but searching large hyperparameter spaces requires extensive computation using standard sequential methods. This work analyzes the performance gains from parallel versus sequential hyperparameter optimization. Using scikit-learn's RandomizedSearchCV, this project tuned a Random Forest classifier for fake news detection via randomized search. Setting n_jobs to -1 enabled full parallelization across CPU cores. Results show the parallel implementation achieved over 5× faster CPU times and 3× faster total run times compared to sequential tuning. However, test accuracy slightly dropped from 99.26% sequentially to 99.15% with parallelism, indicating a trade-off between evaluation efficiency and model performance. Still, the significant computational gains allow more extensive hyperparameter exploration within reasonable timeframes, outweighing the small accuracy decrease. Further analysis could better quantify this trade-off across different models, tuning techniques, tasks, and hardware.
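The experimental setup is straightforward to reproduce in outline with scikit-learn. The snippet below uses a small synthetic dataset in place of the fake news corpus, so absolute timings and speedups will differ by machine and workload:

```python
import time
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Synthetic data standing in for the fake news features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
param_dist = {"n_estimators": randint(20, 80), "max_depth": randint(2, 10)}

timings = {}
for label, n_jobs in [("sequential", 1), ("parallel", -1)]:
    # n_jobs=1 runs candidate evaluations sequentially;
    # n_jobs=-1 fans them out across all CPU cores.
    search = RandomizedSearchCV(
        RandomForestClassifier(random_state=0), param_dist,
        n_iter=10, cv=3, n_jobs=n_jobs, random_state=0)
    t0 = time.perf_counter()
    search.fit(X, y)
    timings[label] = time.perf_counter() - t0
    print(f"{label}: {timings[label]:.2f}s, "
          f"best CV accuracy = {search.best_score_:.3f}")
```

With identical random_state the two runs evaluate the same candidates, so any accuracy difference such as the one reported stems from non-determinism elsewhere in the pipeline rather than from parallelism per se.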
Funding: TaiShan Scholars Program (Grant No. tsqn202211289), National Key R&D Program of China (Grant No. 2021YFC3300203), Oversea Innovation Team Project of the "20 Regulations for New Universities" funding program of Jinan (Grant No. 2021GXRC073), and the Excellent Youth Scholars Program of Shandong Province (Grant No. 2022HWYQ-048).
Abstract: Image classification algorithms are commonly based on the Independent and Identically Distributed (i.i.d.) assumption, but in practice the Out-Of-Distribution (OOD) problem is widespread: the contexts of images seen at prediction time are usually unseen during training. In this case, existing models trained under the i.i.d. assumption have limited generalisation. Causal inference is an important method for learning causal associations that are invariant across different environments, thus improving the generalisation ability of a model. However, existing methods usually require partitioning the environment to learn invariant features, and these partitions mostly suffer from imbalance problems due to the lack of constraints. In this paper, we propose a balanced causal learning framework (BCL) that addresses both how to divide the dataset in a balanced way and how to balance training after the division; it automatically generates fine-grained balanced data partitions in an unsupervised manner and balances the training difficulty of different classes, thereby enhancing the generalisation ability of models in different environments. Experiments on the OOD datasets NICO and NICO++ demonstrate that BCL achieves stable predictions on OOD data. We also find that models using BCL focus more accurately on the foreground of images compared with the existing causal inference method, which effectively improves generalisation ability.
Abstract: This study's main purpose is to use Bayesian structural time-series models to investigate the causal effect of an earthquake on the Borsa Istanbul Stock Index. The results reveal a significant negative impact on stock market value during the post-treatment period. The results indicate rapid divergence from counterfactual predictions: the actual stock index is lower than would have been expected in the absence of the earthquake. The curves of the actual stock value and the counterfactual prediction after the earthquake suggest a recovery pattern once the stock market resumes trading. The cumulative impact is negative in relative terms, as evidenced by a decrease of about 30% in the BIST-100 index. These results have significant implications for investors and policymakers, emphasizing the need to prepare for natural disasters to minimize their adverse effects on stock market valuations.
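The underlying logic, comparing the observed series against a counterfactual forecast fitted on pre-event data, can be shown with a deliberately simplified sketch. Note the simplification: an ordinary least-squares fit against a simulated control series stands in for the Bayesian structural time-series model, and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily index values: a control series unaffected by the event,
# and the affected index, which tracks it until the event at t = 70.
n, event = 100, 70
control = 100 + np.cumsum(rng.normal(0.1, 0.5, n))
affected = 0.9 * control + 5 + rng.normal(0, 0.5, n)
affected[event:] -= 20                      # simulated post-event drop

# Fit the pre-event relationship, then forecast the counterfactual:
# what the index "would have been" absent the event.
A = np.column_stack([np.ones(event), control[:event]])
coef, *_ = np.linalg.lstsq(A, affected[:event], rcond=None)
counterfactual = coef[0] + coef[1] * control

pointwise = affected - counterfactual       # per-day effect estimate
cumulative = np.cumsum(pointwise[event:])   # cumulative post-event impact
print(f"average post-event effect: {pointwise[event:].mean():.1f}")
```

A Bayesian structural time-series model would additionally supply credible intervals around the counterfactual, which is what licenses the causal significance claims in the study.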
Funding: Supported by the Shanghai Sailing Program of China (No. 20YF1451700), the Science and Technology Commission of Shanghai Municipality of China (Nos. 23692119000 and 21692112203), and the Fundamental Research Funds for the Central Universities of China (No. 2023-4-YB-01).
Abstract: Due to the increasing demand for goods movement, externalities from freight mobility have attracted much concern among local citizens and policymakers. Freight truck-related crashes are one of these externalities and impact urban freight transportation most drastically. Previous studies have mainly focused on correlation analyses of influencing factors based on crash density/count data, but have paid little attention to the inherent uncertainties of freight truck-related crashes (FTCs) from a spatial perspective. Establishing an interpretable analysis model for such crashes that considers uncertainties is therefore of great significance for promoting the robust development of urban freight transportation systems. Hence, this study proposes the concept of FTC hazard (FTCH) and employs a Bayesian neural network (BNN) model based on stochastic variational inference to model uncertainty. Considering the difficulty of interpreting deep learning-based models, this study introduces the local interpretable model-agnostic explanation (LIME) model into the analysis framework to explain the results of the neural network model. This study then verifies the feasibility of the proposed framework using data from California from 2011 to 2020. Results show that FTCHs can be effectively modeled by predicting confidence intervals for the effects of built environment factors, in particular demographics, land use, and road network structure. Results based on LIME values indicate spatial heterogeneity in the influence mechanisms on FTCHs between areas within metropolitan regions and areas alongside freeways. These findings may help transport planners and logistics managers develop more effective measures to avoid potential negative effects brought by FTCHs in local communities.