The sap flow method is widely used to estimate forest transpiration. However, at the individual tree level it has spatiotemporal variations due to the impacts of environmental conditions and spatial relationships among trees. Therefore, an in-depth understanding of the coupling effects of these factors is important for designing sap flow measurement methods and performing accurate assessments of stand-scale transpiration. This study is based on observations of sap flux density (SF_(d)) of nine sample trees with different Hegyi's competition indices (HCIs), soil moisture, and meteorological conditions in a pure plantation of Larix gmelinii var. principis-rupprechtii during the 2021 growing season (May to September). A multifactorial model of sap flow was developed, and possible errors in the stand-scale sap flow estimates associated with sample sizes were determined using model-based predictions of sap flow. Temporal variations are controlled by vapour pressure deficit (VPD), solar radiation (R), and soil moisture, and these relationships can be described by polynomial or saturated exponential functions. Spatial (individual) differences were influenced by the HCI, as shown by a decaying power function. A simple SF_(d) model at the individual tree level was developed to describe the synergistic influences of VPD, R, soil moisture, and HCI. The coefficient of variation (CV) of the sap flow estimates gradually stabilized when the sample size was >10; at least six sample trees were needed to keep the CV within 10%. This study improves understanding of the mechanisms of spatiotemporal variations in sap flow at the individual tree level and provides a new methodology for determining the optimal sample size for sap flow measurements.
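As an illustrative aside (not code from the paper), Hegyi's competition index referenced in this abstract has a well-known closed form, HCI_i = Σ_j d_j/(d_i·dist_ij), summing over competitor trees j around subject tree i. A minimal sketch with invented tree data:

```python
from math import hypot

def hegyi_ci(subject, competitors):
    """Hegyi's competition index for one subject tree.

    subject / competitors: dicts with 'dbh' (diameter at breast
    height, cm) and 'x', 'y' coordinates (m).
    HCI_i = sum_j dbh_j / (dbh_i * dist_ij)
    """
    d_i = subject["dbh"]
    hci = 0.0
    for c in competitors:
        dist = hypot(c["x"] - subject["x"], c["y"] - subject["y"])
        hci += c["dbh"] / (d_i * dist)
    return hci

# invented example: one 20 cm subject tree with two neighbours
subject = {"dbh": 20.0, "x": 0.0, "y": 0.0}
competitors = [
    {"dbh": 25.0, "x": 3.0, "y": 4.0},   # 5 m away
    {"dbh": 10.0, "x": 0.0, "y": 2.0},   # 2 m away
]
print(hegyi_ci(subject, competitors))  # → 0.5
```

Larger, closer neighbours raise the index, which is how the abstract's spatial (individual) differences enter the SF_(d) model.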
With the widespread use of blockchain technology for smart contracts and decentralized applications on the Ethereum platform, the blockchain has become a cornerstone of trust in the modern financial system. However, its anonymity has provided new ways for Ponzi schemes to commit fraud, posing significant risks to investors. Current research still has some limitations: for example, Ponzi schemes are difficult to detect in the early stages of smart contract deployment, and data imbalance is not considered. In addition, there is room for improving detection accuracy. To address these issues, this paper proposes LT-SPSD (LSTM-Transformer smart Ponzi scheme detection), a Ponzi scheme detection method that combines Long Short-Term Memory (LSTM) and Transformer networks to consider both the time-series transaction information of smart contracts and the global information. Based on verified smart contract addresses, account features and code features are extracted to construct a feature dataset, and the SMOTE-Tomek algorithm is used to deal with the imbalanced data classification problem. Compared with four other typical detection methods in the experiments, the LT-SPSD method shows significant performance improvement in precision, recall, and F1-score. The experimental results confirm the efficacy of the model, which has application value in Ethereum Ponzi scheme smart contract detection.
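As background for the metrics reported above (illustrative only; the toy labels below are invented, not the paper's data), precision, recall, and F1-score follow directly from confusion counts:

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Standard binary-classification metrics from confusion counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# toy labels: 1 = Ponzi contract, 0 = benign
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
p, r, f = precision_recall_f1(y_true, y_pred)
print(p, r, f)  # → 0.75 0.75 0.75
```

On an imbalanced dataset, these metrics (unlike raw accuracy) expose a detector that simply predicts the majority class, which is why the abstract pairs them with SMOTE-Tomek resampling.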
The problem of designing a digital frontend (DFE) was considered which can dynamically access or sense dual bands in any radio frequency (RF) region without requiring hardware changes. In particular, second-order bandpass sampling (BPS) was discussed as a technique that makes the multiband reception function realizable. In a second-order BPS system, digital reconstruction filters are used to eliminate the interference generated while down-converting arbitrarily positioned RF-band signals by the direct digitization method. However, inaccuracy in the phase shift or amplitude mismatch between the two sample streams may cause insufficient rejection of interference. Practical problems were studied, such as performance degradation in signal-to-interference ratio (SIR), together with compensation methods to overcome them. To demonstrate second-order BPS as a flexible DFE suitable for software-defined radio (SDR) or cognitive radio (CR), a DFE testbed with a reconfigurable structure was implemented. Experimental results show that dual bands are received simultaneously, further validating the proposed compensation algorithms.
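For background (a sketch of the classical first-order bandpass sampling condition, not the paper's second-order scheme; the example band is invented): a band [f_L, f_H] can be sampled without aliasing at any rate satisfying 2·f_H/n ≤ f_s ≤ 2·f_L/(n−1) for an integer n up to ⌊f_H/B⌋:

```python
from math import floor, inf

def valid_bps_rates(f_low, f_high):
    """Valid uniform (first-order) bandpass sampling rate ranges, in Hz,
    for the band [f_low, f_high]: 2*f_high/n <= fs <= 2*f_low/(n-1)."""
    bandwidth = f_high - f_low
    ranges = []
    for n in range(1, floor(f_high / bandwidth) + 1):
        lo = 2.0 * f_high / n
        hi = inf if n == 1 else 2.0 * f_low / (n - 1)
        if lo <= hi:
            ranges.append((n, lo, hi))
    return ranges

# invented 20-25 MHz band: bandwidth 5 MHz, so n runs from 1 to 5,
# allowing sub-Nyquist rates down to 10 MHz instead of 50 MHz
for n, lo, hi in valid_bps_rates(20e6, 25e6):
    print(n, lo / 1e6, hi / 1e6)
```

Second-order BPS relaxes these constraints further by using two time-offset sample streams, which is where the phase-shift and amplitude mismatches discussed in the abstract originate.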
The bounded consensus tracking problems of second-order multi-agent systems under directed networks with sampling delay are addressed in this paper. When the sampling delay is more than a sampling period, new protocols based on sampled-data control are proposed so that each agent can track the time-varying reference state of the virtual leader. By using the delay decomposition approach, the augmented matrix method, and the frequency domain analysis, necessary and sufficient conditions are obtained, which guarantee that the bounded consensus tracking is realized. Furthermore, some numerical simulations are presented to demonstrate the effectiveness of the theoretical results.
The aim of this study is to investigate the impacts of the sampling strategy for landslide and non-landslide data on the performance of landslide susceptibility assessment (LSA). The study area is the Feiyun catchment in Wenzhou City, Southeast China. Two types of landslide samples, combined with seven non-landslide sampling strategies, resulted in a total of 14 scenarios. The corresponding landslide susceptibility map (LSM) for each scenario was generated using the random forest model. The receiver operating characteristic (ROC) curve and statistical indicators were calculated and used to assess the impact of the dataset sampling strategy. The results showed that higher accuracies were achieved when using the landslide core as positive samples, combined with non-landslide sampling from the very low susceptibility zone or buffer zone. The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA, which provides a reference for subsequent researchers aiming to obtain a more reasonable LSM.
Global variance reduction is a bottleneck in Monte Carlo shielding calculations. The global variance reduction problem requires that the statistical error of the entire space be uniform. This study proposed a grid-AIS method for the global variance reduction problem based on the AIS method, which was implemented in the Monte Carlo program MCShield. The proposed method was validated using the VENUS-III international benchmark problem and a self-shielding calculation example. The results from the VENUS-III benchmark problem showed that the grid-AIS method achieved a significant reduction in the variance of the statistical errors of the MESH grids, decreasing from 1.08×10^(-2) to 3.84×10^(-3), representing a 64.00% reduction. This demonstrates that the grid-AIS method is effective in addressing global problems. The results of the self-shielding calculation demonstrate that the grid-AIS method produced accurate computational results. Moreover, the grid-AIS method exhibited a computational efficiency approximately one order of magnitude higher than that of the AIS method and approximately two orders of magnitude higher than that of the conventional Monte Carlo method.
In this paper, we establish a new multivariate Hermite sampling series involving samples from the function itself and its mixed and non-mixed partial derivatives of arbitrary order. This multivariate form of Hermite sampling will be valid for some classes of multivariate entire functions satisfying certain growth conditions. We will show that many known results, including those in Commun Korean Math Soc, 2002, 17: 731-740, Turk J Math, 2017, 41: 387-403, and Filomat, 2020, 34: 3339-3347, are special cases of our results. Moreover, we estimate the truncation error of this sampling based on localized sampling without a decay assumption. Illustrative examples are also presented.
This study presents the design of a modified attribute control chart based on a double sampling (DS) np chart applied in combination with generalized multiple dependent state (GMDS) sampling to monitor the mean life of a product, based on a time-truncated life test employing the Weibull distribution. The control chart developed supports the examination of mean lifespan variation for a particular product in the manufacturing process. Three control limit levels are used: the warning control limit, inner control limit, and outer control limit. Together, they enhance the capability for variation detection. A genetic algorithm can be used for optimization during the in-control process, whereby the optimal parameters can be established for the proposed control chart. The control chart performance is assessed using the average run length, while the influence of the model parameters on the control chart solution is assessed via sensitivity analysis based on an orthogonal experimental design with multiple linear regression. A comparative study was conducted based on the out-of-control average run length, in which the developed control chart offered greater sensitivity in detecting process shifts while using smaller samples on average than existing control charts. Finally, to exhibit the utility of the developed control chart, this paper presents its application using simulated data with parameters drawn from a real dataset.
Imbalance is a distinctive feature of many datasets, and how to balance a dataset has become a hot topic in the machine learning field. The Synthetic Minority Oversampling Technique (SMOTE) is the classical method for solving this problem. Although much research has been conducted on SMOTE, the problem of synthetic sample singularity remains. To address the issues of class imbalance and the diversity of generated samples, this paper proposes a hybrid resampling method for binary imbalanced datasets, RE-SMOTE, designed on the basis of improvements to two oversampling methods: parameter-free SMOTE (PF-SMOTE) and SMOTE-Weighted Ensemble Nearest Neighbor (SMOTE-WENN). Initially, minority class samples are divided into safe and boundary minority categories. Boundary minority samples are regenerated through linear interpolation with the nearest majority class samples. In contrast, safe minority samples are randomly generated within a circular range centered on the initial safe minority samples, with a radius determined by the distance to the nearest majority class samples. Furthermore, we use Weighted Edited Nearest Neighbor (WENN) and relative density methods to clean the generated samples and remove low-quality samples. Relative density is calculated based on the ratio of majority to minority samples among the reverse k-nearest neighbor samples. To verify the effectiveness and robustness of the proposed model, we conducted a comprehensive experimental study on 40 datasets selected from real applications. The experimental results show the superiority of radius estimation-SMOTE (RE-SMOTE) over other state-of-the-art methods. Code is available at: https://github.com/blue9792/RE-SMOTE (accessed on 30 September 2024).
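The safe/boundary generation rule described above can be sketched in toy form (illustrative only; the k-NN majority-vote threshold and the sample data are assumptions, not the exact RE-SMOTE procedure):

```python
import random
from math import dist  # Euclidean distance, Python 3.8+

def nearest(point, pool):
    return min(pool, key=lambda q: dist(point, q))

def resmote_like(minority, majority, k=3, seed=0):
    """Toy sketch: call a minority point 'boundary' if most of its k
    nearest neighbours (over both classes) are majority points.
    Boundary points are interpolated toward the nearest majority
    point; safe points are jittered inside a circle whose radius is
    the distance to the nearest majority point."""
    rng = random.Random(seed)
    everything = [(p, 0) for p in minority] + [(p, 1) for p in majority]
    synthetic = []
    for p in minority:
        neigh = sorted((q for q in everything if q[0] != p),
                       key=lambda q: dist(p, q[0]))[:k]
        maj_votes = sum(label for _, label in neigh)
        m = nearest(p, majority)
        if maj_votes > k / 2:          # boundary minority sample
            t = rng.random()
            synthetic.append(tuple(a + t * (b - a) for a, b in zip(p, m)))
        else:                           # safe minority sample
            r = dist(p, m)
            synthetic.append(tuple(a + rng.uniform(-r, r) for a in p))
    return synthetic

minority = [(0.0, 0.0), (0.2, 0.1), (3.0, 3.0)]
majority = [(3.2, 3.1), (3.5, 2.9), (4.0, 4.0), (0.9, 0.9)]
print(len(resmote_like(minority, majority)))  # → 3
```

The circular jitter for safe points is what diversifies synthetic samples beyond SMOTE's line segments, addressing the "synthetic sample singularity" problem named in the abstract.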
The advent of self-attention mechanisms within Transformer models has significantly propelled the advancement of deep learning algorithms, yielding outstanding achievements across diverse domains. Nonetheless, self-attention mechanisms falter when applied to datasets with intricate semantic content and extensive dependency structures. In response, this paper introduces a Diffusion Sampling and Label-Driven Co-attention Neural Network (DSLD), which adopts a diffusion sampling method to capture more comprehensive semantic information of the data. Additionally, the model leverages the joint correlation information of labels and data to introduce the computation of text representation, correcting semantic representation biases in the data, and increasing the accuracy of semantic representation. Ultimately, the model computes the corresponding classification results by synthesizing these rich data semantic representations. Experiments on seven benchmark datasets show that our proposed model achieves competitive results compared to state-of-the-art methods.
The rapid advancement and broad application of machine learning (ML) have driven a groundbreaking revolution in computational biology. One of the most cutting-edge and important applications of ML is its integration with molecular simulations to improve the sampling efficiency of the vast conformational space of large biomolecules. This review focuses on recent studies that utilize ML-based techniques in the exploration of the protein conformational landscape. We first highlight the recent development of ML-aided enhanced sampling methods, including heuristic algorithms and neural networks that are designed to refine the selection of reaction coordinates for the construction of bias potential, or to facilitate the exploration of the unsampled region of the energy landscape. Further, we review the development of autoencoder-based methods that combine molecular simulations and deep learning to expand the search for protein conformations. Lastly, we discuss the cutting-edge methodologies for the one-shot generation of protein conformations with precise Boltzmann weights. Collectively, this review demonstrates the promising potential of machine learning in revolutionizing our insight into the complex conformational ensembles of proteins.
To effectively extract multi-scale information from observation data and improve computational efficiency, a multi-scale second-order autoregressive recursive filter (MSRF) method is designed. The second-order autoregressive filter used in this study replaces the traditional first-order recursive filter used in the spatial multi-scale recursive filter (SMRF) method. The experimental results indicate that the MSRF scheme successfully extracts the various scales of information resolved by the observations. Moreover, compared with the SMRF scheme, the MSRF scheme improves computational accuracy and efficiency to some extent. The MSRF scheme can not only propagate over a longer distance without attenuation of the innovation, but also reduces the mean absolute deviation between the reconstructed sea ice concentration results and observations by about 3.2% compared to the SMRF scheme. On the other hand, whereas the traditional first-order recursive filters used in the SMRF scheme require multiple filter passes, the MSRF scheme only needs to perform two filter processes per iteration, greatly improving filtering efficiency. In the two-dimensional sea ice concentration experiment, the calculation time of the MSRF scheme is only 1/7 of that of the SMRF scheme. This means that the MSRF scheme can achieve better performance at less computational cost, which is of great significance for further application in real-time ocean or sea ice data assimilation systems in the future.
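For orientation only (a generic sketch of the filter family, not the paper's MSRF formulation or coefficients): a second-order autoregressive recursive filter feeds back its two previous outputs, y[n] = b·x[n] + a1·y[n−1] + a2·y[n−2], whereas a first-order filter feeds back only one:

```python
def ar2_filter(x, a1, a2, b):
    """Generic second-order autoregressive recursive filter:
        y[n] = b*x[n] + a1*y[n-1] + a2*y[n-2]
    (an assumed illustrative form, not the MSRF scheme itself)."""
    y = []
    for n, xn in enumerate(x):
        y1 = y[n - 1] if n >= 1 else 0.0
        y2 = y[n - 2] if n >= 2 else 0.0
        y.append(b * xn + a1 * y1 + a2 * y2)
    return y

# unit impulse response: with a1=1, a2=-0.25 the poles sit at z=0.5
# (a double real pole), so the response decays smoothly
impulse = [1.0] + [0.0] * 9
out = ar2_filter(impulse, a1=1.0, a2=-0.25, b=0.25)
print([round(v, 6) for v in out])
```

The extra feedback term is what lets a single second-order pass spread an innovation over a longer distance than a first-order pass, which is the property the abstract exploits to cut the number of filter passes per iteration.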
Peer-to-peer (P2P) overlay networks provide message transmission capabilities for blockchain systems. Improving data transmission efficiency in P2P networks can greatly enhance the performance of blockchain systems. However, traditional blockchain P2P networks face a common challenge: there is often a mismatch between the upper-layer traffic requirements and the underlying physical network topology. This mismatch results in redundant data transmission and inefficient routing, severely constraining the scalability of blockchain systems. To address these pressing issues, we propose FPSblo, an efficient transmission method for blockchain networks. Our inspiration for FPSblo stems from the Farthest Point Sampling (FPS) algorithm, a well-established technique widely utilized in point cloud image processing. In this work, we treat blockchain nodes as points in a point cloud image and select a representative set of nodes to prioritize message forwarding, so that messages reach the network edge quickly and are evenly distributed. Moreover, we compare our model with the Kadcast transmission model, a classic improved model for blockchain P2P transmission networks; the experimental findings show that the FPSblo model reduces transmission redundancy by 34.8% and reduces the overload rate by 37.6%. This experimental analysis shows that FPSblo enhances the transmission capabilities of the P2P network in blockchain.
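The Farthest Point Sampling algorithm that inspires FPSblo can be sketched in a few lines (illustrative; the point data are invented, and real FPSblo would operate on node-distance measurements rather than 2-D coordinates):

```python
from math import dist

def farthest_point_sampling(points, k, start=0):
    """Greedy FPS: repeatedly pick the point farthest from the
    already-selected set, yielding an evenly spread subset."""
    selected = [start]
    # d[i] = distance from point i to its nearest selected point
    d = [dist(points[start], p) for p in points]
    while len(selected) < k:
        nxt = max(range(len(points)), key=d.__getitem__)
        selected.append(nxt)
        d = [min(di, dist(points[nxt], p)) for di, p in zip(d, points)]
    return selected

pts = [(0, 0), (0.1, 0), (5, 0), (5, 5), (0, 5)]
print(farthest_point_sampling(pts, 3))  # → [0, 3, 2]
```

Note how the near-duplicate point (0.1, 0) is never chosen: FPS spreads the selected set across the extremes, which is the "messages reach the network edge quickly and are evenly distributed" property the abstract relies on.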
Freeform surface measurement is a key basic technology for product quality control and reverse engineering in the aerospace field. Surface measurement technology based on multi-sensor fusion, such as a laser scanner combined with a contact probe, can exploit the complementary characteristics of different sensors, and has received wide attention in industry and academia. The number and distribution of measurement points significantly affect the efficiency of multi-sensor fusion and the accuracy of surface reconstruction. An aggregation-value-based active sampling method for multi-sensor freeform surface measurement and reconstruction is proposed. Based on game-theoretic iteration, probe measurement points are generated actively, and the importance of each measurement point on the freeform surface to multi-sensor fusion is defined as the Shapley value of the measurement point. Thus, the problem of obtaining the optimal measurement point set is transformed into the problem of maximizing the aggregation value of the sample set. Simulation and real measurement results verify that the proposed method can significantly reduce the required probe sample size while ensuring the measurement accuracy of multi-sensor fusion.
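The Shapley value mentioned above is the average marginal contribution of a player over all orderings of the players. A toy exact computation (illustrative; the coalition value function below is invented, standing in for the fusion gain of a set of probe points):

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values by enumerating all orderings.

    `value` maps a frozenset of players to a coalition payoff, e.g.
    the reconstruction-error reduction achieved by a set of probe
    points (a hypothetical stand-in for the paper's fusion gain)."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            phi[p] += value(with_p) - value(coalition)
            coalition = with_p
    n_orderings = 1
    for i in range(2, n + 1):
        n_orderings *= i
    return {p: v / n_orderings for p, v in phi.items()}

# invented value function with diminishing returns for 3 probe points
gain = {frozenset(): 0.0,
        frozenset("a"): 4.0, frozenset("b"): 3.0, frozenset("c"): 1.0,
        frozenset("ab"): 6.0, frozenset("ac"): 4.5, frozenset("bc"): 3.5,
        frozenset("abc"): 6.5}
phi = shapley_values("abc", gain.__getitem__)
print({p: round(v, 3) for p, v in sorted(phi.items())})
```

The values sum to the grand-coalition payoff (efficiency), so ranking points by Shapley value fairly splits the total fusion gain; the paper's method then greedily maximizes this aggregated value, reportedly via game-theoretic iteration rather than brute-force enumeration.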
For the problem of slow search and tortuous paths in the Rapidly Exploring Random Tree (RRT) algorithm, a feedback-biased sampling RRT, called FS-RRT, is proposed based on RRT. First, to improve the sampling efficiency of RRT and shorten the search time, the search area of the random tree is restricted. Second, to obtain better information about obstacles and shorten the path length, a feedback-biased sampling strategy is used instead of traditional random sampling: a collision between an expanding node and an obstacle generates feedback information, so that the next expanding node avoids expanding within a specific angle range. Third, an inverse optimization strategy is proposed to remove redundant points from the initial path, making the path shorter and more accurate. Finally, to ensure smooth operation of the robot in practice, auxiliary points are used to optimize a cubic Bezier curve, so that the path avoids crossing obstacles during Bezier curve optimization. The experimental results demonstrate that, compared to the traditional RRT algorithm, the proposed FS-RRT algorithm performs favorably against mainstream algorithms in terms of running time, number of search iterations, and path length. Moreover, the improved algorithm also performs well in narrow obstacle environments, and its effectiveness is further confirmed by experimental verification.
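The cubic Bezier smoothing step mentioned above uses the standard closed form B(t) = (1−t)³p0 + 3(1−t)²t·p1 + 3(1−t)t²·p2 + t³p3. A minimal sketch (the control points are invented; FS-RRT's auxiliary-point placement rule is not reproduced here):

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bezier curve at parameter t in [0, 1]:
    B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3."""
    u = 1.0 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# smoothing a right-angle corner of a path: the two middle control
# points pull the curve toward the corner without passing through it
p0, p1, p2, p3 = (0.0, 0.0), (1.0, 0.0), (1.0, 0.0), (1.0, 1.0)
print(cubic_bezier(p0, p1, p2, p3, 0.5))  # → (0.875, 0.125)
```

The curve interpolates p0 and p3 exactly and only approaches p1 and p2, which is why moving the auxiliary control points is enough to keep the smoothed path clear of obstacles.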
In order to accurately measure an object's three-dimensional surface shape, the influence of sampling on the measurement was studied. First, on the basis of deriving spectral expressions through the Fourier transform, the generation of CCD pixels was analyzed and its expression was given. Then, based on the discrete expression of the deformed fringes obtained after sampling, its Fourier spectrum expression was derived, resulting in an infinitely repeated "spectra island" in the frequency domain. Finally, using a low-pass filter to remove high-order harmonic components and retain only one fundamental frequency component, the inverse Fourier transform was used to reconstruct the signal strength. A method of reducing the sampling interval, i.e., the number of pixels per sampling point, was proposed to increase the ratio between the sampling frequency and the fundamental frequency of the grating, so as to reconstruct the object's surface shape more accurately under the condition m > 4. The basic principle was verified through simulation and experiment. In the simulation, the sampling intervals were 8 pixels, 4 pixels, 2 pixels, and 1 pixel; the maximum absolute error values obtained in the last three situations were 88.80%, 38.38%, and 31.50% of that in the first situation, respectively, and the corresponding average absolute error values were 71.84%, 43.27%, and 32.26%. This demonstrates that the smaller the sampling interval, the better the recovery. Taking the same four sampling intervals in the experiment as in the simulation leads to the same conclusions. The simulated and experimental results show that reducing the sampling interval can improve the accuracy of object surface shape measurement and achieve better reconstruction results.
Disjoint sampling is critical for rigorous and unbiased evaluation of state-of-the-art (SOTA) models, e.g., Attention Graph and Vision Transformer. When training, validation, and test sets overlap or share data, a bias is introduced that inflates performance metrics and prevents accurate assessment of a model's true ability to generalize to new examples. This paper presents an innovative disjoint sampling approach for training SOTA models for Hyperspectral Image Classification (HSIC). By separating training, validation, and test data without overlap, the proposed method enables a fairer evaluation of how well a model can classify pixels it was not exposed to during training or validation. Experiments demonstrate that the approach significantly improves a model's generalization compared to alternatives that include training and validation data in the test data (a trivial approach is to test the model on the entire Hyperspectral dataset used to generate the ground truth maps; this produces higher accuracy but ultimately results in low generalization performance). Disjoint sampling eliminates data leakage between sets and provides reliable metrics for benchmarking progress in HSIC. It is critical for advancing SOTA models and their real-world application to large-scale land mapping with Hyperspectral sensors. Overall, with the disjoint test set, the deep models achieve 96.36% accuracy on the Indian Pines data, 99.73% on the Pavia University data, 98.29% on the University of Houston data, 99.43% on the Botswana data, and 99.88% on the Salinas data.
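A minimal sketch of the disjoint splitting idea (illustrative; the 60/20/20 ratio and seed are assumptions, not the paper's protocol): shuffle the pixel indices once, then slice, so no index can land in two sets:

```python
import random

def disjoint_split(n_samples, train=0.6, val=0.2, seed=42):
    """Split sample indices into strictly disjoint train/val/test sets:
    every index appears in exactly one of the three sets."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    n_tr = int(train * n_samples)
    n_va = int(val * n_samples)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

tr, va, te = disjoint_split(100)
print(len(tr), len(va), len(te))  # → 60 20 20
```

Because slicing a single shuffled permutation partitions it, overlap (and hence the data leakage the abstract warns about) is impossible by construction; for hyperspectral scenes one would apply this per class to keep the split stratified.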
We propose a new framework for the sampling, compression, and analysis of distributions of point sets and other geometric objects embedded in Euclidean spaces. Our approach involves constructing a tensor called the RaySense sketch, which captures nearest neighbors from the underlying geometry of points along a set of rays. We explore various operations that can be performed on the RaySense sketch, leading to different properties and potential applications. Statistical information about the data set can be extracted from the sketch, independent of the ray set. Line integrals on point sets can be efficiently computed using the sketch. We also present several examples illustrating applications of the proposed strategy in practical scenarios.
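The core RaySense operation, recording the nearest neighbor of each sample position along a ray, can be sketched as follows (illustrative; a single ray and invented 2-D data, whereas the paper's sketch stacks many rays into a tensor):

```python
from math import dist

def ray_sketch(points, origin, direction, n_steps=5, step=1.0):
    """Sample positions along one ray and record, for each position,
    the index of its nearest neighbour in the point set."""
    nn = []
    for i in range(n_steps):
        pos = tuple(o + i * step * d for o, d in zip(origin, direction))
        nn.append(min(range(len(points)), key=lambda j: dist(pos, points[j])))
    return nn

pts = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.5)]
print(ray_sketch(pts, origin=(0.0, 1.0), direction=(1.0, 0.0)))  # → [0, 0, 1, 2, 2]
```

The returned index sequence is a 1-D slice of the sketch; aggregating the sampled neighbor coordinates (or features) over many random rays yields the ray-set-independent statistics the abstract refers to.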
In the cascaded H-bridge inverter (CHBI) with supercapacitor and dc-dc stage, inherent second-order harmonic power flows through each submodule (SM), causing fluctuations in both the dc-link voltage and the dc-dc current. There exist limitations in handling these fluctuations at variable output frequencies when employing proportional-integral (PI) control for the dc-dc stage. This paper aims to coordinately control these second-order harmonic voltage and current fluctuations in the CHBI. The presented method configures a specific second-order harmonic voltage reference, equipped with a maximum voltage fluctuation constraint and a suitable phase, for the dc-dc stage. A PI-resonant controller is used to track the configured reference. This allows the second-order harmonic fluctuation in the average dc-link voltage among the SMs to be regulated within a certain value. Importantly, the second-order harmonic fluctuation in the dc-dc current can also be reduced. Simulation and experimental results demonstrate the effectiveness of the presented method.
Physics-informed neural networks (PINNs) have become an attractive machine learning framework for obtaining solutions to partial differential equations (PDEs). PINNs embed initial, boundary, and PDE constraints into the loss function. The performance of PINNs is generally affected by both training and sampling. Specifically, training methods focus on how to overcome the training difficulties caused by the special PDE residual loss of PINNs, and sampling methods are concerned with the location and distribution of the sampling points upon which evaluations of the PDE residual loss are performed. However, a common problem among these original PINNs is that they omit special temporal information utilization during the training or sampling stages when dealing with an important PDE category, namely, time-dependent PDEs, where temporal information plays a key role in the algorithms. One method, called Causal PINN, considers temporal causality at the training level but not special temporal utilization at the sampling level. Incorporating temporal knowledge into sampling remains to be studied. To fill this gap, we propose a novel temporal causality-based adaptive sampling method that dynamically determines the sampling ratio according to both the PDE residual and temporal causality. By designing a sampling ratio determined by both residual loss and temporal causality to control the number and location of sampled points in each temporal sub-domain, we provide a practical solution that incorporates temporal information into sampling. Numerical experiments on several nonlinear time-dependent PDEs, including the Cahn-Hilliard, Korteweg-de Vries, Allen-Cahn, and wave equations, show that our proposed sampling method can improve performance. We demonstrate that using such a relatively simple sampling method can improve prediction performance by up to two orders of magnitude compared with the results from other methods, especially when points are limited.
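One plausible reading of such a causality-weighted sampling ratio (a sketch under assumed weighting, not the paper's exact rule): give each temporal sub-domain a causal weight that decays with the accumulated residual of earlier sub-domains, then allocate collocation points in proportion to weight times residual, so early, badly fitted regions are sampled first:

```python
from math import exp

def causal_sampling_allocation(residuals, n_points, eps=1.0):
    """Allocate collocation points across temporal sub-domains
    (earliest first). Assumed illustrative rule: sub-domain i gets
    weight w_i = exp(-eps * sum of residuals of sub-domains < i) * r_i,
    and points are distributed in proportion to w_i."""
    weights = []
    cum = 0.0
    for r in residuals:
        weights.append(exp(-eps * cum) * r)
        cum += r
    total = sum(weights)
    return [round(n_points * w / total) for w in weights]

# toy PDE residual per temporal sub-domain (earliest first)
print(causal_sampling_allocation([0.5, 0.4, 0.3, 0.2], n_points=100))
# → [54, 26, 13, 7]
```

The exponential factor mirrors the causal weighting used in Causal PINN training; applying it at the sampling level concentrates points in early sub-domains until their residuals shrink, after which later sub-domains automatically receive more samples.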
Funding: Supported by the Fundamental Research Funds of the Chinese Academy of Forestry (CAFYBB2020QB004) and the National Natural Science Foundation of China (41971038, 32171559, U20A2085, and U21A2005).
Funding: This work was supported by the Qin Xin Talents Cultivation Program of Beijing Information Science and Technology University (No. QXTCP C202115), the Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing Fund (No. GJJ-23), and the National Social Science Foundation of China (No. 21BTQ079).
Abstract: With the widespread use of blockchain technology for smart contracts and decentralized applications on the Ethereum platform, the blockchain has become a cornerstone of trust in the modern financial system. However, its anonymity has given Ponzi schemes new ways to commit fraud, posing significant risks to investors. Current research still has limitations: Ponzi schemes are difficult to detect in the early stages of smart contract deployment, data imbalance is often not considered, and there is room to improve detection accuracy. To address these issues, this paper proposes LT-SPSD (LSTM-Transformer smart Ponzi scheme detection), a detection method that combines Long Short-Term Memory (LSTM) and Transformer networks to capture both the time-series transaction information of smart contracts and global information. Based on verified smart contract addresses, account features and code features are extracted to construct a feature dataset, and the SMOTE-Tomek algorithm is used to handle the imbalanced classification problem. Compared with four typical detection methods in the experiments, LT-SPSD shows significant improvement in precision, recall, and F1-score. The experimental results confirm the efficacy of the model, which has practical value for detecting Ponzi scheme smart contracts on Ethereum.
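The oversampling half of the balancing step can be illustrated with a minimal SMOTE-style interpolation sketch. Note this is not the full SMOTE-Tomek pipeline the paper uses (the Tomek-link cleaning step is omitted), and the toy minority set is hypothetical; it only shows how synthetic minority samples are interpolated between nearest minority neighbours.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE-style oversampling: each synthetic sample is a
    random interpolation between a minority point and one of its k
    nearest minority-class neighbours."""
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # skip the point itself
        j = rng.choice(nbrs)
        t = rng.random()                        # interpolation factor in [0, 1)
        out.append(X_min[i] + t * (X_min[j] - X_min[i]))
    return np.array(out)

# Toy minority class in the unit square; synthetic points stay inside
# the convex hull of the pairs they interpolate.
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synth = smote_oversample(X_min, n_new=4, rng=0)
```

In SMOTE-Tomek, an undersampling pass would then remove Tomek links (opposite-class pairs that are mutual nearest neighbours) to clean the class boundary.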
Funding: Research financially supported by Changwon National University in 2009-2010 and the Second Stage of the Brain Korea 21 Project.
Abstract: The problem considered is the design of a digital front-end (DFE) that can dynamically access or sense dual bands in any radio frequency (RF) region without hardware changes. In particular, second-order bandpass sampling (BPS) is discussed as a technique that enables the multiband reception function. In a second-order BPS system, digital reconstruction filters are used to eliminate the interference generated when arbitrarily positioned RF-band signals are down-converted by direct digitization. However, inaccuracy in the phase shift or amplitude mismatch between the two sample streams may cause insufficient interference rejection. Practical problems are studied, such as degradation of the signal-to-interference ratio (SIR), together with compensation methods to overcome them. To demonstrate second-order BPS as a flexible DFE suitable for software-defined radio (SDR) or cognitive radio (CR), a DFE testbed with a reconfigurable structure was implemented. Experimental results with the proposed compensation algorithms show that dual bands can be received simultaneously.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 60874053 and 61034006).
Abstract: The bounded consensus tracking problems of second-order multi-agent systems under directed networks with sampling delay are addressed in this paper. When the sampling delay is longer than a sampling period, new protocols based on sampled-data control are proposed so that each agent can track the time-varying reference state of the virtual leader. By using the delay decomposition approach, the augmented matrix method, and frequency-domain analysis, necessary and sufficient conditions are obtained that guarantee bounded consensus tracking. Furthermore, numerical simulations are presented to demonstrate the effectiveness of the theoretical results.
Abstract: The aim of this study is to investigate the impact of landslide and non-landslide sampling strategies on the performance of landslide susceptibility assessment (LSA). The study area is the Feiyun catchment in Wenzhou City, Southeast China. Two types of landslide samples, combined with seven non-landslide sampling strategies, resulted in a total of 14 scenarios. The corresponding landslide susceptibility map (LSM) for each scenario was generated using a random forest model. The receiver operating characteristic (ROC) curve and statistical indicators were calculated and used to assess the impact of each sampling strategy. The results showed that higher accuracies were achieved when using the landslide core as positive samples, combined with non-landslide samples drawn from the very-low-susceptibility zone or buffer zone. The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA and provide a reference for subsequent researchers aiming to obtain a more reasonable LSM.
Funding: Supported by the Platform Development Foundation of the China Institute for Radiation Protection (No. YP21030101), the National Natural Science Foundation of China (General Program) (Nos. 12175114 and U2167209), the National Key R&D Program of China (No. 2021YFF0603600), and the Tsinghua University Initiative Scientific Research Program (No. 20211080081).
Abstract: Global variance reduction is a bottleneck in Monte Carlo shielding calculations; it requires that the statistical error be uniform over the entire space. This study proposes a grid-AIS method for the global variance reduction problem based on the AIS method, implemented in the Monte Carlo program MCShield. The proposed method was validated using the VENUS-Ⅲ international benchmark problem and a self-shielding calculation example. The results for the VENUS-Ⅲ benchmark show that the grid-AIS method significantly reduced the variance of the statistical errors of the MESH grids, from 1.08×10^(-2) to 3.84×10^(-3), a 64.00% reduction, demonstrating that the grid-AIS method is effective for global problems. The self-shielding calculation demonstrates that the grid-AIS method produces accurate computational results. Moreover, the grid-AIS method exhibited a computational efficiency approximately one order of magnitude higher than that of the AIS method and approximately two orders of magnitude higher than that of the conventional Monte Carlo method.
Abstract: In this paper, we establish a new multivariate Hermite sampling series involving samples of the function itself and its mixed and non-mixed partial derivatives of arbitrary order. This multivariate form of Hermite sampling is valid for certain classes of multivariate entire functions satisfying suitable growth conditions. We show that many known results, including those in Commun Korean Math Soc, 2002, 17: 731-740, Turk J Math, 2017, 41: 387-403, and Filomat, 2020, 34: 3339-3347, are special cases of our results. Moreover, we estimate the truncation error of this sampling based on localized sampling without a decay assumption. Illustrative examples are also presented.
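For background, the classical one-dimensional case that such results generalize is the derivative (Hermite) sampling series, in which a bandlimited function is recovered from samples of the function and its first derivative at the integers. The form below is the standard first-order Hermite series, stated here only as orientation; the paper's multivariate series involves mixed partial derivatives of arbitrary order and different growth conditions.

```latex
% Classical first-order Hermite (derivative) sampling series:
% f is recovered from samples of f and f' at the integers.
f(t) \;=\; \sum_{n=-\infty}^{\infty}
  \bigl[\, f(n) + (t-n)\, f'(n) \,\bigr]\,
  \operatorname{sinc}^{2}(t-n),
\qquad
\operatorname{sinc}(t) \;=\; \frac{\sin \pi t}{\pi t}.
```

Because both $f(n)$ and $f'(n)$ are used at each node, the series converges for functions of twice the bandwidth admissible in the ordinary Shannon series at the same sampling rate.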
Funding: Supported by the Science, Research and Innovation Promotion Funding (TSRI) (Grant No. FRB660012/0168), managed under Rajamangala University of Technology Thanyaburi (FRB66E0646O.4).
Abstract: This study presents the design of a modified attributed control chart based on a double sampling (DS) np chart combined with generalized multiple dependent state (GMDS) sampling to monitor the mean life of a product under a time-truncated life test employing the Weibull distribution. The control chart supports the examination of mean-lifespan variation for a particular product during manufacturing. Three control limit levels are used: the warning control limit, the inner control limit, and the outer control limit; together, they enhance the capability for variation detection. A genetic algorithm is used for optimization of the in-control process, whereby the optimal parameters of the proposed control chart are established. Control chart performance is assessed using the average run length, while the influence of the model parameters on the control chart solution is assessed via sensitivity analysis based on an orthogonal experimental design with multiple linear regression. A comparative study based on the out-of-control average run length shows that the developed control chart offers greater sensitivity in detecting process shifts while using smaller samples on average than existing control charts. Finally, to exhibit the utility of the developed control chart, this paper presents its application using simulated data with parameters drawn from a real data set.
Funding: Supported by the National Key R&D Program of China, No. 2022YFC3006302.
Abstract: Imbalance is a distinctive feature of many datasets, and how to balance a dataset has become a hot topic in machine learning. The Synthetic Minority Oversampling Technique (SMOTE) is the classical method for this problem, but despite much research on SMOTE, the problem of synthetic-sample singularity remains. To address class imbalance and the diversity of generated samples, this paper proposes a hybrid resampling method for binary imbalanced datasets, radius estimation SMOTE (RE-SMOTE), designed on the basis of improvements to two oversampling methods, parameter-free SMOTE (PF-SMOTE) and SMOTE-Weighted Ensemble Nearest Neighbor (SMOTE-WENN). Initially, minority-class samples are divided into safe and boundary categories. Boundary minority samples are regenerated through linear interpolation with the nearest majority-class samples, whereas safe minority samples are randomly generated within a circular region centered on the initial safe minority sample, with a radius determined by the distance to the nearest majority-class sample. Furthermore, Weighted Edited Nearest Neighbor (WENN) and relative-density methods are used to clean the generated samples and remove low-quality ones. Relative density is calculated from the ratio of majority to minority samples among the reverse k-nearest neighbors. To verify the effectiveness and robustness of the proposed model, a comprehensive experimental study was conducted on 40 datasets drawn from real applications. The experimental results show the superiority of RE-SMOTE over other state-of-the-art methods. Code is available at: https://github.com/blue9792/RE-SMOTE (accessed on 30 September 2024).
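The first step above, splitting the minority class into safe and boundary samples, can be sketched with a k-nearest-neighbour test: a minority point whose neighbourhood is dominated by majority points sits on the boundary. This is an illustrative sketch, not RE-SMOTE itself; the 0.5 threshold and the toy data are assumptions.

```python
import numpy as np

def split_minority(X_min, X_maj, k=3):
    """Classify each minority sample as 'safe' or 'boundary' by the
    fraction of majority points among its k nearest neighbours
    (threshold 0.5 is illustrative)."""
    X_all = np.vstack([X_min, X_maj])
    labels = np.array([0] * len(X_min) + [1] * len(X_maj))  # 1 = majority
    safe, boundary = [], []
    for i, x in enumerate(X_min):
        d = np.linalg.norm(X_all - x, axis=1)
        nbrs = np.argsort(d)[1:k + 1]            # exclude the point itself
        maj_frac = labels[nbrs].mean()
        (boundary if maj_frac >= 0.5 else safe).append(i)
    return safe, boundary

# Two minority points cluster together (safe); one sits inside the
# majority cluster (boundary).
X_min = np.array([[0.0, 0.0], [0.1, 0.0], [0.9, 1.0]])
X_maj = np.array([[1.0, 1.0], [1.1, 1.0], [1.0, 1.1]])
safe, boundary = split_minority(X_min, X_maj, k=3)
```

Safe samples would then be regenerated inside a circle whose radius is the distance to the nearest majority point, while boundary samples are interpolated toward their nearest majority neighbours.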
Funding: Supported by the Communication University of China (CUC230A013) and the Fundamental Research Funds for the Central Universities.
Abstract: The advent of self-attention mechanisms within Transformer models has significantly propelled the advancement of deep learning algorithms, yielding outstanding achievements across diverse domains. Nonetheless, self-attention mechanisms falter when applied to datasets with intricate semantic content and extensive dependency structures. In response, this paper introduces a Diffusion Sampling and Label-Driven Co-attention Neural Network (DSLD), which adopts a diffusion sampling method to capture more comprehensive semantic information from the data. Additionally, the model leverages the joint correlation information of labels and data in computing the text representation, correcting semantic representation biases in the data and increasing the accuracy of the semantic representation. Finally, the model computes the classification results by synthesizing these rich semantic representations. Experiments on seven benchmark datasets show that the proposed model achieves competitive results compared with state-of-the-art methods.
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2023YFF1204402), the National Natural Science Foundation of China (Grant Nos. 12074079 and 12374208), the Natural Science Foundation of Shanghai (Grant No. 22ZR1406800), and the China Postdoctoral Science Foundation (Grant No. 2022M720815).
Abstract: The rapid advancement and broad application of machine learning (ML) have driven a groundbreaking revolution in computational biology. One of the most cutting-edge and important applications of ML is its integration with molecular simulations to improve the sampling efficiency of the vast conformational space of large biomolecules. This review focuses on recent studies that utilize ML-based techniques to explore protein conformational landscapes. We first highlight recent developments in ML-aided enhanced sampling methods, including heuristic algorithms and neural networks designed to refine the selection of reaction coordinates for constructing bias potentials or to facilitate exploration of unsampled regions of the energy landscape. We then review autoencoder-based methods that combine molecular simulations and deep learning to expand the search for protein conformations. Lastly, we discuss cutting-edge methodologies for the one-shot generation of protein conformations with precise Boltzmann weights. Collectively, this review demonstrates the promising potential of machine learning for revolutionizing our insight into the complex conformational ensembles of proteins.
Funding: The National Key Research and Development Program of China under contract No. 2023YFC3107701 and the National Natural Science Foundation of China under contract No. 42375143.
Abstract: To effectively extract multi-scale information from observation data and improve computational efficiency, a multi-scale second-order autoregressive recursive filter (MSRF) method is designed. The second-order autoregressive filter replaces the traditional first-order recursive filter used in the spatial multi-scale recursive filter (SMRF) method. The experimental results indicate that the MSRF scheme successfully extracts the various scales of information resolved by the observations and, compared with the SMRF scheme, improves computational accuracy and efficiency to some extent. The MSRF scheme not only propagates innovations over a longer distance without attenuation, but also reduces the mean absolute deviation between the reconstructed sea ice concentration and the observations by about 3.2% relative to the SMRF scheme. Furthermore, whereas the traditional first-order recursive filters used in the SMRF scheme require multiple filter passes, the MSRF scheme needs only two filter passes per iteration, greatly improving filtering efficiency. In the two-dimensional sea ice concentration experiment, the computation time of the MSRF scheme is only 1/7 that of the SMRF scheme. The MSRF scheme can therefore achieve better performance at lower computational cost, which is of great significance for future application in real-time ocean or sea ice data assimilation systems.
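The difference between the two filter orders can be illustrated on an impulse input. The sketch below is not the MSRF implementation: it uses a generic first-order smoother and a critically damped second-order autoregressive filter, both normalized to unit DC gain, with an illustrative coefficient alpha. The second-order filter's impulse response has a larger mean lag, i.e., it spreads an innovation farther before it decays.

```python
import numpy as np

def first_order(x, alpha=0.7):
    # y[n] = a*y[n-1] + (1 - a)*x[n]  (unit DC gain)
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = alpha * (y[n - 1] if n else 0.0) + (1 - alpha) * x[n]
    return y

def second_order(x, alpha=0.7):
    # Critically damped 2nd-order AR filter, also unit DC gain:
    # y[n] = 2a*y[n-1] - a^2*y[n-2] + (1 - a)^2 * x[n]
    y = np.zeros_like(x)
    for n in range(len(x)):
        y1 = y[n - 1] if n >= 1 else 0.0
        y2 = y[n - 2] if n >= 2 else 0.0
        y[n] = 2 * alpha * y1 - alpha ** 2 * y2 + (1 - alpha) ** 2 * x[n]
    return y

impulse = np.zeros(200)
impulse[0] = 1.0
h1 = first_order(impulse)    # impulse response, 1st order
h2 = second_order(impulse)   # impulse response, 2nd order
```

Both impulse responses sum to (almost exactly) 1, so neither attenuates the total innovation; the second-order response simply carries it over a longer distance, which is the property the abstract attributes to the MSRF scheme.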
Funding: This research work was supported by the National Key R&D Program of China (No. 2021YFB2700800) and the GHfund B (No. 202302024490).
Abstract: Peer-to-peer (P2P) overlay networks provide message transmission capabilities for blockchain systems, and improving their data transmission efficiency can greatly enhance blockchain performance. However, traditional blockchain P2P networks face a common challenge: a mismatch between upper-layer traffic requirements and the underlying physical network topology. This mismatch results in redundant data transmission and inefficient routing, severely constraining the scalability of blockchain systems. To address these pressing issues, we propose FPSblo, an efficient transmission method for blockchain networks. Our inspiration for FPSblo stems from the Farthest Point Sampling (FPS) algorithm, a well-established technique widely utilized in point-cloud image processing. In this work, we treat blockchain nodes as points in a point cloud and select a representative set of nodes to prioritize message forwarding, so that messages reach the network edge quickly and are evenly distributed. Compared with Kadcast, a classic improved transmission model for blockchain P2P networks, experimental findings show that FPSblo reduces transmission redundancy by 34.8% and the overload rate by 37.6%, enhancing the transmission capability of the blockchain P2P network.
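The FPS algorithm that inspired the method above is straightforward: greedily pick the point farthest from everything already selected, so the chosen points (here, forwarding nodes) spread evenly over the space. The sketch below is plain FPS on a toy 2-D point set, not the FPSblo protocol itself.

```python
import numpy as np

def farthest_point_sampling(points, k, start=0):
    """Select k indices by farthest point sampling: repeatedly add the
    point with the greatest distance to the already-selected set."""
    selected = [start]
    # d[i] = distance from point i to the nearest selected point
    d = np.linalg.norm(points - points[start], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))
        selected.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return selected

pts = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 0.0], [1.0, 1.0]])
chosen = farthest_point_sampling(pts, k=3)  # skips the near-duplicate point
```

Note how the near-duplicate of the start point is selected last, if at all: FPS inherently avoids redundancy, which is exactly the property FPSblo exploits to cut duplicate message forwarding.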
基金supported by the Na‑tional Key R&D Program of China(No.2022YFB3402600)the National Science Fund for Distinguished Young Scholars(No.51925505)+1 种基金the General Program of National Natural Science Foundation of China(No.52275491)Joint Funds of the National Natural Science Foundation of China(No.U21B2081).
Abstract: Freeform surface measurement is a key enabling technology for product quality control and reverse engineering in the aerospace field. Surface measurement based on multi-sensor fusion, such as a laser scanner combined with a contact probe, exploits the complementary characteristics of different sensors and has attracted wide attention in industry and academia. The number and distribution of measurement points significantly affect the efficiency of multi-sensor fusion and the accuracy of surface reconstruction. An aggregation-value-based active sampling method for multi-sensor freeform surface measurement and reconstruction is proposed. Probe measurement points are generated actively through game-theoretic iteration, and the importance of each measurement point to multi-sensor fusion is defined as the Shapley value of that point. The problem of obtaining the optimal measurement point set is thus transformed into maximizing the aggregation value of the sample set. Simulation and real measurement results verify that the proposed method significantly reduces the required probe sample size while maintaining the measurement accuracy of multi-sensor fusion.
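Shapley values such as those used above are usually estimated by Monte Carlo averaging of marginal contributions over random orderings. The sketch below shows this generic estimator with a hypothetical toy value function (it stands in for the paper's aggregation value, whose exact definition is not reproduced here).

```python
import numpy as np

def shapley_mc(value_fn, n_players, n_perm=200, rng=0):
    """Monte-Carlo Shapley estimate: average each player's marginal
    contribution value_fn(S ∪ {p}) - value_fn(S) over random orderings."""
    rng = np.random.default_rng(rng)
    phi = np.zeros(n_players)
    for _ in range(n_perm):
        order = rng.permutation(n_players)
        coalition, prev = set(), value_fn(set())
        for p in order:
            coalition.add(p)
            cur = value_fn(coalition)
            phi[p] += cur - prev
            prev = cur
    return phi / n_perm

# Hypothetical additive value function: "points" 0 and 1 each add one
# unit of value, point 2 adds nothing, so the estimate is exact.
v = lambda s: float(len(s & {0, 1}))
phi = shapley_mc(v, n_players=3)
```

In the active-sampling setting, candidate probe points with the highest estimated Shapley value would be measured first, maximizing the aggregation value of the sample set per probe touch.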
基金provided by Shaanxi Province’s Key Research and Development Plan(No.2022NY-087).
Abstract: To address the slow search and tortuous paths of the Rapidly-exploring Random Tree (RRT) algorithm, a feedback-biased sampling RRT, called FS-RRT, is proposed. Firstly, to improve sampling efficiency and shorten the search time, the sampling region of the random tree is restricted. Secondly, to exploit information about obstacles and shorten the path length, a feedback-biased sampling strategy replaces traditional random sampling: a collision between an expanding node and an obstacle generates feedback so that subsequent expansion avoids a specific angular range. Thirdly, an inverse optimization strategy removes redundant points from the initial path, making the path shorter and more accurate. Finally, to ensure smooth robot operation in practice, auxiliary points are used to optimize a cubic Bezier curve so that the optimized path does not cross obstacles. The experimental results demonstrate that, compared with the traditional RRT algorithm and other mainstream algorithms, FS-RRT performs favorably in running time, number of search iterations, and path length. The improved algorithm also performs well in narrow obstacle environments, as further confirmed by experimental verification.
Abstract: To accurately measure an object's three-dimensional surface shape, the influence of sampling on the measurement was studied. First, on the basis of spectral expressions derived through the Fourier transform, the generation of CCD pixels was analyzed and its expression given. Then, from the discrete expression of the deformed fringes obtained after sampling, the Fourier spectrum expression was derived, yielding an infinitely repeated "spectral island" pattern in the frequency domain. Finally, after using a low-pass filter to remove the high-order harmonic components and retain only one fundamental-frequency component, the inverse Fourier transform was used to reconstruct the signal. A method of reducing the sampling interval, i.e., reducing the number of pixels per sampling point, was proposed to increase the ratio m between the sampling frequency and the fundamental frequency of the grating, so as to reconstruct the object's surface shape more accurately under the condition m > 4. The principle was verified through simulation and experiment. In the simulation, the sampling intervals were 8, 4, 2, and 1 pixels; relative to the first case, the maximum absolute errors of the last three cases were 88.80%, 38.38%, and 31.50% of the first, respectively, and the corresponding average absolute errors were 71.84%, 43.27%, and 32.26%. This demonstrates that the smaller the sampling interval, the better the recovery. Using the same four sampling intervals in the experiment led to the same conclusions. The simulated and experimental results show that reducing the sampling interval improves the accuracy of surface shape measurement and achieves better reconstruction.
Funding: Supported by the Researchers Supporting Project (No. RSPD2024R848), King Saud University, Riyadh, Saudi Arabia.
Abstract: Disjoint sampling is critical for the rigorous and unbiased evaluation of state-of-the-art (SOTA) models such as Attention Graph and Vision Transformer. When training, validation, and test sets overlap or share data, a bias is introduced that inflates performance metrics and prevents accurate assessment of a model's true ability to generalize to new examples. This paper presents an innovative disjoint sampling approach for training SOTA models for Hyperspectral Image Classification (HSIC). By separating training, validation, and test data without overlap, the proposed method enables a fairer evaluation of how well a model can classify pixels it was not exposed to during training or validation. Experiments demonstrate that the approach significantly improves generalization compared with alternatives that include training and validation data in the test data (a trivial approach that tests the model on the entire hyperspectral dataset to generate the ground-truth maps; it produces higher accuracy but ultimately results in low generalization performance). Disjoint sampling eliminates data leakage between sets and provides reliable metrics for benchmarking progress in HSIC, and it is critical for advancing SOTA models and their real-world application to large-scale land mapping with hyperspectral sensors. Overall, with the disjoint test set, the deep models achieve 96.36% accuracy on the Indian Pines data, 99.73% on the Pavia University data, 98.29% on the University of Houston data, 99.43% on the Botswana data, and 99.88% on the Salinas data.
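The core requirement above, that every labeled pixel land in exactly one of the three sets, can be enforced with a simple index-level split. This generic sketch (split fractions are illustrative) is not the paper's full sampling scheme, but it shows the leakage-free property being claimed.

```python
import numpy as np

def disjoint_split(n, frac=(0.6, 0.2, 0.2), seed=0):
    """Partition indices 0..n-1 into disjoint train/val/test sets:
    a single permutation is cut into three contiguous slices, so no
    index can appear in more than one set."""
    idx = np.random.default_rng(seed).permutation(n)
    n_tr = int(frac[0] * n)
    n_va = int(frac[1] * n)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

train_idx, val_idx, test_idx = disjoint_split(100)
```

Because the split is done on indices before any model sees the data, no pixel evaluated at test time can have influenced training or validation, which is the "disjoint" guarantee the abstract argues for.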
基金supported by the National Science Foundation(Grant No.DMS-1440415)partially supported by a grant from the Simons Foundation,NSF Grants DMS-1720171 and DMS-2110895a Discovery Grant from Natural Sciences and Engineering Research Council of Canada.
Abstract: We propose a new framework for the sampling, compression, and analysis of distributions of point sets and other geometric objects embedded in Euclidean spaces. Our approach involves constructing a tensor called the RaySense sketch, which captures nearest neighbors from the underlying geometry of the points along a set of rays. We explore various operations that can be performed on the RaySense sketch, leading to different properties and potential applications. Statistical information about the data set can be extracted from the sketch, independent of the ray set, and line integrals on point sets can be computed efficiently using it. We also present several examples illustrating applications of the proposed strategy in practical scenarios.
Funding: Supported by the National Key Research and Development Program of China under Grant 2023YFB2407400.
Abstract: In the cascaded H-bridge inverter (CHBI) with a supercapacitor and a dc-dc stage, inherent second-order harmonic power flows through each submodule (SM), causing fluctuations in both the dc-link voltage and the dc-dc current. Proportional-integral (PI) control of the dc-dc stage is limited in handling these fluctuations at variable output frequencies. This paper aims to coordinately control the second-order harmonic voltage and current fluctuations in the CHBI. The presented method configures a specific second-order harmonic voltage reference for the dc-dc stage, equipped with a maximum voltage-fluctuation constraint and a suitable phase, and a PI-resonant controller is used to track the configured reference. This regulates the second-order harmonic fluctuation in the average dc-link voltage among the SMs to within a certain value; importantly, the second-order harmonic fluctuation in the dc-dc current is also reduced. Simulation and experimental results demonstrate the effectiveness of the presented method.
基金Project supported by the Key National Natural Science Foundation of China(Grant No.62136005)the National Natural Science Foundation of China(Grant Nos.61922087,61906201,and 62006238)。
Abstract: Physics-informed neural networks (PINNs) have become an attractive machine-learning framework for obtaining solutions to partial differential equations (PDEs). PINNs embed initial, boundary, and PDE constraints into the loss function. The performance of PINNs is generally affected by both training and sampling: training methods focus on overcoming the difficulties caused by the special PDE residual loss of PINNs, while sampling methods are concerned with the location and distribution of the points at which the PDE residual loss is evaluated. However, a common problem among the original PINNs is that they omit special temporal-information utilization during the training or sampling stages when dealing with an important PDE category, namely time-dependent PDEs, for which temporal information plays a key role. One method, Causal PINN, considers temporal causality at the training level but not at the sampling level; incorporating temporal knowledge into sampling remains to be studied. To fill this gap, we propose a novel temporal-causality-based adaptive sampling method that dynamically determines the sampling ratio according to both the PDE residual and temporal causality. By designing a sampling ratio determined by both residual loss and temporal causality to control the number and location of sampled points in each temporal sub-domain, we provide a practical solution for incorporating temporal information into sampling. Numerical experiments on several nonlinear time-dependent PDEs, including the Cahn-Hilliard, Korteweg-de Vries, Allen-Cahn, and wave equations, show that the proposed sampling method improves performance: prediction accuracy can improve by up to two orders of magnitude compared with other methods, especially when sampling points are limited.
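One way such a causality-gated sampling ratio can be built (a sketch under assumed weighting, not the paper's exact formula) is to let each temporal sub-domain's share of new collocation points grow with its own residual while being exponentially damped by the cumulative residual of all earlier sub-domains, echoing the causal weights of Causal PINN.

```python
import numpy as np

def causal_sampling_ratio(residuals, eps=1.0):
    """Causality-weighted sampling ratio over temporal sub-domains.

    Each sub-domain i gets weight L_i * exp(-eps * sum_{k<i} L_k):
    proportional to its own residual L_i, but gated so later windows
    are down-weighted until earlier ones are resolved. The weighting
    is illustrative; the paper's exact form may differ.
    """
    residuals = np.asarray(residuals, dtype=float)
    cum_prev = np.concatenate([[0.0], np.cumsum(residuals)[:-1]])
    weights = residuals * np.exp(-eps * cum_prev)
    return weights / weights.sum()

# With equal residuals everywhere, the causal gate alone makes the
# ratio strictly decreasing in time: solve early windows first.
ratio = causal_sampling_ratio([2.0, 2.0, 2.0, 2.0])
```

As the early residuals shrink during training, the exponential gate opens and the sampling budget automatically migrates toward later sub-domains, which is the adaptive behaviour the abstract describes.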