Peer-to-peer (P2P) overlay networks provide message transmission capabilities for blockchain systems, and improving data transmission efficiency in P2P networks can greatly enhance the performance of blockchain systems. However, traditional blockchain P2P networks face a common challenge: a mismatch between upper-layer traffic requirements and the underlying physical network topology. This mismatch results in redundant data transmission and inefficient routing, severely constraining the scalability of blockchain systems. To address these pressing issues, we propose FPSblo, an efficient transmission method for blockchain networks. Our inspiration for FPSblo stems from the Farthest Point Sampling (FPS) algorithm, a well-established technique widely used in point cloud processing. In this work, we treat blockchain nodes as points in a point cloud and select a representative set of nodes to prioritize message forwarding, so that messages reach the network edge quickly and are evenly distributed. Moreover, we compare our model with the Kadcast transmission model, a classic improved model for blockchain P2P transmission networks; the experimental findings show that FPSblo reduces transmission redundancy by 34.8% and the overload rate by 37.6%. The experimental analysis demonstrates that FPSblo enhances the transmission capabilities of the blockchain P2P network.
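As a hedged illustration of the farthest-point-sampling idea this abstract borrows from point cloud processing (a minimal sketch, not the FPSblo implementation; the node coordinates and distance function are hypothetical stand-ins):

```python
def farthest_point_sampling(points, k, dist):
    """Greedy FPS: repeatedly pick the point whose distance to the
    already-selected set is largest, yielding k well-spread representatives."""
    selected = [points[0]]
    # Minimum distance from each point to the selected set so far.
    d = {p: dist(p, selected[0]) for p in points}
    while len(selected) < k:
        nxt = max(points, key=lambda p: d[p])
        selected.append(nxt)
        for p in points:
            d[p] = min(d[p], dist(p, nxt))
    return selected

# Toy use: nodes as scalar coordinates with absolute-difference distance.
reps = farthest_point_sampling([0, 1, 2, 10, 11, 20], 3, lambda a, b: abs(a - b))
```

In FPSblo's setting the "distance" would presumably reflect network topology rather than geometry; the scalar metric here is only for illustration.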
Physics-informed neural networks (PINNs) have become an attractive machine learning framework for obtaining solutions to partial differential equations (PDEs). PINNs embed initial, boundary, and PDE constraints into the loss function. The performance of PINNs is generally affected by both training and sampling. Specifically, training methods focus on how to overcome the training difficulties caused by the special PDE residual loss of PINNs, while sampling methods are concerned with the location and distribution of the sampling points at which the PDE residual loss is evaluated. However, a common problem among these original PINNs is that they make no special use of temporal information during training or sampling when dealing with an important PDE category, namely time-dependent PDEs, where temporal information plays a key role. One method, Causal PINN, considers temporal causality at the training level but not at the sampling level; incorporating temporal knowledge into sampling remains to be studied. To fill this gap, we propose a novel temporal causality-based adaptive sampling method that dynamically determines the sampling ratio according to both the PDE residual and temporal causality. By designing a sampling ratio determined by both residual loss and temporal causality to control the number and location of sampled points in each temporal sub-domain, we provide a practical way of incorporating temporal information into sampling. Numerical experiments on several nonlinear time-dependent PDEs, including the Cahn–Hilliard, Korteweg–de Vries, Allen–Cahn, and wave equations, show that the proposed sampling method improves performance. We demonstrate that this relatively simple sampling method can improve prediction performance by up to two orders of magnitude compared with other methods, especially when sampling points are limited.
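The abstract does not state the exact sampling-ratio formula; the sketch below is an assumed combination of Causal PINN-style causal weights with per-sub-domain residual losses (the exponential weighting and the `eps` parameter are assumptions, not the paper's definition):

```python
import numpy as np

def causal_sampling_ratios(residual_losses, eps=1.0):
    """Toy temporal-causality-weighted sampling ratio: sub-domain i gets a
    causal weight w_i = exp(-eps * sum of losses of earlier sub-domains),
    and the sampling ratio is proportional to w_i * L_i, so early
    sub-domains with large residuals attract more sampled points."""
    L = np.asarray(residual_losses, dtype=float)
    cum = np.concatenate([[0.0], np.cumsum(L)[:-1]])  # loss accumulated before i
    w = np.exp(-eps * cum)
    scores = w * L
    return scores / scores.sum()

# Hypothetical residual losses for three temporal sub-domains.
ratios = causal_sampling_ratios([0.5, 1.0, 2.0])
```

The ratios sum to one and would control how many of the collocation points land in each temporal sub-domain.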
Polylactic acid (PLA) is a promising polymer material used as a substitute for traditional plastics, and practical applications strictly require an accurate molecular weight distribution range for PLA. Therefore, exploring the relationship between synthesis conditions and PLA molecular weight is crucially important. In this work, direct polycondensation combined with overlay sampling uniform design (OSUD) was applied to synthesize low molecular weight PLA. A multiple regression model and two artificial neural network models relating PLA molecular weight to reaction temperature, reaction time, and catalyst dosage were then developed for molecular weight prediction. The characterization results indicated that low molecular weight PLA was efficiently synthesized under this method. Meanwhile, the experimental dataset acquired from OSUD successfully supported three predictive models for PLA molecular weight. Among them, both artificial neural network models had significantly better predictive performance than the regression model. Notably, the radial basis function neural network model had the best predictive accuracy, with a mean relative error of only 11.9% on the validation dataset, an improvement of 67.7% over the traditional multiple regression model. This work successfully predicted PLA molecular weight in a direct polycondensation process using artificial neural network models combined with OSUD, providing guidance for the future implementation of molecular weight-controlled polymer synthesis.
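The 11.9% figure quoted above is a mean relative error; for clarity, the metric itself (with hypothetical molecular weights and predictions) is:

```python
def mean_relative_error(y_true, y_pred):
    """Mean relative error: average of |y - y_hat| / |y| over the dataset,
    the validation metric quoted for the RBF network model (11.9%)."""
    return sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical molecular weights (g/mol) and model predictions.
mre = mean_relative_error([10000.0, 20000.0], [9000.0, 22000.0])
```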
How to use a few defect samples to complete defect classification is a key challenge in the production of mobile phone screens. An attention-relation network for mobile phone screen defect classification is proposed in this paper. The architecture of the attention-relation network contains two modules: a feature extraction module and a feature metric module. Different from other few-shot models, an attention mechanism is applied to metric learning in our model to measure the distance between features, so as to attend to the correlation between features and suppress unwanted information. Besides, we combine dilated convolution and skip connections to extract more feature information for follow-up processing. We validate the attention-relation network on a mobile phone screen defect dataset. The experimental results show that the classification accuracy of the attention-relation network is 0.9486 under the 5-way 1-shot training strategy and 0.9039 under the 5-way 5-shot setting. It achieves excellent classification performance on mobile phone screen defects and outperforms competing models by a clear margin.
In this paper, we present an interval model of networked control systems with time-varying sampling periods and time-varying network-induced delays, and discuss the stability of networked control systems using Lyapunov stability theory. A sufficient stability condition is obtained by solving a set of linear matrix inequalities. Finally, an illustrative example demonstrates the correctness and effectiveness of the proposed approach.
Hand gestures are a natural way for human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its various applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proven modules to learn both short-term and long-term features from video inputs while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction, with the ConvNets for all groups sharing parameters. To learn long-term features, the outputs from all ConvNets are fed into a long short-term memory (LSTM) network, which predicts the final classification result. The new model has been tested on two popular hand gesture datasets, the Jester dataset and the Nvidia dataset. Compared with other models, our model produces very competitive results. The robustness of the new model has also been demonstrated on an augmented dataset with enhanced diversity of hand gestures.
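The group-wise frame selection described above can be sketched as follows (the group boundaries and counts are illustrative assumptions; the paper additionally extracts optical flow, which is omitted here):

```python
import random

def sample_frames(num_frames, num_groups, rng=random.Random(0)):
    """Segment-based frame sampling: split the frame indices into
    num_groups roughly equal groups and randomly pick one frame
    from each group, giving sparse coverage of the whole clip."""
    bounds = [round(i * num_frames / num_groups) for i in range(num_groups + 1)]
    return [rng.randrange(bounds[i], bounds[i + 1]) for i in range(num_groups)]

# Hypothetical 60-frame clip split into 8 groups.
picks = sample_frames(60, 8)
```

Each selected frame would then be paired with its optical flow snapshot and passed through the shared-parameter ConvNets.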
Graph convolutional networks (GCNs) have received significant attention from various research fields due to their excellent performance in learning graph representations. Although GCNs perform well compared with other methods, they still face challenges: training a GCN model for large-scale graphs in the conventional way requires high computation and storage costs. Therefore, motivated by the urgent need for efficiency and scalability in GCN training, sampling methods have been proposed and have achieved significant results. In this paper, we categorize sampling methods based on their sampling mechanisms and provide a comprehensive survey of sampling methods for efficient GCN training. To highlight the characteristics and differences of these methods, we present a detailed comparison within each category and a further overall comparative analysis across all categories. Finally, we discuss some challenges and future research directions for sampling methods.
We consider solving forward and inverse partial differential equations (PDEs) that have sharp solutions with physics-informed neural networks (PINNs) in this work. In particular, to better capture the sharpness of the solution, we propose adaptive sampling methods (ASMs) based on the residual and the gradient of the solution. We first present a residual-only-based ASM, denoted by ASM I. In this approach, we train the neural network using a small number of residual points and divide the computational domain into a certain number of sub-domains; we then identify the sub-domain with the largest mean absolute value of the residual and add the points with the largest absolute residuals in that sub-domain as new residual points. We further develop a second type of ASM (denoted by ASM II) based on both the residual and the gradient of the solution, motivated by the fact that the residual alone may not efficiently capture the sharpness of the solution. The procedure of ASM II is almost the same as that of ASM I, except that the new residual points must have not only large residuals but also large gradients. To demonstrate the effectiveness of the present methods, we use both ASM I and ASM II to solve a number of PDEs, including the Burgers equation, the compressible Euler equations, the Poisson equation over an L-shaped domain, and the high-dimensional Poisson equation. The numerical results show that sharp solutions can be well approximated by either ASM I or ASM II, and both methods deliver much more accurate solutions than the original PINNs with the same number of residual points. Moreover, the ASM II algorithm performs better in terms of accuracy, efficiency, and stability than ASM I. This means that the gradient of the solution improves the stability and efficiency of the adaptive sampling procedure as well as the accuracy of the solution. Furthermore, we also employ a similar adaptive sampling technique for the data points of the boundary conditions (BCs) when the sharpness of the solution is near the boundary. The results for the L-shaped Poisson problem indicate that the present method can significantly improve efficiency, stability, and accuracy.
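A rough sketch of the ASM I selection step described above, for a 1-D domain (the sub-domain count, candidate points, and residual values are hypothetical; the actual method runs inside PINN training, which is omitted):

```python
import numpy as np

def asm1_new_points(xs, residuals, n_sub=4, n_new=2):
    """ASM I-style selection sketch: split a 1-D domain into n_sub
    sub-domains, find the one with the largest mean |residual|, and
    return the n_new points there with the largest |residual|."""
    xs = np.asarray(xs, dtype=float)
    r = np.abs(np.asarray(residuals, dtype=float))
    edges = np.linspace(xs.min(), xs.max(), n_sub + 1)
    idx = np.clip(np.searchsorted(edges, xs, side="right") - 1, 0, n_sub - 1)
    means = [r[idx == i].mean() if (idx == i).any() else 0.0 for i in range(n_sub)]
    worst = int(np.argmax(means))                 # sub-domain with largest mean |residual|
    cand = np.where(idx == worst)[0]
    return xs[cand[np.argsort(r[cand])[-n_new:]]]  # largest residuals inside it

# Hypothetical candidate points with a residual spike near x = 0.9.
xs_new = asm1_new_points([0.0, 0.1, 0.3, 0.6, 0.9, 0.95],
                         [0.1, 0.2, 0.1, 0.1, 2.0, 3.0])
```

ASM II would additionally filter these candidates by the magnitude of the solution gradient.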
The problem of guaranteed cost control for networked control systems (NCSs) with time-varying delays, time-varying sampling intervals, and signal quantization was investigated, wherein the physical plant was continuous-time and the control input discrete-time. Using an input delay approach and a sector bound method, the network-induced delays, quantization parameter, and sampling intervals were treated in one framework, with both the state and the control input quantized in logarithmic form. A novel Lyapunov function with discontinuities, which took full advantage of the characteristic information of the NCS, was exploited, and it was shown that this Lyapunov function decreases at the jump instants. Furthermore, the Leibniz-Newton formula and free-weighting matrix methods were used to obtain guaranteed cost controller design conditions that depend on the characteristic information of the NCS. A numerical example illustrates the effectiveness of the proposed methods.
This is the second part of our series of works on failure-informed adaptive sampling for physics-informed neural networks (PINNs). In our previous work (SIAM J. Sci. Comput. 45:A1971–A1994), we presented an adaptive sampling framework using the failure probability as a posterior error indicator, where a truncated Gaussian model was adopted for estimating the indicator. Here, we present two extensions of that work. The first combines the framework with a re-sampling technique, so that the new algorithm can maintain a constant training size. This is achieved through cosine annealing, which gradually transforms the sampling of collocation points from uniform to adaptive as training progresses. The second extension adopts the subset simulation (SS) algorithm as the posterior model (instead of the truncated Gaussian model) for estimating the error indicator, which can more effectively estimate the failure probability and generate new effective training points in the failure region. We investigate the performance of the new approach on several challenging problems, and numerical experiments demonstrate a significant improvement over the original algorithm.
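The abstract names cosine annealing as the mechanism that shifts collocation points from uniform to adaptive while the training set size stays constant; a minimal assumed schedule (the exact form used in the paper may differ) is:

```python
import math

def adaptive_fraction(step, total_steps):
    """Cosine-annealed mixing weight: the share of collocation points
    drawn adaptively grows smoothly from 0 to 1 over training; the
    remaining share is drawn uniformly, keeping the total count fixed."""
    return 0.5 * (1.0 - math.cos(math.pi * step / total_steps))

# Start, midpoint, and end of a hypothetical 100-step training run.
fractions = [adaptive_fraction(s, 100) for s in (0, 50, 100)]
```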
Background: Functional mapping, despite its proven efficiency, suffers from a "chicken or egg" scenario, in that poor spatial features lead to inadequate spectral alignment and vice versa during training, often resulting in slow convergence, high computational costs, and learning failures, particularly when small datasets are used. Methods: A novel method is presented for dense-shape correspondence, whereby the spatial information transformed by neural networks is combined with the projections onto spectral maps to overcome the "chicken or egg" challenge by selectively sampling only points with high confidence in their alignment. These points then contribute to the alignment and spectral loss terms, boosting training and accelerating convergence by a factor of five. To ensure fully unsupervised learning, the Gromov–Hausdorff distance metric was used to select the points with the maximal alignment score, i.e., those displaying the most confidence. Results: The effectiveness of the proposed approach was demonstrated on several benchmark datasets, with results superior to those of spectral- and spatial-based methods. Conclusions: The proposed method provides a promising new approach to dense-shape correspondence, addressing the key challenges in the field and offering significant advantages over current methods, including faster convergence, improved accuracy, and reduced computational costs.
There are two distinct types of domains, design- and cross-classes domains, with the former extensively studied under the topic of small-area estimation. In natural resource inventory, however, most classes listed in the condition tables of national inventory programs are characterized as cross-classes domains, such as vegetation type, productivity class, and age class. To date, challenges remain in inventorying cross-classes domains because these domains usually have an unknown sampling frame and spatial distribution, so that inference relies on population-level rather than domain-level sampling. Multiple challenges are noteworthy: (1) efficient sampling strategies are difficult to develop because of little a priori information about the target domain; (2) domain inference relies on a sample designed for the population, so within-domain sample sizes can be too small to support precise estimation; and (3) increasing the sample size for the population does not ensure an increase for the domain, so the actual sample size for a target domain remains highly uncertain, particularly for small domains. In this paper, we introduce a design-based generalized systematic adaptive cluster sampling (GSACS) scheme for inventorying cross-classes domains. Design-unbiased Hansen-Hurwitz and Horvitz-Thompson estimators are derived for domain totals and compared within GSACS and with systematic sampling (SYS). Comprehensive Monte Carlo simulations show that: (1) the GSACS Hansen-Hurwitz and Horvitz-Thompson estimators are unbiased and equally efficient, whereas the latter outperforms the former in supporting a sample of size one; (2) SYS is a special case of GSACS, while the latter outperforms the former in terms of increased efficiency and reduced intensity; (3) the GSACS Horvitz-Thompson variance estimator is design-unbiased for a single SYS sample; and (4) rules-of-thumb summarized with respect to sampling design and spatial effect improve precision. Because inventorying a mini domain is analogous to inventorying a rare variable, alternative network sampling procedures are also readily available for inventorying cross-classes domains.
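The Horvitz-Thompson estimator mentioned above has a compact general form; the minimal sketch below uses hypothetical inclusion probabilities and is not the GSACS-specific derivation:

```python
def horvitz_thompson_total(values, inclusion_probs):
    """Design-unbiased Horvitz-Thompson estimator of a (domain) total:
    each sampled unit's value is weighted by the inverse of its
    first-order inclusion probability."""
    return sum(y / p for y, p in zip(values, inclusion_probs))

# Hypothetical sample: two units, each with inclusion probability 0.25.
total_hat = horvitz_thompson_total([3.0, 5.0], [0.25, 0.25])
```

Under GSACS the inclusion probabilities would come from the network/cluster structure of the design rather than being fixed constants.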
With the rapid growth of network bandwidth, traffic identification has become an important challenge for network management and security. In recent years, packet sampling has been widely used in most network management systems. In this paper, to improve the accuracy of network traffic identification, sampled NetFlow data are applied to traffic identification, and the impact of packet sampling on the accuracy of the identification method is studied. This study includes feature selection, a metric correlation analysis for application behavior, and a traffic identification algorithm. Theoretical analysis and experimental results show that the significance of behavioral characteristics decreases in a packet sampling environment, and that the correlation analysis exhibits different trends for different features. However, as long as the number of flows meets the statistical requirement, the feature selection and the correlation degree are independent of the sampling ratio. At high sampling ratios, where less effective information is retained, the identification accuracy is much lower than for unsampled packets. Finally, to improve identification accuracy, we propose a Deep Belief Networks Application Identification (DBNAI) method, which achieves better classification performance than other state-of-the-art methods.
Material identification is critical for understanding the relationship between mechanical properties and the associated mechanical functions. However, it is a challenging task, especially when the material behavior is highly nonlinear, as is common in biological tissue. In this work, we identify unknown material properties in continuum solid mechanics via physics-informed neural networks (PINNs). To improve the accuracy and efficiency of PINNs, we develop efficient strategies for nonuniformly sampling observational data. We also investigate different approaches for enforcing Dirichlet-type boundary conditions (BCs) as soft or hard constraints. Finally, we apply the proposed methods to a diverse set of time-dependent and time-independent solid mechanics examples spanning linear elastic and hyperelastic materials. The estimated material parameters achieve relative errors of less than 1%. As such, this work is relevant to diverse applications, including optimizing structural integrity and developing novel materials.
This paper studies the problem of designing an H∞ controller for networked control systems (NCSs) subject to both network-induced time delays and packet dropouts, using an actively varying sampling period method in which the sampling period switches within a finite set. A novel linear estimation-based method is proposed to compensate for packet dropouts, and an H∞ controller design based on a multi-objective optimization methodology is also presented. Simulation results illustrate the effectiveness of the actively varying sampling period method and of the linear estimation-based packet dropout compensation.
In recent research on network sampling, some sampling concepts have been misunderstood, and the variance of subnets has not been taken into account. We propose correct definitions of the sample and the sampling rate in network sampling, as well as a formula for calculating the variance of subnets. Then, three commonly used sampling strategies are applied to databases of the connecting nearest-neighbor (CNN) model, a random network, and a small-world network to explore the variance in network sampling. As the results show, snowball sampling yields the largest subnet variance but does well in capturing the network structure. The variance of networks sampled by the hub and random strategies is much smaller. The hub strategy performs well in reflecting the properties of the whole network, while random sampling obtains more accurate results in estimating the clustering coefficient.
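A minimal sketch of the snowball strategy compared above (the toy adjacency list is hypothetical; the paper's databases are CNN-model, random, and small-world networks):

```python
def snowball_sample(adj, seed, waves):
    """Snowball sampling: start from a seed node and, for a fixed number
    of waves, add all neighbours of the current frontier to the sample."""
    sampled = {seed}
    frontier = {seed}
    for _ in range(waves):
        nxt = set()
        for node in frontier:
            nxt.update(adj.get(node, ()))
        frontier = nxt - sampled   # only newly reached nodes expand next wave
        sampled |= frontier
    return sampled

# Toy path-like graph: 0-1, 0-2, 1-3, 3-4.
graph = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
sub = snowball_sample(graph, seed=0, waves=2)
```

Because it follows edges outward, snowball sampling preserves local structure well, which is consistent with the high structural fidelity (and high subnet variance) reported above.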
With the advent of large-scale, high-speed IPv6 networks, effective multi-point traffic sampling is becoming a necessity. A distributed multi-point traffic sampling method that provides an accurate and efficient solution for measuring IPv6 traffic is proposed. The method samples IPv6 traffic based on an analysis of the bit randomness of each byte in the packet header, offering a way to consistently select the same subset of packets at each measurement point, which satisfies the requirement of distributed multi-point measurement. Finally, using real IPv6 traffic traces, it is shown that the sampled traffic data have good uniformity, satisfy the requirement of sampling randomness, and correctly reflect the packet size distribution of the full packet trace.
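One common way to realize consistent multi-point selection is a deterministic test on invariant header bytes, so every measurement point makes the same keep/drop decision for the same packet. The hash-based sketch below is an assumption for illustration, not the paper's byte-randomness analysis:

```python
import hashlib

def select_packet(header_bytes, ratio_bits=2):
    """Consistent sampling test: hash invariant header bytes and keep the
    packet when the first digest byte falls in a fixed residue class,
    giving a deterministic 1-in-2**ratio_bits sample."""
    digest = hashlib.sha256(header_bytes).digest()
    return digest[0] % (1 << ratio_bits) == 0

# Two "measurement points" applying the same rule to the same packet agree.
pkt = bytes.fromhex("60000000001406402001")  # hypothetical IPv6 header prefix
agree = select_packet(pkt) == select_packet(pkt)
```

Because the decision depends only on packet content, independently placed monitors select the same subset without any coordination.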
The bounded consensus tracking problems of second-order multi-agent systems under directed networks with sampling delay are addressed in this paper. When the sampling delay is longer than a sampling period, new protocols based on sampled-data control are proposed so that each agent can track the time-varying reference state of the virtual leader. By using the delay decomposition approach, the augmented matrix method, and frequency domain analysis, necessary and sufficient conditions are obtained which guarantee that bounded consensus tracking is realized. Furthermore, numerical simulations are presented to demonstrate the effectiveness of the theoretical results.
Tree ring dating plays an important role in obtaining past climate information, so fundamental work on obtaining tree ring samples in typical climate regions is particularly essential. The optimum distribution of tree ring sampling sites based on climate information from the Climate Observation Network (the ORPOM model) is presented in this article. In this setup, the tree rings in a typical region are used for surface representation, with excellent correlation with the climate information as the main principle. Taking the Horqin Sandy Land in the cold and arid region of China as an example, the optimum distribution range of tree ring sampling sites was obtained through application of the ORPOM model, which is considered a reasonably practical scheme.
One of the key assumptions in respondent-driven sampling (RDS) analysis, the "random selection assumption," is that respondents recruit their peers randomly from their personal networks. The objective of this study was to verify this assumption against empirical data on egocentric networks. Methods: We conducted an egocentric network study among young drug users in China, in which RDS was used to recruit this hard-to-reach population. If the random recruitment assumption holds, the RDS-estimated population proportions should be similar to the actual population proportions. Following this logic, we first calculated the population proportions of five visible variables (gender, age, education, marital status, and drug use mode) among the total drug-use alters from which the RDS sample was drawn, and then estimated the RDS-adjusted population proportions and their 95% confidence intervals in the RDS sample. If the random recruitment assumption holds, the 95% confidence intervals estimated from the RDS sample should include the population proportions calculated from the total drug-use alters. Results: The evaluation of the RDS sample indicated that it reached convergence of RDS compositions and included a broad cross-section of the hidden population. The findings demonstrate that the random selection assumption holds for three group traits but not for the other two: egos randomly recruited subjects across age groups, marital status, and drug use modes from their network alters, but not across gender and education levels. Conclusions: This study demonstrates the occurrence of non-random recruitment, indicating that the recruitment of subjects in this RDS study was not completely random. Future studies are needed to assess the extent to which population proportion estimates can be biased when the assumption is violated for some group traits in RDS samples.
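For context, RDS-adjusted proportions are commonly computed with inverse-degree weighting; the RDS-II (Volz-Heckathorn) style estimator sketched below is a standard approach, not necessarily the exact one used in this study, and the sample data are hypothetical:

```python
def rds_proportion(traits, degrees, group):
    """RDS-II-style proportion estimate: weight each respondent by the
    inverse of their reported personal network size (degree), which
    corrects for high-degree people being over-recruited."""
    weights = [1.0 / d for d in degrees]
    in_group = sum(w for t, w in zip(traits, weights) if t == group)
    return in_group / sum(weights)

# Hypothetical recruits: trait labels and self-reported degrees.
p_female = rds_proportion(["f", "m", "f", "m"], [2, 4, 4, 2], "f")
```

Comparing such adjusted estimates (with confidence intervals) against the proportions among all drug-use alters is the logic the study uses to test the random recruitment assumption.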
Funding (FPSblo paper): This research work was supported by the National Key R&D Program of China (No. 2021YFB2700800) and the GHfund B (No. 202302024490).
Funding (temporal causality-based adaptive sampling paper): Project supported by the Key National Natural Science Foundation of China (Grant No. 62136005) and the National Natural Science Foundation of China (Grant Nos. 61922087, 61906201, and 62006238).
基金Funding: Funded by the Zhejiang Provincial Natural Science Foundation of China (LD21B060001) and the National Natural Science Foundation of China (22078296, 21576240).
文摘Abstract: Polylactic acid (PLA) is a promising polymer substitute for traditional plastics, and practical applications impose strict requirements on its molecular weight distribution. Therefore, exploring the relationship between synthesis conditions and PLA molecular weight is crucial. In this work, direct polycondensation combined with overlay sampling uniform design (OSUD) was applied to synthesize low molecular weight PLA. A multiple regression model and two artificial neural network models relating PLA molecular weight to reaction temperature, reaction time, and catalyst dosage were then developed for molecular weight prediction. The characterization results indicated that low molecular weight PLA was efficiently synthesized under this method. Meanwhile, the experimental dataset acquired from OSUD successfully supported three predictive models for PLA molecular weight. Both artificial neural network models showed significantly better predictive performance than the regression model. Notably, the radial basis function neural network model achieved the best predictive accuracy, with a mean relative error of only 11.9% on the validation dataset, a 67.7% improvement over the traditional multiple regression model. This work successfully predicted PLA molecular weight in a direct polycondensation process using artificial neural network models combined with OSUD, providing guidance for future molecular weight-controlled polymer synthesis.
文摘Abstract: Completing defect classification from only a few defect samples is a key challenge in the production of mobile phone screens. An attention-relation network for mobile phone screen defect classification is proposed in this paper. The architecture of the attention-relation network contains two modules: a feature extraction module and a feature metric module. Unlike other few-shot models, our model applies an attention mechanism to metric learning to measure the distance between features, so as to attend to the correlation between features and suppress unwanted information. Besides, we combine dilated convolution and skip connections to extract richer feature information for follow-up processing. We validate the attention-relation network on a mobile phone screen defect dataset. The experimental results show that the classification accuracy of the attention-relation network is 0.9486 under the 5-way 1-shot training strategy and 0.9039 under the 5-way 5-shot setting. It classifies mobile phone screen defects effectively and outperforms competing models by a clear margin.
基金Funding: The National Natural Science Foundation of China (No. 60674043).
文摘Abstract: In this paper, we present an interval model of networked control systems with time-varying sampling periods and time-varying network-induced delays, and discuss the stability of networked control systems using Lyapunov stability theory. A sufficient stability condition is obtained by solving a set of linear matrix inequalities. Finally, an illustrative example demonstrates the correctness and effectiveness of the proposed approach.
文摘Abstract: Hand gestures are a natural means of human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its various applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proven modules to learn both short-term and long-term features from video inputs while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction, with the ConvNets for all groups sharing parameters. To learn long-term features, the outputs from all ConvNets are fed into a long short-term memory (LSTM) network, which predicts the final classification result. The new model has been tested on two popular hand gesture datasets, the Jester dataset and the Nvidia dataset. Compared with other models, our model produced very competitive results. The robustness of the new model has also been demonstrated on an augmented dataset with enhanced diversity of hand gestures.
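The segment-and-sample step for short-term features can be illustrated as follows; the uniform segment boundaries are a plausible choice rather than the paper's exact implementation:

```python
import random

def sample_frame_indices(n_frames, n_groups, rng=None):
    """Split a video of n_frames into n_groups contiguous segments and pick
    one random frame index from each segment; the selected frames would then
    be encoded as RGB + optical-flow pairs and fed to the shared ConvNets."""
    rng = rng or random.Random()
    bounds = [round(i * n_frames / n_groups) for i in range(n_groups + 1)]
    return [rng.randrange(bounds[i], bounds[i + 1]) for i in range(n_groups)]
```

Because exactly one frame comes from each segment, every part of the video is covered while the per-clip computation stays fixed regardless of video length.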
基金Funding: Supported by the National Natural Science Foundation of China (61732018, 61872335, 61802367, 61876215), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDC05000000), the Beijing Academy of Artificial Intelligence (BAAI), the Open Project Program of the State Key Laboratory of Mathematical Engineering and Advanced Computing (2019A07), the Open Project of Zhejiang Laboratory, and a grant from the Institute for Guo Qiang, Tsinghua University. Recommended by Associate Editor Long Chen.
文摘Abstract: Graph convolutional networks (GCNs) have received significant attention from various research fields due to their excellent performance in learning graph representations. Although GCNs perform well compared with other methods, they still face challenges: training a GCN model for large-scale graphs in the conventional way requires high computation and storage costs. Therefore, motivated by the urgent need for efficiency and scalability in training GCNs, sampling methods have been proposed and have achieved significant effect. In this paper, we categorize sampling methods based on their sampling mechanisms and provide a comprehensive survey of sampling methods for efficient GCN training. To highlight the characteristics and differences of sampling methods, we present a detailed comparison within each category and further give an overall comparative analysis across all categories. Finally, we discuss some challenges and future research directions for sampling methods.
基金Funding: Project supported by the National Key R&D Program of China (No. 2022YFA1004504), the National Natural Science Foundation of China (Nos. 12171404 and 12201229), and the Fundamental Research Funds for the Central Universities of China (No. 20720210037).
文摘Abstract: In this work, we consider solving forward and inverse partial differential equations (PDEs) that have sharp solutions with physics-informed neural networks (PINNs). In particular, to better capture the sharpness of the solution, we propose adaptive sampling methods (ASMs) based on the residual and the gradient of the solution. We first present a residual-only-based ASM, denoted ASM I. In this approach, we first train the neural network using a small number of residual points and divide the computational domain into a certain number of sub-domains; we then identify the sub-domain with the largest mean absolute value of the residual and add the points with the largest absolute residuals in this sub-domain as new residual points. We further develop a second type of ASM (denoted ASM II) based on both the residual and the gradient of the solution, since the residual alone may not efficiently capture the sharpness of the solution. The procedure of ASM II is almost the same as that of ASM I, except that we add new residual points that have not only large residuals but also large gradients. To demonstrate the effectiveness of the present methods, we use both ASM I and ASM II to solve a number of PDEs, including the Burgers equation, the compressible Euler equations, the Poisson equation over an L-shaped domain, and the high-dimensional Poisson equation. The numerical results show that sharp solutions can be well approximated by either ASM I or ASM II, and both methods deliver much more accurate solutions than the original PINNs with the same number of residual points. Moreover, ASM II performs better than ASM I in terms of accuracy, efficiency, and stability, which means that the gradient of the solution improves the stability and efficiency of the adaptive sampling procedure as well as the accuracy of the solution. Furthermore, we employ a similar adaptive sampling technique for the data points of the boundary conditions (BCs) if the sharp region of the solution is near the boundary. The results for the L-shaped Poisson problem indicate that the present method can significantly improve efficiency, stability, and accuracy.
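A minimal sketch of one ASM I refinement step; the sub-domain partition function and the helper names are illustrative assumptions, not the authors' code:

```python
def asm1_new_points(points, residuals, subdomain_of, n_subdomains, k):
    """Pick new residual points per ASM I: locate the sub-domain with the
    largest mean absolute residual, then return the k points with the
    largest absolute residuals inside it."""
    groups = {i: [] for i in range(n_subdomains)}
    for p, r in zip(points, residuals):
        groups[subdomain_of(p)].append((abs(r), p))
    worst = max(
        (i for i in groups if groups[i]),
        key=lambda i: sum(a for a, _ in groups[i]) / len(groups[i]),
    )
    return [p for _, p in sorted(groups[worst], reverse=True)[:k]]
```

ASM II would differ only in the ranking key, scoring candidates by both |residual| and |gradient| rather than the residual alone.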
基金Funding: Project (61104106) supported by the National Natural Science Foundation of China; Project (201202156) supported by the Natural Science Foundation of Liaoning Province, China; Project (LJQ2012100) supported by the Program for Liaoning Excellent Talents in University (LNET).
文摘Abstract: The problem of guaranteed cost control for networked control systems (NCSs) with time-varying delays, time-varying sampling intervals, and signal quantization was investigated, where the physical plant is continuous-time and the control input is discrete-time. Using an input delay approach and a sector bound method, the network-induced delays, quantization parameter, and sampling intervals were treated in one framework, with both the state and the control input quantized in logarithmic form. A novel Lyapunov function with discontinuities, which takes full advantage of the NCS characteristic information, was exploited, and it was shown that the Lyapunov function decreases at the jump instants. Furthermore, the Leibniz-Newton formula and free-weighting matrix methods were used to obtain guaranteed cost controller design conditions that depend on the NCS characteristic information. A numerical example illustrates the effectiveness of the proposed methods.
基金Funding: Supported by the NSF of China (No. 12171085), the National Key R&D Program of China (2020YFA0712000), the NSF of China (No. 12288201), the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA25010404), and the Youth Innovation Promotion Association (CAS).
文摘Abstract: This is the second part of our series of works on failure-informed adaptive sampling for physics-informed neural networks (PINNs). In our previous work (SIAM J. Sci. Comput. 45: A1971–A1994), we presented an adaptive sampling framework using the failure probability as the posterior error indicator, where a truncated Gaussian model was adopted for estimating the indicator. Here, we present two extensions of that work. The first extension combines the framework with a re-sampling technique so that the new algorithm can maintain a constant training size. This is achieved through cosine annealing, which gradually transforms the sampling of collocation points from uniform to adaptive as training progresses. The second extension adopts the subset simulation (SS) algorithm as the posterior model (instead of the truncated Gaussian model) for estimating the error indicator, which can more effectively estimate the failure probability and generate new effective training points in the failure region. We investigate the performance of the new approach on several challenging problems, and numerical experiments demonstrate a significant improvement over the original algorithm.
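The constant-size uniform-to-adaptive schedule can be sketched as below; the exact schedule shape is an assumption based on standard cosine annealing:

```python
import math

def adaptive_share(step, total_steps):
    """Cosine-annealed fraction of the (fixed-size) collocation set drawn
    adaptively rather than uniformly: 0 at the start of training, 1 at the
    end, so the total training size never changes."""
    return 0.5 * (1.0 - math.cos(math.pi * step / total_steps))

def split_budget(step, total_steps, n_points):
    """Split a fixed point budget into (uniform, adaptive) counts."""
    n_adaptive = round(adaptive_share(step, total_steps) * n_points)
    return n_points - n_adaptive, n_adaptive
```

Early in training the sampler behaves like plain uniform collocation; by the end, every re-sampled point comes from the failure-informed adaptive distribution.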
基金Funding: Supported by the Zimin Institute for Engineering Solutions Advancing Better Lives.
文摘Abstract: Background Functional mapping, despite its proven efficiency, suffers from a “chicken or egg” scenario in that poor spatial features lead to inadequate spectral alignment and vice versa during training, often resulting in slow convergence, high computational costs, and learning failures, particularly when small datasets are used. Methods A novel method is presented for dense-shape correspondence, whereby the spatial information transformed by neural networks is combined with the projections onto spectral maps to overcome the “chicken or egg” challenge by selectively sampling only points with high confidence in their alignment. These points then contribute to the alignment and spectral loss terms, boosting training and accelerating convergence by a factor of five. To ensure fully unsupervised learning, the Gromov–Hausdorff distance metric was used to select the points with the maximal alignment score, i.e., those displaying the most confidence. Results The effectiveness of the proposed approach was demonstrated on several benchmark datasets, with results superior to those of both spectral- and spatial-based methods. Conclusions The proposed method provides a promising new approach to dense-shape correspondence, addressing the key challenges in the field and offering significant advantages over current methods, including faster convergence, improved accuracy, and reduced computational costs.
基金Funding: Supported by the Fundamental Research Funds for the Central Universities (Grant No. 2021ZY04), the National Natural Science Foundation of China (Grant No. 32001252), and the International Center for Bamboo and Rattan (Grant No. 1632020029).
文摘Abstract: There are two distinct types of domains, design-classes and cross-classes domains, with the former extensively studied under the topic of small-area estimation. In natural resource inventory, however, most classes listed in the condition tables of national inventory programs are cross-classes domains, such as vegetation type, productivity class, and age class. To date, challenges remain in inventorying cross-classes domains because these domains usually have an unknown sampling frame and spatial distribution, with the result that inference relies on population-level rather than domain-level sampling. Multiple challenges are noteworthy: (1) efficient sampling strategies are difficult to develop because of little prior information about the target domain; (2) domain inference relies on a sample designed for the population, so within-domain sample sizes may be too small to support precise estimation; and (3) increasing the sample size for the population does not ensure an increase for the domain, so the actual sample size for a target domain remains highly uncertain, particularly for small domains. In this paper, we introduce a design-based generalized systematic adaptive cluster sampling (GSACS) scheme for inventorying cross-classes domains. Design-unbiased Hansen-Hurwitz and Horvitz-Thompson estimators are derived for domain totals and compared within GSACS and with systematic sampling (SYS). Comprehensive Monte Carlo simulations show that (1) the GSACS Hansen-Hurwitz and Horvitz-Thompson estimators are unbiased and equally efficient, whereas the latter outperforms the former in supporting a sample of size one; (2) SYS is a special case of GSACS, while the latter outperforms the former in terms of increased efficiency and reduced intensity; (3) the GSACS Horvitz-Thompson variance estimator is design-unbiased for a single SYS sample; and (4) rules-of-thumb summarized with respect to sampling design and spatial effect improve precision. Because inventorying a mini domain is analogous to inventorying a rare variable, alternative network sampling procedures are also readily available for inventorying cross-classes domains.
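The two design-unbiased estimators named above have the standard textbook forms, sketched here for a generic sample (an illustration only, not the GSACS-specific derivation):

```python
def hansen_hurwitz_total(values, draw_probs):
    """Hansen-Hurwitz estimator of a population total for with-replacement
    sampling: the mean of y_i / p_i over the n draws, where p_i is the
    per-draw selection probability of unit i."""
    return sum(y / p for y, p in zip(values, draw_probs)) / len(values)

def horvitz_thompson_total(values, incl_probs):
    """Horvitz-Thompson estimator for without-replacement sampling: the sum
    of y_i / pi_i over distinct sampled units, where pi_i is the inclusion
    probability of unit i."""
    return sum(y / pi for y, pi in zip(values, incl_probs))
```

In the paper's setting, the probabilities would come from the GSACS design applied to the network of units intersecting the target domain.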
基金Funding: Supported by the Key Scientific and Technological Research Projects in Henan Province (Grant No. 192102210125), the Key Scientific Research Projects of Colleges and Universities in Henan Province (23A520054), and the Open Foundation of the State Key Laboratory of Networking and Switching Technology (Beijing University of Posts and Telecommunications) (SKLNST-2020-2-01).
文摘Abstract: With the rapid growth of network bandwidth, traffic identification has become an important challenge for network management and security. In recent years, packet sampling has been widely used in most network management systems. In this paper, to improve the accuracy of network traffic identification, sampled NetFlow data are applied to traffic identification, and the impact of packet sampling on the accuracy of the identification method is studied. This study includes feature selection, a metric correlation analysis of application behavior, and a traffic identification algorithm. Theoretical analysis and experimental results show that the significance of behavior characteristics becomes lower in a packet sampling environment. Meanwhile, the correlation analysis exhibits different trends for different features; however, as long as the number of flows meets the statistical requirement, the feature selection and the correlation degree are independent of the sampling ratio. At high sampling ratios, where less effective information is retained, the identification accuracy is much lower than with unsampled packets. Finally, to improve identification accuracy, we propose a Deep Belief Networks Application Identification (DBNAI) method, which achieves better classification performance than other state-of-the-art methods.
基金Funding: Funded by the Cora Topolewski Cardiac Research Fund at the Children's Hospital of Philadelphia (CHOP), the Pediatric Valve Center Frontier Program at CHOP, the Additional Ventures Single Ventricle Research Fund Expansion Award, the National Institutes of Health (USA) (Nos. NHLBI T32 HL007915 and NIH R01 HL153166), and the U.S. Department of Energy (No. DE-SC0022953).
文摘Abstract: Material identification is critical for understanding the relationship between mechanical properties and the associated mechanical functions. However, it is a challenging task, especially when the characteristics of the material are highly nonlinear, as is common in biological tissue. In this work, we identify unknown material properties in continuum solid mechanics via physics-informed neural networks (PINNs). To improve the accuracy and efficiency of PINNs, we develop efficient strategies for nonuniformly sampling observational data. We also investigate different approaches to enforcing Dirichlet-type boundary conditions (BCs) as soft or hard constraints. Finally, we apply the proposed methods to a diverse set of time-dependent and time-independent solid mechanics examples that span linear elastic and hyperelastic material space. The estimated material parameters achieve relative errors of less than 1%. As such, this work is relevant to diverse applications, including optimizing structural integrity and developing novel materials.
基金Funding: Supported by the Program for New Century Excellent Talents in University (NCET-04-0283), the Funds for Creative Research Groups of China (60521003), the Program for Changjiang Scholars and Innovative Research Team in University (IRT0421), the State Key Program of the National Natural Science Foundation of China (60534010), and the National Natural Science Foundation of China (60674021).
文摘Abstract: This paper studies the problem of designing an H∞ controller for networked control systems (NCSs) with both network-induced time delays and packet dropouts, using an active varying sampling period method in which the sampling period switches within a finite set. A novel linear estimation-based method is proposed to compensate for packet dropouts, and an H∞ controller design using a multi-objective optimization methodology is also presented. Simulation results illustrate the effectiveness of the active varying sampling period method and the linear estimation-based packet dropout compensation.
基金Funding: Supported by the Basic Research Fund of Beijing Institute of Technology (20120642008).
文摘Abstract: In recent research on network sampling, some sampling concepts have been misunderstood, and the variance of subnets has not been taken into account. We propose correct definitions of the sample and the sampling rate in network sampling, as well as a formula for calculating the variance of subnets. Then, three commonly used sampling strategies are applied to databases of the connecting nearest-neighbor (CNN) model, a random network, and a small-world network to explore the variance in network sampling. The results show that snowball sampling yields the largest subnet variance but does well in capturing the network structure. The variance of networks sampled by the hub and random strategies is much smaller. The hub strategy performs well in reflecting the properties of the whole network, while random sampling obtains more accurate results when evaluating the clustering coefficient.
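Of the three strategies compared, snowball sampling is the most algorithmically distinctive; a minimal wave-limited sketch follows (the wave-count interface is an assumption, since the abstract does not fix a stopping rule):

```python
from collections import deque

def snowball_sample(adj, seed, waves):
    """Breadth-first snowball sample: start from a seed node and include all
    neighbours of already-sampled nodes, wave by wave, for `waves` waves.
    `adj` maps each node to its neighbour list."""
    sampled = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == waves:
            continue
        for nb in adj[node]:
            if nb not in sampled:
                sampled.add(nb)
                frontier.append((nb, depth + 1))
    return sampled
```

Because each wave pulls in entire neighbourhoods, the resulting subnets preserve local structure well, which is consistent with the variance behaviour reported above.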
基金Funding: This project was supported by the National Natural Science Foundation of China (60572147, 60132030).
文摘Abstract: With the advent of large-scale, high-speed IPv6 networks, effective multi-point traffic sampling is becoming a necessity. A distributed multi-point traffic sampling method that provides an accurate and efficient solution for measuring IPv6 traffic is proposed. The method samples IPv6 traffic based on an analysis of the bit randomness of each byte in the packet header, and offers a way to consistently select the same subset of packets at each measurement point, which satisfies the requirement of distributed multi-point measurement. Finally, using real IPv6 traffic traces, we show that the sampled traffic data have good uniformity, satisfy the requirement of sampling randomness, and correctly reflect the packet size distribution of the full packet trace.
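Consistent multi-point selection of this kind is commonly realized by hashing invariant header fields and keeping packets whose hash falls in a fixed sub-range; this sketch uses SHA-256 for illustration, whereas the paper instead derives the selection directly from high-randomness header bytes:

```python
import hashlib

def keep_packet(header_bytes, ratio=0.25):
    """Decide whether to sample a packet from its invariant header fields.
    Every measurement point applying the same rule to the same packet makes
    the same decision, so all points select an identical packet subset."""
    h = int.from_bytes(hashlib.sha256(header_bytes).digest()[:4], "big")
    return h < ratio * 2 ** 32
```

Because the decision depends only on packet content, the trajectory of each sampled packet can be followed across all measurement points without any coordination between them.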
基金Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 60874053 and 61034006).
文摘Abstract: The bounded consensus tracking problems of second-order multi-agent systems under directed networks with sampling delay are addressed in this paper. When the sampling delay is more than one sampling period, new protocols based on sampled-data control are proposed so that each agent can track the time-varying reference state of the virtual leader. By using the delay decomposition approach, the augmented matrix method, and frequency domain analysis, necessary and sufficient conditions are obtained which guarantee that bounded consensus tracking is realized. Furthermore, numerical simulations are presented to demonstrate the effectiveness of the theoretical results.
基金Funding: Supported by the National Natural Science Foundation of China (Grant No. 50869005).
文摘Abstract: Tree ring dating plays an important role in obtaining past climate information, and fundamental work on obtaining tree ring samples in typical climate regions is particularly essential. The optimum distribution of tree ring sampling sites based on climate information from the climate observation network (the ORPOM model) is presented in this article. In this setup, the tree rings in a typical region are used for surface representation, with excellent correlation with the climate information as the main principle. Taking the Horqin Sandy Land in the cold and arid region of China as an example, the optimum distribution range of the tree ring sampling sites was obtained through the application of the ORPOM model, which is considered a reasonably practical scheme.
文摘One of the key assumptions in respondent-driven sampling (RDS) analysis, called “random selection assumption,” is that respondents randomly recruit their peers from their personal networks. The objective of this study was to verify this assumption in the empirical data of egocentric networks. Methods: We conducted an egocentric network study among young drug users in China, in which RDS was used to recruit this hard-to-reach population. If the random recruitment assumption holds, the RDS-estimated population proportions should be similar to the actual population proportions. Following this logic, we first calculated the population proportions of five visible variables (gender, age, education, marital status, and drug use mode) among the total drug-use alters from which the RDS sample was drawn, and then estimated the RDS-adjusted population proportions and their 95% confidence intervals in the RDS sample. Theoretically, if the random recruitment assumption holds, the 95% confidence intervals estimated in the RDS sample should include the population proportions calculated in the total drug-use alters. Results: The evaluation of the RDS sample indicated its success in reaching the convergence of RDS compositions and including a broad cross-section of the hidden population. Findings demonstrate that the random selection assumption holds for three group traits, but not for two others. Specifically, egos randomly recruited subjects in different age groups, marital status, or drug use modes from their network alters, but not in gender and education levels. Conclusions: This study demonstrates the occurrence of non-random recruitment, indicating that the recruitment of subjects in this RDS study was not completely at random. Future studies are needed to assess the extent to which the population proportion estimates can be biased when the violation of the assumption occurs in some group traits in RDS samples.