Funding: The National Natural Science Foundation of China (No. 60673087)
Abstract: Multi-field information on web pages takes many different forms, which makes it hard to exploit in large-scale web search. To address this problem, a novel approach is proposed that automatically acquires features from web pages based on a set of well-defined rules. The features describe the contents of web pages from different aspects and can be used to improve ranking performance in web search. The acquired features have the advantages of a unified form and little noise, and can easily be used in web page relevance ranking. A specification for judging the relevance between user queries and the acquired features is also proposed. Experimental results show that the features acquired by the proposed approach, together with the feature relevance specification, significantly improve relevance ranking performance for web search.
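The abstract does not list the extraction rules themselves; as a minimal illustration of rule-based, multi-field feature acquisition, the sketch below pulls a few common fields (title, headings, meta description) from raw HTML and normalizes them into a unified form. The field names and rules here are assumptions for illustration, not the authors' rule set.

```python
# Minimal sketch of rule-based feature acquisition from a web page.
# The fields and rules below are illustrative assumptions, not the
# rule set defined in the paper.
import re
from html.parser import HTMLParser

class FieldExtractor(HTMLParser):
    """Collects text from a few fixed HTML fields."""
    FIELDS = {"title", "h1", "h2"}

    def __init__(self):
        super().__init__()
        self.features = {f: [] for f in self.FIELDS}
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag in self.FIELDS:
            self._current = tag
        elif tag == "meta":
            d = dict(attrs)
            if d.get("name") == "description" and d.get("content"):
                self.features.setdefault("meta_description", []).append(d["content"])

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        if self._current:
            self.features[self._current].append(data)

def acquire_features(html: str) -> dict:
    """Return features in a unified form: field name -> cleaned text."""
    parser = FieldExtractor()
    parser.feed(html)
    return {field: re.sub(r"\s+", " ", " ".join(texts)).strip()
            for field, texts in parser.features.items() if texts}

print(acquire_features("<title>Web Search</title><h1>Ranking  Features</h1>"))
```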
Funding: The National Natural Science Foundation of China (Nos. 60425206, 90412003) and the Foundation of Excellent Doctoral Dissertation of Southeast University (No. YBJJ0502)
Abstract: A new approach to automated ontology mapping using web search engines (such as Google) is presented. Based on lexico-syntactic patterns, the hyponymy relationships between ontology concepts are obtained from the web through search engines, and an initial candidate mapping set consisting of ontology concept pairs is generated. According to the concept hierarchies of the ontologies, a set of production rules is proposed to delete concept pairs inconsistent with the ontology semantics from the initial candidate mapping set and to add concept pairs consistent with the ontology semantics to it. Finally, ontology mappings are chosen from the candidate mapping set automatically with a mapping selection rule based on mutual information. Experimental results show that the F-measure can reach 75% to 100%, and that the approach can effectively accomplish the mapping between ontologies.
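The abstract does not give the mutual-information formula used for mapping selection; a common formulation scores a concept pair by pointwise mutual information estimated from search hit counts. The sketch below assumes hit counts are supplied by some search API (stubbed here) and is only a plausible reading of the selection step, not the paper's exact rule.

```python
# Hedged sketch: select ontology mappings by a pointwise-mutual-information
# style score computed from web hit counts. The hit-count source is stubbed;
# the exact scoring rule in the paper may differ.
import math

def hit_count(query: str) -> int:
    """Stand-in for the number of search-engine results for `query`."""
    fake_counts = {"river": 120_000, "stream": 80_000, "river stream": 30_000}
    return fake_counts.get(query, 1)

def pmi(concept_a: str, concept_b: str, total_pages: int = 1_000_000) -> float:
    """Pointwise mutual information of two concepts co-occurring on the web."""
    p_a = hit_count(concept_a) / total_pages
    p_b = hit_count(concept_b) / total_pages
    p_ab = hit_count(f"{concept_a} {concept_b}") / total_pages
    return math.log(p_ab / (p_a * p_b))

def select_mappings(candidates, threshold=0.0):
    """Keep candidate concept pairs whose PMI exceeds a threshold."""
    return [(a, b, pmi(a, b)) for a, b in candidates if pmi(a, b) > threshold]

print(select_mappings([("river", "stream")]))
```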
Funding: Supported by the National Social Science Foundation of China (Grant No. 13&ZD173)
Abstract: Purpose: The aim of this paper is to discuss how the keyword concentration change ratio (KCCR) can be used to identify the stability-mutation feature of web search keywords in information analyses and predictions. Design/methodology/approach: After introducing the stability-mutation feature of keywords and its significance, the paper describes the function of the KCCR in identifying keyword stability-mutation features. Using Ginsberg's influenza keywords, the paper shows how the KCCR can identify the keyword stability-mutation feature effectively. Findings: The keyword concentration ratio is closely and positively correlated with the rate of change of the objects users search for, so the 'stability-mutation' characteristic of keywords helps us understand the relationship between keywords and the underlying information. In general, keywords exhibiting mutation suit objects that change over the short term, while keywords exhibiting stability suit objects that change over the long term. Research limitations: It is difficult to acquire true keyword frequencies, so indexes and parameters closely related to the true search volume were chosen for this study. Practical implications: Identifying the stability-mutation feature of web search keywords can be applied to predicting and analyzing information about unknown public events by observing trends in the keyword concentration ratio. Originality/value: The stability-mutation feature of web search can be quantitatively described by the keyword concentration change ratio (KCCR). Using the KCCR, the authors drew on Ginsberg's influenza epidemic data to demonstrate how accurate and effective the proposed method is for information analyses and predictions.
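The abstract does not define the KCCR formula. One plausible reading is that keyword concentration at a given time is a keyword's share of the total search volume of a keyword group, and the KCCR is its relative change between consecutive periods; the sketch below implements only that assumed definition.

```python
# Hedged sketch of a keyword concentration change ratio (KCCR).
# The definition used here (share of group volume, then relative change)
# is an assumption; the paper's exact formula may differ.

def concentration(volumes: dict, keyword: str) -> float:
    """Share of total group search volume attributed to one keyword."""
    total = sum(volumes.values())
    return volumes[keyword] / total if total else 0.0

def kccr(series: list, keyword: str) -> list:
    """Relative change of the keyword's concentration between consecutive periods."""
    conc = [concentration(week, keyword) for week in series]
    return [(c2 - c1) / c1 if c1 else float("inf")
            for c1, c2 in zip(conc, conc[1:])]

weekly_volumes = [
    {"flu symptoms": 120, "flu treatment": 80, "flu vaccine": 200},
    {"flu symptoms": 300, "flu treatment": 90, "flu vaccine": 210},
]
print(kccr(weekly_volumes, "flu symptoms"))  # a large jump suggests mutation
```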
Funding: Supported by the National Natural Science Foundation of China (Nos. 60673139, 60473073, 60573090)
Abstract: The paper presents a novel benefit-based query processing strategy for efficient query routing. Using a DHT as the overlay network, it first applies a Nash equilibrium to construct the optimal peer group, based on keyword correlations and on the coverage and overlap of the peers, to decrease the time cost; it then presents a two-layered architecture for query processing that uses Bloom filters as a compact representation to reduce bandwidth consumption. Extensive experiments conducted on a real-world dataset demonstrate that our approach markedly decreases processing time while also improving precision and recall.
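The abstract names Bloom filters as the compact representation exchanged between layers. As background, a minimal Bloom filter looks like the following; the sizes and hash choices are illustrative and not taken from the paper.

```python
# Minimal Bloom filter sketch: a compact, lossy set representation that can
# be shipped between peers instead of full keyword lists. Sizes and hash
# choices here are illustrative only.
import hashlib

class BloomFilter:
    def __init__(self, num_bits: int = 1024, num_hashes: int = 3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)

    def _positions(self, item: str):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item: str) -> bool:
        # May return false positives, never false negatives.
        return all(self.bits[pos] for pos in self._positions(item))

peer_index = BloomFilter()
for keyword in ["dht", "bloom", "routing"]:
    peer_index.add(keyword)
print("routing" in peer_index, "pagerank" in peer_index)
```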
Abstract: Influenza is an infectious disease that spreads quickly and widely, and its outbreaks have brought huge losses to society. In this paper, four major categories of flu keywords were defined: "prevention phase", "symptom phase", "treatment phase", and "commonly-used phrase". A Python web crawler was used to obtain relevant influenza data from the National Influenza Center's weekly influenza surveillance reports and the about:blank Index. Support vector regression (SVR), least absolute shrinkage and selection operator (LASSO), and convolutional neural network (CNN) prediction models were built with machine learning, taking into account the seasonal characteristics of influenza, and a time series model (ARMA) was also established. The results show that it is feasible to predict influenza from web search data and that machine learning achieves a certain forecasting effect on such data, giving it reference value for future influenza prediction. The ARMA(3,0) model produces better predictions and generalizes more strongly. Finally, the limitations of this study and directions for future research are given.
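For concreteness, the sketch below fits an ARMA(3, 0) model, the best-performing model reported in the abstract, to a synthetic weekly influenza-activity series; the data here are made up, and the modeling library (statsmodels) is an assumed choice rather than the paper's toolchain.

```python
# Hedged sketch: fitting an ARMA(3, 0) model to a weekly influenza-activity
# series. The series below is synthetic; the paper's data come from
# surveillance reports and search-index volumes.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
weeks = np.arange(104)
# Synthetic seasonal flu-like signal: yearly cycle plus noise.
ili = 50 + 30 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 5, size=weeks.size)

# ARMA(3, 0) is ARIMA with order (p=3, d=0, q=0).
model = ARIMA(ili, order=(3, 0, 0))
result = model.fit()
print("AIC:", result.aic)
print("next 4 weeks:", result.forecast(steps=4))
```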
Abstract: While search engines have become vital tools for finding information on the Internet, privacy remains a growing concern because search engines are technologically able to retain user search logs. Although such capabilities can provide more personalized search results, the confidentiality of user intent remains uncertain. Even with web search query obfuscation techniques, another challenge remains: reusing the same obfuscation methods is problematic, given that search engines have enormous computation and storage resources for query disambiguation. A number of web search query privacy procedures also require the cooperation of the search engine, a non-trusted entity in such cases, making query obfuscation even more challenging. In this study, we first review how search engines work with regard to web search queries and user intent. Secondly, the study reviews the material in a manner accessible to readers outside computer science, with the intent of introducing web search engines so that non-computer scientists can approach web search query privacy innovatively. As a contribution, we identify and highlight areas open for further investigative and innovative research on end-user personalized web search privacy, that is, methods that can be executed on the user side without the involvement of third parties such as search engines. The goal is to motivate future web search obfuscation heuristics that give users control over their personal search privacy.
Abstract: With the flood of information on the Web, it has become increasingly necessary for users to employ automated tools to find, extract, filter, and evaluate the desired information and to support knowledge discovery. In this research, we present a preliminary discussion of using the dominant meaning technique to improve the Google Image web search engine. The Google search engine analyzes the text on the page adjacent to an image, the image caption, and dozens of other factors to determine the image content. To improve the results, we build a dominant meaning classification model. This paper investigates the influence of using this model to retrieve images more effectively, through sequential procedures that formulate a suitable query. To build the model, a dataset for a specific application domain was collected; the K-means algorithm was used to cluster the dataset into K clusters, and the dominant meaning technique was used to construct a hierarchy model of these clusters. This hierarchy model is then used to reformulate a new query. We performed experiments on Google and validated the effectiveness of the proposed approach. The proposed approach improves precision, recall, and F1-measure by 57%, 70%, and 61%, respectively.
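The abstract outlines a pipeline of clustering a domain dataset with K-means and then reformulating the query from the dominant terms of the matching cluster. The sketch below is a loose approximation of that pipeline using TF-IDF features; the corpus, number of clusters, and term counts are illustrative assumptions, not the paper's settings.

```python
# Loose, illustrative approximation of the described pipeline: cluster a small
# domain corpus with K-means, then expand a query with top terms from the
# closest cluster. Corpus, K, and term counts are assumptions, not the paper's.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "jaguar speed habitat rainforest cat",
    "jaguar car engine horsepower dealer",
    "leopard big cat spots africa",
    "sports car acceleration engine price",
]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()

def reformulate(query: str, top_k: int = 3) -> str:
    """Append the dominant terms of the query's nearest cluster to the query."""
    cluster = kmeans.predict(vectorizer.transform([query]))[0]
    center = kmeans.cluster_centers_[cluster]
    dominant = [terms[i] for i in center.argsort()[::-1][:top_k]]
    return query + " " + " ".join(t for t in dominant if t not in query.split())

print(reformulate("jaguar cat"))
```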
Abstract: Modern search engines record user interactions and use them to improve search quality. In particular, user click-through has been successfully used to improve click-through rate (CTR), web search ranking, and query recommendations and suggestions. Although click-through logs can provide implicit feedback on users' click preferences, deriving accurate absolute relevance judgments is difficult because of click noise and behavior biases. Previous studies showed that user clicking behaviors are biased in many respects, such as "position" (user attention decreases from top to bottom) and "trust" (web site reputation affects user judgment). To address these problems, researchers have proposed several behavior models (usually referred to as click models) to describe users' practical browsing behaviors and to obtain an unbiased estimation of result relevance. In this study, we review recent efforts to construct click models for better search ranking and propose a novel convolutional neural network architecture for building click models. Compared to traditional click models, our model not only considers user behavior assumptions as input signals but also uses the content and context information of search engine result pages. In addition, our model uses parameters from traditional click models to restrict the meaning of some outputs in its hidden layer. Experimental results show that the proposed model achieves considerable improvement over state-of-the-art click models on the evaluation metric of click perplexity.
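Click perplexity, the evaluation metric named in the abstract, is commonly computed from predicted click probabilities and observed clicks; the sketch below follows that standard definition, and whether the paper uses exactly this variant is an assumption.

```python
# Standard click-perplexity computation: perplexity of the predicted click
# probabilities against observed clicks (lower is better, 1 is perfect).
# Whether the paper uses exactly this variant is an assumption.
import math

def click_perplexity(predicted_probs, observed_clicks):
    """Perplexity = 2^(-average log2-likelihood of the observed click events)."""
    assert len(predicted_probs) == len(observed_clicks)
    eps = 1e-12
    total = 0.0
    for q, c in zip(predicted_probs, observed_clicks):
        q = min(max(q, eps), 1 - eps)           # clamp to avoid log(0)
        total += math.log2(q) if c else math.log2(1 - q)
    return 2 ** (-total / len(observed_clicks))

print(click_perplexity([0.9, 0.1, 0.8], [1, 0, 1]))
```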
Funding: Supported by the National Natural Science Foundation of China (No. 10371034). Acknowledgements: We thank Zhi-Ming Ma for his valuable suggestions and instruction, and Guo-Lie Lan for helpful discussions.
Abstract: In this paper we discuss three important kinds of Markov chains used in web search algorithms: the maximal irreducible Markov chain, the minimal irreducible Markov chain, and the middle irreducible Markov chain. We discuss the stationary distributions, the convergence rates, and the Maclaurin series of the stationary distributions of these three kinds of Markov chains. Among other things, our results show that the maximal and minimal Markov chains have the same stationary distribution and that the stationary distribution of the middle Markov chain reflects the real Web structure more objectively. Our results also prove that the maximal and middle Markov chains have the same convergence rate and that the maximal Markov chain converges faster than the minimal Markov chain when the damping factor α > 1/√2.
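For context, the "maximal irreducible" chain in this line of work is usually the standard PageRank construction, which patches the raw link matrix with uniform teleportation. The formulation below is that standard construction and is offered as background; it is an assumption that it matches the paper's exact definitions of the three chains.

```latex
% Standard PageRank-style construction (background, not necessarily the
% paper's exact definition). Raw link matrix P, damping factor \alpha,
% all-ones vector \mathbf{e}, n pages:
\[
  P(\alpha) \;=\; \alpha P \;+\; (1-\alpha)\,\tfrac{1}{n}\,\mathbf{e}\mathbf{e}^{\top}.
\]
% Its stationary distribution \pi satisfies
\[
  \pi^{\top} P(\alpha) = \pi^{\top}, \qquad \pi^{\top}\mathbf{e} = 1,
\]
% and the convergence rate of the power iteration is governed by the second
% largest eigenvalue modulus, which is at most \alpha.
```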
Funding: Supported by the National Key Research and Development Program of China under Grant No. 2018YFB1800205, the National Natural Science Foundation of China under Grant Nos. 61725206 and U20A20180, and the CAS-Austria Project under Grant No. GJHZ202114.
Abstract: Internet users heavily rely on web search engines for their intended information. The major revenue source of search engines is advertisements (or ads). However, search advertising suffers from fraud: fraudsters generate fake traffic that does not reach the intended audience and increases the cost to advertisers. Therefore, it is critical to detect fraud in web search. Previous studies solve this problem through fraudster detection (especially of bots) by leveraging fraudsters' unique behaviors. However, they may fail to detect new means of fraud, such as crowdsourcing fraud, since crowd workers behave in part like normal users. To this end, this paper proposes an approach to detecting fraud in web search from the perspective of fraudulent keywords. We begin by using a unique dataset of 150 million web search logs to examine the discriminating features of fraudulent keywords. Specifically, we model the temporal correlation of fraudulent keywords as a graph, which reveals a very well-connected community structure. Next, we design DFW (detection of fraudulent keywords), which mines the temporal correlations between candidate fraudulent keywords and a given list of seeds. In particular, DFW applies several refinements to filter out non-fraudulent keywords that co-occur with seeds only occasionally. Evaluation on the search logs shows that DFW achieves high fraud detection precision (99%) and accuracy (93%). A further analysis reveals several typical temporal evolution patterns of fraudulent keywords and the co-existence of both bots and crowd workers as perpetrators of web search fraud.
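The abstract describes building a graph whose edges connect keywords with correlated search-volume time series. The exact correlation measure and threshold are not given, so the sketch below uses Pearson correlation with an assumed cutoff purely as an illustration of the graph-construction step.

```python
# Illustrative sketch: build a temporal-correlation graph over keywords from
# their daily search-volume series. Pearson correlation and the 0.8 cutoff
# are assumptions; the paper's measure and threshold may differ.
from itertools import combinations
import numpy as np

daily_volume = {
    "cheap replica watches": [5, 8, 40, 42, 39, 7, 6],
    "replica watch deals":   [4, 9, 38, 41, 40, 8, 5],
    "weather tomorrow":      [50, 52, 49, 51, 50, 53, 48],
}

def correlation_graph(series: dict, threshold: float = 0.8) -> list:
    """Return edges (kw1, kw2, corr) for keyword pairs with correlated volumes."""
    edges = []
    for (k1, v1), (k2, v2) in combinations(series.items(), 2):
        corr = float(np.corrcoef(v1, v2)[0, 1])
        if corr >= threshold:
            edges.append((k1, k2, round(corr, 3)))
    return edges

print(correlation_graph(daily_volume))
```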
Abstract: Web search query data reflect social hot spots and can serve as novel economic indicators. When faced with high-dimensional query data, selecting keywords that have plausible predictive ability and can reduce dimensionality is critical. This paper presents a new integrative method, called the HE-TDC screening method, that combines Hurst Exponent (HE) and Time Difference Correlation (TDC) analysis to select keywords with strong predictive ability. The method requires keywords with predictive ability to satisfy two characteristics: high correlation with, and fluctuation memorability similar to, the target series being predicted. An empirical study predicts the volume of tourist visitors to the Jiuzhai Valley scenic area. The study shows that keywords selected with the HE-TDC method produce a model with better robustness and predictive ability.
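The abstract names two screening criteria, the Hurst exponent and time-difference correlation, without giving implementation details. The sketch below uses a simple rescaled-range estimate of the Hurst exponent and a best-lag Pearson correlation as stand-ins; both the estimators and any thresholds are assumptions, and the data are synthetic.

```python
# Hedged sketch of HE-TDC style keyword screening: estimate a Hurst exponent
# with a crude rescaled-range (R/S) method and compute the best lagged
# correlation with the target series. Estimators and data are assumptions.
import numpy as np

def hurst_rs(x: np.ndarray, window_sizes=(8, 16, 32)) -> float:
    """Crude R/S estimate of the Hurst exponent (>0.5 suggests long memory)."""
    rs_values, sizes = [], []
    for w in window_sizes:
        chunks = [x[i:i + w] for i in range(0, len(x) - w + 1, w)]
        rs = []
        for c in chunks:
            dev = np.cumsum(c - c.mean())
            r, s = dev.max() - dev.min(), c.std()
            if s > 0:
                rs.append(r / s)
        if rs:
            rs_values.append(np.mean(rs))
            sizes.append(w)
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return float(slope)

def best_lag_corr(keyword: np.ndarray, target: np.ndarray, max_lag: int = 8):
    """Highest Pearson correlation when the keyword series leads the target."""
    corrs = [np.corrcoef(keyword[:-lag or None], target[lag:])[0, 1]
             for lag in range(max_lag + 1)]   # lag 0 uses the full series
    lag = int(np.argmax(corrs))
    return lag, float(corrs[lag])

rng = np.random.default_rng(1)
target = np.cumsum(rng.normal(size=200)) + 100            # e.g., visitor volume
keyword = np.roll(target, -3) + rng.normal(0, 0.5, 200)   # leads target by ~3 steps

print("Hurst:", round(hurst_rs(keyword), 2),
      "best lag/corr:", best_lag_corr(keyword, target))
```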
Funding: This work was supported by the National Grand Fundamental Research Program of China (Grant No. G1999032706).
Abstract: This paper first studies the distribution characteristics of user behaviors based on log data from a massive web search engine. The analysis shows that the stochastic distribution of user queries follows a power-law function and exhibits strong similarity, and that users' queries and clicked URLs show dramatic locality, which implies that a query cache and a "hot click" cache can be employed to improve system performance. Three typical cache replacement policies are then compared: LRU, FIFO, and LFU with attenuation. In addition, the distribution characteristics of web information are analyzed, which demonstrates that the link popularity and replica popularity of a URL have a positive influence on its importance. Finally, the variance between link popularity and user popularity, and between replica popularity and user popularity, is analyzed, which gives important insight that helps improve the ranking algorithms of a search engine.
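Of the three policies compared, "LFU with attenuation" is the least standard; a common reading is an LFU cache whose hit counts are periodically decayed so stale hot queries age out. The sketch below implements that reading, with the decay factor and cache size being illustrative assumptions rather than values from the paper.

```python
# Hedged sketch of an LFU query cache with attenuation: hit counts decay
# periodically so stale "hot" queries age out. The decay factor and cache
# size are illustrative assumptions, not values from the paper.
class AttenuatedLFUCache:
    def __init__(self, capacity: int = 3, decay: float = 0.5):
        self.capacity = capacity
        self.decay = decay
        self.store = {}    # query -> cached results
        self.counts = {}   # query -> (decayed) access frequency

    def attenuate(self):
        """Periodically multiply all frequencies by the decay factor."""
        for q in self.counts:
            self.counts[q] *= self.decay

    def get(self, query):
        if query in self.store:
            self.counts[query] += 1
            return self.store[query]
        return None

    def put(self, query, results):
        if query not in self.store and len(self.store) >= self.capacity:
            victim = min(self.counts, key=self.counts.get)  # evict least frequent
            self.store.pop(victim)
            self.counts.pop(victim)
        self.store[query] = results
        self.counts[query] = self.counts.get(query, 0) + 1

cache = AttenuatedLFUCache()
for q in ["weather", "weather", "news", "flights", "weather"]:
    if cache.get(q) is None:
        cache.put(q, f"results for {q}")
cache.attenuate()
print(cache.counts)
```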
Abstract: The concept of webpage visibility is usually linked to search engine optimization (SEO) and is based on a global in-link metric [1]. SEO is the process of designing webpages to optimize their potential to rank high on search engines, preferably on the first page of the results. The purpose of this research is to analyze the influence of the local geographical area, in terms of cultural values, and the effect of local-society keywords on increasing website visibility. Websites were analyzed by accessing the source code of their homepages through the Google Chrome browser. Statistical analysis methods were selected to assess and analyze the results of the SEO and search engine visibility (SEV) measurements. The results suggest that the web indicators to be included should reflect a local notion of visibility and consider a certain geographical context. The geographical region considered in this research is the Hashemite Kingdom of Jordan (HKJ). The results also suggest that using social-culture keywords increases website visibility in search engines and localizes the search area, for instance to google.jo, which localizes the search to HKJ.
Abstract: The basic idea behind personalized web search is to deliver search results tailored to user needs, which is one of the growing concepts in web technologies. The personalized web search presented in this paper exploits implicit feedback about user satisfaction from the user's browsing history to construct a user profile that stores the web pages the user is highly interested in. A weight is assigned to each page stored in the user's profile; this weight reflects the user's interest in the page. We call this weight the relative rank of the page, since it depends on the user issuing the query. The ranking algorithm provided in this paper is therefore based on the principle that the rank assigned to a page is the sum of two rank values, R_rank and A_rank. A_rank is an absolute rank: it is fixed for all users issuing the same query and depends only on the link structure of the web and on the keywords of the query, so it can be calculated by the PageRank algorithm suggested by Brin and Page in 1998 and used by the Google search engine. R_rank is the relative rank; it is calculated by the methods given in this paper, which depend mainly on recording implicit measures of user satisfaction during the user's previous browsing history.
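The following toy sketch shows the described combination: A_rank computed by power-iteration PageRank over a tiny link graph, plus R_rank taken from a per-user profile weight. The graph, the profile weights, and the equal-weight addition are illustrative assumptions; only the "final rank = A_rank + R_rank" structure comes from the abstract.

```python
# Toy sketch of the described combination: A_rank from a power-iteration
# PageRank over a tiny link graph, plus R_rank from a per-user profile weight.
# Graph and profile weights are assumptions.
import numpy as np

links = {  # page -> outgoing links
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links: dict, damping: float = 0.85, iters: int = 50) -> dict:
    pages = sorted(links)
    n = len(pages)
    idx = {p: i for i, p in enumerate(pages)}
    M = np.zeros((n, n))
    for p, outs in links.items():
        for q in outs:
            M[idx[q], idx[p]] = 1.0 / len(outs)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * M @ r
    return dict(zip(pages, r))

def combined_rank(page: str, a_rank: dict, user_profile: dict) -> float:
    """Final rank = A_rank (query/link based) + R_rank (user-profile based)."""
    return a_rank.get(page, 0.0) + user_profile.get(page, 0.0)

a_rank = pagerank(links)
user_profile = {"b": 0.4}  # user previously showed strong interest in page b
print({p: round(combined_rank(p, a_rank, user_profile), 3) for p in links})
```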
Abstract: The Web today contains large amounts of missing, inconsistent, and imprecise data, and traditional search engines can only return document snippets matching keywords rather than the target entities themselves. This paper proposes a new graph-matching-based entity extraction algorithm, GMEE (Graph Matching Based Entity Extraction). It first segments the snippets into terms and performs a preliminary screening of candidate entities; it then builds a "weighted semantic entity association graph" from the structural and semantic relationships among the entities; finally, it extracts the target entities with a "maximum common subgraph matching" strategy. Experimental results show that, without requiring extensive parameter training or transfer, the proposed algorithm effectively prunes the extracted entity set, preserving recall and precision while improving the interpretability of the extraction process.