Revisited: Deriving crawler start points from visited pages by monitoring HTTP traffic

User Driven Crawling

Yesterday Charles Knight of AltSearchEngines pointed me to an interesting article at BNET, “Cisco Files Patent to Enter the Search Engine Business”.

The title of the mentioned patent application no. 20090313241 is “Seeding search engine crawlers using intercepted network traffic”.

That caught my eye, as it describes pretty much the same idea that FAROO has already been using for several years.

Already two years ago, in our blog post “New active, community directed crawler”, we outlined how our “crawler start points are derived from visited pages”.

Since the initial FAROO release in 2005 we have also been using HTTP monitoring to detect the URLs of visited web pages, intercepting the TCP network traffic with raw sockets.
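As an illustration of that monitoring step, here is a minimal sketch of recovering the visited URL, assuming the TCP payload of an HTTP request has already been captured. This is not FAROO's actual code: a real sniffer reads raw sockets (which requires OS privileges) and strips the IP/TCP headers first, and only plain, unencrypted HTTP is visible this way.

```python
def extract_visited_url(payload: bytes):
    """Recover the visited URL from an intercepted HTTP request payload.

    Illustrative sketch only: a real sniffer would capture raw packets
    and strip the IP/TCP headers before this parsing step."""
    text = payload.decode("latin-1")        # HTTP/1.x headers are byte-oriented
    lines = text.split("\r\n")
    method, _, rest = lines[0].partition(" ")
    if method not in ("GET", "POST"):
        return None                         # not an HTTP request we recognize
    path = rest.split(" ")[0]
    # The Host header turns the relative request path into a full URL.
    for line in lines[1:]:
        if line.lower().startswith("host:"):
            return "http://" + line.split(":", 1)[1].strip() + path
    return None

request = b"GET /blog/2009/faroo HTTP/1.1\r\nHost: www.faroo.com\r\nUser-Agent: X\r\n\r\n"
print(extract_visited_url(request))  # http://www.faroo.com/blog/2009/faroo
```

Each URL recovered this way can then seed the crawler, as described below.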

Instant crawling of all visited web pages and their contained links has been part of FAROO since that same time.

In 2007 this was even the subject of research in the diploma thesis “Analysis of the growth of a decentralized Peer-to-Peer search engine index” by Britta Jerichow at Cologne University of Applied Sciences. Although both crawler and index architecture have meanwhile been improved substantially, the paper already validated, both theoretically and experimentally, the principal feasibility of our approach.

Already in a publication from 2001 (in German) I outlined the idea of a distributed peer-to-peer search engine in which the users, as the source of the growth of the web’s content, also assure its findability, including fully automated content ranking by the users.

Application Fields:

Deriving crawler start points from visited pages is not only important for discovering and crawling blind spots in the web. Those blind spots are formed by web pages that are not connected to the rest of the web, and thus can’t be found just by traversing links.

But there are four much more important application fields for user-driven crawling:

  • First is real-time search. Even for the big incumbents in the search engine market, it is impossible to crawl the whole web (100 billion pages?) within minutes to discover new content in a timely manner (billions of pages per day). Only if the crawler is selectively directed to newly created pages does web-scale real-time search become feasible and efficient, instead of looking for the needle in the haystack.

    By aggregating and analyzing all visited web pages of our users for discovery and implicit voting, we utilize the “wisdom of crowds”.
    Our users are our scouts. They bring in their collective intelligence and steer the crawler to where new pages emerge.

    We published this back in 2007 at AltSearchEngines Great Debate: Peer-to-Peer (P2P) Search: ” FAROO also uses user powered crawling. Pages which are changing often like, for example, news, are visited frequently by users. And with FAROO they are therefore also re-indexed more often. So the FAROO users implicitly control the distributed crawler in a way that frequently changing pages are kept fresh in the distributed index, while preventing unnecessary traffic on rather static pages.”

  • The second is attention-based ranking, used by FAROO since 2005. Meanwhile many Twitter-based real-time search engines also rank their results according to the number of votes or mentions of a URL.
    It has proved to be an efficient ranking method for real-time search, superior to link analysis, as there are no incoming links yet when the content is created.
    While most real-time search engines use explicit voting, we showed in our blog post “The limits of tweet based web search” that implicit voting by analyzing visited web pages is much more effective.
  • Third is indexing the deep web (sometimes also referred to as the hidden web). It consists of web pages that are created solely on demand from a database, when a user searches for a specific product or service. Because there are no incoming links from the web, those pages can’t be discovered and crawled by normal search engines, although search engines are starting to work on alternative ways to index the hidden web, which is much bigger than the visible web.
  • Fourth is personalization and behaviourally targeted online advertising, based on click streams identified from network traffic. This technique got some buzz when it was tested in the UK by Phorm.
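The first application field above, steering the crawler by aggregated page visits, can be sketched as follows. The class and its scheduling policy are hypothetical illustrations, not FAROO's actual crawler: every observed visit counts as an implicit vote, and the crawler fetches the currently most-voted page first.

```python
import heapq
from collections import Counter

class UserDrivenFrontier:
    """Hypothetical sketch of a user-driven crawl frontier: pages that
    users visit most become the crawler's next start points."""

    def __init__(self):
        self.votes = Counter()
        self.heap = []  # entries (-votes, url): min-heap, so most votes pop first

    def observe_visit(self, url):
        self.votes[url] += 1
        heapq.heappush(self.heap, (-self.votes[url], url))

    def next_url(self):
        while self.heap:
            neg_votes, url = heapq.heappop(self.heap)
            if -neg_votes == self.votes[url]:  # skip stale heap entries
                return url
        return None

frontier = UserDrivenFrontier()
for url in ["a.com/new", "b.com", "a.com/new", "a.com/new"]:
    frontier.observe_visit(url)
print(frontier.next_url())  # a.com/new – three visits beat one
```

In a real crawler the fetched URL would be removed and re-queued only when fresh visits arrive, which yields exactly the behavior quoted above: frequently visited, frequently changing pages are re-indexed more often, while static pages cause no unnecessary traffic.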

Beyond search, of course, there is an even wider application field of prioritizing / routing / throttling / blocking / intercepting / logging traffic and users depending on the monitored URL, by both ISPs and other authorities.


All in all, this looks to me like there is some evidence of prior art.

Well, this is not the first time that somebody came across an idea which was already used some time before by FAROO. See also “BrowseRank? Welcome to the club!”. And given the level of innovation in our distributed search engine architecture, which breaks with almost every legacy paradigm, it will probably not be the last time.

That’s why we publish our ideas early, even if they sometimes inspire our competition 😉 This prevents smart solutions from being locked up by patents of big companies, which are the only ones with enough resources to patent every idea that comes to their mind.

While every web page discovered makes the web more accessible, every patent issued locks away one more piece of the ingenuity of our human species.

The limits of tweet based web search…

…and how to overcome them by utilizing the implicit web.

Many of the recent real-time search engines are based on Twitter. They use the URLs enclosed in tweets for discovery and ranking of new and popular pages.
It might be worth having a closer look at the quantitative structure of this underlying foundation, to explore the feasibility and limits of the approach.

Recently there has been an interesting visualization of Twitter stats. It essentially proves that, as with other social services, only a small fraction of the users actively contribute. This lack of representativeness may even put promising ideas like the “wisdom of crowds” into question.

But there is another fact: even those people who do contribute publish only a small fraction of the information they know.

Both factors make up the huge difference in efficiency between implicit and explicit voting. Explicit voting requires the user to actively express his interest, e.g. by tweeting a link. For implicit voting no extra user action is required – if a user visits a web page, this is already counted as a vote.

A short calculation:

Twitter now has 44.5 million users and produces about 20,000 tweets per minute. If every second tweet contains a URL, this would be 10,000 URLs per minute.

According to Nielsen, the number of visited web pages per person per month is 1,591.

So the 44.5 million users visit about 1.6 million web pages per minute, while explicitly voting for only 10,000 per minute.

Implicit voting and discovery thus provide 160 times more attention data than explicit voting.

This means that 280,000 users with implicit voting could provide the same amount of information as 44.5 million users with explicit voting. Or that implicit discovery finds as many web pages in one day as explicit discovery does in half a year.
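The calculation can be reproduced directly (all figures taken from the paragraphs above; the 30-day month is an assumption):

```python
# All figures from the post; the 30-day month is an assumption.
users = 44_500_000
urls_per_minute = 20_000 // 2                 # every second tweet carries a URL
pages_per_person_per_month = 1_591            # Nielsen
minutes_per_month = 30 * 24 * 60              # 43,200

implicit_per_minute = users * pages_per_person_per_month / minutes_per_month
ratio = implicit_per_minute / urls_per_minute
equivalent_users = users / ratio

print(round(implicit_per_minute))  # ~1.6 million implicit votes per minute
print(round(ratio))                # ~164, the "160 times" figure
print(round(equivalent_users))     # ~272,000, the order of magnitude of the 280,000 above
```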

This drastically shows the limits of a web search that is based solely on explicit votes and mentions, and what potential can be leveraged by using the implicit web.

Beyond the mainstream
This becomes even more important if we look beyond mainstream topics or the English language.
Then it is simply impossible to achieve the critical mass of explicit votes needed for a statistically significant attention-based ranking or popularity-based discovery.

Time and Votes are precious
Time is also a crucial factor, especially for real time search.
We want to discover a new page as soon as possible. And we want to assess almost instantly how popular this new page becomes.
If we fail to produce a reliable ranking within a short time, the page will be buried in the steady stream of insignificant noise.
But both goals conflict with the fact that the number of votes is proportional to the observation time. For new pages the small number of explicit votes is not sufficiently representative to provide a reliable ranking.

Again the much higher frequency of implicit votes helps us.

Relevance vs. Equality
But we can also improve on explicit votes. We just shouldn’t treat them as equal – because they are not.
Some of them we trust more than others, and with some voters we share more common interests than with others. This is the very same reason why we follow some people and not others.
This helps us to get more value and meaning out of the very first vote.
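Such non-equal votes could be sketched, for example, as follows. The trust and similarity weights are illustrative assumptions, not FAROO's actual PeerRank: each vote counts not as 1, but proportionally to how much we trust the voter and how much interest we share with them.

```python
def weighted_vote_score(voters, trust, similarity):
    """Hypothetical sketch: weight each explicit vote by the voter's
    trust and by the interest we share with them. Both maps hold values
    in [0, 1]; unknown voters get a small default weight."""
    return sum(trust.get(v, 0.1) * similarity.get(v, 0.1) for v in voters)

trust = {"alice": 0.9, "bob": 0.2}
similarity = {"alice": 0.8, "bob": 0.1}

# One vote from a trusted, like-minded user outweighs several others:
print(round(weighted_vote_score(["alice"], trust, similarity), 2))  # 0.72
print(round(weighted_vote_score(["bob", "carol", "dave"], trust, similarity), 2))
```

This is how a single well-placed vote can already carry meaning, instead of waiting for a statistically significant number of equal votes.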

FAROO is going into this direction by combining Real-time Search with a Peer-to-peer infrastructure.

A holistic approach
The discovery of topical, fresh and novel information has always been an important aspect of search. But the perception of what “recent” means has changed dramatically with the popularity of services like Twitter, and has led to real-time search engines.

Real-time search shouldn’t be separate, but part of a unified and distributed web search approach.

The era of pure document-centered search is over. The equally important role of users and conversation, both as a target of search and as a contribution to discovery and ranking, should be reflected in an adequate infrastructure.

A distributed infrastructure
As long as both the sources and the recipients of information are distributed, the natural design for search is distributed. P2P provides an efficient alternative to the ubiquitous concentration and centralization in search.

A peer-to-peer client allows the implicit discovery and attention ranking of every visited web page. This is important, as even in real-time search the majority of pages belong to the long tail. They appear once or not at all in the Twitter stream, and can’t be discovered and ranked through explicit voting.

In real-time search the amount of index data is limited, because only recent documents with high attention and reputation need to be indexed. This allows a centralized infrastructure at moderate cost. But as soon as search moves beyond the short head of real-time search and aims to fully index the long tail of the whole web, a distributed peer-to-peer architecture provides a huge cost advantage.

There is an interesting reaction from the TRENDPRENEUR blog, which further explores the topic: Link voting: real-time respect

FAROO – Real-time Social Discovery & Search

The discovery of topical, fresh and novel information has always been an important aspect of search. Often recent events in sports, culture and economics are triggering the demand for more information.

But the perception of what “recent” means has changed dramatically with the popularity of services like Twitter.
Once an index was considered up to date if pages were re-indexed once a week; under the term “real-time search”, documents are now expected to appear in search results within minutes of their creation.

There are two main challenges:

  • First, the discovery of relevant, changed documents, as a brute-force approach of re-indexing the whole web every minute is not feasible.
  • Second, those documents need to be ranked right away when they appear. With the dramatically increased number of participants in content creation in social networks, blogging and micro-blogging, the amount of noise has increased as well. To make real-time search feasible, it is necessary to separate the relevant documents from this increased stream of noise. Traditional ranking methods based on links fail, as new documents naturally have no history or record of incoming links. Ranking based on the absolute number of votes again penalizes new documents, which is the opposite of what we want for real-time search.
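One illustrative way to avoid this penalty for new documents – an assumption for the sake of example, not necessarily FAROO's formula – is to rank by vote rate instead of absolute vote count:

```python
def attention_rate(votes, age_hours):
    """Illustrative sketch: rank by vote rate rather than absolute vote
    count, so a new document with a burst of attention can outrank an
    old one with more total votes."""
    return votes / max(age_hours, 0.1)  # clamp very young pages to avoid division by zero

fresh = attention_rate(votes=50, age_hours=1)     # 50 votes in its first hour
old = attention_rate(votes=2000, age_hours=720)   # 2,000 votes over a month
print(fresh > old)  # True – the fresh document ranks higher
```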

The answer to both challenges is taking a crowd-sourced approach to search, where the users discover and rank new and relevant documents.

This sounds very familiar to FAROO’s P2P architecture of instant, user-driven crawling and attention-based ranking (see also). And in fact all the required genes for real-time search have been inherent parts of FAROO’s P2P architecture since long before real-time search became so ubiquitously popular.

Really utilizing the wisdom of crowds and delivering competitive results requires a large user base. But we will unleash the power of our approach right now by opening up in several ways:

  • First, with the introduction of attention connectors to other social services, we are now able to leverage a much more representative base of attention data for discovery and ranking. We do deep-link crawling for all discovered links and use the number of votes, among other parameters, for ranking.
  • And second, by providing browser-based access to our real-time search service, we remove all installation hurdles and platform barriers. Our P2P client additionally offers enhanced privacy, personalized results and continuous search.

So, apart from social discovery and attention-based ranking, how does FAROO differ from other real-time search services?

Social Noise Filter
We analyze the trust and reputation of the original source and of the recommending middleman, as well as the attention and popularity of the information among the final consumers, in order to separate the relevant documents from the constant real-time stream of noise.

Social Tagging
There is nothing as powerful as the human brain for categorizing information. We again use the collective intelligence of the users and aggregate the tags from all users and all connected services for a specific document. Of course you are able to search for tags and use them as filters in the faceted search.

Rich Visual Preview
A picture is worth a thousand words. Whenever possible, a teaser picture from the article is shown in front of the text summary, not just a thumbnail of the whole web page.
The author is displayed if available, and can be used for filtering.

Sentiment Detection
It’s not just the pure news, but also the emotions that involve us and make information stand out. FAROO detects and visualizes which kinds of sentiments have been triggered in the conversation.

RSS and ATOM result feeds
You can subscribe to the result streams, applying any combination of the faceted search filters. So you can get notified and browse through the news in your preferred web- or client-based feed reader.

Multi Language support
Real-time search services are still dominated by English content. But meanwhile the country with the most internet users is China, and due to the long tail, the vast majority of internet users use languages other than English. So language-indifferent voting, ranking and searching is certainly not appropriate. Multi-language search results come together with a localized user interface.

Faceted Search
Our faceted search enables navigating a multi-dimensional information space by combining text search with a progressive narrowing of choices in each dimension. This helps to cope with the increasing flow of information by narrowing, drilling down, refining and filtering.
Faceted search also provides a simple statistical overview of the current and recent activities in different languages, sources and topics.

Architecture and Approach
But the most significant difference is that, for us, real-time search is just one part of a much broader, unified and distributed web search approach.

We believe that the era of document-centered search is over. The equally important role of users and conversation, both as a target of search and as a contribution to discovery and ranking, should be reflected in an adequate infrastructure.

As long as both the sources and the recipients of information are distributed, the natural design for search is distributed, despite the increasing tendencies to incapacitate the collective force of users by removing the distributed origins of the internet through cloud services and cloud-based operating systems. P2P provides an efficient alternative to those concentration and centralization tendencies in search.

In the longer perspective, with an increased peer-to-peer user base, the real-time search capability of a client approach with implicit discovery and attention ranking is superior to explicit mentions, as every visited web page is covered. This is important, as even in real-time search the majority of links belong to the long tail. They appear once or not at all in the Twitter stream, and can’t be discovered and ranked by popularity through explicit voting.

In real time search the amount of index data is limited, because only recent documents with high attention and reputation need to be indexed. This allows a centralized infrastructure at moderate cost. But as soon as search moves beyond the short head of real time search and aims to fully index the long tail of the whole web, then our distributed peer-to-peer architecture provides a huge cost advantage.

Scaling & Market Entry Barrier

In web search we have three different types of scaling issues:

1. Search load grows with the number of users
P2P scales organically, as every additional user also provides additional infrastructure.

2. With the growth of the internet, more documents need to be indexed (requiring more index space)
P2P scales, as the average hard disk size of the users grows, and the number of users who might provide disk space grows as well.

3. With the growth of the internet, more documents need to be crawled in the same time
P2P scales, as the average bandwidth per user grows, and the number of users who might take part in crawling grows as well.
Additionally, P2P users help to smarten up the crawling by discovering the most relevant and recently changed documents.

For market-dominating incumbents, scaling web search is not so much of a problem.
For now they solve it simply with money, derived from a quasi-monopoly on advertising and their giant existing user base. But this brute-force approach of replicating the whole internet into one system doesn’t leave the internet unchanged. It bears the danger that one day the original is replaced by its copy.

But for small companies the huge infrastructure costs pose an effective market entry barrier. Unlike other services, where the infrastructure requirements are proportional to the number of users, web search requires indexing the whole internet from the first user on, in order to provide competitive search results.
This is where P2P comes in, effectively reducing the infrastructure costs and lowering the market entry barrier.

Try our beta at or see the screencast:

BrowseRank? Welcome to the club!

Microsoft Research just published a paper “BrowseRank: Letting Web Users Vote for Page Importance” at the SIGIR (Special Interest Group on Information Retrieval) conference this week in Singapore.

This paper describes a method for computing page importance, referred to as BrowseRank.

FAROO has been doing something very similar with its attention based PeerRank for some time already.

FAROO’s “If users spend a long time on a page, visit it often, put it to bookmarks or prints it out, this page goes up in ranking.”
sounds very familiar to
Microsoft’s “The more visits of the page made by the users and the longer time periods spent by the users on the page, the more likely the page is important.”, doesn’t it?
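As a hypothetical sketch of how the behavior signals named in both quotes (visits, time spent on a page, bookmarking, printing) could be combined into a single attention score – with weights and caps that are purely illustrative assumptions, neither FAROO's PeerRank nor Microsoft's BrowseRank:

```python
def attention_score(dwell_seconds, visits, bookmarked, printed):
    """Hypothetical combination of implicit attention signals into one
    page score. Weights and caps are illustrative assumptions."""
    score = min(dwell_seconds / 60.0, 10.0)  # dwell time in minutes, capped
    score += 2.0 * visits                    # repeated visits signal lasting interest
    if bookmarked:
        score += 5.0                         # deliberate actions weigh heavily
    if printed:
        score += 5.0
    return score

# A page read for five minutes, revisited and bookmarked, vs. a quick bounce:
print(attention_score(300, 3, True, False))  # 16.0
print(attention_score(5, 1, False, False))
```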

Also the term “implicit voting” used in the paper caused a kind of déjà vu: “we are voting automatically on the fly, implicit without manual action.” from our blog post Attention economy, the implicit web and myware.

A very significant difference, though, is that FAROO maintains the privacy of the user because it calculates the PeerRank in a decentralized manner, while Microsoft would collect all click streams of all users on a central server.

It’s great to see that Microsoft’s research paper also confirms that attention-based ranking is able to outperform PageRank, both in relevancy and in spam suppression.

This is certainly an excellent technical paper, but from a scientific publication I would expect previously existing applications of user behavior data for ranking search results to be mentioned in the chapter ‘Related Work’.

Echo in the blogosphere

For a P2P model it is essential to share a common vision with your users. Therefore it’s always interesting to see how your ideas are discussed and perceived.

A very encouraging and profound example is the ReadWriteWeb blog post “Could P2P Search Change the Game?” by Bernard Lunn.

Additionally here is a short roundup of selected previous blog posts:

Attention economy, the implicit web and myware

The term “attention economy” describes a current trend: the increasing “wealth of information creates a poverty of attention”. Separating important information from unimportant noise becomes more and more crucial.

We are leveraging the experience of other users, who sacrificed their attention before, and voted implicitly on the content they visited. This saves our time; we can focus on consuming already preselected, relevant information instead of searching for the needle in the haystack again. FAROO uses this wisdom of crowds for its user generated, user centric, attention based ranking.

To find the most relevant information possible, we have to rate the whole web. To ensure an objective ranking, each document has to be rated by many people. But the extra time required for manual voting would prevent the majority of visitors from voting on every document they visit, or from voting at all. Only an automatic, implicit rating ensures that each visitor votes for each document he visits.
This is what the implicit web is about. By analyzing our behavior and using the traces left during our journey through the web, we are voting automatically on the fly, implicit without manual action. An interesting blog post by Alex Iskold of Read/WriteWeb illustrates this further.

While this seems a useful thing, it raises privacy concerns. We feel and fear that our privacy is once more fading away. But then, myware reconciles personalization and privacy. Myware tracks our behavior, but does not reveal it to any third party, using it solely to benefit the user.

This perfectly describes FAROO’s approach of using all this implicit information to cope with the information overflow and to improve the search experience for the user, without sacrificing privacy. As the information is not leaving the computer, there is no risk that this data could be sold, handed over or leaked from a central repository.

FAROO utilizes the implicit web to direct the crawler to the places users are interested in, to select, rank and personalize results according to the attention users paid to the content they visited, and to implement behaviorally targeted advertising based on present and past behavior.