No Limits – FAROO iPad / iPhone App v2.0

After 6 months and some incremental updates, we just finished our next major release, with many improvements and new features you will love.

Well, this is not merely an update; it is a complete rewrite. Thanks to a new, flexible architecture, all of the previous limitations are gone.

While even some of the better news readers for iOS handle only a dozen sources per page, we decided to remove such limitations once and for all, both for iPhone and iPad.

The FAROO App now supports an unlimited number of feeds. The animation stays smooth as ever.

But that’s not all. Here is the complete list of improvements:

  1. New architecture allows an unlimited number of streams and items per stream.
  2. New Google Reader Synchronization.
  3. New Archive function which allows you to store articles for later reading.
  4. New RSS Feed search assistant to search for new feeds by person, topic or blog.
  5. New gorgeous animated front page.
  6. New sleek animated transition between stream view and text view.
  7. New central menu for the new functions.
  8. New help function.

As always, this is a Universal App, running on both the iPhone and the iPad. And it is FREE. Get it from the App Store right now.

New FAROO iPad / iPhone App

Today we are launching our new iOS App, a fresh and visual take on Search & News.

This is a Universal App, running on both the iPhone and the iPad. And it is FREE. Get it from the App Store right now.

What’s unique

  1. First Web Search App written natively for the iPad.
  2. Combines active Search and passive News discovery (News reader).
  3. Integrated Text Preview eliminates the web page loading time, provides a uniform design across pages and serves as offline reader.
  4. Visual Search Result and Search History streams.
  5. Turns any query into a continuous Search Feed.

FAROO is the first Web Search App designed for the iPad platform.

The form factor and touch UI, the visual search stream representation, and the combination of Search and News are quite fundamentally changing the search experience.

Try it out and let us know how you like it.

FAROO Search Grid reaches 1.7 million Peers

Our network just reached 1.7 million peers, globally distributed across the continents. This is probably the largest p2p web search grid in the world. This landmark also impressively demonstrates the scalability and maturity of FAROO’s technology.

The number of connected computers surpasses even the number of computers in the data centers of the biggest search engines.
FAROO Search Grid
Despite the huge number of peers, their global distribution, and the highly dynamic nature of the network, where peers are constantly joining and departing, we are reaching a latency below a second. The searches are fully distributed, without the support of any centralized entity, or any single FAROO-owned server or peer involved.

Imagine there are a million volunteers, just waiting to assist you and answer your questions. They live all over the planet, but you know only a few of them. And now suddenly you have a question, a very urgent one. It has to be answered within one second. One of your volunteers has the answer, but you don’t know who.

You can’t call them one by one; you don’t even know their numbers. Nor can you visit them with your private jet, as you don’t know their addresses. But you can use FAROO, which does all the magic for you.

FAROO’s search grid is now powered by a combined 1,000,000,000,000,000,000,000 processor cycles per month. A giant number, right? That’s similar to the number of grains of sand on all the beaches of the planet Earth.
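As a rough back-of-envelope check (the peer count and cycle total are the figures quoted above; the per-peer breakdown is our own illustrative arithmetic), here is what that budget means per peer:

```python
# Back-of-envelope: what 10^21 cycles/month means per peer.
# Uses the 1.7 million peers and 10^21 cycles/month quoted above.
TOTAL_CYCLES_PER_MONTH = 1e21
PEERS = 1.7e6
SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2.6 million seconds

cycles_per_peer_per_second = TOTAL_CYCLES_PER_MONTH / PEERS / SECONDS_PER_MONTH
print(f"{cycles_per_peer_per_second:.2e} cycles/s per peer")
# ~2.3e8 cycles/s, i.e. roughly a 230 MHz share of each peer's CPU on average
```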

FAROO 3.0 with Next Generation Protocol

It has been a little quiet on the blog recently, but for a very good reason. All hands were needed on deck to push forward our most important project so far. Over the last months we did a full redesign of our p2p protocol and storage layer, all aimed at higher speed and better efficiency.

Earlier this week, when we finished the implementation and rolled out the update, it was a great moment to see everything working smoothly on the new protocol.

All our performance predictions have come true, and the search latency is truly unique for such a massively distributed and highly dynamic p2p search grid.
The search speed increased tenfold. The storage efficiency doubled, and the storage speed also increased tenfold. The message size and bandwidth requirements were cut in half.

Now we are reaching a latency below a second, even if the search results are provided from the other side of the planet, from peers that have been selected in real time out of a million highly dynamic peers, fully distributed, without the support of any centralized entity.

That’s why we decided to kick into the next gear and give instant search a try. Now search results appear instantly while you are still typing your query.

To benefit from all the improvements you will need to upgrade to the new version. We are still putting on the final touches, but the new FAROO 3.0 release is imminent.

Six degrees of distribution in search

Crisis reveals character, and this is especially true for distributed systems. Everything beyond the standard case may lead to a crisis if not considered beforehand.


The network-wide scale adds a new dimension to everything, completely changing the perspective and putting many centralized approaches into question.

Joining peers, updates, and recovery look different from a bird’s eye view than they do from the ground:

  • When the network size grows, the bootstrapping algorithm needs to scale with it.
  • Even if the whole system fails and all peers want to reconnect at the same time, the system should be able to recover gracefully.
  • Every system needs to evolve over time, hence software distribution is required to work at large scale, perhaps frequently or immediately.

That’s why it is important to look at the scaling of all operational aspects, not only at the main search functionality. The weakest element defines the overall scalability and reliability of a system.

The benefits of a distributed architecture (such as low cost, high availability, and autonomy) can only be fully realized if the operational side is also fully distributed.

There should be no centralized element anywhere that can fail, be attacked, or blocked as a single point of failure, or that simply does not scale. Not for crawling, not for indexing & search, not for ranking and discovery, not for bootstrap, and not for update.

Let’s have a closer look at those six degrees of distribution:

Distributed Crawling

Sometimes only the crawler is distributed, while the index and the search engine are still centralized. An example is the Grub distributed crawler, once used by Wikia Search of Wikipedia Founder Jimmy Wales.

A distributed crawler by itself provides only limited benefit. Transferring the crawled pages back to a central server doesn’t save much bandwidth compared to what the server would need to download the pages itself. Additionally, there is overhead for reliably distributing the workload across the unreliable crawler elements.

The benefits of such a hybrid approach lie rather in applications beyond a search engine: only selected information is transferred back (like scraped email addresses), and the spider is harder for the webmaster to detect and block, as the load comes from different IPs.

Distributed crawling will live up to its promises only as part of a fully distributed search engine architecture, where the crawlers are not controlled by a single instance, but crawl autonomously, led solely by the wisdom of the crowd of its users. Huge network-wide effects can be achieved by utilizing the geographic or contextual proximity between distributed index and crawler parts.

With FAROO’s user-powered crawling, pages which change often (e.g. news) are also re-indexed more often. So the FAROO users implicitly control the distributed crawler in such a way that frequently changing pages are kept fresh in the distributed index, while unnecessary traffic on rather static pages is avoided.
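As an illustration only (this is our own minimal sketch, not FAROO’s actual scheduler; the interval formula, visit counts and change rates are assumptions), a crawler can derive a re-crawl interval from how often users visit a page and how often its content was observed to change:

```python
import heapq
import time

# Illustrative sketch of a visit-driven re-crawl queue (not FAROO's code).
# Pages that users visit often and that change often get short re-crawl intervals.
class RecrawlScheduler:
    def __init__(self):
        self.heap = []  # (next_crawl_time, url)

    def schedule(self, url, visits_per_day, change_rate):
        # change_rate: fraction of past crawls where the content hash differed (0..1)
        base_interval = 24 * 3600  # default: once a day
        interval = base_interval / (1 + visits_per_day * change_rate)
        heapq.heappush(self.heap, (time.time() + interval, url))

    def due(self):
        now = time.time()
        while self.heap and self.heap[0][0] <= now:
            yield heapq.heappop(self.heap)[1]

scheduler = RecrawlScheduler()
scheduler.schedule("http://example.com/news", visits_per_day=500, change_rate=0.9)  # re-crawled often
scheduler.schedule("http://example.com/about", visits_per_day=2, change_rate=0.01)  # rarely re-crawled
```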

Distributed Discovery

Even for the big incumbents in the search engine market, it is impossible to crawl the whole web (100 billion pages?) within minutes in order to discover new content in a timely manner (a billion new pages per day). Only if the crawler is selectively directed to the newly created pages does web-scale real-time search become feasible and efficient, instead of looking for the needle in the haystack.

By aggregating and analyzing all web pages visited by our users for discovery, we utilize the “wisdom of crowds”. Our users are our scouts. They bring in their collective intelligence and turn the crawler to where new pages emerge. In addition to instantly indexing all visited web pages, our active, community-directed crawler also derives its crawler start points from the discovered pages.
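A minimal sketch of the idea (purely illustrative; the function and the `index_contains` / `enqueue_for_crawl` placeholders are our own, not FAROO APIs): URLs visited by users that are not yet in the index become crawler seeds immediately.

```python
# Illustrative sketch: turn the stream of user-visited URLs into crawler seeds.
# `index_contains` and `enqueue_for_crawl` are assumed placeholders, not FAROO APIs.
seen_urls = set()

def on_page_visited(url, index_contains, enqueue_for_crawl):
    """Called whenever a user's client observes a visited page."""
    if url in seen_urls:
        return
    seen_urls.add(url)
    if not index_contains(url):
        # New content discovered by a user: crawl (and index) it right away,
        # and use it as a start point to reach the pages it links to.
        enqueue_for_crawl(url, priority="realtime")
```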

Beyond real-time search, this is also important for discovering and crawling blind spots in the web. Those blind spots are formed by web pages which are not connected to the rest of the web; thus they can’t be found just by traversing links.

Distributed discovery also helps with indexing the deep web (sometimes also referred to as the hidden web). It consists of web pages that are created solely on demand from a database, when a user searches for a specific product or service. Because there are no incoming links from the web, those pages can’t be discovered and crawled by normal search engines, although these have started to work on alternative ways to index the hidden web, which is much bigger than the visible web.

Distributed Index & Search

Storing web-scale information is not so much of a problem. What is expensive are the huge data centers required for answering millions of queries in parallel. The resulting costs of billions of dollars can be avoided only with a fully decentralized search engine like FAROO.

Incumbents already envision 10 million servers. A distributed index scales naturally, as more users also provide the additional infrastructure required for their queries. It also benefits from the growth of hardware resources, doubling every two years according to Moore’s Law.

Recycling unused computer resources is also much more sustainable than building new giant data centers, which consume more energy than a whole city.

The indexes of all big search engines are distributed across hundreds of thousands of computers within huge data centers. But by distributing the search index to the edge of the network, where both user and content already reside, the data no longer have to travel back and forth to a central search instance, which is consequently eliminated. This not only prevents a single point of failure, but also combines index distribution across multiple computers with the geographic proximity normally achieved by spreading multiple data centers across the globe.
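To make the “index at the edge” idea concrete, here is a minimal sketch (our own illustration with assumed peer names and replica count, not FAROO’s actual placement scheme) of how a keyword can be mapped to the peers responsible for its posting list using consistent hashing:

```python
import bisect
import hashlib

# Illustrative DHT-style placement: every peer owns a segment of the hash ring,
# and a keyword's posting list lives on the peers whose IDs follow its hash.
def h(value: str) -> int:
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, peer_ids):
        self.ring = sorted(h(p) for p in peer_ids)
        self.by_hash = {h(p): p for p in peer_ids}

    def peers_for(self, keyword, replicas=3):
        """Return the peers responsible for storing/serving this keyword."""
        start = bisect.bisect(self.ring, h(keyword)) % len(self.ring)
        return [self.by_hash[self.ring[(start + i) % len(self.ring)]]
                for i in range(min(replicas, len(self.ring)))]

ring = HashRing([f"peer-{i}" for i in range(1000)])
print(ring.peers_for("distributed search"))  # the peers a query for this term is routed to
```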

Last but not least, a distributed index is the only architecture where privacy is inherent to the system, as opposed to the policy-based approaches of centralized search engines, where the privacy policy might be subject to change.

Zooming in from the macroscopic view, every distributed layer again has its own challenges. For the index, for example, peers usually do not behave as they should: they are overloaded, there is user activity, the resource quota is exhausted, they are behind a NAT, their dynamic IP has changed, or they have simply quit.

Those challenges have been perfectly summarized in “The Eight Fallacies of Distributed Computing”. Yet going into all the details and our solutions would certainly go beyond the scope of this post.

Distributed Ranking

An additional benefit is a distributed, attention-based ranking, utilizing the wisdom of crowds. Monitoring the browsing habits of the users and aggregating those “implicit” votes across the whole web promises a more democratic and timely ranking (important for real-time search).

While most real-time search engines use explicit voting, we showed in our blog post “The limits of tweet based web search” that implicit voting, by analyzing visited web pages, is much more effective (by two orders of magnitude!).

This also eliminates the shortcomings of a Wikipedia-like approach, where content is contributed in a highly distributed way but the audit is still centralized. Implicit voting automatically involves everybody in a truly democratic ranking. The groups of adjudicators and users become identical, and therefore pull together for optimum results.
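As a simplified illustration of implicit voting (our own sketch; the half-life constant and the scoring formula are assumptions, not FAROO’s actual ranking), every observed page visit can be counted as a time-decayed vote:

```python
import time
from collections import defaultdict

# Illustrative attention-based ranking: each page visit is an implicit,
# time-decayed vote. Recent attention counts more (useful for real-time search).
HALF_LIFE = 6 * 3600  # assumed: a vote loses half its weight after 6 hours

votes = defaultdict(list)  # url -> list of visit timestamps

def record_visit(url, ts=None):
    votes[url].append(ts if ts is not None else time.time())

def attention_score(url, now=None):
    now = now if now is not None else time.time()
    return sum(0.5 ** ((now - ts) / HALF_LIFE) for ts in votes[url])

record_visit("http://example.com/breaking-news")
record_visit("http://example.com/breaking-news")
record_visit("http://example.com/old-article", ts=time.time() - 48 * 3600)
ranked = sorted(votes, key=attention_score, reverse=True)  # fresh, popular pages first
```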

Distributed Bootstrap

The first time a new peer wants to connect to the p2p network, it has to contact known peers (super peers, root peers, bootstrap peers, rendezvous peers) to learn the addresses of other peers. This is called the bootstrap process.

The addresses of the known peers are either shipped in a list together with the client software or they are loaded dynamically from web caches.

The new peers then store the addresses of the peers they learned about from the super peer. The next time, they can connect directly to those addresses, without contacting the super peer first.

But if a peer has been offline for some time, most of the addresses it stored become invalid, because they are dynamic IP addresses. If the peer fails to connect to the p2p network using the stored addresses, it starts the bootstrap process using the super peers again.
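In pseudocode-like Python, this conventional joining flow looks roughly as follows (an illustrative sketch of the scheme described above, not FAROO’s implementation; `try_connect` and `request_addresses` are placeholder callbacks):

```python
import random

# Illustrative sketch of the conventional bootstrap flow described above;
# `try_connect` and `request_addresses` are placeholder callbacks, not FAROO APIs.
def join_network(cached_addresses, super_peers, try_connect, request_addresses, want=8):
    """Try cached peer addresses first; fall back to super peers only if needed."""
    connected = []
    for addr in random.sample(cached_addresses, min(len(cached_addresses), 50)):
        if try_connect(addr):
            connected.append(addr)
        if len(connected) >= want:
            return connected  # joined without touching any super peer

    # Cached addresses were stale (dynamic IPs), so fall back to the super peers.
    # This fallback is exactly the bottleneck discussed in the Scaling, Recovery
    # and Security paragraphs below.
    for sp in super_peers:
        for addr in request_addresses(sp):   # ask a super peer for fresh addresses
            if try_connect(addr):
                connected.append(addr)
            if len(connected) >= want:
                return connected
    return connected
```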

Scaling
During strong network growth, many peers access the super peers in order to connect to the p2p network. The super peers then become the bottleneck and a single point of failure in an otherwise fully decentralized system. If the super peers become overloaded, no new peers can join the system, which prevents further network growth.

Recovery
If the whole p2p network breaks down due to a web-wide incident and all peers try to reconnect at the same time, this leads to extreme load on the super peers.
This would prevent a fast recovery, as peers would fail to connect but keep trying, causing additional load.
Those problems have been experienced in practice in the Skype network.

Security
Another issue is that the super peers make the whole p2p network vulnerable because of their centralized nature. Both blocking and observing the whole p2p network become possible just by blocking or observing the few super peer nodes.

FAROO uses a fully distributed bootstrap algorithm (sketched after the list below), which

  • eliminates the super peers as the last centralized element, bottleneck, and single point of failure in an otherwise distributed system.
  • provides organic scaling for the bootstrap procedure as well.
  • ensures a fast recovery in case of a system wide incident.
  • makes the p2p network immune to the blocking or monitoring of super peers.
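How can bootstrap work without any super peers at all? The details of FAROO’s algorithm are not published here, so the following is only a generic, loudly hedged illustration of the general direction (peer exchange: any already connected peer can act as an entry point and share a slice of its routing table), not FAROO’s actual algorithm:

```python
import random

# Illustrative peer-exchange bootstrap: every peer can answer address requests,
# so there is no dedicated super peer to overload, block, or observe.
# This is a generic sketch, not FAROO's actual algorithm.
class Peer:
    def __init__(self, address):
        self.address = address
        self.known = set()  # addresses of other peers

    def handle_address_request(self, k=16):
        """Any peer can serve as an entry point and share part of its view."""
        return random.sample(sorted(self.known), min(k, len(self.known)))

    def bootstrap(self, cached_addresses, lookup_peer):
        """Join by asking whichever cached peers are still reachable."""
        for addr in cached_addresses:
            peer = lookup_peer(addr)          # placeholder for a network call
            if peer is not None:
                self.known.update(peer.handle_address_request())
                peer.known.add(self.address)  # the contacted peer learns about us too
            if len(self.known) >= 32:
                break
        return self.known
```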

Distributed Update

The distributed system automatically becomes smarter just through the increasing relevance of the collected attention data.
But you may want to refine the underlying algorithms, improve the efficiency of the p2p overlay, extend the data model, or add new functions. And the example of Windows shows that it might be necessary to apply security patches network-wide, frequently, and immediately. Updating p2p clients therefore requires a very efficient software update distribution.

10 million peers and a 5 MByte client would require distributing 50 terabytes for a full network update. Even with a 100 Mbit/s network connection, a central update would take about 50 days, even if you managed to spread the updates evenly over time.
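The figure checks out as a back-of-envelope calculation (our own arithmetic, using the numbers above):

```python
# Back-of-envelope check of the central-update figure quoted above.
peers = 10_000_000
client_size_bytes = 5 * 10**6            # 5 MByte per client
total_bytes = peers * client_size_bytes  # 5e13 bytes = 50 TB

link_bits_per_second = 100 * 10**6       # 100 Mbit/s central uplink
seconds = total_bytes * 8 / link_bits_per_second
print(f"{total_bytes / 1e12:.0f} TB, {seconds / 86400:.0f} days")  # ~50 TB, ~46 days
```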

FAROO instead uses a distributed, cell-division-like update, where all peers pass the DNA of a new version on to each other within minutes. Of course, signatures are used to ensure the integrity of the network.
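Why this reaches the whole network within minutes: with epidemic (gossip-style) propagation, coverage grows exponentially per round. A small illustrative calculation (the fan-out and round time are our assumptions, not FAROO’s actual parameters):

```python
import math

# Illustrative: epidemic update propagation, where every updated peer passes
# the (signed) new version on to a handful of neighbors per round.
peers = 10_000_000
fanout = 4          # assumed: each peer forwards the update to 4 others per round
round_seconds = 30  # assumed: one forwarding round takes ~30 seconds

rounds = math.ceil(math.log(peers) / math.log(fanout + 1))
print(f"~{rounds} rounds, ~{rounds * round_seconds / 60:.0f} minutes to reach all peers")
# With these assumptions: ~11 rounds, i.e. on the order of minutes, not days.
```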

Divide and Conquer

By consistently distributing every function we ensured true scalability of the whole system, while eliminating every single point of failure.
Our peers are not outposts of a centralized system, but rather part of a distributed cyborg (combining the power of users, algorithms & resources) living in the net.

This is a system which works on a quiet sunny day, but also on a stormy one. It would even be suitable for extreme mobile scenarios, where peers are scattered across a battlefield or carried by a rescue team.

The system recovers autonomously from a disaster: even if there is no working central instance left, the surviving peers find each other, forming a powerful working distributed system again once they awake. If you have seen the Terminator reassembling after being run over by a truck, you get the idea 😉

In biology, organisms naturally deal with the rise and fall of their cells, which as simple elements form superior systems.
We believe that evolution works in search too, and that the future belongs to multicellular organisms 😉

Revisited: Deriving crawler start points from visited pages by monitoring HTTP traffic

User Driven Crawling

Yesterday Charles Knight of AltSearchEngines pointed me to an interesting article at BNET, “Cisco Files to Patent to Enter the Search Engine Business”.

The title of the mentioned patent application No. 20090313241 is “Seeding search engine crawlers using intercepted network traffic”.

That caught my eye, as it describes pretty much the same idea that FAROO has already been using for some years.

In our blog post “New active, community directed crawler” we already outlined two years ago how our “Crawler start points are derived from visited pages”.

We have also been using HTTP monitoring to detect the URLs of visited web pages, by intercepting the TCP network traffic using raw sockets, since the initial FAROO release in 2005.
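For illustration, the core of such monitoring is reconstructing the visited URL from the request line and Host header of an intercepted HTTP request (a minimal parsing sketch of our own; the raw-socket capture itself is platform-specific, requires elevated privileges, and is omitted here):

```python
# Minimal sketch: extract the visited URL from an intercepted HTTP request payload.
# `payload` stands for the TCP payload of an outgoing request captured elsewhere.
def url_from_http_request(payload: bytes):
    try:
        head = payload.split(b"\r\n\r\n", 1)[0].decode("latin-1")
    except Exception:
        return None
    lines = head.split("\r\n")
    method, _, rest = lines[0].partition(" ")
    if method not in ("GET", "POST"):
        return None
    path = rest.rsplit(" ", 1)[0]  # e.g. "/blog/post.html" from "GET /path HTTP/1.1"
    host = next((l.split(":", 1)[1].strip() for l in lines[1:]
                 if l.lower().startswith("host:")), None)
    return f"http://{host}{path}" if host else None

payload = b"GET /blog/post.html HTTP/1.1\r\nHost: example.com\r\nUser-Agent: x\r\n\r\n"
print(url_from_http_request(payload))  # http://example.com/blog/post.html
```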

Instant crawling of all visited web pages and the links they contain has been part of FAROO since the same time.

In 2007 this was even the subject of research in the diploma thesis “Analysis of the growth of a decentralized Peer-to-Peer search engine index” by Britta Jerichow at Cologne University of Applied Sciences. Although both the crawler and the index architecture have since been improved substantially, the paper already validated, both theoretically and experimentally, the principal feasibility of our approach.

Already in a publication from 2001 (in German) I outlined the idea of a distributed peer-to-peer search engine, in which the users, as the source of the growth of web content, also ensure its findability, including fully automated content ranking by the users.

Application Fields:

Deriving crawler start points from visited pages is not only important for discovering and crawling blind spots in the web. Those blind spots are formed by web pages which are not connected to the rest of the web; thus they can’t be found just by traversing links.

But there are four much more important application fields for user driven crawling:

  • First is real-time search. Even for the big incumbents in the search engine market, it is impossible to crawl the whole web (100 billion pages?) within minutes in order to discover new content in a timely manner (a billion new pages per day). Only if the crawler is selectively directed to the newly created pages does web-scale real-time search become feasible and efficient, instead of looking for the needle in the haystack.

    By aggregating and analyzing all web pages visited by our users for discovery and implicit voting, we utilize the “wisdom of crowds”.
    Our users are our scouts. They bring in their collective intelligence and turn the crawler to where new pages emerge.

    We published this back in 2007 at the AltSearchEngines Great Debate: Peer-to-Peer (P2P) Search: “FAROO also uses user powered crawling. Pages which are changing often like, for example, news, are visited frequently by users. And with FAROO they are therefore also re-indexed more often. So the FAROO users implicitly control the distributed crawler in a way that frequently changing pages are kept fresh in the distributed index, while preventing unnecessary traffic on rather static pages.”

  • The second is attention-based ranking, used by FAROO since 2005. Meanwhile, many Twitter-based real-time search engines also rank their results according to the number of votes for or mentions of a URL.
    It proved to be an efficient ranking method for real-time search, superior to link analysis, as there are no incoming links yet when the content is created.
    While most real-time search engines use explicit voting, we showed in our blog post “The limits of tweet based web search” that implicit voting by analyzing visited web pages is much more effective.
  • Third is indexing the deep web (sometimes also referred to as the hidden web). It consists of web pages that are created solely on demand from a database, when a user searches for a specific product or service. Because there are no incoming links from the web, those pages can’t be discovered and crawled by normal search engines, although these have started to work on alternative ways to index the hidden web, which is much bigger than the visible web.
  • Fourth is personalization and behaviourally targeted online advertising, based on click streams identified from network traffic. This technique got some buzz when it was tested in the UK by Phorm.

Beyond search, of course, there is an even wider application field of prioritizing / routing / throttling / blocking / intercepting / logging traffic and users depending on the monitored URL, both by ISPs and by other authorities.

Conclusion:

After all, this looks to me as if there were some evidence of prior art.

Well, this is not the first time that somebody has come across an idea which was already used some time before by FAROO. See also “BrowseRank? Welcome to the club!”. And given the level of innovation in our distributed search engine architecture, which breaks with almost every legacy paradigm, this will probably not be the last time.

That’s why we publish our ideas early, even if they sometimes inspire our competition 😉 This prevents smart solutions from being locked up by the patents of big companies, which are the only ones with enough resources to patent every idea that comes to their mind.

As opposed to every web page detected, which makes the web more accessible, every patent issued locks away one more piece of the ingenuity of our human species.