Recent study proves: FAROO safest web search engine


A recent study proves that FAROO delivers the cleanest results, ahead of Google, Bing, Yandex and Twitter by two orders of magnitude.

According to the results published in the study, 1 in every 7,652 results from Google is infected, but only 1 in every 114,500 results from FAROO is suspicious.
That means FAROO is 73 times safer than Google. And FAROO shows consistency, repeating its good results from the previous test in 2013.

[Chart: overview of infection rates for all tested search engines]
Please note that the graphs are in logarithmic scale!

The complete study can be found here: Analysis of 160 Million Websites: Are Google and Other Search Engines Platforms for Distributing Malware?

The Web of Trust

I want to share some insights into why FAROO does so much better in this test:

Most search engines scan web pages for malware during the crawling & indexing process and filter out infected links. That helps to reduce the threat, but it has two serious drawbacks:

  • The malware scanners are unable to detect new malware before it has been spotted by the vendor, analyzed, and the scanner updated.
  • There is a security gap between two consecutive scans of a page. Search engines re-index and re-scan pages that are already in the index and were declared safe only at certain intervals. For less popular pages that interval might be several months, and during this time the threat remains undetected and the infected results are served to the user.

FAROO does not use this fragile approach of scanning web pages to detect malware. Instead, FAROO relies on a concept which can be described as a “web of trust”.
The web basically consists of web pages (content) connected by links (trust). For the ranking of search results we use both the content of web pages and the links between them. But to determine the trust or authority of web pages we rely solely on links.

This comprises three basic concepts:

  • Trustworthy sources: The links have to come from trustworthy sources, within a limited distance along the chain of trust.
  • Multiple independent sources: A piece of information is deemed trustworthy if it is referred (linked) to by multiple independent sources.
  • “Time proven” links: The links have to stay online for a certain amount of time to be considered reliable (e.g. links on Wikipedia pages are removed after a short time if they are found to be irrelevant or malicious).

Trust, Relevance & Completeness

Our approach does not only improve security but also relevance.
Most search engines feed their index with everything they can find and afterwards try to find the needle in the haystack among all the irrelevant content by ranking.
The careful selection of content from reliable, trusted and authoritative sources, and of content referred to and recommended by them, ensures that almost only relevant content gets into the index in the first place.

One might think that this kind of focused crawling could lead to limited content. But that is not the case. The fact that Faroo’s index is not as comprehensive as Google’s is caused solely by its comparatively limited resources.
The “web of trust” concept does not compromise the richness and comprehensiveness of results, not even for long tail queries in expert domains.
This is because all web pages are allowed once the website’s domain has been approved by two reliable and trusted sources and has received a reference from them.


Elias-Fano: quasi-succinct compression of sorted integers in C#


Introduction

This blog post explores Elias-Fano encoding, which enables very efficient compression of sorted lists of integers, in the context of Information Retrieval (IR).

Elias-Fano encoding is quasi-succinct, which means it is almost as good as the best theoretically possible compression scheme for sorted integers. While it can be used to compress any sorted list of integers, we will use it for compressing the posting lists of inverted indexes.

While gap compression has been around for over 30 years, and some of the foundations of Elias-Fano encoding even date back to a 1972 publication by Peter Elias, Elias-Fano encoding itself was published in 2012. Being a rather recent development, there is not much actual implementation code available beyond the papers. That’s why I want to contribute my implementation of this beautiful and efficient algorithm as Open Source.

Elias-Fano compression uses delta coding, also known as gap compression. This is an invertible transformation that maps large absolute integers to smaller gap/delta integers, which require fewer bits. The list of integers is sorted, and then the delta values (gaps) between consecutive values are calculated. As the deltas are always smaller than the absolute values, we can encode them with fewer bits and thus achieve compression (this is true for any delta compression).
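
As an illustration, here is a minimal C# sketch of plain gap encoding and decoding for a sorted list of integers (delta coding only, not yet Elias-Fano); the class and method names are illustrative and not part of the implementation published below:

using System.Collections.Generic;

static class GapCoding
{
    // Transform sorted absolute values into gaps; the first value is kept as a gap from 0.
    public static List<long> Encode(IReadOnlyList<long> sortedValues)
    {
        var gaps = new List<long>(sortedValues.Count);
        long previous = 0;
        foreach (long value in sortedValues)
        {
            gaps.Add(value - previous);   // gaps are small when the values are dense
            previous = value;
        }
        return gaps;
    }

    // Invert the transformation: prefix sums restore the absolute values.
    public static List<long> Decode(IReadOnlyList<long> gaps)
    {
        var values = new List<long>(gaps.Count);
        long running = 0;
        foreach (long gap in gaps)
        {
            running += gap;
            values.Add(running);
        }
        return values;
    }
}

Encoding the sorted list {3, 7, 12, 31}, for example, yields the gaps {3, 4, 5, 19}, which need fewer bits per value; decoding restores the original list.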

Like any gap compression, Elias-Fano requires the lists to be sorted. Therefore it is applicable only when the original order of the elements carries no meaning and can be lost.

Elias-Fano encodes the common small gaps (within the average distance) with fewer bits than the rare larger outliers, by splitting the encoding of each value into low bits and high bits:

  • The l = ⌊log2(u/n)⌋ low bits of each value are stored explicitly.
  • The remaining high bits are stored in unary coding.

This requires at most 2 + ⌈log2(u/n)⌉ bits per element and is quasi-succinct: less than half a bit per element away from the succinct bound!

Inverted index and posting lists
While Elias-Fano encoding can be used to compress any sorted list of integers, a typical application in Information Retrieval is compressing posting lists of inverted indexes as core of a search engine. Hence here comes a short recap of both posting lists and inverted indexes:

An Inverted Index is the central data structure of most information retrieval systems and search engines. It maps terms of a vocabulary V to their locations of occurrences in a collection of documents:

  • A dictionary maps each term of the vocabulary to a separate posting list.
  • A posting list is a list of the document ids (DocID) of all documents where the term appears in the text.

A document id is the index number of a document within a directory of all documents. A document id is usually represented by an integer, hence posting lists are lists of integers. A 32 bit unsigned integer allows the addressing of 4,294,967,296 (4 billion) documents, while a 64 bit unsigned integer allows the addressing of 18,446,744,073,709,551,616 (18 quintillion) documents.

Of course we could use posting lists with URLs instead of DocIDs. But URLs (77 bytes on average) take much more memory than DocIDs (4 or 8 bytes), and a document is referred to from the posting lists of all terms (300 terms/document on average) contained within the document text.
For 1 billion pages this would be 300 * 77 byte * 1 billion = 23 TB for posting lists of URLs vs. (77 + 300 * 4 byte) * 1 billion = 1,277 GB for posting lists of DocIDs (the URL is stored once per document plus 300 4-byte DocIDs), which is a factor of 18.
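
To make the structure concrete, here is a minimal C# sketch of a non-positional inverted index as described above (term → posting list of DocIDs); the class and member names are illustrative, not FAROO’s actual implementation:

using System.Collections.Generic;

// Minimal in-memory inverted index: maps each term to a posting list of DocIDs.
class InvertedIndex
{
    private readonly Dictionary<string, List<uint>> postings = new Dictionary<string, List<uint>>();

    // Index one document: add its DocID to the posting list of each unique term.
    public void IndexDocument(uint docId, IEnumerable<string> terms)
    {
        foreach (string term in new HashSet<string>(terms))   // deduplicate terms (non-positional index)
        {
            if (!postings.TryGetValue(term, out List<uint> list))
            {
                list = new List<uint>();
                postings[term] = list;
            }
            list.Add(docId);   // stays sorted if DocIDs are assigned in increasing order
        }
    }

    // Posting list of a term (empty if the term is unknown).
    public IReadOnlyList<uint> GetPostings(string term)
        => postings.TryGetValue(term, out List<uint> list) ? list : new List<uint>();
}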

Index time and retrieval time

At index time the inverted index is created. After a crawler fetches the documents (web pages) from the web, they are parsed into single terms (any HTML markup is stripped). Duplicate terms are removed (unless you create a positional index, where the position of every occurrence of a word within a page is stored). During this step the term frequency (number of occurrences of a term within a document) can also be counted for TF/IDF ranking.

For each term of a document the document id is inserted into the posting list of that term. Static inverted indexes are built once (e.g. with MapReduce) and never updated. Dynamic inverted indexes can be updated incrementally in real time or in batches (e.g. with MapReduce).

At search time for each term contained in the query the corresponding posting list is retrieved from the inverted index.

Boolean queries are performed by intersecting the posting lists of multiple terms, so that only those DocIDs are added to the result list which occur in all of the posting lists (AND) or in any of the posting lists (OR).
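
A minimal C# sketch of the AND case, assuming both posting lists are sorted (an OR merge works analogously); the method name is illustrative:

using System.Collections.Generic;

static class BooleanQuery
{
    // AND: keep only the DocIDs that occur in both sorted posting lists (two-pointer merge).
    public static List<uint> Intersect(IReadOnlyList<uint> a, IReadOnlyList<uint> b)
    {
        var result = new List<uint>();
        int i = 0, j = 0;
        while (i < a.Count && j < b.Count)
        {
            if (a[i] == b[j]) { result.Add(a[i]); i++; j++; }   // DocID occurs in both lists
            else if (a[i] < b[j]) i++;                          // advance the list with the smaller DocID
            else j++;
        }
        return result;
    }
}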

Performance and scaling

Implementing an inverted index seems pretty straightforward.
But when it comes to billions of documents, an index which has to be updated in real time, and queries from many concurrent users that require very low response times, we have to give more thought to the data structures and implementation.

Low memory consumption and fast response time are key performance indicators of inverted indexes and information retrieval systems (the latter with additional KPI such as precision, recall and ranking). Posting lists are responsible for most of the memory consumption in an inverted index (apart from the storage of the documents themselves). Therefore reducing the memory consumption of posting lists is paramount. Some believe that index compression is one of the main differences between a “toy” indexer and one that works on real-world collections.

Posting list compression

This can be achieved by different posting list compression algorithms. All of the compression algorithms listed below use an invertible transformation that maps the large integers of the DocIDs to smaller integers, which require fewer bits.

Posting list compression reduces the size, so either less memory is required for a certain number of documents or more documents can be indexed in a certain amount of memory. Also, by reducing the size of a posting list, storing and accessing it in much faster RAM becomes feasible, instead of storing and retrieving it from a slower hard disk or SSD. This leads to faster indexing and query response times.

Posting list compression comes at the cost of additional compression time (at index time) and decompression time (at query time). For performance comparison of different compression algorithms and implementations, the triple of compression ratio, compression time and decompression time should always be considered.

For efficient query processing and intersection (with techniques such as skipping), the compression algorithm should support direct access with only partial decompression of the posting list.

Posting list compression algorithms

bitstuffing/bitpacking
Instead of being fixed-size (32 or 64 bits per value), integer values can have any size. The number of bits per DocID is chosen as small as possible, but such that the largest DocID can still be encoded. Storing 17-bit integers with bitpacking achieves a 47% reduction compared to an unsigned 32 bit integer! There is a speed penalty when the number of bits per DocID is not a multiple of 8 and byte borders are therefore crossed.

binary packing/frame of reference (FOR)
Similar to bitpacking, but the posting list is partitioned into smaller blocks first. Then each block is compressed separately. Adapting the number of bits to the range of each block individually allows a more effective compression, but comes at the cost of increased overhead as minimum value, length of block, and number of bits/DocID need to be stored for each block.

Patched frame of reference (PFOR)
Similar to frame of reference, but within a block those DocIDs are identified which, as outliers, unnecessarily expand the value range, leading to more bits/DocID and thus preventing effective compression. Outlier DocIDs are then encoded separately.

delta coding / gap compression
The DocIDs of the posting list are sorted and then the delta values (gaps) between consecutive DocIDs are calculated. As the deltas are always smaller than the absolute values, we can encode them with fewer bits and thus achieve compression.

Elias-Fano coding

The most efficient member of the Elias compression family, and quasi-succinct, which means it is almost as good as the best theoretically possible compression scheme. It can be improved further by splitting the posting list into blocks and compressing them individually (partitioned Elias-Fano coding).
It compresses gaps of sorted integers (DocIDs): given n (the number of DocIDs) and u (the maximum DocID value = number of indexed docs), we have a strictly monotone sequence x_0 < x_1 < … < x_{n-1} ≤ u, i.e. strictly increasing DocIDs, no duplicate DocIDs, and strictly positive deltas (no zero allowed).

Elias-Fano encodes the common small gaps (within the average distance) with fewer bits than the rare larger outliers, by splitting the encoding of each value into low bits and high bits:

  • The l = ⌊log2(u/n)⌋ low bits of each value are stored explicitly.
  • The remaining high bits are stored in unary coding.

This requires at most 2 + ⌈log2(u/n)⌉ bits per element and is quasi-succinct: less than half a bit per element away from the succinct bound! The compression ratio depends highly (and solely) on the average delta between DocIDs/items (delta = gap = value range/number of values = number of indexed docs/posting list length); a sketch of the encoder follows the two examples below:

  • 1 billion docs / 10 million DocIDs = 100 (delta) = 8.6 bits/DocID max (8.38 real)
  • 1 billion docs / 100 million DocIDs = 10 (delta) = 5.3 bits/DocID max (4.76 real)
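
Below is a readable C# sketch of the encoder and decoder described above. It uses bool lists instead of packed bit arrays and a floating-point logarithm, so it illustrates the principle rather than the optimized implementation released on GitHub (see below); all names are illustrative:

using System;
using System.Collections.Generic;

static class EliasFanoSketch
{
    // Encode a strictly increasing list of DocIDs with values in [0, u).
    // Returns the number of low bits per value plus the explicit low bits and the unary high bits.
    public static (int lowBitCount, List<bool> lowBits, List<bool> highBits)
        Encode(IReadOnlyList<ulong> values, ulong u)
    {
        int n = values.Count;
        // l = floor(log2(u/n)) low bits per value (0 if u <= n); a floating-point log is good enough for a sketch
        int l = (u > (ulong)n) ? (int)Math.Floor(Math.Log((double)u / n, 2)) : 0;

        var lowBits = new List<bool>(n * l);
        var highBits = new List<bool>();
        ulong previousHigh = 0;

        foreach (ulong x in values)
        {
            for (int bit = l - 1; bit >= 0; bit--)             // store the l low bits explicitly
                lowBits.Add(((x >> bit) & 1UL) == 1UL);

            ulong high = x >> l;                               // high part of the value
            for (ulong k = 0; k < high - previousHigh; k++)    // gap to the previous high part ...
                highBits.Add(false);
            highBits.Add(true);                                // ... stored in unary: zeros followed by a one
            previousHigh = high;
        }
        return (l, lowBits, highBits);
    }

    // Decode by scanning the unary high bits and re-attaching the stored low bits.
    public static List<ulong> Decode(int lowBitCount, List<bool> lowBits, List<bool> highBits)
    {
        var values = new List<ulong>();
        ulong high = 0;
        int index = 0;                                         // index of the value currently being decoded
        foreach (bool bit in highBits)
        {
            if (!bit) { high++; continue; }                    // a zero increments the current high part
            ulong low = 0;
            for (int b = 0; b < lowBitCount; b++)              // read back the low bits of value 'index'
                low = (low << 1) | (lowBits[index * lowBitCount + b] ? 1UL : 0UL);
            values.Add((high << lowBitCount) | low);
            index++;
        }
        return values;
    }
}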

Papers:
http://vigna.di.unimi.it/ftp/papers/QuasiSuccinctIndices.pdf
http://shonan.nii.ac.jp/seminar/029/wp-content/uploads/sites/12/2013/07/Sebastiano_Shonan.pdf
http://www.di.unipi.it/~ottavian/files/elias_fano_sigir14.pdf
http://hpc.isti.cnr.it/hpcworkshop2014/PartitionedEliasFanoIndexes.pdf

Implementation specifics

Because the algorithm itself is quite straightforward, but is used on huge posting lists, the optimization potential lies in a careful implementation rather than in optimizing the algorithm itself:
reusing predefined arrays instead of dynamically creating and growing lists, avoiding if/then branches to allow efficient processor caching, using basic types instead of objects, plain variables instead of indexed array cells, and generally shaving the cost of every single operation.

Algorithm-wise a translation table is used to decode/decompress the high bits of up to 8 DocIDs which may be contained within a single byte in parallel.

Posting List Compression Benchmark

The benchmark evaluates how well the Elias-Fano algorithm and our implementation perform for different posting list sizes, number of indexed documents and average delta in respect to the key performance indicators (KPI) compression ratio, compression time and decompression time.

We are using synthetic data for the following reasons: even at web scale it is easy and fast to obtain and exchange without legal restrictions, its properties are easier to understand and to adapt to specific requirements, and it does not need to be stored but can be recreated on demand or on the fly. As the creation of massive test data is often faster than loading it from disk, it also influences the benchmark less. Creation on the fly makes huge test sets possible which would not fit into RAM as a whole.

number of DocIDs (posting list length) | indexed docs | delta (*) | uncompressed size (**, bytes) | compressed size (bytes) | bits/DocID calculated | bits/DocID measured | compression ratio | compression time | decompression time
10 | 1 billion | 100,000,000 | 40 | 41 | 28.50 | 32.80 | 0.98 | 0 ms | 0 ms
100 | 1 billion | 10,000,000 | 400 | 315 | 25.25 | 25.20 | 1.27 | 0 ms | 0 ms
1,000 | 1 billion | 1,000,000 | 4,000 | 2,686 | 21.93 | 21.49 | 1.49 | 0 ms | 0 ms
10,000 | 1 billion | 100,000 | 40,000 | 22,610 | 18.61 | 18.09 | 1.77 | 0 ms | 0 ms
100,000 | 1 billion | 10,000 | 400,000 | 184,855 | 15.29 | 14.79 | 2.16 | 1 ms | 1 ms
1,000,000 | 1 billion | 1,000 | 4,000,000 | 1,436,895 | 11.97 | 11.50 | 2.78 | 12 ms | 7 ms
10,000,000 | 1 billion | 100 | 40,000,000 | 10,134,762 | 8.64 | 8.11 | 3.95 | 99 ms | 80 ms
100,000,000 | 1 billion | 10 | 400,000,000 | 59,448,464 | 5.32 | 4.76 | 6.73 | 1,013 ms | 795 ms
1,000,000,000 | 1 billion | 1 | 4,000,000,000 | 125,000,006 | 2.00 | 1.00 | 32.00 | 6,298 ms | 6,748 ms

(*) Delta d is the distance between two DocIDs in the sorted posting list of a certain term. It depends on the length l of the posting list and the number of indexed pages (delta = gap = value range/number of values = number of indexed docs/posting list length). This also means that the term occurs on every d-th page, e.g. if delta d=10 then the term occurs on every 10th page. Delta is the only factor which determines the compression ratio (compressibility).

(**) 32 bit unsigned integer = 4 Byte/DocID.

Hardware: Intel Core i7-6700HQ (4 core, up to 3.50 GHz) 16 GB DDR4 RAM
Software: Windows 10 64-Bit, .NET Framework 4.6.1
Tests were executed in a single thread, multiple threads would be used in a multi user/multi query scenario

Index compression estimation

The compression ratio highly (and only) depends on the average delta between DocIDs/values (delta = gap = value range/number of values = number of indexed docs/posting list length). For frequent terms the average delta between DocIDs is smaller and the compression ratio higher (few bits/DocID), for rare terms the average delta between DocIDs is higher and the compression ratio lower (more bits/DocID). Therefore we need to know the term frequency (and thus the average delta between DocIDs of that term) for every term of the whole corpus to be indexed.

In order to calculate the compression ratio and the size of the whole compressed index (= the sum of all compressed posting lists, not only the size of a single posting list), we have to take into account the distribution of posting list lengths and, correspondingly, the distribution of deltas across posting lists. The distribution of natural language follows Zipf’s law.

Zipf’s Law, Heap’s Law and Long tail

Zipf’s Law states that the frequency of any word is inversely proportional to its rank in the frequency table. Thus the most frequent word will occur approximately twice as often as the second most frequent word, three times as often as the third most frequent word, etc. In an English corpus the word “the” is the most frequently occurring word, which accounts for 6% of all word occurrences. The second-place word “of” accounts for 3% of words (1/2 of “the”), followed by “and” with 2% (1/3 of “the”).

The probability P_r for a term of rank r can be calculated with the following formula:
P_r = P_1 * 1/r, where P_1 is the probability of the most frequent term, which is between 0.06 and 0.1 in English depending on the corpus. Phil Goetz states P_1 as a function of the vocabulary V (the number of distinct words): P_1 = 1/ln(1.78 * V)

In the Oxford English Corpus the following probabilities are observed (with P1≈0.09):

Vocabulary size | % of content | Examples
10 | 25% | the, of, and, to, that, have
100 | 50% | from, because, go, me, our, well, way
1,000 | 75% | girl, win, decide, huge, difficult, series
7,000 | 90% | tackle, peak, crude, purely, dude, modest
50,000 | 95% | saboteur, autocracy, calyx, conformist
>1,000,000 | 99% | laggardly, endobenthic, pomological

The vocabulary size v is the number of terms with rank r <= v. All probabilities derived from Zipf's law are approximations which differ between corpora and languages.

Zipf’s law is based on the harmonic series (1 + 1/2 + 1/3 + … + 1/n). The divergence of the harmonic series was proved as early as 1360 by Nicole Oresme and later again by Jakob Bernoulli, i.e. the sum of the series H(n) → ∞ for n → ∞.
An approximation for the sum is H(n) ≈ ln n + γ, where γ is the Euler-Mascheroni constant with a value of 0.5772156649…
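
To illustrate, here is a small C# sketch (my own, for illustration only) that uses this approximation to estimate the cumulative share of word occurrences covered by the v most frequent terms, coverage(v) ≈ P1 * H(v) ≈ P1 * (ln v + γ), assuming P1 ≈ 0.09 as in the Oxford English Corpus table above:

using System;

static class ZipfCoverage
{
    // Cumulative share of all word occurrences covered by the v most frequent terms.
    public static double Coverage(int vocabularySize, double p1 = 0.09)
    {
        const double gamma = 0.5772156649;                 // Euler-Mascheroni constant
        return p1 * (Math.Log(vocabularySize) + gamma);    // P1 * H(v), with H(v) ~ ln v + gamma
    }

    public static void Main()
    {
        foreach (int v in new[] { 10, 100, 1000, 7000, 50000 })
            Console.WriteLine($"{v,7:N0} most frequent terms cover ~{Coverage(v):P0} of all occurrences");
    }
}

This prints roughly 26%, 47%, 67%, 85% and 103%, close to the observed 25/50/75/90/95% from the table; the approximation overshoots at the tail, where Zipf’s law itself is only approximate.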

Heaps’ law is an empirical law which describes the number of distinct words (Vocabulary V) in a text (document or set of documents) as a function of the text length:
V = Kn^b
where the vocabulary V is the number of distinct words in an instance text of size n. K and b are free parameters determined empirically. With English text corpora, typically K is between 10 and 100, and b is between 0.4 and 0.6.

Zipf’s law on word frequency and Heaps’ law on the growth of distinct words are observed in the Indo-European language family, but they do not hold for languages like Chinese, Japanese and Korean.

The long tail is the name for a long-known feature of some statistical distributions (such as Zipf, power laws, Pareto distributions and general Lévy distributions). In “long-tailed” distributions a high-frequency or high-amplitude population is followed by a low-frequency or low-amplitude population which gradually “tails off” asymptotically. The events at the far end of the tail have a very low probability of occurrence.

As a rule of thumb, for such population distributions the majority of occurrences (more than half, and where the Pareto principle applies, 80%) are accounted for by the first 20% of items in the distribution. What is unusual about a long-tailed distribution is that the most frequently occurring 20% of items represent less than 50% of occurrences; or in other words, the least frequently occurring 80% of items are more important as a proportion of the total population.

Sources
https://en.wikipedia.org/wiki/Long_tail
https://en.wikipedia.org/wiki/Zipf%27s_law
https://moz.com/blog/illustrating-the-long-tail
https://blogemis.com/2015/09/26/zipfs-law-and-the-math-of-reason/
http://mathworld.wolfram.com/ZipfsLaw.html
http://www.cs.sfu.ca/CourseCentral/456/jpei/web%20slides/L06%20-%20Text%20statistics.pdf

Synthetic posting list creation

The distribution of the length of the posting lists follows Zipf’s law. But we have to distinguish positional posting list and non-positional posting list:

  • Positional posting lists can contain multiple postings per document. Frequent terms occur multiple times per document, and each occurrence is stored together with its position within the document. This structure is helpful for supporting phrase and proximity queries.
  • Non-positional posting lists store only one posting per document. They record in which documents the term occurs at least once; the positions of the occurrences within the document are not stored.

For the creation of synthetic posting lists we need to calculate the length l of the posting list for every term. For positional posting lists we can use the following formula:

Posting list length (for the term of rank r): l = postings * MostFrequentTermProbability / r

where

  • postings = indexedDocs * uniqueTermsPerDoc
  • indexedDocs is the number of all documents in the corpus to be indexed
  • uniqueTermsPerDoc is about 300
  • MostFrequentTermProbability is about 0.06 in an English corpus
  • rank is the rank of the term in the frequency table. The term frequencies in the table are distributed by Zipf’s law.
  • Postings is the number of all DocIDs in the index, which is the same as the sum of all postingListLength in the index.

For non-positional posting lists we can use the following formulas:

Posting list length (for term with rank r) : l = Pterm_r_in_doc * indexedDocs

where

  • probability of the term with rank r (Zipf’s law): Pterm_r = MostFrequentTermProbability / r
  • probability of term with any rank other than r : Pterm_not_r = 1-Pterm_r
  • probability of term with rank r occurs not in a doc (with t terms per doc): Pterm_r_not_in_doc = Pterm_not_r ^ t
  • probability of term with rank r occurs at least once in a doc : Pterm_r_in_doc = 1 – Pterm_r_not_in_doc

The maximum posting list length is l <= indexedDocs, because even if frequent terms occur multiple times within a document, in a non-positional index they are indexed only once per document. For each posting list we then create DocIDs up to the calculated posting list length. The value of each DocID is randomly selected between 1 and indexedDocs. We have to prevent duplicate DocIDs within a posting list, e.g. by using a hash set to check whether a DocID already exists (see the sketch below).
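
Here is a short C# sketch of both steps for the non-positional case (expected posting list length per Zipf rank, then duplicate-free random DocID generation with a hash set); the names and default parameters follow the values given above and are otherwise illustrative:

using System;
using System.Collections.Generic;

static class SyntheticPostingLists
{
    // Expected non-positional posting list length for the term of Zipf rank r.
    public static long PostingListLength(int rank, long indexedDocs,
                                         int termsPerDoc = 300, double p1 = 0.06)
    {
        double pTerm = p1 / rank;                                  // Zipf probability of the term
        double pInDoc = 1.0 - Math.Pow(1.0 - pTerm, termsPerDoc);  // term occurs at least once in a doc
        return (long)(pInDoc * indexedDocs);                       // always <= indexedDocs
    }

    // Generate a sorted posting list of unique random DocIDs in [1, indexedDocs].
    public static List<uint> GeneratePostingList(long length, uint indexedDocs, Random rng)
    {
        var docIds = new HashSet<uint>();                          // hash set prevents duplicate DocIDs
        while (docIds.Count < length)
            docIds.Add((uint)rng.Next(1, (int)indexedDocs + 1));
        var postingList = new List<uint>(docIds);
        postingList.Sort();                                        // gap compression requires sorted input
        return postingList;
    }
}

Note that the rejection sampling via the hash set becomes slow when the posting list covers most of the collection; for very frequent terms a shuffled range or a bitmap would be the better choice.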

Real-world posting list data

While we are using synthetic data it is also possible to use real-world data for testing. There are several data sets available, although for some you have to pay:

Wikipedia dump

Gov2: TREC 2004 Terabyte Track test collection, consisting of 25 million .gov web pages crawled in early 2004 (24,622,347 docs, 35,636,425 terms, 5,742,630,292 postings)

ClueWeb09: ClueWeb 2009 TREC Category B collection, consisting of 50 million English web pages crawled between January and February 2009 (50,131,015 docs, 92,094,694 terms, 15,857,983,641 postings)

The last two data sets are also available for free in a processed, anonymized form without term names.

While in synthetic data the DocIDs are usually random, in real-world data sets the clustering properties of DocIDs (some terms are denser in some parts of the collection than in others, because the pages of a domain have been indexed consecutively) can be exploited. This may lead to additional compression.

Stop words and the resolving power of terms

H. P. Luhn wrote in 1958 in the IBM Journal about the “resolving power of significant words”, featuring a word frequency diagram with the word frequencies distributed according to Zipf’s law. There he defined a lower and an upper cut-off limit for word frequencies; only within that “sweet spot” are words significant and do they have resolving or discriminatory power in queries. The terms outside the two limits would be excluded as non-significant, being too common or too rare. For the 20 most frequent terms this is very easy to comprehend: they appear in almost all documents of the collection, and results would stay the same whether or not those terms are in the query.

The exclusion of the most frequent words as irrelevant resembles the concept of stop words. If we look at the 100 most common words in English, we can immediately see their low resolving power. If we exclude the 100 most common words, we lose almost nothing in result quality, but we can significantly improve indexing performance and save space (50% for the Oxford English Corpus).

For 1 billion documents with 300 unique terms each, we would save 50 billion DocIDs from being indexed. The posting list for the most frequent term “the” alone would contain nearly 1 billion DocIDs, and the posting list for the 100th most popular term “us” would still contain 180 million DocIDs.

Of course we have to be careful when dealing with meaningful combinations of frequent words as “The Who” or “Take That”.

Index Compression Benchmark

The benchmark evaluates how well the Elias-Fano algorithm and our implementation perform for different numbers of indexed documents in respect to the key performance indicators (KPI) compression ratio, compression time and decompression time. This time we are benchmarking the whole index (all documents from a corpus are indexed) instead of single posting lists.

Again we are using synthetic data for the reasons stated above.

indexed pages | vocabulary | uncompressed size (**) | compressed size | bits/DocID calculated | bits/DocID measured | compression ratio | compression time | decompression time
1 million | 1 billion | 1,200 MB
10 million | 1 billion | 12 GB
100 million | 1 billion | 120 GB
1 billion | 1 billion | 1,200 GB

(*) average word length, vocabulary, including/excluding 100 most frequent words (stop words). Do not contribute to meaningful results (paper)

(**) Uncompressed index size = 300 unique words/page * number of indexed pages * byte/DocID (32 bit unsigned integer = 4 Byte/DocID)

Hardware: Intel Core i7-6700HQ (6MB Cache, up to 3.50 GHz) 16 GB DDR4 RAM
Software: Windows 10 64-Bit, .NET Framework 4.6.1

Compressed intersection

Over 70% of web queries contain multiple query terms. For those Boolean queries, intersecting the posting lists of all query terms is required. When the posting lists are compressed, they need to be decompressed before or during the intersection.

Naive approach: decompress the whole posting list of each query term and keep them in RAM during the intersection. This leads to high decompression time and memory consumption.

Improved approach: decompress only the currently compared items of the posting lists on the fly and discard them immediately after comparison. Terminate the decompression and intersection as soon as the top-k ranked results have been retrieved.
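
A C# sketch of the improved approach, assuming each posting list is exposed as a lazily decompressed stream of DocIDs (e.g. an IEnumerable<uint> yielding one value at a time from the decoder); ranking is omitted here, the sketch simply stops after k common DocIDs:

using System.Collections.Generic;

static class CompressedIntersection
{
    // Intersect two sorted, lazily decompressed posting list streams (AND),
    // decompressing only the currently compared items and stopping after 'k' matches.
    public static List<uint> IntersectStreaming(IEnumerable<uint> a, IEnumerable<uint> b, int k)
    {
        var result = new List<uint>();
        using (IEnumerator<uint> ea = a.GetEnumerator())
        using (IEnumerator<uint> eb = b.GetEnumerator())
        {
            bool hasA = ea.MoveNext(), hasB = eb.MoveNext();
            while (hasA && hasB && result.Count < k)
            {
                if (ea.Current == eb.Current)                  // DocID occurs in both lists
                {
                    result.Add(ea.Current);
                    hasA = ea.MoveNext();
                    hasB = eb.MoveNext();
                }
                else if (ea.Current < eb.Current) hasA = ea.MoveNext();
                else hasB = eb.MoveNext();
            }
        }
        return result;
    }
}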

Github

The C# implementation of the Elias-Fano compression is released on GitHub as Open Source under the GNU Lesser General Public License (LGPL):
https://github.com/wolfgarbe/EliasFanoCompression

  • EliasFanoInitTable
  • EliasFanoCompress
  • EliasFanoDecompress
  • SortedRandomIntegerListGenerator: generates a sorted list of random integers from 2 parameters: number of items (length of posting list), range of items (number of indexed pages)
  • ZipDistributedPostingListGenerator: generates a complete set of posting lists with Zipfian-distributed lengths (word frequencies)

Faroo Website and API are UP again! + outlook


Update

After hours of phone calls with the 1und1 hosting company, 4 different 1und1 service people consecutively denied that there was a problem and closed the tickets. They said a multi-hour hardware stress test on our server ran flawlessly.

Only after we made the issue public on Twitter was a ticket re-opened with a server admin. They then located and fixed a network problem and replaced the server hardware. Now everything is running again.

We are sorry for the problems the API outage has caused for your services.

Outlook

Currently we are working on a completely new index architecture. It will allow many orders of magnitude faster crawling and a larger index size.
Additionally there will be frequently requested features such as HTTPS support, a domain filter, more languages and the return of more than 100 results.

Here are some of the advanced query operators which will be supported:

  • “” (Phrase)
  • – (NOT)
  • intitle:, allintitle:
  • intext:, allintext:
  • inurl:, allinurl:
  • site:

There will be a second API to create a Search Engine as a Service, where you can have your own content indexed and searched. This will include a very fast approximate/fuzzy search option.


Update on the Faroo website and API service interruption.


Update: Faroo website and API are up again!

Both the Faroo website (www.faroo.com) and the API have been unavailable since 26/05/2016 03:26:05. We are working to fix it.

The server is up, but not reachable from the internet, even when booted from a fresh rescue OS with a default configuration. Therefore we suspect a problem with the network, network adapter, hardware firewall or IP filtering.

After hours on the phone with its support staff, our hosting company 1und1 has not yet been able to fix the problem. They agreed to reopen a ticket for their server admins and to keep us updated.

Currently it looks less like a quick fix and more like setting up the whole system from scratch at a different hosting company.

Visit https://twitter.com/faroo_p2p for updates.

We are aware that the API outage is affecting both you and your users and we are working to resolve the issue and bring the service back on as soon as possible.


Very fast Data cleaning of product names, company names & street names


The correction of product names, company names, street names & addresses is a frequent task in data cleaning and deduplication. Often those names are misspelled, either due to OCR errors or mistakes by the human data collectors.

The difference from ordinary spelling correction is that those names often consist of multiple words, white space and punctuation. For large data or even Big Data applications, speed is also very important.

Our algorithm supports both requirements and is up to 1 million times faster compared to conventional approaches (see benchmark). The C# source code is available as Open Source (in another blog post and on GitHub). A simple modification of the original source code adds support for names with multiple words, white space and punctuation:

Instead of line 357, CreateDictionary("big.txt",""), which parses a given text file into single words, simply use CreateDictionaryEntry("company/street/product name", "") to add company, street & product names to the dictionary.

Then Correct("misspelled street","") will give you the correct street name from the dictionary. In lines 35…38 you may specify whether you want only the best match or all matches within a certain edit distance (the number of character operations by which the terms differ):

35 private static int verbose = 0;
36 //0: top suggestion
37 //1: all suggestions of smallest edit distance
38 //2: all suggestions <= editDistanceMax (slower, no early termination)

For every similar term (or phrase) found in the dictionary the algorithm gives you the Damerau-Levenshtein edit distance to your input term (look for suggestion.distance in the source code). The edit distance describes how many characters have been added, deleted, altered or transposed between the input term and the dictionary term. This is a measure of similarity between the input term (or phrase) and similar terms (or phrases) found in the dictionary.
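
For reference, here is a common textbook variant of that distance in C# (the “optimal string alignment” form of Damerau-Levenshtein); it is a sketch for illustration and may differ in edge cases from the implementation in the repository:

using System;

static class EditDistance
{
    // Damerau-Levenshtein distance (optimal string alignment variant):
    // counts insertions, deletions, substitutions and transpositions of adjacent characters.
    public static int DamerauLevenshtein(string source, string target)
    {
        int n = source.Length, m = target.Length;
        var d = new int[n + 1, m + 1];

        for (int i = 0; i <= n; i++) d[i, 0] = i;              // delete all characters of source
        for (int j = 0; j <= m; j++) d[0, j] = j;              // insert all characters of target

        for (int i = 1; i <= n; i++)
        {
            for (int j = 1; j <= m; j++)
            {
                int cost = source[i - 1] == target[j - 1] ? 0 : 1;
                d[i, j] = Math.Min(Math.Min(
                    d[i - 1, j] + 1,                           // deletion
                    d[i, j - 1] + 1),                          // insertion
                    d[i - 1, j - 1] + cost);                   // substitution (or match)

                if (i > 1 && j > 1 && source[i - 1] == target[j - 2]
                                   && source[i - 2] == target[j - 1])
                    d[i, j] = Math.Min(d[i, j], d[i - 2, j - 2] + 1);   // transposition
            }
        }
        return d[n, m];
    }
}

For example, DamerauLevenshtein("CA", "AC") returns 1 (one transposition), while the plain Levenshtein distance would be 2.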


Fast approximate string matching with large edit distances in Big Data



1 million times faster spelling correction for edit distance 3
After my blog post 1000x times faster spelling correction got more than 50,000 views, I revisited both the algorithm and the implementation to see if they could be further improved.

While the basic idea of the Symmetric Delete spelling correction algorithm remains unchanged, the implementation has been significantly improved to unleash the full potential of the algorithm.

This results in 10 times faster spelling correction, 5 times faster dictionary generation, and 2…7 times lower memory consumption in v3.0 compared to v1.6.

Compared to Peter Norvig’s algorithm it is now 1,000,000 times faster for edit distance=3 and 10,000 times faster for edit distance=2.

In Norvig’s tests 76% of spelling errors had an edit distance of 1, and 98.9% of spelling errors were covered with edit distance 2. For simple spelling correction of natural language with edit distance 2, the accuracy is good enough and the performance of Norvig’s algorithm is sufficient.

The speed of our algorithm enables edit distance 3 for spell checking and thus improves the accuracy by 1%. Beyond the accuracy improvement the speed advantage of our algorithm is useful for automatic spelling correction in large corpora as well as in search engines, where many requests in parallel need to be processed.

Billion times faster approximate string matching for edit distance > 4
But the true potential of the algorithm lies in edit distances > 3 and beyond spell checking.

The many orders of magnitude faster algorithm opens up new application fields for approximate string matching and a scaling sufficient for big data and real-time. Our algorithm enables fast approximate string and pattern matching with long strings or feature vectors, huge alphabets, large edit distances, in very large data bases, with many concurrent processes and real time requirements.

Application fields:

  • Spelling correction in search engines, with many parallel requests
  • Automatic Spelling correction in large corpora
  • Genome data analysis,
  • Matching DNA sequences
  • Browser fingerprint analysis
  • Realtime Image recognition (search by image, autonomous cars, medicine)
  • Face recognition
  • Iris recognition
  • Speech recognition
  • Voice recognition
  • Feature recognition
  • Fingerprint identification
  • Signature Recognition
  • Plagiarism detection (in music /in text)
  • Optical character recognition
  • Audio fingerprinting
  • Fraud detection
  • Address deduplication
  • Misspelled names recognition
  • Spectroscopy based chemical and biological material identification
  • File revisioning
  • Spam detection
  • Similarity search,
  • Similarity matching
  • Approximate string matching,
  • Fuzzy string matching,
  • Fuzzy string comparison,
  • Fuzzy string search,
  • Pattern matching,
  • Data cleaning
  • and many more

Edit distance metrics
While we are using the Damerau-Levenshtein distance for spelling correction, for other applications it could easily be exchanged for the Levenshtein distance or similar edit distances by simply modifying the respective function.

In our algorithm the speed of the edit distance calculation has only a very small influence on the overall lookup speed. That’s why we are using only a basic implementation rather than a more sophisticated variant.

Benchmark
Because of all the applications for approximate string matching beyond spell checking, we extended the benchmark to lookups with higher edit distances. That’s where the power of the Symmetric Delete algorithm truly shines and surpasses other solutions. With previous spell checking algorithms the required time explodes for larger edit distances.

Below are the results of a benchmark of our Symmetric Delete algorithm and Peter Norvig’s algorithm for different edit distances, each with 1000 lookups:

input term | best correction | edit distance | maximum edit distance | SymSpell (ms per 1000 lookups) | Peter Norvig (ms per 1000 lookups) | factor
marsupilamimarsupilami | no correction* | >20 | 9 | 568,568,000 | |
marsupilamimarsupilami | no correction | >20 | 8 | 161,275,000 | |
marsupilamimarsupilami | no correction | >20 | 7 | 37,590,000 | |
marsupilamimarsupilami | no correction | >20 | 6 | 5,528,000 | |
marsupilamimarsupilami | no correction | >20 | 5 | 679,000 | |
marsupilamimarsupilami | no correction | >20 | 4 | 46,592 | |
marsupilami | no correction | >4 | 4 | 459 | |
marsupilami | no correction | >4 | 3 | 159 | 159,421,000 | 1:1,000,000
marsupilami | no correction | >4 | 2 | 31 | 257,597 | 1:8,310
marsupilami | no correction | >4 | 1 | 4 | 359 | 1:90
hzjuwyzacamodation | accomodation | 10 | 10 | 7,598,000 | |
otuwyzacamodation | accomodation | 9 | 9 | 1,727,000 | |
tuwyzacamodation | accomodation | 8 | 8 | 316,023 | |
uwyzacamodation | accomodation | 7 | 7 | 78,647 | |
wyzacamodation | accomodation | 6 | 6 | 19,599 | |
yzacamodation | accomodation | 5 | 5 | 2,963 | |
zacamodation | accomodation | 4 | 4 | 727 | |
acamodation | accomodation | 3 | 3 | 180 | 173,232,000 | 1:962,000
acomodation | accomodation | 2 | 2 | 33 | 397,271 | 1:12,038
hous | hous | 1 | 1 | 24 | 161 | 1:7
house | house | 0 | 1 | 1 | 3 | 1:3

*Correct or unknown word, i.e. a word which is not in the dictionary and for which there are also no suggestions within an edit distance of <= maximum edit distance. This is quite a common case (e.g. rare words, new words, domain-specific words, foreign words, names); in applications beyond spelling correction (e.g. fingerprint recognition) it might even be the default case.

For the benchmark we used the C# implementation of our SymSpell as well as a faithful C# port by Lorenzo Stoakes of Peter Norvig’s algorithm, which has been extended to support edit distance 3. Using C# implementations in both cases allows us to focus solely on the algorithms and should exclude language-specific bias.

Dictionary corpus:
The English text corpus used to generate the dictionary for the above benchmarks has a size of 6.18 MByte and contains 1,105,286 terms and 29,157 unique terms; the longest term has 18 characters.
The dictionary size and the number of indexed terms have almost no influence on the average lookup time of O(1).

Speed gain
The speed advantage grows exponentially with the edit distance:

  • For an edit distance=1 it’s 1 order of magnitude faster,
  • for an edit distance=2 it’s 4 orders of magnitude faster,
  • for an edit distance=3 it’s 6 orders of magnitude faster.
  • for an edit distance=4 it’s 8 orders of magnitude faster.

Computational complexity and findings from benchmark
Our algorithm is constant time (O(1)), i.e. independent of the dictionary size (but dependent on the average term length and the maximum edit distance), because our index is based on a hash table which has an average search time complexity of O(1).

Precalculation cost
In our algorithm we need auxiliary dictionary entries with precalculated deletes and their suggestions. While the number of auxiliary entries is significant compared to the 29,157 original entries, the dictionary size grows only sub-linearly with the edit distance: log(ed).

maximum edit distance | number of dictionary entries (including precalculated deletes)
20 | 11,715,602
15 | 11,715,602
10 | 11,639,067
9 | 11,433,097
8 | 10,952,582
7 | 10,012,557
6 | 8,471,873
5 | 6,389,913
4 | 4,116,771
3 | 2,151,998
2 | 848,496
1 | 223,134

The precalculation costs consist of additional memory usage and creation time for the auxiliary delete entries in the dictionary:

cost | maximum edit distance | SymSpell | Peter Norvig | factor
memory usage | 1 | 32 MB | 229 MB | 1:7.2
memory usage | 2 | 87 MB | 229 MB | 1:2.6
memory usage | 3 | 187 MB | 230 MB | 1:1.2
dictionary creation time | 1 | 3341 ms | 3640 ms | 1:1.1
dictionary creation time | 2 | 4293 ms | 3566 ms | 1:0.8
dictionary creation time | 3 | 7962 ms | 3530 ms | 1:0.4

Due to an efficient implementation those costs are negligible for edit distances <=3:

  • 7 times less memory requirement and a similar dictionary creation time (ed=1).
  • 2 times less memory requirement and a similar dictionary creation time (ed=2).
  • similar memory requirement and a 2 times higher dictionary creation time (ed=3).

Source code
The C# implementation of our Symmetric Delete Spelling Correction algorithm is released on GitHub as Open Source under the GNU Lesser General Public License (LGPL).

C# (original)
https://github.com/wolfgarbe/symspell

Ports
I have not tested whether the following third-party ports to other programming languages are exact ports, are error-free, provide identical results, or are as fast as the original algorithm:

Java (third party port)
https://github.com/gpranav88/symspell

Javascript (third party port)
https://github.com/itslenny/SymSpell.js
https://github.com/dongyuwei/SymSpell
https://github.com/IceCreamYou/SymSpell

Swift (third party port)
https://github.com/Archivus/SymSpell

Ruby (third party port)
https://github.com/PhilT/symspell

Python (third party port)
https://github.com/dominedo/spark-n-spell/blob/master/symspell_python.py

Comparison to other approaches and common misconceptions

A Trie as standalone spelling correction
Why don’t you use a Trie instead of your algorithm?
Tries have a search performance comparable to our approach. But a Trie is a prefix tree, which requires a common prefix. This makes it suitable for autocomplete or search suggestions, but not applicable for spell checking. If your typing error is e.g. in the first letter, then you have no common prefix, hence the Trie will not work for spelling correction.

A Trie as replacement for the hash table
Why don’t you use a Trie for the dictionary instead of the hash table?
Of course you could replace the hash table with a Trie (the hash table is just an exchangeable lookup component with O(1) speed for a *single* lookup), at the cost of added code complexity, but without a performance gain.
A HashTable is slower than a Trie only if there are collisions, which are unlikely in our case. For a maximum edit distance of 2 and an average word length of 5 and 100,000 dictionary entries we need to additionally store (and hash) 1,500,000 deletes. With a 32 bit hash (4,294,967,296 possible distinct hashes) the collision probability seems negligible.
With a good hash function even a similarity of terms (locality) should not lead to increased collisions, if not especially desired e.g. with Locality sensitive hashing.

BK-Trees
Would BK-trees be an alternative option?
Yes, but BK-trees have a search time of O(log dictionary_size), whereas our algorithm is constant time (O(1)), i.e. independent of the dictionary size.

Ternary search tree
Why don’t you use a ternary search tree?
The lookup time in a ternary search tree is O(log n), while it is only O(1) in our solution. Also, while a ternary search tree could be used for the dictionary lookup instead of a hash table, it doesn’t address the spelling error candidate generation. And the tremendous reduction of the number of spelling error candidates to be looked up in the dictionary is the true innovation of our Symmetric Delete Spelling Correction algorithm.

Precalculation
Does the speed advantage simply come from the precalculation of candidates?
No! The speed is the result of the combination of all three components outlined below:

  • Pre-calculation, i.e. the generation of possible spelling error variants (deletes only) and storing them at index time, is just the first precondition (a sketch of this delete generation follows the list below).
  • A fast index access at search time by using a hash table with an average search time complexity of O(1) is the second precondition.
  • But only our Symmetric Delete Spelling Correction on top of this brings that O(1) speed to spell checking, because it allows a tremendous reduction of the number of spelling error candidates to be pre-calculated (generated and indexed).
  • Applying pre-calculation to Norvig’s approach would not be feasible, because pre-calculating all possible delete + transpose + replace + insert candidates of all terms would result in huge time and space consumption.
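
As an illustration of the first point, here is a small C# sketch of the delete generation only; the actual SymSpell implementation additionally links every delete to its originating dictionary terms in the hash table and verifies candidates with the edit distance at lookup time:

using System.Collections.Generic;

static class SymmetricDeletes
{
    // All variants of 'term' with up to 'maxEditDistance' characters deleted.
    // These variants are precalculated and indexed for every dictionary term,
    // and generated again for the input term at lookup time.
    public static HashSet<string> GenerateDeletes(string term, int maxEditDistance)
    {
        var deletes = new HashSet<string>();
        Recurse(term, maxEditDistance, deletes);
        return deletes;
    }

    private static void Recurse(string term, int remainingEdits, HashSet<string> deletes)
    {
        if (remainingEdits == 0 || term.Length <= 1) return;
        for (int i = 0; i < term.Length; i++)
        {
            string shorter = term.Remove(i, 1);        // delete one character
            if (deletes.Add(shorter))                  // skip variants generated before
                Recurse(shorter, remainingEdits - 1, deletes);
        }
    }
}

For "house" and a maximum edit distance of 2 this yields the 5 single-character deletes ("ouse", "huse", "hose", "houe", "hous") plus the 10 two-character deletes derived from them, far fewer candidates than all possible deletes, transposes, replaces and inserts.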

Correction vs. Completion
How can I add auto completion similar to Google’s Autocompletion?
There is a difference between correction and suggestion/completion!

Correction: find the correct word for a word which contains errors. Missing letters/errors can be at the start, middle or end of the word. We can find only words at or below the maximum edit distance, as the computational complexity depends on the edit distance.

Suggestion/completion: find the complete word for an already typed substring (a prefix!). Missing letters can only be at the end of the word. We can find words/word combinations of any length, as the computational complexity is independent of edit distance and word length.

The code above implements only correction, but not suggestion/completion!
It still finds suggestions/completions at or below the maximum edit distance, i.e. it starts to show words only if there are <= 2 letters missing (for a maximum edit distance of 2). Nevertheless the code can be extended to handle both correction and suggestion/completion. During dictionary creation you have to add all substrings (prefixes only!) of a word to the dictionary as well, whenever you add a new word to the dictionary. All substring entries of a specific term then have to contain a link to the complete term. Alternatively, for suggestion/completion you could use a completely different algorithm/structure like a Trie, which inherently lists all complete words for a given prefix.
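
A possible C# sketch of that prefix extension (my own illustration of the idea described above, not part of the published code):

using System.Collections.Generic;

// Every prefix of an added word is stored and linked back to the complete word,
// so an already typed prefix can be expanded to full-word completions.
class CompletionDictionary
{
    private readonly Dictionary<string, HashSet<string>> prefixToWords
        = new Dictionary<string, HashSet<string>>();

    public void AddWord(string word)
    {
        for (int length = 1; length <= word.Length; length++)
        {
            string prefix = word.Substring(0, length);
            if (!prefixToWords.TryGetValue(prefix, out HashSet<string> words))
            {
                words = new HashSet<string>();
                prefixToWords[prefix] = words;
            }
            words.Add(word);                           // link the prefix to the complete term
        }
    }

    // All complete words for an already typed prefix (empty if none).
    public IReadOnlyCollection<string> Complete(string prefix)
        => prefixToWords.TryGetValue(prefix, out HashSet<string> words)
            ? (IReadOnlyCollection<string>)words
            : new HashSet<string>();
}

Combining this with the correction code would then cover both cases: correction via the delete candidates, completion via the stored prefixes (or, as mentioned above, via a Trie).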


How 1000 Apps are using the FAROO Search API


 
During the last 9 months more than 1000 companies and developers subscribed to our API, with more than 100 new applications every month.

[Chart: API subscriptions]

Today we want to share the typical use cases of our search API:

[Chart: commercial use by segment]

[Chart: commercial use by region]

[Chart: academic use by region]

An interesting discovery is that our search API is mainly used to data mine the big data of the web, rather than for plain web search.

We turn the whole web into a giant database which is queried and analyzed by AI services, data mining and business intelligence applications. Big data becomes accessible and can be queried within milliseconds. Apps save the lead time of crawling the vast number of pages themselves.


FAROO introduces API keys


With 1 million free queries per month we offer a really ample API rate limit: three orders of magnitude more than what the incumbents provide.

But some users still use multiple servers, fake user agents & referers to circumvent the already generous rate limit. Unfortunately it seems that abuse is proportional to freedom and goodwill. This is not only unfair, but also impacts reliability, performance and long term perspective of our free service for all users.

While the API stays free, from 1 July 2013 we are introducing API keys for better service protection. The new API key registration adds an extra step before using the API, but it also offers some benefits:

  • Better prevention of API abuse, ensuring a reliable service for everyone.
  • We can inform you whenever the API is about to change.
  • We can inform you when you are exceeding the rate limit, instead of blocking.
  • We can inform you about syntax or encoding problems of your query.
  • As reference for support requests.

If your application is using the FAROO API, and you do not have an API key yet, please register as soon as possible to ensure an uninterrupted service.

We hope you continue to enjoy our API and build the search you want!


The ancillary copyright for press publishers (Leistungsschutzrecht) from a search engine's perspective


Lex Google from a search engine's perspective – a German law threatening the internet as we know it.

Go directly to the robots.txt extension proposal "Freedom of Citation License v1.0"
What it is about

Ancillary copyright for press publishers, introduced by the Eighth Act amending the German Copyright Act (Urheberrechtsgesetz)

Here are the decisive passages:

§ 87f (1) The producer of a press product (press publisher) has the exclusive right to make the press product or parts thereof publicly available for commercial purposes, unless it concerns single words or very small text excerpts. If the press product has been produced within a company, the owner of the company is deemed the producer.

§ 87g (2) The right expires one year after the publication of the press product.

§ 87g (4) Making press products or parts thereof publicly available is permitted insofar as it is not done by commercial providers of search engines or commercial providers of services that process content in a similar way.

This act enters into force on … [insert: the first day of the third calendar month following its promulgation in the Federal Law Gazette].

As justification for exempting single words and very small text excerpts from the remuneration obligation (§ 87f, highlighted above), the fundamental right to information was cited, together with a reference to the Federal Court of Justice's 2011 ruling that Google may show the preview images called "thumbnails" in its search results.

Update June 2014
Nevertheless, VG Media is suing Google for payments under the ancillary copyright. According to Heise, claims are also being raised against Deutsche Telekom, Microsoft, Yahoo and 1&1. The publishers behind VG Media include Springer (Bild, Welt), Burda (Focus), Funke (WAZ, Hamburger Abendblatt), Madsack (Hannoversche Allgemeine, Leipziger Volkszeitung), M. DuMont Schauberg (Kölner Stadtanzeiger, Express) and Aschendorff (Westfälische Nachrichten).
This foreseeable legal dispute is a consequence of the vagueness of the law, which we criticized, regarding the number of words/characters that remain free of remuneration.

Vagueness by design

A law should create legal certainty, not uncertainty and grey areas.
The framework must be clearly defined, and offerings that fall under it must be labeled as such, in a machine-readable way:

• What is a press product, and who therefore counts as a press publisher? (Do blogs count? If yes, then the majority of press publishers reject the law that supposedly protects them; if no, why is a minority privileged and the entire internet taken hostage to enforce its interests?)
• Which offerings originate from a press publisher and thus fall under the law, if labeling them is not mandatory?
• How many words/characters are permitted? (Without this specification, anything longer than two words is Russian roulette.)
• Do words within a link (descriptive URL) also fall under the restriction? This could make linking impossible.
• May descriptive URLs be displayed at all, or must they be hidden from the user (who then does not know what he is clicking on)?
• Are words in a link (descriptive URL) also counted, and if the title and the descriptive URL are identical, are they counted twice?
• When is a year over? (How does one learn the publication date if stating it is not mandatory?)
• What is commercial use? Is a company's blog commercial use, or a blog with advertising? Or is commercial use defined by a large number of users or published quotations (analogous to the file-sharing rulings)?

A central clearing house should manage, in a tamper-proof way, the publication date of a page and whether it is subject to the ancillary copyright, in order to avoid cease-and-desist traps.

The myth of the parasite

The official justification reads: "The press publisher is thus protected against the systematic use of his publishing output by commercial providers of search engines and of commercial services that process content in a similar way, which have built their specific business model precisely on this use."

Well, search engines have not built their business model on the systematic use of publishers' output. Search engines existed long before the press publishers discovered the internet for themselves. They provide an independent service: indexing information to ensure its findability across sources. The majority of search results do not come from press publishers, but from blogs, forums, company websites, Wikipedia, Quora, Stackoverflow, LinkedIn, Amazon, social networks, open source projects, academic sites, private homepages, clubs, schools, cities and municipalities, to name just a few.

Why is Google so successful? Because it enriches itself at the expense of the press publishers? No, because millions of companies benefit so much from being listed in the search results that they are willing to pay a lot of money for it: for AdWords, to appear in the ads, as well as for SEO, to reach top positions in the organic results. The press publishers, too, benefit so much from Google, the additional visitors and the advertising revenue, that they have so far not changed their robots.txt to stop Google from the "systematic use of their publishing output".

Incidentally, an alternative business model for search engines is "paid inclusion", i.e. the paid admission of a website into a search engine's index. A business model that protects search engines against the systematic use of their traffic by commercial press publishers, which have built their specific business model precisely on this use 😉

Search engines and aggregators are not parasitic. They are tools urgently needed by every citizen of an information society to cope with the flood of information, to secure diversity and to avoid bias.

Press publishers spend money to create content. A large share of their visitors/revenue comes via search engines (source: Verband Deutscher Zeitschriftenverleger).

Search engines spend money to make content searchable. Only a small share of their search results/revenue comes from press publishers.

So there is a symbiosis, in which the press publishers benefit considerably more from the search engines than vice versa:
• If the press publishers want a share of the search engines' revenue, they would have to give the search engines a share of their own revenue.
• If a press publisher wants the search engines to contribute to his costs, the search engines would have to make him contribute to their costs.
• If the press publisher charges a fee for quotations, he would have to pay a fee for the listing of his pages in the search engine.

Where is the ancillary copyright for search engines?

Developing and operating a search engine costs money. A lot of money. For society, search engines are at least as systemically relevant as press publishers: they secure transparency and diversity of information and the findability of existing knowledge.
Why should press publishers benefit from the search engines' investments free of charge, while the search engines may not benefit from the publishers'?
Now, there are more press publishers in Germany than search engines. Is that why laws are made for press publishers rather than for search engines, let alone for the citizens?

The cuckoo in someone else's net

The internet was not invented by the press publishers. It became successful through openness, exchange and linking. Hardly anyone today remembers CompuServe, its walled-garden counterpart. Search engines existed long before the press publishers discovered the internet for themselves.
The publishers are, of course, interested in the internet's many visitors in general, and in those generated by search engines in particular, yet they want to abolish the very openness that produced them. Fail.

The effect

Everybody loses: press publishers, search engines and, above all, the users.

It is hardly to be expected that search engines, on the basis of a one-sided law, will now take over the role of subsidizing publishers and their outdated business models.

What may happen instead is that the publications of German press publishers disappear from the search engines' indexes and thus lose even more relevance. On the other hand, this is a chance for non-mainstream media and international players to break the filter bubble and strengthen the diversity of information.

Or search engines withdraw from Germany and operate their service from countries with more liberal legislation. Will access to Google from Germany then be blocked, as in China, North Korea or Iran? Does freedom end where it collides with lobby interests?

In this way German politics obstructs search engine innovation from Germany and cements the dominance of US services in Germany.
Startups will avoid the burden of additional costs and the financial risk of cease-and-desist letters and choose countries with a free internet. If not high tech, then at least an economic stimulus for lawyers, courts and the cease-and-desist industry.

The German chancellor opens CeBIT under the motto "Shareconomy", right after a law has been passed that achieves the opposite.

The goal

On top of their already considerable development and operating costs, search engines are now also expected to pay for indexing the press publishers' content, sending them visitors, and thereby increasing their advertising revenue.

In a market economy, search engines would henceforth do this only for the majority of content not affected by the ancillary copyright law (blogs, forums, company websites, Wikipedia, Amazon, social networks, open source, academic sites, private homepages, clubs, schools, towns and municipalities, etc.).
The publishers' offerings would disappear from the search results and be replaced by free content.

The vagueness of the law, however, creates legal uncertainty that makes it impossible to reliably identify and exclude the content affected by the law.
This uncertainty either forces search engines to give up entirely, makes them a permanent target of the cease-and-desist industry, or forces them to pay a kind of protection fee after all, just to be left in peace while indexing even the majority of content that is not covered by the law (but cannot be identified as such).

Of course everyone should have control over their own content. For that purpose the robots.txt has existed for many years, and it is respected by all major search engines.
Press publishers who object to the free use of their content therefore have a simple means to prevent it, entirely without a law. Yet they do not use it, because they are well aware of the extent to which they benefit from the search engines.
Instead, this small minority of content providers uses its influence on politics and legislation to extract, in addition to the stream of visitors, a revenue stream from the search engines. Collateral damage that shakes the very foundations of the internet is accepted along the way. In the end, all users and content providers suffer when search results are reduced to the title alone.

That this is not merely the one-sided view of an affected search engine is shown by the following press comments:

The daily newspaper Grafschafter Nachrichten writes that the dispute with Google "is essentially about the fact that the American company has found a business model that the German publishers still lack. And instead of adapting creatively and courageously to the enormous challenges brought by the transformation of the media, many publishers and their association make it far too easy for themselves by leaning back, holding out their hand and wanting to profit from the success of others".

ZEIT ONLINE writes: "This prohibition [making a press product or parts of it publicly available for commercial purposes] is not what the publishers are after at all. They want a different one. Publishers deliberately make their articles and stories on the internet accessible to everyone and earn money with them. However, there is someone who earns far more money, because unlike any German media it operates worldwide, offers a rather useful service with its search engine and reaches billions of people: Google. The publishers want a share of this company's money, even though their business model is a different one from the search engine's."

As long as the press publishers are not required to mark the content that falls under the ancillary copyright law, the entire internet is taken hostage.

The dark secret

The real problem of the ancillary copyright law has so far gone largely unnoticed: it is impossible to distinguish the majority of free content from the content that is subject to charges.

Of course authors and owners should be allowed to decide how their content is exploited. Even if they prohibit its use and this harms users and society; the alternative is only one link away. Even if they set a price that is unjustified or too high; then it simply will not be bought. If only it were that simple.

The ancillary copyright law establishes that certain offerings are subject to charges, but not how they are to be marked. Manually inquiring about the charges or negotiating contracts is impossible with millions of providers (100 billion pages from 200 million domains on the web). And agreeing on a price before it is even known how often the content will appear in a search result is hardly possible. A large share of the potential search results belongs to the long tail, i.e. they are shown very rarely or never. Yet they are supposed to be paid for.

This makes it impossible to operate a search engine with legal certainty, unless one submits to a blanket payment, regardless of whether and to what extent one uses content affected by the law. The obvious and legal option of restricting oneself to free offerings is prevented by the fact that they cannot be identified. Surely an oversight. Or "an offer you can't refuse"™.

The way out

Since the press publishers do not mark the content that falls under the ancillary copyright law, the only remedy is the machine-readable marking of content that does not fall under it, by a coalition of content providers and search engines, in the interest of all users.

In a machine-readable license, providers can permit the free use of headlines and text excerpts of a definable length.

Machine-readable marking through an extension of the robots.txt (a parsing sketch follows after the example):

Title-length: 80

Snippet-length: 160

Image-size: 200x200

SpeakingUrl-display: yes

Copyright: 'Freedom of Citation License v1.0: Free manual and/or 
automatized citation is granted for the content hosted on this 
domain, within the restrictions defined in disallowed,
titleLength, snippetLength, imageSize, speakingUrlDisplay as long 
as there is a link to the original source. 
All rights from the German Leistungsschutzrecht are waived.'
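
To illustrate how a crawler could consume such a license, here is a minimal parsing sketch in Python. The directive names mirror the example above; since the proposal is not a standard, the parsing rules assumed here (one "key: value" pair per line, a single-quoted multi-line Copyright string) are illustrative assumptions only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CitationLicense:
    title_length: Optional[int] = None     # max. characters of the title that may be shown
    snippet_length: Optional[int] = None   # max. characters of the snippet that may be shown
    image_size: Optional[str] = None       # e.g. "200x200" thumbnail limit
    speaking_url_display: bool = False     # whether the speaking URL may be displayed
    copyright_text: str = ""               # free-text license, e.g. "Freedom of Citation License v1.0: ..."

def parse_citation_extension(robots_txt: str) -> CitationLicense:
    """Extract the proposed citation directives from a robots.txt body."""
    lic = CitationLicense()
    lines = robots_txt.splitlines()
    i = 0
    while i < len(lines):
        key, _, value = lines[i].strip().partition(":")
        key, value = key.strip().lower(), value.strip()
        if key == "title-length":
            lic.title_length = int(value)
        elif key == "snippet-length":
            lic.snippet_length = int(value)
        elif key == "image-size":
            lic.image_size = value
        elif key == "speakingurl-display":
            lic.speaking_url_display = (value.lower() == "yes")
        elif key == "copyright":
            # The license text may span several lines; it ends with a closing single quote.
            text = value
            while not text.rstrip().endswith("'") and i + 1 < len(lines):
                i += 1
                text += " " + lines[i].strip()
            lic.copyright_text = text.strip().strip("'")
        i += 1
    return lic

A crawler could call parse_citation_extension() on each fetched robots.txt and then truncate titles and snippets to the permitted lengths before displaying a result.
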
Ranking

The task of a search engine is to support the user in finding web pages that are relevant to him. On the one hand, the search engine makes a preselection via the displayed result list; on the other hand, the user then decides, based on the information contained in title, snippet, URL and thumbnail, whether to open a link.
So the task of the search engine also includes supporting the user in this decision by displaying sufficient information about the web page.

Web pages that provide only limited information, thereby making an informed decision harder and leaving their relevance hard to judge, are therefore demoted in the ranking. The top of the search results page shows pages whose providers help the search engine enable the user to pick the information most relevant to him.
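
As an illustration only, such a display-based adjustment could look like the following toy sketch; the penalty factors and field names are made-up assumptions, not FAROO's actual ranking formula.

from dataclasses import dataclass

@dataclass
class Result:
    base_score: float           # relevance score from the regular ranking
    has_title: bool = True
    has_snippet: bool = False   # e.g. suppressed because the source permits no snippet
    has_thumbnail: bool = False
    shows_speaking_url: bool = False

def display_adjusted_score(r: Result) -> float:
    """Demote results whose providers allow only limited display information."""
    penalty = 1.0
    if not r.has_title:
        penalty *= 0.1          # a result without a title is nearly useless
    if not r.has_snippet:
        penalty *= 0.5          # without a snippet the user cannot judge relevance well
    if not r.has_thumbnail:
        penalty *= 0.9
    if not r.shows_speaking_url:
        penalty *= 0.95
    return r.base_score * penalty

Results whose providers allow title, snippet, URL and thumbnail to be shown keep their full score and therefore end up at the top of the results page.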

What does FAROO do?

We have adapted our search to the ancillary copyright law in a cost-neutral way:

  • Only search results whose sources do not fall under the ancillary copyright law, or which permit free use, are displayed.
  • Search results whose sources fall under the ancillary copyright law are no longer displayed.
  • For sources that fall under the ancillary copyright law but wish to be displayed in the search results, we charge a listing fee equal to the costs this incurs under the law.

Sources not falling under the ancillary copyright law are identified as follows (see the sketch after this list):

  • a manually curated whitelist
  • an extension of the robots.txt
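
A minimal sketch of this decision, assuming a hypothetical helper fetch_robots_txt() and illustrative whitelist entries; a production system would cache robots.txt and parse the full extension as sketched earlier, rather than just checking for the license name.

from urllib.parse import urlparse
from urllib.request import urlopen

WHITELIST = {"wikipedia.org", "stackoverflow.com", "heise.de"}   # illustrative entries only

def fetch_robots_txt(domain: str) -> str:
    """Fetch a domain's robots.txt (no caching or retries, for brevity)."""
    with urlopen(f"http://{domain}/robots.txt", timeout=5) as resp:
        return resp.read().decode("utf-8", errors="replace")

def may_display(url: str) -> bool:
    """Show a result only if its source is whitelisted or grants free citation."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in WHITELIST:
        return True
    try:
        robots_txt = fetch_robots_txt(domain)
    except OSError:
        return False             # robots.txt unreachable: exclude the source to be safe
    return "freedom of citation license" in robots_txt.lower()
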
Update: Meanwhile the first publishers and publications have taken a position on the ancillary copyright law and explicitly permit the use of snippets: Heise, Golem, Gamona, Winfuture, Techstage, PC-Welt, t3n, iBusiness, l-iz.
This is an important first step, but for services operating at web scale there is no way around a machine-readable license or permission. Two million new domains are registered every day; permissions announced in news articles by a handful of sources are a drop in the ocean.

FAROO supports IGEL, the Initiative gegen Leistungsschutzrecht (initiative against the ancillary copyright law).
