1000x Faster Spelling Correction Algorithm


Update: We released a C# implementation as open source.
Update 2: We are 100,000 times faster for edit distance=3.
Update 3: Spelling correction is now also part of FAROO search.
Update 4: The source code is now also on GitHub.

Recently I answered a question on Quora about spelling correction for search engines. When I described our algorithm I was pointed to Peter Norvig’s page where he outlined his approach.

Both algorithms are based on edit distance (Damerau-Levenshtein distance).
Both try to find the dictionary entries with the smallest edit distance from the query term.
If the edit distance is 0, the term is spelled correctly; if the edit distance is <=2, the dictionary term is used as a spelling suggestion.
But our way of searching the dictionary is different, resulting in a significant performance gain and language independence.

Three ways to search for minimum edit distance in a dictionary:

1. Naive approach
The obvious way of doing this is to compute the edit distance from the query term to each dictionary term, before selecting the string(s) of minimum edit distance as spelling suggestion. This exhaustive search is inordinately expensive.
Source: Christopher D. Manning, Prabhakar Raghavan & Hinrich Schütze: Introduction to Information Retrieval.

The performance can be significantly improved by terminating the edit distance calculation as soon as a threshold of 2 or 3 has been reached.
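
As an illustration, here is a minimal Python sketch of this naive scan with early termination (function names and the cutoff logic are illustrative only, not taken from any of the implementations discussed here):

    def bounded_levenshtein(a: str, b: str, max_distance: int) -> int:
        """Levenshtein distance between a and b, aborting once max_distance is exceeded."""
        if abs(len(a) - len(b)) > max_distance:
            return max_distance + 1
        previous = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            current = [i]
            best_in_row = i
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                current.append(min(previous[j] + 1,          # deletion
                                   current[j - 1] + 1,       # insertion
                                   previous[j - 1] + cost))  # substitution
                best_in_row = min(best_in_row, current[j])
            if best_in_row > max_distance:                   # early termination
                return max_distance + 1
            previous = current
        return previous[-1]

    def naive_suggestions(query: str, dictionary: list[str], max_distance: int = 2) -> list[str]:
        """Exhaustive scan: compare the query against every single dictionary term."""
        best, suggestions = max_distance + 1, []
        for term in dictionary:
            d = bounded_levenshtein(query, term, max_distance)
            if d < best:
                best, suggestions = d, [term]
            elif d == best and d <= max_distance:
                suggestions.append(term)
        return suggestions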

2. Peter Norvig
Generate all possible terms with an edit distance <=2 (deletes + transposes + replaces + inserts) from the query term and search them in the dictionary.
For a word of length n, an alphabet size a, and an edit distance d=1, there will be n deletions, n-1 transpositions, a*n alterations, and a*(n+1) insertions, for a total of 2n+2an+a-1 terms at search time.
Source: Peter Norvig: How to Write a Spelling Corrector.

This is much better than the naive approach, but still expensive at search time (114,324 terms for n=9, a=36, d=2) and language dependent (because the alphabet is used to generate the candidate terms, and it differs between languages and is huge for Chinese: a=70,000 Unicode Han characters).
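
For comparison, a compact Python sketch of this kind of candidate generation, following the structure of Norvig's published corrector (the 26-letter Latin alphabet below is just an example; this alphabet is exactly where the language dependence comes from):

    import string

    ALPHABET = string.ascii_lowercase        # language dependent: a = 26 here

    def edits1(word: str) -> set[str]:
        """All strings at edit distance 1: deletes, transposes, replaces, inserts."""
        splits     = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes    = [l + r[1:] for l, r in splits if r]
        transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
        replaces   = [l + c + r[1:] for l, r in splits if r for c in ALPHABET]
        inserts    = [l + c + r for l, r in splits for c in ALPHABET]
        return set(deletes + transposes + replaces + inserts)

    def edits2(word: str) -> set[str]:
        """Strings reachable by applying two single edits (covers edit distance <=2)."""
        return {e2 for e1 in edits1(word) for e2 in edits1(e1)}

    # A 9-letter word already yields hundreds of candidates at distance 1 and on the
    # order of 10^5 at distance <=2, each of which has to be looked up at query time.
    print(len(edits1("abcdefghi")), len(edits2("abcdefghi")))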

3. Symmetric Delete Spelling Correction (FAROO)
Generate terms with an edit distance <=2 (deletes only) from each dictionary term and add them together with the original term to the dictionary. This has to be done only once during a pre-calculation step.
Generate terms with an edit distance <=2 (deletes only) from the input term and search them in the dictionary.
For a word of length n, an alphabet size of a, an edit distance of 1, there will be just n deletions, for a total of n terms at search time.

This is three orders of magnitude less expensive (36 terms for n=9 and d=2) and language independent (the alphabet is not required to generate deletes).
The cost of this approach is the pre-calculation time and storage space of x deletes for every original dictionary entry, which is acceptable in most cases.
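
Here is a minimal sketch of the idea in Python (our released implementation is in C#; the data structures and names below are simplified for illustration):

    from collections import defaultdict
    from itertools import combinations

    MAX_DISTANCE = 2

    def deletes(word: str, max_distance: int = MAX_DISTANCE) -> set[str]:
        """All strings reachable from word by deleting up to max_distance characters."""
        results = set()
        for d in range(1, min(max_distance, len(word)) + 1):
            for positions in combinations(range(len(word)), d):
                results.add(''.join(c for i, c in enumerate(word) if i not in positions))
        return results

    def build_index(words):
        """Pre-calculation: map each word and each of its deletes to the original word(s)."""
        index = defaultdict(set)
        for word in words:
            index[word].add(word)
            for variant in deletes(word):
                index[variant].add(word)   # several words may share a delete ("sn" <- "sun", "sin")
        return index

    def lookup(query: str, index) -> set[str]:
        """Search time: only the query itself and its deletes are looked up."""
        candidates = set()
        for variant in {query} | deletes(query):
            candidates |= index.get(variant, set())
        return candidates

The candidates returned by lookup cover all four comparison types listed in Remark 2 below; type-4 candidates still have to be verified with a true edit distance calculation.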

The number x of additional deletes for a single dictionary entry depends on the edit distance: x=n for edit distance=1, x=n*(n-1)/2 for edit distance=2, and in general x=n!/d!/(n-d)! for edit distance exactly d (combinatorics: k out of n combinations without repetition, with k=n-d characters kept).
E.g. for a maximum edit distance of 2, an average word length of 5, and 100,000 dictionary entries, we need to additionally store 1,500,000 deletes (5 + 10 = 15 deletes per entry).
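
The arithmetic behind this example can be checked directly by summing the per-distance counts from the formula above (Python):

    from math import comb

    word_length, max_distance, entries = 5, 2, 100_000
    deletes_per_entry = sum(comb(word_length, d) for d in range(1, max_distance + 1))
    print(deletes_per_entry, deletes_per_entry * entries)   # 15 per entry, 1,500,000 in total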

Remark 1: During the pre-calculation, different words in the dictionary might lead to the same delete term: delete(sun,1)==delete(sin,1)==sn.
While we generate only one new dictionary entry (sn), internally we need to store both original terms as spelling correction suggestions (sun, sin).
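
With the index sketched above this simply means that a delete entry maps to a set of original terms rather than to a single one:

    index = build_index(["sun", "sin"])   # using the sketch above
    print(index["sn"])                    # {'sun', 'sin'}: one delete entry, two suggestions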

Remark 2: There are four different comparison pair types:

  1. dictionary entry==input entry,
  2. delete(dictionary entry,p1)==input entry
  3. dictionary entry==delete(input entry,p2)
  4. delete(dictionary entry,p1)==delete(input entry,p2)

The last comparison type is required for replaces and transposes only. But we need to check whether the suggested dictionary term is really a replace or an adjacent transpose of the input term to prevent false positives of higher edit distance (bank==bnak and bank==bink, but bank!=kanb and bank!=xban and bank!=baxn).
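
Here is a sketch of that verification step as a restricted Damerau-Levenshtein distance (optimal string alignment with adjacent transpositions) in Python; again illustrative rather than our production code:

    def damerau_levenshtein(a: str, b: str) -> int:
        """Edit distance with adjacent transpositions (optimal string alignment variant)."""
        d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i in range(len(a) + 1):
            d[i][0] = i
        for j in range(len(b) + 1):
            d[0][j] = j
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
                if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                    d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)   # adjacent transposition
        return d[len(a)][len(b)]

    # bank/bnak and bank/bink really are at distance 1 (adjacent transpose / replace);
    # bank/kanb, bank/xban and bank/baxn are at distance 2, so a delete match alone
    # would overstate how close they are.
    for candidate in ("bnak", "bink", "kanb", "xban", "baxn"):
        print(candidate, damerau_levenshtein("bank", candidate))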

Remark 3: Instead of a dedicated spelling dictionary we are using the search engine index itself. This has several benefits:

  1. It is dynamically updated. Every newly indexed word whose frequency is over a certain threshold is automatically used for spelling correction as well.
  2. As we need to search the index anyway the spelling correction comes at almost no extra cost.
  3. When indexing misspelled terms (i.e. terms not marked as correct in the index) we do a spelling correction on the fly and index the page for the correct term as well.

Remark 4: We have implemented query suggestions/completion in a similar fashion. This is a good way to prevent spelling errors in the first place. Every newly indexed word whose frequency is over a certain threshold is stored as a suggestion for all of its prefixes (they are created in the index if they do not yet exist). As we provide an instant search feature anyway, the lookup for suggestions also comes at almost no extra cost. Multiple terms are sorted by the number of results stored in the index.
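
A rough Python sketch of such a prefix index (the threshold value, the sample frequencies, and sorting by plain frequency instead of by the number of stored results are simplifications for illustration):

    from collections import defaultdict

    FREQUENCY_THRESHOLD = 10      # illustrative value only

    def build_completions(term_frequencies):
        """Map every prefix to the terms starting with it, best candidates first."""
        completions = defaultdict(list)
        for term, frequency in term_frequencies.items():
            if frequency < FREQUENCY_THRESHOLD:
                continue                                   # below the threshold: not suggested
            for length in range(1, len(term) + 1):
                completions[term[:length]].append((frequency, term))
        for prefix in completions:
            completions[prefix].sort(reverse=True)         # here by frequency, as a stand-in for result count
        return completions

    suggestions = build_completions({"spelling": 120, "spell": 80, "speed": 15, "spa": 3})
    print([term for _, term in suggestions["spe"]])        # ['spelling', 'spell', 'speed']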

Reasoning
In our algorithm we are exploiting the fact that the edit distance between two terms is symmetrical:

  1. We can generate all terms with an edit distance <=2 from the query term (trying to reverse the query term's error) and check them against all dictionary terms,
  2. We can generate all terms with an edit distance <=2 from each dictionary term (trying to create the query term's error) and check the query term against them.
  3. We can combine both and meet in the middle, by transforming the correct dictionary terms to erroneous strings, and transforming the erroneous input term to the correct strings.
    Because adding a char to a dictionary term is equivalent to removing a char from the input term and vice versa, we can restrict the transformation on both sides to deletes only.

We are using variant 3, because the delete-only transformation is language independent and three orders of magnitude less expensive.

Computational Complexity
Our algorithm is constant time (O(1)), i.e. independent of the dictionary size (but it depends on the average term length and the maximum edit distance), whereas e.g. BK-trees have a search time of O(log dictionary_size).

Application
Possible application fields of our algorithm are those of fast approximate dictionary string matching: spell checkers for word processors and search engines, correction systems for optical character recognition, natural language translation based on translation memory, record linkage, de-duplication, matching DNA sequences, fuzzy string searching and fraud detection.

———

BTW, by using a similar principle our web search is three orders of magnitude more efficient as well. While Google touches 1000 servers for every query, we need to query just one (server/peer).
That’s not because of the DHT! It is the other way around: because even for a complex query on a web-scale index only one of the servers needs to be queried, this is what enables the use of a DHT for web search.
Our algorithm improves the efficiency of central servers in a data center to the same extent.

9 thoughts on “1000x Faster Spelling Correction Algorithm”

  1. Pingback: Quora

  2. Pingback: 1000x Faster Spelling Correction: Source Code released « FAROO Blog

  3. Pingback: Spelling correction, Query completion and Instant search « FAROO Blog

  4. Just for the sake of discussion, another option is using a BK-tree, which can be done as long as your distance function maintains the triangle inequality (which I believe the edit distance function does.)

  5. BK-trees have a search time of O(log n) for n=dictionary_size.
    Our algorithm is constant time (O(1)), i.e. independent of the dictionary size (but depends on the average term length and the maximum edit distance).

  6. Thanks Wolf. I’m glad I found this; I’d been researching this issue on and off for a while, and your approach makes perfect sense. I benchmarked it against our current methods and it indeed performed well. We released our code for a similar approach written in golang: https://github.com/sajari/fuzzy

  7. This symmetric deletion method allows corrections of up to edit distance 4, e.g.:
    “abcd” and “cdef” have edit distance 4, but by applying two deletions to each term you can get them to match: abcd => cd and cdef => cd.

    This means that you are going to end up considering far more correction candidates than Norvig’s algorithm, which can be very costly.

  8. @Jason: see Remark 2 of the above blog post. In comparison type 4 (applying deletes on both dictionary term and input term) we need to calculate
    the true Damerau–Levenshtein distance in order to prevent false positives of higher edit distance (e.g. false positives of edit distance=4).

    But this does NOT mean that we have to calculate all candidates for edit distance 4:
    1. with the symmetric deletion method we need to calculate far fewer candidates (at search time) and need far fewer comparisons for the same edit distance than we would have with Norvig’s algorithm (only deletes instead of deletes + transposes + replaces + inserts).
    2. of those fewer candidates only a fraction (type 4) requires the calculation of the true Damerau-Levenshtein distance to exclude false positives.
    3. our benchmark of both algorithms in the follow-up post ( http://blog.faroo.com/2012/06/24/1000x-faster-spelling-correction-source-code-released/ ) proves that our algorithm is 3 to 5 orders of magnitude faster for the same edit distance.
