Saturday, 11 July 2015

Solr: "You complete me!"

Introduction

If there's one thing that months on the Solr-user mailing list have taught me, it is that the Autocomplete feature in a search engine is vital, and that around Solr autocomplete there is as much hype as confusion.
In this blog I am going to try to clarify as much as possible all the kinds of Suggesters that can be used in Solr, exploring in detail how they work and showing some real-world examples.
It is not in the scope of this blog post to explore the configurations in detail.
Please use the official wiki [1] and this really interesting blog post [2] to complement this resource.
Let's start with the definition of the Suggester component.

Solr Suggester

From the official Solr wiki [1]:
" The SuggestComponent in Solr provides users with automatic suggestions for query terms. You can use this to implement a powerful auto-suggest feature in your search application.
This approach utilizes Lucene's Suggester implementation and supports all of the lookup implementations available in Lucene.
The main features of this Suggester are:
  • Lookup implementation pluggability
  • Term dictionary pluggability, giving you the flexibility to choose the dictionary implementation
  • Distributed support "
For the details of the configuration parameters, I suggest the official wiki as a reference.
Our focus will be the practical use of the different lookup implementations, with clear examples.

Term Dictionary

The Term Dictionary defines the way the terms (source for the suggestions) are retrieved.
There are different ways of retrieving the terms; we are going to focus on the DocumentDictionary (the most common and the simplest to use).
For details about the other Dictionary implementations, please refer to the official documentation as usual.
The DocumentDictionary uses the Lucene Index to provide the list of possible suggestions, and specifically a field is set to be the source for these terms.

Suggester Building

Building a suggester is the process of:
  • retrieving the terms (source for the suggestions) from the dictionary
  • building the data structures that the Suggester requires for the lookup at query time
  • storing the data structures in memory/disk
The produced data structure will be stored in memory in the first place.
It is recommended to additionally store the built data structures on disk: this way they will be available without rebuilding when they are no longer in memory.
For example, when you start up Solr, the data will be loaded from disk into memory without any rebuilding being necessary.
This parameter is:
"storeDir" for the FuzzyLookup
"indexPath" for the AnalyzingInfixLookup

The built data structures will be later used by the suggester lookup strategy, at query time.
In detail, for the DocumentDictionary, during the building process, for ALL the documents in the index:
  • the stored content of the configured field is read from disk (stored="true" is required for the field for the Suggester to work)
  • the compressed content is decompressed (remember that Solr stores the plain content of a field applying a compression algorithm [3])
  • the suggester data structure is built
We must be really careful here with this sentence:
"for ALL the documents" -> no delta dictionary building happens

So take extra care every time you decide to build the Suggester!
Two suggester configuration parameters are strictly related to this observation:


Parameter: buildOnCommit or buildOnOptimize
Description: If true, the lookup data structure will be rebuilt after each soft commit. If false (the default), the lookup data will be built only when requested by the query parameter suggest.build=true.
Because of the previous observation, it is quite easy to understand why buildOnCommit is highly discouraged.

Parameter: buildOnStartup
Description: If true, the lookup data structure will be built when Solr starts or when the core is reloaded. If this parameter is not specified, the suggester will check whether the lookup data structure is present on disk, and build it if not found.
Again, setting this to true is highly discouraged, or our Solr cores could take a really long time to start up.

A good consideration at this point would be to introduce a delta approach to dictionary building.
It could be a good improvement, making more sense of the "buildOnCommit" feature.
I will follow up by verifying the technical feasibility of this solution.
Now let's step through the descriptions of the various lookup implementations, with related examples.

Note: when using the field type "text_en" we refer to a simple English analyser with soft stemming and a stop filter enabled.
The simple corpus of documents for the examples will be the following:

[
      {
        "id":"44",
        "title":"Video gaming: the history"},
      {
        "id":"11",
        "title":"Video games are an economic business"},
      {
        "id":"55",
        "title":"The new generation of PC and Console Video games"},
      {
        "id":"33",
        "title":"Video games: multiplayer gaming"}]





And a simple synonym mapping : multiplayer, online

AnalyzingLookupFactory

<lst name="suggester">
  <str name="name">AnalyzingSuggester</str>
  <str name="lookupImpl">AnalyzingLookupFactory</str>
  <str name="dictionaryImpl">DocumentDictionaryFactory</str>
  <str name="field">title</str>
  <str name="weightField">price</str>
  <str name="suggestAnalyzerFieldType">text_en</str>
</lst>


Description:
Data Structure: FST
Building: For each document, the stored content from the field is analyzed according to the suggestAnalyzerFieldType. The tokens produced are added to the Index FST.
Lookup strategy: The query is analysed, and the tokens produced are added to the query FST. An intersection happens between the Index FST and the query FST. The suggestions are identified starting at the beginning of the field content.
Suggestions returned: The entire content of the field.

This suggester is quite powerful, as it provides suggestions matching the beginning of a field's content while taking advantage of the analysis chain configured for the field.
In this way it is possible to provide suggestions that take into account synonyms, stop words, stemming and any other token filter used in the analysis.
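To make these mechanics concrete, here is a toy Python sketch of the Analyzing lookup idea: titles and query go through the same analysis, and every title whose analyzed form starts with the analyzed query is suggested. The stop list and the soft stemmer below are crude stand-ins for the real "text_en" analysis chain, not Solr's actual FST-based implementation.

```python
import re

STOP = {"the", "of", "and", "are", "an"}  # toy stop list (an assumption)

def stem(word):
    # toy soft stemmer: "gaming" -> "game", "games" -> "game"
    if word.endswith("ing"):
        return word[:-3] + "e"
    if word.endswith("s"):
        return word[:-1]
    return word

def analyze(text):
    # lowercase, tokenize, drop stop words, stem
    return [stem(w) for w in re.findall(r"[a-z0-9]+", text.lower()) if w not in STOP]

TITLES = [
    "Video gaming: the history",
    "Video games are an economic business",
    "The new generation of PC and Console Video games",
    "Video games: multiplayer gaming",
]

def suggest(query, titles=TITLES):
    # prefix match between the analyzed query and the analyzed field content
    q = " ".join(analyze(query))
    return [t for t in titles if " ".join(analyze(t)).startswith(q)]
```

With this sketch, "Video Games" and "Video gaming" both analyze to the prefix "video game", which is exactly why the stemming and stop-word examples match.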

Let's see some example:

Query to autocomplete | Suggestions | Explanation
"Video gam"
  • "Video gaming: the history"
  • "Video games are an economic business"
  • "Video games: multiplayer gaming"
The suggestions coming are simply the result of the prefix match. No surprises so far.
"Video Games"
  • "Video gaming: the history"
  • "Video games are an economic business"
  • "Video games: multiplayer gaming"
The input query is analysed, and the tokens produced are the following : "video" "game".
The analysis was applied at building time as well, producing the same stemmed terms for the beginning of the titles.
"video gaming" -> "video" "game"
 "video games" -> "video" "game"
So the prefix match applies.

"Video game econ"
  • "Video games are an economic business"
In this case we can see that the stop words were not considered when building the index FST. Note :
position increments MUST NOT be preserved for this example to work, see the configuration details.
"Video games online ga"
  • "Video games: multiplayer gaming"
Synonym expansion has happened and the match is returned as online and multiplayer are considered synonyms by the suggester, based on the analysis applied.


FuzzyLookupFactory

<lst name="suggester">
  <str name="name">FuzzySuggester</str>
  <str name="lookupImpl">FuzzyLookupFactory</str> 
  <str name="dictionaryImpl">DocumentDictionaryFactory</str>
  <str name="field">title</str>
  <str name="weightField">price</str>
  <str name="suggestAnalyzerFieldType">text_en</str>
</lst>



Description:
Data Structure: FST
Building: For each document, the stored content from the field is analyzed according to the suggestAnalyzerFieldType. The tokens produced are added to the Index FST.
Lookup strategy: The query is analysed; the tokens produced are then expanded, producing for each token all the variations according to the max edits configured for the string distance function in use (the default is the Levenshtein distance [4]). The tokens finally produced are added to the query FST, keeping the variations. An intersection happens between the Index FST and the query FST. The suggestions are identified starting at the beginning of the field content.
Suggestions returned: The entire content of the field.

This suggester is quite powerful, as it provides suggestions matching the beginning of a field's content, taking advantage of a fuzzy search on top of the analysis chain configured for the field.
In this way it is possible to provide suggestions that take into account synonyms, stop words, stemming and any other token filter used in the analysis, and also to support terms misspelled by the user.
It is an extension of the Analyzing lookup.

IMPORTANT: remember the proper order of the processing happening at query time:

  • FIRST, the query is analysed and tokens are produced
  • THEN, the tokens are expanded with the inflections, based on the edit distance and the distance algorithm configured
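This two-step order can be sketched in Python: analyze first, then match each analyzed token within the configured maximum number of edits. The distance function below is the optimal-string-alignment variant, which counts an adjacent transposition ("gmae" -> "game") as a single edit, similar to what the suggester does when its transpositions option is enabled; the analyzer is the same toy stand-in as before, not Lucene's implementation.

```python
import re

STOP = {"the", "of", "and", "are", "an"}  # toy stop list (an assumption)

def stem(word):
    # toy soft stemmer: "gaming" -> "game", "games" -> "game"
    if word.endswith("ing"):
        return word[:-3] + "e"
    if word.endswith("s"):
        return word[:-1]
    return word

def analyze(text):
    return [stem(w) for w in re.findall(r"[a-z0-9]+", text.lower()) if w not in STOP]

def osa(a, b):
    # optimal string alignment distance: Levenshtein plus adjacent transpositions
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition = 1 edit
    return d[len(a)][len(b)]

TITLES = [
    "Video gaming: the history",
    "Video games are an economic business",
    "The new generation of PC and Console Video games",
    "Video games: multiplayer gaming",
]

def fuzzy_suggest(query, titles=TITLES, max_edits=1):
    # each analyzed query token must match, from the beginning of the field content,
    # an analyzed title token within max_edits (or be an exact prefix of it)
    q = analyze(query)
    out = []
    for t in titles:
        toks = analyze(t)
        if len(toks) >= len(q) and all(
            osa(qt, tt) <= max_edits or tt.startswith(qt)
            for qt, tt in zip(q, toks)
        ):
            out.append(t)
    return out
```

Note how "Video gmaes" still matches (stemming gives "gmae", one transposition away from "game"), while "Video gamign" does not: "gamign" is not stemmed, and it is more than one edit away from the indexed token "game".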


Let's see some example:

Query to autocomplete | Suggestions | Explanation
"Video gmaes"
  • "Video gaming: the history"
  • "Video games are an economic business"
  • "Video games: multiplayer gaming"
The input query is analysed, and the tokens produced are the following : "video" "gmae".
Then the FST associated is expanded with new statuses containing the inflections for each token.
For example "game" will be added to the query FST because it has a distance of 1 from the original token.
And the prefix matching is working fine returning the expected suggestions.
"Video gmaing"
  • "Video gaming: the history"
  • "Video games are an economic business"
  • "Video games: multiplayer gaming"
The input query is analysed, and the tokens produced are the following : "video" "gma".
Then the FST associated is expanded with new statuses containing the inflections for each token.
For example "gam" will be added to the query FST because it has a distance of 1 from the original token.
So the prefix match applies.

"Video gamign"
  • No suggestion returned
This can seem odd at first, but it is coherent with the Look up implementation.
The input query is analysed, and the tokens produced are the following : "video" "gamign".
Then the FST associated is expanded with new statuses containing the inflections for each token.
For example "gaming" will be added to the query FST because it has a distance of 1 from the original token.
But no prefix matching will apply, because the Index FST contains "game", the stemmed token for "gaming".

AnalyzingInfixLookupFactory

<lst name="suggester">
  <str name="name">AnalyzingInfixSuggester</str>
  <str name="lookupImpl">AnalyzingInfixLookupFactory</str> 
  <str name="dictionaryImpl">DocumentDictionaryFactory</str>
  <str name="field">title</str>
  <str name="weightField">price</str>
  <str name="suggestAnalyzerFieldType">text_en</str>
</lst>


Description:
Data Structure: Auxiliary Lucene Index
Building: For each document, the stored content from the field is analyzed according to the suggestAnalyzerFieldType and then additionally EdgeNgram token filtered. Finally, an auxiliary index is built with those tokens.
Lookup strategy: The query is analysed according to the suggestAnalyzerFieldType. Then a phrase search is triggered against the auxiliary Lucene index. The suggestions are identified starting at the beginning of each token in the field content.
Suggestions returned: The entire content of the field.

This suggester is really common nowadays, as it provides suggestions matching the middle of a field's content, taking advantage of the analysis chain configured for the field.
In this way it is possible to provide suggestions that take into account synonyms, stop words, stemming and any other token filter used in the analysis, and to match the suggestion based on internal tokens.
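A toy sketch of the infix idea: at build time every analyzed token is expanded into its edge ngrams, and at query time each analyzed query token just has to match one of those ngrams, i.e. the beginning of any token in the field. Again, the analyzer is a crude stand-in, and the real suggester uses an auxiliary Lucene index and a phrase search, not Python sets.

```python
import re

STOP = {"the", "of", "and", "are", "an"}  # toy stop list (an assumption)

def stem(word):
    # toy soft stemmer: "gaming" -> "game", "games" -> "game"
    if word.endswith("ing"):
        return word[:-3] + "e"
    if word.endswith("s"):
        return word[:-1]
    return word

def analyze(text):
    return [stem(w) for w in re.findall(r"[a-z0-9]+", text.lower()) if w not in STOP]

def edge_ngrams(tok):
    # "game" -> {"g", "ga", "gam", "game"}
    return {tok[:i] for i in range(1, len(tok) + 1)}

TITLES = [
    "Video gaming: the history",
    "Video games are an economic business",
    "The new generation of PC and Console Video games",
    "Video games: multiplayer gaming",
]

def infix_suggest(query, titles=TITLES):
    q = analyze(query)
    out = []
    for t in titles:
        grams = set()
        for tok in analyze(t):
            grams |= edge_ngrams(tok)  # what the EdgeNgram filter indexed
        if q and all(qt in grams for qt in q):
            out.append(t)
    return out
```

This is why "econ" matches in the middle of "Video games are an economic business": the edge ngrams of "economic" include "econ", no matter where the token sits in the title.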

Let's see some example:

Query to autocomplete | Suggestions | Explanation
"gaming"
  • "Video gaming: the history"
  • "Video games are an economic business"
  • "Video games: multiplayer gaming"
The input query is analysed, and the tokens produced are the following : "game" .
In the Auxiliary Index , for each of the field content we have the EdgeNgram tokens:
"v","vi","vid"… , "g","ga","gam","game" .
So the match happens and the suggestions are returned.
"ga"
  • "Video gaming: the history"
  • "Video games are an economic business"
  • "Video games: multiplayer gaming"
The input query is analysed, and the tokens produced are the following : "ga" .
In the Auxiliary Index , for each of the field content we have the EdgeNgram tokens:
"v","vi","vid"… , "g","ga","gam","game" .
So the match happens and the suggestions are returned.

"game econ"
  • "Video games are an economic business"
Stop words will not appear in the Auxiliary Index.
Both "game" and "econ" will be, so the match applies.

BlendedInfixLookupFactory

We are not going to describe the details of this lookup strategy, as it is pretty much the same as the AnalyzingInfix one.
The only difference appears in scoring the suggestions, weighting prefix matches across the matched documents: the score will be higher when a hit is closer to the start of the suggestion.
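As a rough sketch of that position-sensitive scoring (the exact blending coefficients are a Lucene implementation detail; the 1/(position+1) "position reciprocal" formula below is just an illustrative choice), a toy single-term ranker could look like this:

```python
import re

STOP = {"the", "of", "and", "are", "an"}  # toy stop list (an assumption)

def stem(word):
    # toy soft stemmer: "gaming" -> "game", "games" -> "game"
    if word.endswith("ing"):
        return word[:-3] + "e"
    if word.endswith("s"):
        return word[:-1]
    return word

def analyze(text):
    return [stem(w) for w in re.findall(r"[a-z0-9]+", text.lower()) if w not in STOP]

TITLES = [
    "Video gaming: the history",
    "Video games are an economic business",
    "The new generation of PC and Console Video games",
    "Video games: multiplayer gaming",
]

def blended_rank(query, titles=TITLES):
    q = analyze(query)[0]  # single-term query, for simplicity
    scored = []
    for t in titles:
        for pos, tok in enumerate(analyze(t)):
            if tok.startswith(q):
                # earlier hits score higher: 1/(position + 1)
                scored.append((1.0 / (pos + 1), t))
                break
    scored.sort(key=lambda pair: -pair[0])  # stable sort keeps ties in order
    return [t for _, t in scored]
```

For the query "game", the three titles where the hit is the second token outrank "The new generation of PC and Console Video games", where the hit only appears near the end.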

FSTLookupFactory

<lst name="suggester">
  <str name="name">FSTSuggester</str>
  <str name="lookupImpl">FSTLookupFactory</str> 
  <str name="dictionaryImpl">DocumentDictionaryFactory</str>
  <str name="field">title</str>
</lst>


Description:
Data Structure: FST
Building: For each document, the stored content is added to the Index FST.
Lookup strategy: The query is added to the query FST. An intersection happens between the Index FST and the query FST. The suggestions are identified starting at the beginning of the field content.
Suggestions returned: The entire content of the field.

This suggester is quite simple: it provides suggestions matching the beginning of a field's content with an exact prefix match.
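Since no analysis is involved, the whole lookup can be sketched in a couple of lines of Python: a raw, case-sensitive prefix match on the stored content (the real implementation still intersects FSTs, of course; this only illustrates the matching behaviour).

```python
TITLES = [
    "Video gaming: the history",
    "Video games are an economic business",
    "The new generation of PC and Console Video games",
    "Video games: multiplayer gaming",
]

def fst_suggest(query, titles=TITLES):
    # no analysis at all: raw, case-sensitive prefix match on the stored content
    return [t for t in titles if t.startswith(query)]
```

This immediately explains the examples that follow: "Video Games" (capital G) and "video gam" (lowercase v) return nothing, because the query is not analysed in any way.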

Let's see some example:

Query to autocomplete | Suggestions | Explanation
"Video gam"
  • "Video gaming: the history"
  • "Video games are an economic business"
  • "Video games: multiplayer gaming"
The suggestions coming are simply the result of the prefix match. No surprises so far.
"Video Games"
  • No Suggestions
The input query is not analysed,  and no field content in the documents starts with that exact prefix

"video gam"
  • no Suggestions
The input query is not analysed,  and no field content in the documents starts with that exact prefix
"game"
  • no Suggestions
This lookup strategy works only at the beginning of the field content. So no suggestion is returned.


For the following lookup strategy we are going to use a slightly modified corpus of documents :

[
      {
        "id":"44",
        "title":"Video games: the history"},
      {
        "id":"11",
        "title":"Video games the historical background"},
      {
        "id":"55",
        "title":"Superman, hero of the modern time"},
      {
        "id":"33",
        "title":"the study of the hierarchical faceting"}]

FreeTextLookupFactory

<lst name="suggester">
  <str name="name">FreeTextSuggester</str>
  <str name="lookupImpl">FreeTextLookupFactory</str> 
  <str name="dictionaryImpl">DocumentDictionaryFactory</str>
  <str name="field">title</str>
  <str name="ngrams">3</str>
  <str name="separator"> </str>
  <str name="suggestFreeTextAnalyzerFieldType">text_general</str>
</lst>


Description:
Data Structure: FST
Building: For each document, the stored content from the field is analyzed according to the suggestFreeTextAnalyzerFieldType. As a last token filter, a ShingleFilter is added with a minimum shingle size of 2 and a maximum shingle size of <ngrams>. The final tokens produced are added to the Index FST.
Lookup strategy: The query is analysed according to the suggestFreeTextAnalyzerFieldType. As a last token filter, a ShingleFilter is added with a minimum shingle size of 2 and a maximum shingle size of <ngrams>. Only the last "ngrams" tokens of the query will be evaluated to produce the suggestions.
Suggestions returned: ngram token suggestions (not the full field content).

This lookup strategy is completely different from the others seen so far: its main difference is that the suggestions are ngram tokens (and NOT the full content of the field).
We must take extra care in using this suggester, as it is quite easily prone to errors. Some guidelines:

  • Don't use a heavy analyzer; the suggested terms will come from the index, so be sure they are meaningful tokens. A really basic analyser is suggested; stop words and stemming are not.
  • Be sure you use a proper separator (' ' is suggested); otherwise the default one will appear encoded as "#30;"
  • the ngrams parameter sets how many of the last tokens of the query will be considered
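The shingle mechanics can be sketched in Python against the corpus above: at build time every 1-, 2- and 3-token shingle goes into the index; at lookup time the last ngrams tokens of the query form progressively shorter contexts, each prefix-matched against the indexed shingles. This toy version returns every matching shingle, while the real suggester also weights the candidates and limits their number.

```python
import re

TITLES = [
    "Video games: the history",
    "Video games the historical background",
    "Superman, hero of the modern time",
    "the study of the hierarchical faceting",
]

def tokens(text):
    # basic analysis only, as recommended: lowercase + word split
    # (no stop words, no stemming)
    return re.findall(r"[a-z0-9]+", text.lower())

def shingles(toks, max_n=3):
    # all 1..max_n token shingles, e.g. "video", "video games", "video games the"
    return {
        " ".join(toks[i:i + n])
        for n in range(1, max_n + 1)
        for i in range(len(toks) - n + 1)
    }

def freetext_suggest(query, titles=TITLES, ngrams=3):
    index = set()
    for t in titles:
        index |= shingles(tokens(t), ngrams)
    qtoks = tokens(query)
    out = []
    # longest context first: "games the h", then "the h", then "h"
    for k in range(min(ngrams, len(qtoks)), 0, -1):
        prefix = " ".join(qtoks[-k:])
        out += sorted(sh for sh in index if sh.startswith(prefix) and sh != prefix)
    return out
```

For "games the h" this returns shingle suggestions such as "games the history", "games the historical", "the hierarchical" and "hero", with the longest matching contexts first, mirroring the examples below.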


Let's see some example:

Query to autocomplete | Suggestions | Explanation
"video g"
  • "video gaming"
  • "video games"
  • "generation"
The input query is analysed, and the tokens produced are the following: "video g", "g".
The analysis was applied at building time as well, producing 2-3 shingles.
"video g" matches by prefix 2 shingles from the index FST .
"g" matches by prefix 1 shingle from the index FST.

"games the h"
  • "games the history"
  • "games the historical"
  • "the hierarchical"
  • "hero"
The input query is analysed, and the tokens produced are the following: "games the h", "the h", "h".
The analysis was applied at building time as well, producing 2-3 shingles.
"games the h" matches by prefix 2 shingles from the index FST .
"the h" matches by prefix 1 shingle from the index FST.
"h" matches by prefix 1 shingle from the index FST.


[1] Suggester Solr wiki
[2] Solr suggester
[3] Lucene Storing Compression
[4] Levenshtein Distance

14 comments:

  1. hi there!
    Thanks for your post - very helpful info that I couldn't find elsewhere! I wonder if you can help me with a problem. I am trying to use the FreeTextLookupFactory lookup to provide suggestions that are part of the actual indexed content field. But I keep getting Solr errors like:
    IllegalArgumentException: tokens must not contain separator byte
    Would it be possible for you to provide an example field and type definition that can be used with this? Perhaps my field set up is incorrect. Thanks very much!

    1. Hi Unknown, unfortunately I missed this comment!
      Have you solved your problem?
      What was the solution?
      This kind of suggester actually does not use the field type, but the specific analysis you specify in the suggester configuration.
      Be careful with the note about the separator; it was a tricky one!
      Cheers

  2. Hi, a very good analysis on different suggesters. Can you please explain about 'context filtering' in AnalyzingInfixSuggester. Just curious about how the filtering happens in this case over auxiliary lucene index.

    1. Hey Arsha,
      thanks for the comment :)
      For the context filtering, what happens is that we actually add to the auxiliary index data structure the field we want to filter on later.
      Then it is possible to configure a query and filter the results (suggestions) by the content of that field.
      I will take a note and add a deep analysis of the feature in the blog post :)
      Thanks for the feedback !

      http://jirasearch.mikemccandless.com/search.py?index=jira uses both the AnalyzingInfixSuggester "context" feature, to only show suggestions for the project you've drilled into, and its "payload" feature, to hold the metadata behind each suggestion

    3. Hi Shyamsunder,
      you are correct, the context filtering is used in Michael portal :)
      But what about the "payload" ?
      Which metadata are you referring to ? I can see only the title in the suggestions ( but I just quickly played with it)

      Cheers

  3. Hi Alex, thanks for the detailed information on suggesters with examples. Solr Suggestor Wiki is confusing and misleading - https://cwiki.apache.org/confluence/display/solr/Suggester. They should link to this page on that page.

    1. Thank you very much Shyamsunder!
      Much appreciated!

  4. About getting matches for "Video gamign" using FuzzyLookupFactory, what if we apply analysis on spelling correction of "gamign", i.e., "gaming" to get stemmed tokens. This way we get results.

    1. Hi Shyamsunder, you mean using an analyzer that performs spell correction ( dictionary based ? ) and then stemming ?
      It could be possible.
      First we define a TokenFilter that does the spell correction based on a dictionary ( it is actually a good idea, but I think it doesn't exist out of the box).
      Then we can specify a stemming token filter, and the game is done.

      This is actually a good idea, and can potentially be useful in a number of use cases:

      https://issues.apache.org/jira/browse/SOLR-9429

    2. You got it. Thanks for considering my idea.

  5. Hi, in the case of the AnalyzingInfixSuggester: if the auxiliary index build is in progress after "suggest.build=true", will the suggestions keep working during this interval?

  6. Shouldn't this:

    text_general

    point to the respective suggest type, not just text_general? Or you have just modified text_general to include the Shingle filter? Could you also provide your fieldType definition?

    I am trying to set this up and I'm running into all sorts of problems; one of the most common ones is this: IllegalArgumentException: tokens must not contain separator byte.

