[[hunspell]]
=== Hunspell Stemmer

Elasticsearch provides ((("dictionary stemmers", "Hunspell stemmer")))((("stemming words", "dictionary stemmers", "Hunspell stemmer")))dictionary-based stemming via the http://bit.ly/1KNFdXI[`hunspell` token filter]. Hunspell (http://hunspell.sourceforge.net/[_hunspell.sourceforge.net_]) is the spell checker used by OpenOffice, LibreOffice, Chrome, Firefox, Thunderbird, and many other open and closed source projects.

Hunspell dictionaries((("Hunspell stemmer", "obtaining a Hunspell dictionary"))) can be obtained from a number of sources, such as the OpenOffice and Mozilla extension sites.

A Hunspell dictionary consists of two files with the same base name, such as `en_US`, but with one of two extensions:

`.dic`::

Contains all the root words, in alphabetical order, plus a code representing
all possible suffixes and prefixes (which collectively are known as _affixes_)

`.aff`::

Contains the actual prefix or suffix transformation for each code listed
in the `.dic` file

==== Installing a Dictionary

The Hunspell token ((("Hunspell stemmer", "installing a dictionary")))filter looks for dictionaries within a dedicated Hunspell directory, which defaults to `./config/hunspell/`. The `.dic` and `.aff` files should be placed in a subdirectory whose name represents the language or locale of the dictionaries. For instance, we could create a Hunspell stemmer for American English with the following layout:

[source,text]
----
config/
  └ hunspell/ <1>
      └ en_US/ <2>
          ├ en_US.dic
          ├ en_US.aff
          └ settings.yml <3>
----

<1> The location of the Hunspell directory can be changed by setting `indices.analysis.hunspell.dictionary.location` in the `config/elasticsearch.yml` file, as shown in the example following this list.

<2> `en_US` will be the name of the locale or language that we pass to the `hunspell` token filter.

<3> Per-language settings file, described in the following section.
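
For example, to keep the dictionaries somewhere other than the default directory, you could add a line like the following to `config/elasticsearch.yml` (the path shown is purely illustrative):

[source,yaml]
----
indices.analysis.hunspell.dictionary.location: /mnt/shared/hunspell
----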

==== Per-Language Settings

The `settings.yml` file contains settings((("Hunspell stemmer", "per-language settings"))) that apply to all of the dictionaries within the language directory, such as these:

[source,yaml]
----
ignore_case:          true
strict_affix_parsing: true
----


The meaning of these settings is as follows:

`ignore_case`::
+
--
Hunspell dictionaries are case sensitive by default: the surname `Booker` is a different word from the noun `booker`, and so should be stemmed differently. It may seem like a good idea to use the `hunspell` stemmer in case-sensitive mode,((("Hunspell stemmer", "using in case insensitive mode"))) but that can complicate things:

* A word at the beginning of a sentence will be capitalized, and thus appear to be a proper noun.
* The input text may be all uppercase, in which case almost no words will be found.
* The user may search for names in all lowercase, in which case no capitalized words will be found.

As a general rule, it is a good idea to set `ignore_case` to `true`.
--

`strict_affix_parsing`::

The quality of dictionaries varies greatly.((("Hunspell stemmer", "strict_affix_parsing"))) Some dictionaries that are available online have malformed rules in the `.aff` file. By default, Lucene will throw an exception if it can't parse an affix rule. If you need to deal with a broken affix file, you can set `strict_affix_parsing` to `false` to tell Lucene to ignore the broken rules.((("strict_affix_parsing")))
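
For instance, a `settings.yml` for a downloaded dictionary with a known-bad affix file might look like this (a sketch; `ignore_case` is kept at its recommended value):

[source,yaml]
----
ignore_case:          true
strict_affix_parsing: false
----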

.Custom Dictionaries
If multiple dictionaries (`.dic` files) are placed in the same directory, ((("Hunspell stemmer", "custom dictionaries")))they will be merged together at load time. This allows you to tailor the downloaded dictionaries with your own custom word lists:

[source,text]

config/ └ hunspell/ └ en_US/ <1> ├ en_US.dic ├ en_US.aff <2> ├ custom.dic

      └ settings.yml

<1> The `custom` and `en_US` dictionaries will be merged.

<2> Multiple `.aff` files are not allowed, as they could use conflicting rules.
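
As a rough sketch, a `custom.dic` file starts with a line giving the approximate number of entries, followed by one root word per line, optionally tagged with flags defined in the `.aff` file (the words shown here are purely illustrative):

[source,text]
----
2
elasticsearch/S
kibana
----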

The format of the `.dic` and `.aff` files is discussed in <<hunspell-dictionary-format>>.


==== Creating a Hunspell Token Filter

Once your dictionaries are installed on all nodes, you can define a `hunspell` token filter((("Hunspell stemmer", "creating a hunspell token filter"))) that uses them:

[source,json]
----
PUT /my_index
{
  "settings": {
    "analysis": {
      "filter": {
        "en_US": {
          "type":     "hunspell",
          "language": "en_US" <1>
        }
      },
      "analyzer": {
        "en_US": {
          "tokenizer": "standard",
          "filter":    [ "lowercase", "en_US" ]
        }
      }
    }
  }
}
----

<1> The `language` has the same name as the directory where the dictionary lives.
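
The `en_US` analyzer could then be applied to a field in a mapping, for instance (the type and field names here are just placeholders):

[source,json]
----
PUT /my_index/_mapping/my_type
{
  "properties": {
    "title": {
      "type":     "string",
      "analyzer": "en_US"
    }
  }
}
----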

You can test the new analyzer with the `analyze` API, and compare its output to that of the `english` analyzer:

[source,json]
----
GET /my_index/_analyze?analyzer=en_US <1>
reorganizes

GET /_analyze?analyzer=english <2>
reorganizes
----

<1> Returns `organize`

<2> Returns `reorgan`
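
In both cases the `analyze` API responds with a list of token objects. Abridged, the response for the `en_US` analyzer would look something like this (other token metadata omitted):

[source,json]
----
{
   "tokens": [
      {
         "token": "organize",
         ...
      }
   ]
}
----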

An interesting property of the `hunspell` stemmer, as can be seen in the preceding example, is that it can remove prefixes as well as suffixes. Most algorithmic stemmers remove suffixes only.

[TIP]
==================================================
Hunspell dictionaries can consume a few megabytes of RAM. Fortunately, Elasticsearch creates only a single instance of a dictionary per node. All shards that use the same Hunspell analyzer share the same instance.
==================================================

[[hunspell-dictionary-format]]
==== Hunspell Dictionary Format

While it is not necessary to understand the((("Hunspell stemmer", "Hunspell dictionary format"))) format of a Hunspell dictionary in order to use the `hunspell` token filter, understanding the format will help you write your own custom dictionaries. It is quite simple.

For instance, in the US English dictionary, the `en_US.dic` file contains an entry for the word `analyze`, which looks like this:

[source,text]
----
analyze/ADSG
----

The `en_US.aff` file contains the prefix or suffix rules for the `A`, `G`, `D`, and `S` flags. Each flag consists of a number of rules, only one of which should match. Each rule has the following format:

[source,text]
----
[type] [flag] [letters to remove] [letters to add] [condition]
----

For instance, the following is suffix (`SFX`) rule `D`. It says that, when a word ends in a consonant (anything but `a`, `e`, `i`, `o`, or `u`) followed by a `y`, it can have the `y` removed and `ied` added (for example, `ready` -> `readied`).

[source,text]
----
SFX    D      y   ied        [^aeiou]y
----

The rules for the `A`, `G`, `D`, and `S` flags mentioned previously are as follows:

[source,text]
----
SFX D Y 4
SFX    D      0     d          e <1>
SFX    D      y     ied        [^aeiou]y
SFX    D      0     ed         [^ey]
SFX    D      0     ed         [aeiou]y

SFX S Y 4
SFX    S      y     ies        [^aeiou]y
SFX    S      0     s          [aeiou]y
SFX    S      0     es         [sxzh]
SFX    S      0     s          [^sxzhy] <2>

SFX G Y 2
SFX    G      e     ing        e <3>
SFX    G      0     ing        [^e]

PFX A Y 1
PFX    A      0     re         . <4>
----

<1> `analyze` ends in an `e`, so it can become `analyzed` by adding a `d`.

<2> `analyze` does not end in `s`, `x`, `z`, `h`, or `y`, so it can become `analyzes` by adding an `s`.

<3> `analyze` ends in an `e`, so it can become `analyzing` by removing the `e` and adding `ing`.

<4> The prefix `re` can be added to form `reanalyze`. This rule can be combined with the suffix rules to form `reanalyzes`, `reanalyzed`, `reanalyzing`.
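
Putting these rules together, the single dictionary entry `analyze/ADSG` expands to the word forms described in the preceding callouts:

[source,text]
----
analyze     analyzed     analyzes     analyzing
reanalyze   reanalyzed   reanalyzes   reanalyzing
----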

More information about the Hunspell syntax can be found on the http://bit.ly/1ynGhv6[Hunspell documentation site].
