Now it’s time to get acquainted with Elasticsearch. This NoSQL database is used to store logs, analyze information and, most importantly, search. Essentially, it is a search engine for JSON documents built on top of Apache Lucene. Elasticsearch also offers features such as sharding and replication out of the box, but I will not touch their configuration here.
The purpose of this article is to explain the basics you will encounter when implementing search. Strangely enough, there is not much detailed and structured information about the fundamentals of ES on the Internet, so I hope this text will be useful. The official documentation is quite good, but reading it can take a long time, and its examples use abstract data whose origin is unclear. Usually Elasticsearch is used as part of the so-called ELK stack (Elasticsearch, Logstash, Kibana):
The headquarters of Elasticsearch is located in Amsterdam, so some things may seem very strange. For example, where branches 3.x and 4.x disappeared to, even the developers themselves do not know ¯\_(ツ)_/¯.
So, we have a clean Ubuntu 16.04 machine on which we will deploy a system for indexing the site’s content. I decided to install version 5.6 as the most stable in terms of features and documentation. In general, Elasticsearch has many stable versions:
Installation on deb-based systems boils down to downloading the package and running dpkg -i elasticsearch.deb. Elasticsearch uses 2 GB of RAM by default, but the minimal requirement is 512 MB. You can lower this limit in the /etc/elasticsearch/jvm.options file, via the command-line arguments -Xms512m and -Xmx512m, or, if you use the Docker image, via the environment variable ES_JAVA_OPTS="-Xms512m -Xmx512m". To make sure Elasticsearch is up and running, send a GET request to http://localhost:9200/. The response should be a JSON document with some system information: the cluster name, version numbers, etc.
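For example, the check from Kibana’s Dev Tools console (the same request can be sent with curl against http://localhost:9200/):

```
GET /
```

A trimmed-down version of the kind of response a 5.x node returns (the exact fields vary by version):

```
{
  "name": "node-1",
  "cluster_name": "elasticsearch",
  "version": {
    "number": "5.6.0",
    "lucene_version": "6.6.0"
  },
  "tagline": "You Know, for Search"
}
```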
You can interact with Elasticsearch via raw HTTP requests using curl or wget, but I prefer the Developer Tools console in Kibana.
As I said earlier, Elasticsearch exposes everything through a REST API: creating an index or an analyzer, searching, aggregating, retrieving diagnostic information… From now on, instead of the full GET http://localhost:9200/_cluster/health I will use the abbreviated form GET /_cluster/health, omitting the server name and port. Queries also accept optional flags, for example pretty, which formats the output as human-readable JSON, or explain, which is needed to analyze the execution plan. Each response carries some technical information: how long the query took, the number of shards involved, the number of objects found, etc. By the way, by default a response is limited to 10 entries. If you need more, you can specify the size and from values in the query parameters or open a scroll search, an analog of a server-side cursor in relational databases.
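For instance, the following illustrative requests show the pretty flag and size/from paging:

```
GET /_cluster/health?pretty

GET /_search?size=5&from=10
```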
As an example, I will work with a database of tweets and implement search by content and tags as well as autocompletion. All documents are stored in an index, which is created with the command PUT /twitter/. Now we can put our objects there, and ES will try to derive a data schema on its own; this schema is called a mapping. Of course, deriving it this way is far from ideal, and a mapping is more than just a set of fields with their types, but it makes the material easier to grasp.
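A sketch of these two steps; the index name comes from the article, while the field names and values are my own illustration:

```
PUT /twitter/

PUT /twitter/tweet/1
{
  "user": "alice",
  "subject": "Sunny day in Kyiv",
  "tags": ["weather", "city"],
  "postDate": "2011-08-17T14:00:00"
}
```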
The main thing is that created: true appears in the response. Here the _id is specified in the request path, but if it is omitted, it will be generated automatically. By the way, Elasticsearch is a distributed system and has to resolve conflicts somehow; this is done with the _version field.
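An abridged version of what the indexing response looks like in 5.x:

```
{
  "_index": "twitter",
  "_type": "tweet",
  "_id": "1",
  "_version": 1,
  "result": "created",
  "created": true
}
```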
Let’s review the mapping of the twitter index with GET /twitter/_mapping/:
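With the illustrative document above, the response looks roughly like this (dynamic mapping in 5.x also adds a keyword sub-field to every string):

```
{
  "twitter": {
    "mappings": {
      "tweet": {
        "properties": {
          "user":     { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
          "subject":  { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
          "tags":     { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
          "postDate": { "type": "date" }
        }
      }
    }
  }
}
```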
The response reads as: “for the index twitter there is a mapping for the document type tweet, which has three text fields and one date field”. ES makes no difference between a single value and an array, so the tags are still described as ordinary strings; each element of the array is simply processed separately. Let’s try to add one more type of document to the same index:
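For example, a retweet document (again, the fields are my own illustration; note that fields shared between types in the same 5.x index must have identical mappings):

```
PUT /twitter/retweet/1
{
  "user": "bob",
  "subject": "Sunny day in Dublin",
  "tags": ["weather"],
  "postDate": "2012-09-13T09:30:00"
}
```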
The object was successfully added and indexed. So it is possible to add different kinds of documents to the same index and find all of them with a single query.
Back to the mapping of the subject field: as shown above, it is treated both as a regular text field with the default analyzer and, through the keyword sub-field, as a whole keyword (tag). Analyzers will be described a little later; here I want to draw your attention to the fact that the same field can be analyzed in several ways at once: as an entire string, as separate tokens, as a set of letters for autocompletion… and in queries you can use all of these representations simultaneously. The mapping for such fields looks something like this:
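A sketch of such a multi-field definition; the sub-field names raw and en are the ones referred to later in the article:

```
"subject": {
  "type": "text",
  "fields": {
    "raw": { "type": "keyword" },
    "en":  { "type": "text", "analyzer": "english" }
  }
}
```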
Let’s try to change the analyzer of an existing field:
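Something along these lines (a sketch):

```
PUT /twitter/_mapping/tweet
{
  "properties": {
    "subject": {
      "type": "text",
      "analyzer": "english"
    }
  }
}
```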
We get error 400: illegal_argument_exception. This is because you cannot change the parameters of an existing field, but you can create new ones:
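Adding new sub-fields to an existing field is allowed, so the raw and en representations can be attached like this (a sketch; for brevity I update only the tweet type, but with several types sharing the field their mappings have to stay in sync):

```
PUT /twitter/_mapping/tweet
{
  "properties": {
    "subject": {
      "type": "text",
      "fields": {
        "raw": { "type": "keyword" },
        "en":  { "type": "text", "analyzer": "english" }
      }
    }
  }
}
```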
The mapping of subject now looks like a set of sub-fields with different analyzers, exactly as in the example above.
However, adding a field this way does not index it for existing documents; for that you need to re-save every document or call POST /twitter/_update_by_query:
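With an empty body the call simply re-indexes every document in the index:

```
POST /twitter/_update_by_query
```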
We have figured out how Elasticsearch handles objects, and that is enough to practice writing queries against it. First, let’s make sure the data is still in the database: GET /twitter/tweet/1/ should return the document we saved earlier. Now let’s try to find tweets that were made after September 1, 2010:
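A range query over the date field (postDate is the field name from my illustrative documents):

```
GET /twitter/_search
{
  "query": {
    "range": {
      "postDate": { "gt": "2010-09-01" }
    }
  }
}
```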
Now let’s search for objects that contain the word “Sunny”:
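A full-text match query on the subject field:

```
GET /twitter/_search
{
  "query": {
    "match": { "subject": "Sunny" }
  }
}
```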
Because we searched across the whole index, both “tweet” and “retweet” objects were returned. In total there were three hits, one of which is about Dublin; let’s try to exclude it:
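A sketch of such a query using a bool clause (discussed next):

```
GET /twitter/_search
{
  "query": {
    "bool": {
      "must":     [ { "match": { "subject": "Sunny" } } ],
      "must_not": [ { "match": { "subject": "Dublin" } } ]
    }
  }
}
```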
I added a node of type bool, which can combine several must clauses, negate with must_not, or filter with filter (for should, better refer to the documentation). must differs from filter in that the queries inside it take part in the relevance calculation. Elasticsearch is famous for its fuzzy full-text search, so let’s try to search for some similar text:
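For example, searching for the plural form against my illustrative documents, which only contain the singular “day” (a sketch):

```
GET /twitter/_search
{
  "query": {
    "match": { "subject": "days" }
  }
}
```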
Not a single document? That is because we have to take into account which representation of the field the search runs against. If you remember the mapping of the subject field, it has the additional sub-fields raw and en. Searching the en representation, which is processed by the english analyzer, does the trick:
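A sketch of the same search against the en sub-field:

```
GET /twitter/_search
{
  "query": {
    "match": { "subject.en": "days" }
  }
}
```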
Done, there was even the record about Dublin! Don’t believe me? Install Kibana and check it yourself :) Great, now let’s try to search for “Sunrise”:
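Same sub-field, different word:

```
GET /twitter/_search
{
  "query": {
    "match": { "subject.en": "Sunrise" }
  }
}
```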
The result is empty. The english analyzer does not bridge such a large morphological difference. Searching for the words “Dublin” or “First” would still succeed, but how do we understand why our “sun” was not found? Let’s turn to analyzers!
To debug analyzers (you can also add your own), ES provides the _analyze endpoint, which lets you see exactly what will be put into the index:
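Running the english analyzer over the word from our documents:

```
GET /_analyze
{
  "analyzer": "english",
  "text": "Sunny"
}
```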
So the index contains only the token “sunni”, which cannot be found by a request for “sun” at all. The list of built-in analyzers is quite long: standard, simple, whitespace, keyword, language-specific ones such as english, and more.
An analyzer is a general mechanism for turning a string into a set of tokens. It consists of:

- char_filter - processes the whole input string before tokenization. For example, the built-in html_strip filter removes HTML tags; you can also write your own.
- tokenizer - splits the string into tokens (there is exactly one per analyzer).
- filter - processes each token separately (lowercasing, removing stop words, etc.) and can also add synonyms.

As an example, let’s write an analyzer for autocompletion (see the sketch after this paragraph). The basic idea is to split all words into N-grams and then look for occurrences of the text typed into the input field within this set. First we need to choose a tokenizer, i.e. decide how tokens are extracted from the text. I think the standard tokenizer will do: it keeps only the words (not to be confused with the standard analyzer, which also lowercases them!). The tokens will then be cut into N-grams. Before changing the analysis settings we have to close the index with POST /twitter/_close and reopen it afterwards with POST /twitter/_open:
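A sketch of the settings update; I assume an edge_ngram token filter here, since it produces exactly the prefix tokens shown below, plus a lowercase filter because the standard tokenizer does not lowercase:

```
POST /twitter/_close

PUT /twitter/_settings
{
  "analysis": {
    "filter": {
      "autocomplete_filter": {
        "type": "edge_ngram",
        "min_gram": 2,
        "max_gram": 10
      }
    },
    "analyzer": {
      "autocomplete": {
        "type": "custom",
        "tokenizer": "standard",
        "filter": ["lowercase", "autocomplete_filter"]
      }
    }
  }
}

POST /twitter/_open
```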
Hmm, Elasticsearch even said {"acknowledged": true}, which is 200 OK in its dialect. Let’s check how the analyzer works:
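Again via _analyze, this time with the freshly created analyzer:

```
GET /twitter/_analyze
{
  "analyzer": "autocomplete",
  "text": "Sunny Dublin"
}
```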
The response is a list of tokens: [‘su’, ‘sun’, ‘sunn’, ‘sunny’, ‘du’, …]. Does it work?! Let’s attach one more sub-field to subject and check:
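A sketch: add an autocomplete sub-field that uses the new analyzer, re-index the existing documents, and query it (the sub-field name is my own):

```
PUT /twitter/_mapping/tweet
{
  "properties": {
    "subject": {
      "type": "text",
      "fields": {
        "autocomplete": {
          "type": "text",
          "analyzer": "autocomplete"
        }
      }
    }
  }
}

POST /twitter/_update_by_query

GET /twitter/_search
{
  "query": {
    "match": { "subject.autocomplete": "sun" }
  }
}
```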
And we do see our records about the sunny cities! However, the word “Sundfs” would find them as well, which is already a mess. That is because the same analyzer is used at search time too: “Sundfs” is split into the N-grams [‘su’, ‘sun’, ‘sund’, ‘sundf’, ‘sundfs’], which intersect with what is stored in the index. The way out is to treat the search input as a single word. This is controlled by the search_analyzer setting of the field:
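Conveniently, search_analyzer is one of the few mapping parameters that can be changed on an existing field, as long as the full field definition is repeated (a sketch; I use the standard analyzer for the query side so the input is only lowercased and tokenized, not cut into N-grams):

```
PUT /twitter/_mapping/tweet
{
  "properties": {
    "subject": {
      "type": "text",
      "fields": {
        "autocomplete": {
          "type": "text",
          "analyzer": "autocomplete",
          "search_analyzer": "standard"
        }
      }
    }
  }
}
```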
After the field mapping is updated, the autocomplete is working fine!
I hope you now have an understanding of how to work with Elasticsearch, what mappings and analyzers are, and how to build the simplest queries. I have tried to cover the minimum set of knowledge needed to organize a search over your own content. That is all for now, although Elasticsearch has many other interesting things that go beyond the scope of this article. Here is just a small list of topics:
- cluster and index diagnostics via GET /_cat
- query options such as _source, size, sort, etc.
- index management (_reindex, _aliases)
- relevance scoring and debugging it (_explain, _score)