Oct 22, 2016

Solr Sharding

When working on one of our projects (the LookSMI media monitoring platform), we have to handle a huge volume of data, and its quantity is constantly growing. At the same time, we must run quick searches with smart rules. To make this work, the whole index should fit in RAM.

LookSMI Case: Using Shards for the News Portal

It is obvious that when millions of records are regularly added to the index, no amount of RAM will ever be enough. Eventually, you have to divide the index into several parts in order to run searches on several machines simultaneously.

LookSMI deals with news, which means the portal works heavily with new records while rarely touching older ones. LookSMI uses Solr as its full-text search engine. To ensure fast search within recent records, we decided to shard LookSMI's index.

Sharding is a type of database partitioning that separates very large databases into smaller, faster parts called data shards. The word shard itself means 'a small part of a whole'. Technically, sharding means horizontal partitioning, but in practice the term often refers to any database partitioning that makes a very large database more manageable and easier to search through.

Accordingly, we partitioned LookSMI's search index into shards, where each shard corresponds to one month. A limited number of shards are active concurrently, covering just the last few months. Thus, all the data a user normally needs is held in RAM.

Whenever the need arises, older shards can be activated to serve a request and then deactivated again. Moreover, when a search is filtered by date, we can engage only a subset of the active shards, so the search runs only on the servers that hold data for the specified period.
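The shard-selection logic described above can be sketched as follows. This is a minimal illustration, not the production code: the `news_YYYY_MM` core naming scheme is an assumption made for the example.

```python
from datetime import date

def shard_cores_for_range(start, end, prefix="news"):
    """Return the monthly core names covering [start, end].

    Assumes one core per calendar month, named like 'news_2016_09'
    (the naming scheme is a hypothetical example).
    """
    cores = []
    year, month = start.year, start.month
    while (year, month) <= (end.year, end.month):
        cores.append(f"{prefix}_{year}_{month:02d}")
        month += 1
        if month > 12:
            month, year = 1, year + 1
    return cores

# A search filtered to Aug-Oct 2016 only needs three monthly cores:
print(shard_cores_for_range(date(2016, 8, 5), date(2016, 10, 22)))
# → ['news_2016_08', 'news_2016_09', 'news_2016_10']
```

A query outside the active window would simply yield older core names, which is when those shards get temporarily activated.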

Step-by-step Guide to Arranging Solr Sharding

There are several ways to arrange sharding in Solr. The easiest one is to divide the index into several cores. Then, when executing a search query, Solr is instructed to search across several cores simultaneously.

First, create cores with an identical structure. This is easily done with a shared configset that serves as a template.

For starters, copy the configuration files into the configsets folder:

cp solr/data//conf solr/data/configsets/conf

Then specify the folder’s name when creating the core:

mkdir solr/data/configsets/
cp solr/data/configsets/conf solr/data/configsets//conf

We have automated core creation so that a new core is set up automatically every month:

http:///solr/admin/cores?action=CREATE&name=&instanceDir=path/to/instance&configSet=path/to/instance/
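The monthly job boils down to assembling a CoreAdmin CREATE call like the one above. Here is a sketch of how such a URL might be built; the host, the `news_YYYY_MM` naming scheme, and the `news_conf` configset name are all illustrative assumptions, not values from the original setup.

```python
from datetime import date
from urllib.parse import urlencode

def create_core_url(today, host="http://localhost:8983"):
    """Build the CoreAdmin CREATE request for this month's core.

    Host, core naming, and configSet name are hypothetical; the real
    request is issued by whatever schedules the monthly job.
    """
    core = f"news_{today.year}_{today.month:02d}"
    params = urlencode({
        "action": "CREATE",
        "name": core,
        "instanceDir": core,
        "configSet": "news_conf",  # the shared configset copied earlier
    })
    return f"{host}/solr/admin/cores?{params}"

print(create_core_url(date(2016, 10, 22)))
# → http://localhost:8983/solr/admin/cores?action=CREATE&name=news_2016_10&instanceDir=news_2016_10&configSet=news_conf
```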

Once the cores are created, adjust the code so that data is added to a specific core while searches run concurrently over the other cores:

http:///solr/select?q=*:*&shards=http:///solr/,http:///solr/
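A distributed select like the one above passes a comma-separated list of core endpoints in the `shards` parameter; the core that receives the query fans it out and merges the results. The sketch below assembles such a URL, with the host and core names as assumed placeholders.

```python
from urllib.parse import urlencode

def distributed_select_url(cores, query="*:*", host="localhost:8983"):
    """Build a select URL that fans a query out over several cores.

    The query is sent to the first core; the 'shards' parameter lists
    every endpoint that should take part in the search.
    """
    shards = ",".join(f"{host}/solr/{core}" for core in cores)
    params = urlencode({"q": query, "shards": shards})
    return f"http://{host}/solr/{cores[0]}/select?{params}"

url = distributed_select_url(["news_2016_09", "news_2016_10"])
print(url)
```

Combined with the date-based core selection, this is what lets a date-filtered search touch only the servers that actually hold the requested period.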

Then, adjust the code so that data is written to the corresponding core:

http:///solr//update -H
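Routing an update means picking the monthly core from the document's date and posting to that core's `/update` handler (the `-H` above hints at a curl header such as Content-Type). A minimal sketch, again with a hypothetical host and naming scheme:

```python
from datetime import date

def update_url_for_doc(doc_date, host="localhost:8983"):
    # Pick the monthly core from the document's publication date.
    # Host and the 'news_YYYY_MM' naming are illustrative assumptions.
    core = f"news_{doc_date.year}_{doc_date.month:02d}"
    return f"http://{host}/solr/{core}/update"

print(update_url_for_doc(date(2016, 10, 22)))
# → http://localhost:8983/solr/news_2016_10/update
```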

Cores can be located on different servers. Furthermore, they can be arranged not only by time periods but also by categories.

Don’t be afraid to create multiple cores: the resource overhead in this setup is quite small. In return, sharding makes the database smoothly scalable and helps deal with response times that would otherwise grow along with the index.
