May 15, 2024
by Jacek Migdal
Elasticsearch and its open-source sibling, OpenSearch, are the most popular full-text search engines, yet their schema approach is widely misunderstood. We at Quesma are building a database gateway that helps people choose the best tool for the job, and we wanted to share some of our experience from building support for the Elasticsearch and OpenSearch database engines.
Most databases either define their schema upfront (like PostgreSQL) or accept any JSON (like MongoDB), but Elasticsearch/OpenSearch is different.
Feel free to follow the instructions using the local Docker container:
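A minimal single-node setup could look like this; the image tags and security flags below are just a sketch, so adjust them to the version you want to test. The later curl examples assume it is listening on localhost:9200.

docker run -d --name es-schema-demo -p 9200:9200 \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=false" \
  docker.elastic.co/elasticsearch/elasticsearch:8.13.4

Or the OpenSearch equivalent:

docker run -d --name os-schema-demo -p 9200:9200 \
  -e "discovery.type=single-node" \
  -e "DISABLE_SECURITY_PLUGIN=true" \
  opensearchproject/opensearch:2.13.0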
Schema defined by first JSON
There's a common misconception that Elasticsearch/OpenSearch are schema-less. This is only partially true: you can create a new index simply by inserting a JSON document, and its mapping is inferred from that first document.
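For example, with a hypothetical index called sample (the field names here are illustrative):

curl -X POST "localhost:9200/sample/_doc" -H 'Content-Type: application/json' -d'
{
  "user": { "id": 1234, "name": "Alice" },
  "message": "logged in"
}'

This single request creates the sample index, and dynamic mapping decides that user.id is a long.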
It feels fine until you get an HTTP 400 error when trying to insert:
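Say, a document where user.id is no longer a number:

curl -X POST "localhost:9200/sample/_doc" -H 'Content-Type: application/json' -d'
{
  "user": { "id": "admin-1234", "name": "Bob" },
  "message": "logged in"
}'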
You get a “mapper_parsing_exception” error, with the cause “failed to parse field [user.id] of type [long] in document…”.
That same JSON document, sent to a brand-new index, would work and would define a different schema.
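For example, the rejected document is happily accepted by a hypothetical fresh index sample2, where user.id is then mapped as text (with a keyword sub-field) instead of a long:

curl -X POST "localhost:9200/sample2/_doc" -H 'Content-Type: application/json' -d'
{
  "user": { "id": "admin-1234", "name": "Bob" },
  "message": "logged in"
}'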
You usually won’t notice this behavior for a while. For performance, most insert/ingestion traffic goes through the bulk API (“sample/_bulk”).
Then, even when your data gets dropped, you still get lovely HTTP 200 responses; the errors are buried in the response JSON.
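A sketch of what that looks like against the bulk endpoint; the documents here are made up, and the two with a non-numeric user.id are silently rejected:

curl -s -X POST "localhost:9200/sample/_bulk" -H 'Content-Type: application/x-ndjson' --data-binary '{ "index": {} }
{ "user": { "id": 1, "name": "Alice" }, "message": "fine" }
{ "index": {} }
{ "user": { "id": "oops", "name": "Bob" }, "message": "dropped" }
{ "index": {} }
{ "user": { "id": "also-oops", "name": "Carol" }, "message": "dropped" }
'

The response comes back with "errors": true and a per-item failure for each rejected document, but the HTTP status is still 200.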
HTTP/1.1 200 OK, yet two documents were dropped.
Countless Elasticsearch users are confused by this and express their pain online. Elasticsearch's antidote to the problem is to drop just the offending fields by default instead of whole documents.
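That behavior corresponds to the ignore_malformed setting; whether it is enabled by default depends on the version and the index templates in use, but it can be turned on explicitly for a whole index (a sketch, with a hypothetical index name):

curl -X PUT "localhost:9200/sample3" -H 'Content-Type: application/json' -d'
{
  "settings": { "index.mapping.ignore_malformed": true }
}'

With it, a value of the wrong type means that one field is ignored for that document (and listed in the _ignored metadata field) while the rest of the document is still indexed.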
Defining the schema through a mapping can help, yet there is no easy way to never leave any data behind while still taking advantage of types. Moreover, an existing field's mapping cannot be changed on a live index; it requires reindexing the whole document collection. The community tries to work around this flaw by defining mappings for everything upfront.
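Defining the mapping upfront looks roughly like this (hypothetical index and fields):

curl -X PUT "localhost:9200/sample4" -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "properties": {
      "user": {
        "properties": {
          "id":   { "type": "keyword" },
          "name": { "type": "text" }
        }
      },
      "message": { "type": "text" }
    }
  }
}'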
It’s not OK for a database to drop your data, especially in observability and security contexts. In many of those setups, you have limited control over what data you receive. You may miss the root cause of an outage or security breach.
I beg you not to create too many fields
You may also be tempted to create many fields, for example one per customer (though please don't run the following against a real cluster).
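Something like this loop is enough to get there; the index and field names are made up:

# Each iteration introduces a brand-new key, so dynamic mapping keeps creating fields.
for i in $(seq 1 600); do
  curl -s -X POST "localhost:9200/field-explosion/_doc" \
    -H 'Content-Type: application/json' \
    -d "{\"customer_${i}_note\": \"value ${i}\"}" > /dev/null
done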
This would not work for long: at some point you hit the default limit of 1,000 fields per index, and each JSON string key adds two fields (a text field plus a .keyword sub-field). You can raise the limit, but that only makes the situation worse.
First of all, too many fields can damage your UI and dashboard experience.
Before, you could rely on the field list (see the left menu with nice names and types):
Afterwards, it becomes polluted with extra fields, which makes filtering much worse (the left menu becomes unusable):
Secondly, it can reduce your search performance.
Many folks fall into that trap by mistake, but it is also a way to troll or burn bridges. I would not recommend it: both search performance and UX would suffer, and most ELK deployments are susceptible.
Though new fields are useful, there is no easy way to manage them (e.g. let a user create only 10 new ones), no rules to control them, and no easy way to clean up once you land in that mess.
Another way to hit this is a multi-tenant SaaS app that allows custom attributes. Even with a limit of 20 per customer, the total can explode, and you end up implementing the mapping outside of Elasticsearch.
The documents look like JSON, but they are not
Suppose your system supports multiple ways to authenticate.
How about storing an array:
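Here is a sketch with a hypothetical auth-demo index, where each user carries a list of authentication methods:

curl -X POST "localhost:9200/auth-demo/_doc/1" -H 'Content-Type: application/json' -d'
{
  "user": "alice",
  "auth": [
    { "method": "password", "enabled": false },
    { "method": "otp",      "enabled": true }
  ]
}'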
It feels like it works:
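Indexing succeeds, and fetching the document back returns exactly the JSON you sent in _source:

curl -X GET "localhost:9200/auth-demo/_doc/1"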
However, it doesn’t behave the way you’d expect in search:
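For instance, searching for users with an enabled password method also matches the document above, even though its password entry is disabled:

curl -X GET "localhost:9200/auth-demo/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "must": [
        { "match": { "auth.method":  "password" } },
        { "match": { "auth.enabled": true } }
      ]
    }
  }
}'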
Creating another document:
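This one genuinely has an enabled password method:

curl -X POST "localhost:9200/auth-demo/_doc/2" -H 'Content-Type: application/json' -d'
{
  "user": "bob",
  "auth": [
    { "method": "password", "enabled": true }
  ]
}'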
Both documents appear identically in the search results:
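Re-running the bool query above returns both as hits; here is an abbreviated response (real responses also include took, _shards, scores, and so on):

{
  "hits": {
    "total": { "value": 2, "relation": "eq" },
    "hits": [
      { "_id": "1",
        "_source": { "user": "alice",
                     "auth": [ { "method": "password", "enabled": false },
                               { "method": "otp", "enabled": true } ] } },
      { "_id": "2",
        "_source": { "user": "bob",
                     "auth": [ { "method": "password", "enabled": true } ] } }
    ]
  }
}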
So, although the search returns the exact JSON you inserted via _source, internally each document is a collection of arrays, one per field. A more accurate representation would be:
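For the two documents above, roughly (this is an illustration, not the actual storage format):

Document 1:
{
  "user":         ["alice"],
  "auth.method":  ["password", "otp"],
  "auth.enabled": [false, true]
}

Document 2:
{
  "user":         ["bob"],
  "auth.method":  ["password"],
  "auth.enabled": [true]
}

Once the association between method and enabled within a single array element is gone, the query for an enabled password method naturally matches document 1 as well.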
This has many confusing implications beyond the scope of this article. Unsurprisingly, many overrides exist, such as the nested and flattened types, as well as runtime fields and strict mappings.
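For example, the nested type keeps each array element as a separate hidden document, at the price of more expensive queries and a dedicated nested query syntax (a sketch, with a hypothetical index name):

curl -X PUT "localhost:9200/auth-demo-nested" -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "properties": {
      "user": { "type": "keyword" },
      "auth": {
        "type": "nested",
        "properties": {
          "method":  { "type": "keyword" },
          "enabled": { "type": "boolean" }
        }
      }
    }
  }
}'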
Is there hope?
These are just three rough edges, but unfortunately, there are way more. Every successful technology has its strengths and pitfalls.
Elasticsearch/OpenSearch feels easy to get started with, and most of the time you won’t hit these issues until you reach some scale. It is worth remembering, especially if you plan to build, or are already building, your own do-it-yourself Observability or Security application. If you are not ready for this, consider a hosted option.
Fixing underlying issues without breaking backward compatibility is tough, especially in databases. That’s the problem we are looking to solve at Quesma.
Customers would benefit from:
A way to experiment and a safe, step-by-step migration to a better schema: double-write to both the old and the new schema and be able to tweak along the way.
An implicit schema created after 1000+ JSON documents, not a single one. Similarly, new top-level fields should only be created if they appear regularly.
No data is left behind. A queryable catch-all should be available for parts of the data that don’t match the schema.
Traditionally, database vendors are very conservative, but Quesma, as a gateway, has the opportunity to innovate and show a better way while preserving the good parts of a full-text search database.
Stay tuned. A better world is coming.
Thank you to Ivan Brusic and the Quesma team for providing feedback on the draft.