Nov 10, 2023
This is the second part of a log series about what we do and why at Quesma. If you have not read the first part, you can find it here. In the first part, we discussed how difficult it is to innovate the database layer of our applications due to the complexity and risk involved.
After contemplating all the feedback we received from our market peers - CTOs, chief architects, and DB admins - we realized that the situation is very similar to one our industry has already been through some time ago: the monolithic application era…
The way we used to build applications…
Remember those times when there was one code repo, everyone wrote in the same language, compiled at the same time for hours, and hoped all dependencies would work? When we had to release once or twice per year, because we had to synchronize 10 or 15 teams to finish their work on exactly the same date, so that quality assurance could spend the next three months looking at frozen code, making sure everything that had changed still worked?
Remember how impossible it was to release one smaller piece of the application faster without breaking the others, having to roll out or roll back all or nothing? And the most amusing thing: here's the team that feels its piece would be better written in another technology (e.g. a different programming language) - what a heresy!
Does this sound familiar? Isn't it similar to the problems you encounter when you try to change any part of your DB layer today?
In monolithic applications, we had a single set of DB connection libraries that all modules had to use to talk to the database. If you wanted to change something in the DB, you had to change the code in every module, and often in the connection libraries too, or replace them altogether. Something to plan into your one-year release roadmap and three-month testing plan as well.
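A minimal sketch of that coupling, with made-up module and table names: one shared connection layer that every module of the monolith imports, so any change to it (or to the schema it queries) touches all callers at once.

```python
import sqlite3

class DBLayer:
    """The one connection library every module of the monolith imports."""

    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
        self.conn.execute("INSERT INTO orders VALUES (1, 9.99)")

    def fetch_order_total(self, order_id):
        # Change this signature or its SQL, and every caller below
        # must change in the same release.
        row = self.conn.execute(
            "SELECT total FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        return row[0]

db = DBLayer()

# Different "modules" of the monolith, all coupled to the same layer:
def billing_module():
    return db.fetch_order_total(1)

def reporting_module():
    return db.fetch_order_total(1)

assert billing_module() == reporting_module() == 9.99
```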
The industry has already solved this type of problem for the other application layers: we broke down the monolith. Every team can design, build, and release its own piece of the application, a microservice, in its own chosen technology, at its own pace and on its own roadmap, releasing as fast as it needs without fear of breaking the other components. Each microservice can even be deployed in a different place (e.g. with a different cloud provider) if that suits it better and latency is not a concern.
But what made it possible? What's the secret sauce that glues all of these pieces together? APIs, of course. As long as the format and definition of the API remain intact, there is little room for error or incompatibility between microservices developed in isolation. It was enough to agree on this microservice compatibility layer (which, by the way, sometimes also plays a load-balancing and security role) to open up a new world of innovation and fast-paced, agile development.
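That idea can be sketched in a few lines. Assume a hypothetical orders service and a billing consumer (all names here are illustrative): as long as the agreed response format stays intact, the producer can be rewritten freely without the consumer noticing.

```python
import json

# The agreed-upon contract: field names and types both sides rely on.
ORDER_API_CONTRACT = {"order_id": int, "status": str}

def orders_service_v1(order_id):
    # First implementation of the producer.
    return json.dumps({"order_id": order_id, "status": "shipped"})

def orders_service_v2(order_id):
    # A full rewrite (imagine a new language or a new database behind it);
    # only the response format has to stay the same. Extra fields are
    # additive and do not break the consumer below.
    record = {"order_id": order_id, "status": "shipped", "carrier": "ACME"}
    return json.dumps(record)

def billing_service(raw_response):
    # A consumer that depends only on the contract, not the implementation.
    payload = json.loads(raw_response)
    for field, expected_type in ORDER_API_CONTRACT.items():
        assert isinstance(payload[field], expected_type), f"contract broken: {field}"
    return payload["status"]

# Either version of the producer satisfies the same consumer:
assert billing_service(orders_service_v1(42)) == "shipped"
assert billing_service(orders_service_v2(42)) == "shipped"
```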
Is the revolution done? Or is there still work to do?
Did the microservice (r)evolution make innovating the DB layer easier? Or is it actually easier for the average IT organization to introduce a new programming language or a cloud service than a new database technology?
Let’s think about what happened to the single DB connection layer that connected our old-school monolith to the database. Is it living on somewhere as another microservice, easy to develop, change, and modernize? As a matter of fact, what we did is distribute that layer, chop by chop, to all the individual services that need to connect to the DB, and leave it for them to handle. This has its upsides, for sure: every team and service is responsible for its own connection to the DB back-end (or back-ends). But what happens if you want to make a change in the database that breaks compatibility? Well, you also have to introduce a compatible change in the DB connection code of each microservice.
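A minimal sketch of that ripple effect, with a hypothetical table and two hypothetical services (using in-memory SQLite; column renames need SQLite 3.25+): each service carries its own copy of the DB access code, so one breaking schema change breaks all of them at once.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com')")

# Each microservice embeds its own DB access logic:
def auth_service(db):
    return db.execute("SELECT email FROM users WHERE id = 1").fetchone()[0]

def newsletter_service(db):
    return db.execute("SELECT email FROM users WHERE id = 1").fetchone()[0]

print(auth_service(conn), newsletter_service(conn))  # both work

# Now the DB team renames a column - a breaking change:
conn.execute("ALTER TABLE users RENAME COLUMN email TO email_address")

# Every service that embedded the old column name fails at once,
# and every one of them must be patched and redeployed together.
for service in (auth_service, newsletter_service):
    try:
        service(conn)
    except sqlite3.OperationalError as e:
        print(f"{service.__name__} broken:", e)
```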
All of them? YES.
All at once? YES.
What happens if even one breaks? Well, you have to roll everything back.
Does this sound more like agile microservice development or more like a monolith to you?
Has the microservices revolution reached the database layer of our application stack? Or is it still incomplete? I think you already have an idea where we are heading with this, but let's leave these details for the next part…