The first commit I can find in my git history about Redis Cluster is dated March 29 2011, but it is a “copy and commit” merge: the history of the cluster branch was destroyed since it was a total mess of work-in-progress commits, made just to shape the initial idea of the API and of the interactions with the rest of the system.

Basically it is a roughly 4 year old project, which amounts to about two thirds of the whole history of Redis. Yet it is only today that I’m releasing the first Release Candidate of Redis 3.0.0, which is the first version with Cluster support.

An erratic run
—

Understanding why it took so long is straightforward: I started the cluster project in a big rush, at a moment when it looked like Redis was going to be totally useless without an automatic way to scale.
It was not the right moment to start the Cluster project, simply because Redis itself was too immature, so we didn't yet have a solid “single instance” story to tell.

While I made the error of starting a project with the wrong timing, at least I didn’t fall into the trap of ignoring the requests arriving from the community, so the project was stopped and restarted an infinite number of times in order to provide more bandwidth to other fundamental features. Persistence, replication, latency and introspection received a lot more care than Cluster, simply because they were more important for the user base.

Another limit of the project was that, when I started it, I had no clue whatsoever about distributed programming. My first design was horrible, and managed to capture well only the “product” requirements: low latency, linear scalability and small overhead for small clusters. However all the details were wrong, it was far more complex than it had to be, the algorithms used were unsafe, and so forth.

While I was making small bits of progress I started to study the basics of distributed programming, redesigned Redis Cluster, and applied the same ideas to the new version of Sentinel. The distributed algorithms used by both systems are still primitive, since they are asynchronously replicated, eventually consistent systems, so I had no need to deal with consensus and other non-trivial problems. However, even when you are addressing a simple problem, at least compared to writing a CP store, you need to understand what you are doing, otherwise the resulting system can be totally wrong.

Despite all these problems I continued to work on the project, trying to fix the design, fix the implementation, and bring it to maturity, because of a simple fact, like an elephant in a small room, permeating the whole Redis community: people were building again and again, with their own efforts, and many times in a totally broken way, two things:

1) Sharding the dataset among N nodes.
2) A responsive failover procedure in order to survive certain failures.

Problem “2” was so bad that at some point I decided to start the Redis Sentinel project before Cluster was finished in order to provide an HA system ASAP, and one that was more suitable than Redis Cluster for the majority of use cases that required just “2” and not “1”.

Finally I’m starting to see the first real-world results of these efforts, and now we have a release candidate that is the fundamental milestone required to get adoption, fix the remaining bugs, and improve the system in a more incremental way.

What does it actually do?
—

Redis Cluster is basically a data sharding strategy, with the ability to reshard keys from one node to another while the cluster is running, together with a failover procedure that makes sure the system is able to survive certain kinds of failures.
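To give a more concrete idea of the sharding model: the key space is split into 16384 hash slots, and each master node serves a subset of the slots. What follows is a minimal Python sketch of the slot computation described in the Redis Cluster specification (CRC16 of the key modulo 16384, honoring the “{...}” hash tag convention); it is only an illustration of the idea, not the code Redis actually uses.

    def crc16_xmodem(data):
        # Bit-by-bit CRC16 (polynomial 0x1021, initial value 0), the
        # variant named by the Redis Cluster specification.
        crc = 0
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
                crc &= 0xFFFF
        return crc

    def hash_slot(key, slots=16384):
        # If the key contains a non-empty {...} hash tag, only that part
        # is hashed, so related keys can be forced onto the same slot.
        start = key.find(b'{')
        if start != -1:
            end = key.find(b'}', start + 1)
            if end > start + 1:
                key = key[start + 1:end]
        return crc16_xmodem(key) % slots

    print(hash_slot(b"user:1000"))            # a slot in the 0-16383 range
    print(hash_slot(b"{user:1000}.follows"))  # same slot as "user:1000"

A client talking to the cluster can either send commands to any node and follow the redirections it replies with when a key belongs to another node, or compute the slot locally as above in order to contact the right node directly.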

From the point of view of distributed databases, Redis Cluster provides a limited amount of availability during partitions, and a weak form of consistency. Basically it is neither a CP nor an AP system. In other words, Redis Cluster does not achieve the theoretical limits of what is possible with distributed systems, in order to gain certain real world properties.

The consistency model is the famous “eventual consistency” model. Basically if nodes get desynchronized because of partitions, it is guaranteed that when the partition heals, all the nodes serving a given key will agree about its value.

However the merge strategy is “last failover wins”, so writes received during network partitions can be lost. A common example is a master that ends up in the minority side of a partition, with clients still writing to it: if, while the partition is in place, a slave is promoted on the majority side to replace that master, the writes accepted by the old master are lost once the partition heals.

This in turn means that Redis Cluster does not have to store metadata inside the data structures in order to attempt a value merge, and that the fancy commands and data structures supported by Redis are also supported by Redis Cluster. So: no additional memory overhead, no API limits, no limits on the number of elements a value can contain, but less safety during partitions.

It is trivial to understand that in a system designed like Redis Cluster, diverging nodes are not good, so the system tries to mitigate its shortcomings by limiting the probability of two nodes diverging, and the amount of divergence. This is achieved in a few ways:

1) The minority side of a partition becomes unavailable.
2) Replication is designed so that, usually, the reply to the client and the replication stream to the slaves are sent at the same time.
3) When multiple slaves are available to fail over a master, the system tries to pick the one that appears to have diverged the least from the failed master (a sketch of the idea follows below).

These strategies don’t change the theoretical properties of the system, but they add some real-world protection against the most common Redis Cluster failure modes.
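To illustrate point 3, here is a hypothetical sketch, not the actual Redis implementation: candidate slaves are ranked by how much of the failed master’s replication stream they had processed, and the better ranked ones wait less before trying to get promoted, so the least diverged slave usually wins. The delay values used here are made-up parameters for the example.

    import random
    from dataclasses import dataclass

    @dataclass
    class Slave:
        name: str
        repl_offset: int  # bytes of the master's replication stream processed

    def election_delays(slaves, base_ms=500, per_rank_ms=1000):
        # Slaves with a higher replication offset (less divergence from the
        # failed master) get a lower rank, hence a shorter delay before
        # attempting the failover, so the most up-to-date one usually wins.
        ranked = sorted(slaves, key=lambda s: s.repl_offset, reverse=True)
        return {s.name: base_ms + random.randint(0, 500) + rank * per_rank_ms
                for rank, s in enumerate(ranked)}

    delays = election_delays([Slave("slave-A", 1048576),
                              Slave("slave-B", 1047000)])
    print(delays)  # slave-A, the most up to date, gets the shortest delay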

For the Redis API and use case I believe this design makes sense, but in the past many disagreed. However my opinion is that each designer is free to design a system as she or he wishes; there is just one rule: tell the truth. So Redis Cluster documents its limits and failure modes clearly in the official documentation.

It’s the user, and the use case at hand, that will make a system useful or not. My feeling is that, six years in, users have continued to use Redis even without any clustering support at all, because their use cases made this possible, and because Redis offers certain specific features and performance characteristics that make it very suitable for certain problems. My hope is that Redis Cluster will improve the life of many of those users.

The road ahead
—

Finally we have a minimum viable product to ship, one stable enough for users to seriously start testing it and, in certain cases, adopt it already. The more adoption, the more we can improve it. I know this from Redis and Sentinel: now begins the incremental process that moves a piece of software from usable to mature. Listening to users, fixing bugs, covering more code with tests, …

At the same time, I’m starting to think about the next version of Redis Cluster, improving v1 with many useful things that were not possible to add right now, like multi data center support, more write safety in the minority partition using command replay, automatic node balancing (right now you have to reshard manually if certain nodes are too empty and others too full), and many more things.

Moreover, I believe Redis Cluster could benefit from a special execution mode specifically designed for caching, where nodes accept writes for hash slots they are not in charge of, in order to stay available in a minority partition.

There is always time to improve and fix our implementations and designs, but focusing too much on how we would like some software to be has the risk of keeping it in the vaporware category for far longer than needed. It’s time to let it go. Enjoy Redis Cluster!

Redis Cluster RC1 is available both as the '3.0.0-rc1' tag on GitHub, and as a tarball on the Redis.io download page at http://redis.io/download