[Note: this post no longer describes the client side implementation that finally shipped in Redis 6, which changed significantly; see https://redis.io/topics/client-side-caching]

The New York Redis day was over. I got up at the hotel at 5:30, still pretty much in sync with the Italian time zone, and immediately went walking the streets of Manhattan, completely in love with the landscape and the wonderful feeling of being just a number among millions of other numbers. Yet I was thinking about the Redis 6 release, with the feeling that what was probably the most important feature of all, the new version of the Redis protocol (RESP3), was going to have a very slow adoption curve, and for good reasons: wise people avoid switching tools without very good reasons. After all, why did I want to improve the protocol so badly? For two reasons mainly: to provide clients with more semantic replies, and to open the way to new features that were hard to implement with the old protocol. One feature in particular was the most important to me: client side caching.

Rewind back to about one year ago. I arrived at Redis Conf 2018, in San Francisco, with the firm idea that client side caching was the most important thing in the future of Redis. If we need fast stores and fast caches, then we need to store a subset of the information inside the client. It is a natural extension of the idea of serving data with small delays and at a big scale. Actually almost every very large company already does it, because eventually it is the only way to survive the load. Yet Redis had no way to assist the client in this process. A fortunate coincidence had Ben Malec giving a talk at Redis Conf exactly about client side caching [1], using just the tools that Redis provides and a number of very clever ideas.

[1] https://www.youtube.com/watch?v=kliQLwSikO4

The approach taken by Ben really opened my imagination. There were two key ideas Ben used in order to make his design work. The first was to use the Redis Cluster idea of “hash slots” in order to divide keys into 16k groups. That way clients would not need to track the validity of each key, but could use a single metadata entry for a group of keys. Ben used Pub/Sub in order to send the notifications when keys were changed, so he needed some help from the application in all its parts, however the scheme was very solid. Modify a key? Also publish a message invalidating it. On the client side, are you caching keys? Remember the timestamp at which you cache each key, and when receiving invalidation messages, remember the invalidation time for each slot. When using a given cached key, do a lazy eviction: check if the key you cached has a timestamp which is older than the timestamp of the invalidation received for the slot this key belongs to. In that case the key is stale data, and you have to ask the server again.
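To make the lazy eviction part concrete, here is a minimal Python sketch of the client-side bookkeeping described above. It is not Ben's code: the names are illustrative, and crc_hqx is only used as an approximation of the CRC16 that Redis Cluster uses for hash slots.

    import time
    import binascii

    NUM_SLOTS = 16384  # Redis Cluster style hash slots (16k groups)

    def slot_of(key: bytes) -> int:
        # Approximation of the Redis Cluster slot of a key.
        return binascii.crc_hqx(key, 0) % NUM_SLOTS

    class SlotCache:
        def __init__(self):
            self.cache = {}            # key -> (value, cached_at)
            self.invalidated_at = {}   # slot -> last invalidation time

        def on_invalidation_message(self, slot: int):
            # Called when the Pub/Sub invalidation message for a slot arrives.
            self.invalidated_at[slot] = time.monotonic()

        def store(self, key: bytes, value):
            self.cache[key] = (value, time.monotonic())

        def get(self, key: bytes):
            # Lazy eviction: a value cached before the last invalidation of
            # its slot is stale, so drop it and ask the server again.
            entry = self.cache.get(key)
            if entry is None:
                return None
            value, cached_at = entry
            inv = self.invalidated_at.get(slot_of(key), float("-inf"))
            if cached_at <= inv:
                del self.cache[key]
                return None
            return value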

After watching the talk, I realized that this was a great idea to be used inside the server, in order to allow Redis to do part of the work for the client, and make client side caching simpler and more effective, so I returned home and wrote a document describing my design [2].

[2] https://groups.google.com/d/msg/redis-db/xfcnYkbutDw/kTwCozpBBwAJ

But to make my design work I had to focus on switching the Redis protocol to something better, so I started writing the specification and later the code for RESP3, and the other Redis 6 things like ACL and so forth, and client side caching joined the huge room of the many ideas for Redis that I abandoned in one way or another for lack of time.

Yet there I was, among the streets of New York, thinking about this idea. Later I went to lunch and a coffee break with friends from the conference. When I returned to my hotel room I had all the evening left, and most of the next day before the flight, so I started writing the implementation of client side caching for Redis 6, closely following the proposal I wrote to the group one year ago: it still looked great.

Redis server-assisted client side caching, finally called “tracking” (but I may change my mind), is a very simple feature composed of just a few key ideas.

The key space is split into “caching slots”, but they are a lot more numerous than the hash slots used by Ben. We use 24 bits of the output of CRC64, so there are a bit more than 16 million different slots. Why so many? Because I think you may want to have a server with 100 million keys, and yet an invalidation message should not affect more than a few keys in the client side cache. The memory overhead inside Redis to hold the invalidation table is 130 megabytes: an array of 8-byte pointers to 16M entries. That’s ok with me: if you want this feature you are going to make great use of all the memory you have in the clients, so using 130MB server side is fine; what you gain is a much more fine grained invalidation.
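Just to make the arithmetic concrete, a tiny sketch of the numbers above (crc64() in the comment stands for the CRC64 implementation Redis already ships; Python's standard library has no CRC64, so it is not spelled out here):

    SLOT_BITS = 24
    NUM_SLOTS = 1 << SLOT_BITS          # 16,777,216 caching slots
    POINTER_SIZE = 8                    # bytes per entry of the table
    table_bytes = NUM_SLOTS * POINTER_SIZE
    print(NUM_SLOTS, table_bytes)       # 16777216, 134217728 bytes (128 MiB,
                                        # the ~130MB figure mentioned above)

    # Conceptually, the caching slot of a key is the low 24 bits of its CRC64:
    #   slot = crc64(key) & (NUM_SLOTS - 1)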

Clients enable the feature in an “opt in” way, with a simple command:

    CLIENT TRACKING on

The server replies with the good old +OK, and starting from that moment, every command that is flagged as “read only” in the command table will not just return the keys to the caller: as a side effect, it will also remember the caching slots of all the keys the client requested so far (but only the ones requested with a read only command, that's the agreement between the server and the client). The way Redis stores this information is simple. Each Redis client has a unique ID, so if client ID 123 performs an MGET on keys hashing to slots 1, 2, and 5, we’ll have the Invalidation Table with the following entries:

1 -> [123]
2 -> [123]
5 -> [123]

But later client ID 444 also asks about keys in slot 5, so the table will become:

5 -> [123, 444]

Now some other client changes some key in slot 5. What happens is that Redis will check the Invalidation Table and find that both clients 123 and 444 may have cached keys of that slot. We’ll send an invalidation message to both clients, and they are free to deal with it in any way they like: they can remember with a timestamp the last time the slot was invalidated, and later check in a lazy way the timestamp (or an incremental “epoch” if you like it more: it is safer) of the cached object, and evict it based on the comparison. Otherwise the client is free to reclaim the objects directly, by keeping a table of what it cached about this specific slot. This approach with a 24 bit hash function is not an issue, because we’ll not have a very long list at all, even when caching tens of millions of keys. After sending the invalidation messages we can remove the entries from the Invalidation Table: this way we'll no longer send invalidation messages to those clients until they read keys for that slot again.
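The real logic lives in C inside the Redis server; the following Python model is only a sketch of the shape of the flow just described, with send_invalidation_push() as a hypothetical stand-in for delivering a RESP3 push message to a client connection.

    # slot -> set of client IDs that read keys hashing to that slot
    invalidation_table = {}

    def track_read(client_id: int, slot: int):
        # Called when a tracking-enabled client runs a read only command.
        invalidation_table.setdefault(slot, set()).add(client_id)

    def on_key_modified(slot: int):
        # Called when any client writes a key hashing to this slot.
        clients = invalidation_table.pop(slot, set())
        for client_id in clients:
            send_invalidation_push(client_id, slot)
        # The entry is removed: those clients receive no further messages
        # for this slot until they read one of its keys again.

    def send_invalidation_push(client_id: int, slot: int):
        print(f"push invalidation for slot {slot} to client {client_id}")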

Note that clients are not forced to really use all the 24 bits of the hash function. They may just use, for instance, 20 bits, and then also shift the slots of the invalidation messages Redis sends them. Not sure if there are many good reasons to do that, but in memory constrained systems it may be an idea.
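One possible reading of that “shift” (an assumption, not something spelled out above): a client that wants only 2^20 local slots can drop the low 4 bits of the 24-bit slot numbers the server sends.

    CLIENT_SLOT_BITS = 20

    def to_client_slot(server_slot: int) -> int:
        # Map the 24-bit server slot onto the client's smaller slot space.
        return server_slot >> (24 - CLIENT_SLOT_BITS)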

If you followed closely what I said, you are thinking that the same connection receives both the normal client replies and the invalidation messages. This is possible with RESP3, because invalidations are sent as “push” message types. Yet if the client is a blocking one, and not an event driven client, this starts to be complex: the application needs some way to read new data from time to time, and that looks complex and fragile. It is perhaps much better, in that case, to use another application thread and a different client connection in order to receive the invalidation messages. So you are allowed to do the following:

    CLIENT TRACKING on REDIRECT 1234

Basically we are saying: for all the keys we get with the current connection, we want the invalidation messages to be sent to client 1234 instead. Multiple clients may ask to have the invalidation messages redirected to a single client, for instance in the case of connection pools. All you need to do is to create this special connection to receive the invalidation messages, call CLIENT ID to know which ID this client connection has, and later enable tracking on the other connections, redirecting to it.
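A sketch of that setup in Python, using redis-py's generic execute_command() (an assumption: at the time of writing no client library exposes tracking natively, and the command syntax is the one described above; how the invalidation push messages are then consumed depends on the client library and is left out).

    import redis

    invalidation_conn = redis.Redis()   # dedicated invalidation link
    data_conn = redis.Redis()           # normal connection (or pool member)

    # 1. Ask the invalidation connection for its client ID.
    invalidation_id = invalidation_conn.execute_command("CLIENT", "ID")

    # 2. Enable tracking on the data connection(s), redirecting the
    #    invalidation messages to the dedicated connection.
    data_conn.execute_command("CLIENT", "TRACKING", "on",
                              "REDIRECT", str(invalidation_id))

    # From now on keys read via data_conn are tracked, and invalidation
    # messages for their slots are delivered to invalidation_conn.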

There is one problem left: what happens if we lose the connection with the server on the invalidation link? We may run into trouble, since invalidation messages will no longer be received. Normally the application will detect that the link is severed and will reconnect, flushing the current cache (or taking softer resolutions, like putting all the timestamps for the slots a few seconds in the future, to have some time to repopulate the cache while serving data that may be a few seconds stale). Yet it may be a better idea if the invalidation thread pings the connection from time to time to make sure it is alive. However, in order to reduce the risk of stale data, Redis will also start informing the clients that redirected the invalidation messages to some other client, which is now disconnected, about the situation, using special push messages: at the next query performed, the client will know.
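For the blunt recovery path, here is a sketch of a watchdog for the invalidation link, assuming redis-py and reusing the SlotCache sketch from earlier; the function and its parameters are illustrative, not part of any client library.

    import time
    import redis

    def invalidation_watchdog(cache, make_connection, interval=1.0):
        # make_connection() is a hypothetical factory returning a fresh
        # redis.Redis connection used only for invalidation messages.
        conn = make_connection()
        while True:
            try:
                conn.ping()
            except redis.ConnectionError:
                # Link severed: we may have missed invalidations, so drop
                # everything we cached and start over on a new connection.
                cache.cache.clear()
                conn = make_connection()
            time.sleep(interval)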

What I described was just merged into Redis unstable. Probably it’s not the final word, but we have more months before the first Redis 6 release candidate, so there is time to change everything: just send me your feedback. I’m also looking at ways to enable the feature for RESP2. That would work only when redirection is enabled, and the client listening for the messages should probably go into Pub/Sub mode, so that we could send a kind of Pub/Sub message. This way old clients can be fully reused.

I hope this was enough to stimulate your appetite: if we execute this inside Redis very well, and then document it so that client authors know how to provide support, data may move a lot nearer to the application than ever, even in applications run by small teams that so far avoided trying to implement client side caching. For the large teams and very large applications doing this already, the overhead could be reduced, together with the complexity of the implementation.