Slave partial synchronization work in progress
You can follow the commits over the next few days in the "psync" branch on GitHub: https://github.com/antirez/redis/commits/psync
From an HN comment[1] "(Geek note: In the late nineties I worked briefly with a D&D fanatic ops team lead. He threw a D100 when he came in every morning. Anything >90 he picked a random machine to failover 'politely'. If he threw a 100 he went to the machine room and switched something off or unplugged something. A human chaos monkey)." [1] http://news.ycombinator.com/item?id=4736220
I'm pretty surprised no one tried to write a wrapper for redis-rb or other clients implementing a Dynamo-style system on top of Redis primitives. Basically something like this: 1) You have a list of N Redis nodes. 2) On writes, use consistent hashing and write the same thing to M nodes (M configurable). 3) On reads, read from M nodes and pick the most common reply to return to the client. For all the non-matching replies, use DUMP / RESTORE (available in Redis 2.6) to update the value on the nodes that are in the minority.
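To make the idea a bit more concrete, here is a minimal sketch in Python with redis-py (instead of redis-rb). The node addresses, the rendezvous-style placement used here in place of a real consistent hashing ring, and all the class and helper names are just assumptions for illustration, not a definitive implementation:

```python
# Hypothetical sketch of a Dynamo-style client wrapper over plain Redis nodes.
import hashlib
from collections import Counter

import redis  # redis-py, assumed available


class DynamoishRedis:
    def __init__(self, nodes, replicas=3):
        # nodes: list of (host, port) tuples; replicas: the "M" copies per key.
        self.nodes = [redis.Redis(host=h, port=p) for h, p in nodes]
        self.replicas = min(replicas, len(self.nodes))

    def _nodes_for(self, key):
        # Rendezvous-style placement as a stand-in for a consistent hashing
        # ring: rank every node by hash(node_index + key) and take the first M.
        def score(i):
            return hashlib.sha1(("%d:%s" % (i, key)).encode()).hexdigest()
        ranked = sorted(range(len(self.nodes)), key=score)
        return [self.nodes[i] for i in ranked[:self.replicas]]

    def set(self, key, value):
        # Write the same value to all M replicas; count acknowledgements.
        acks = 0
        for node in self._nodes_for(key):
            try:
                node.set(key, value)
                acks += 1
            except redis.RedisError:
                pass
        return acks

    def get(self, key):
        # Read from all M replicas, return the most common reply, then repair
        # the nodes that disagreed using DUMP / RESTORE.
        replies = []
        for node in self._nodes_for(key):
            try:
                replies.append((node, node.get(key)))
            except redis.RedisError:
                replies.append((node, None))
        winner, _ = Counter(v for _, v in replies).most_common(1)[0]

        # Take a serialized copy of the winning value from a node that has it.
        dumped = None
        for node, value in replies:
            if value == winner:
                try:
                    dumped = node.dump(key)
                except redis.RedisError:
                    dumped = None
                break

        # Read repair: rewrite the minority replicas (DELETE + RESTORE, since
        # RESTORE in 2.6 refuses to overwrite an existing key).
        if dumped is not None:
            for node, value in replies:
                if value != winner:
                    try:
                        node.delete(key)
                        node.restore(key, 0, dumped)
                    except redis.RedisError:
                        pass
        return winner
```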
I'm proud to be mentioned in this well-thought-out and non-bigoted post: http://www.nerdess.net/waffling/why-it-awesome-be-girl-tech/ Also, in perfect accordance with hacking culture, the post is sort of a HOWTO for girls who want to be involved in IT.
In these busy days I had the idea to focus for some time on a single, non-huge, self-contained project, one that could allow me to work focused for as many hours as possible and could provide a significant advantage to the Redis community. It turns out the best bet was partial replication resync: a long-wanted feature that consists in the ability of a slave to resynchronize with a master without the need of a full resync (and RDB dump creation on the master side) when the replication link was interrupted for a short time because of a timeout, a network issue, or a similar transient problem.
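Just to give an idea of the concept, here is a toy model in Python, not the actual Redis implementation: the class and method names, the backlog size, and the reply strings are assumptions made only to show the logic. The master retains the last N bytes of the replication stream in a circular backlog, and a reconnecting slave gets only the bytes it missed if its offset is still inside that window; otherwise it falls back to a full resync.

```python
# Purely illustrative model of partial resynchronization.
class ReplicationBacklog:
    def __init__(self, size=1 << 20):
        self.size = size           # backlog capacity in bytes
        self.buffer = bytearray()  # last `size` bytes of the replication stream
        self.master_offset = 0     # total bytes ever fed to the stream

    def feed(self, data):
        # Append the replication stream and trim to the configured window.
        self.buffer += data
        self.master_offset += len(data)
        if len(self.buffer) > self.size:
            self.buffer = self.buffer[-self.size:]

    def psync(self, slave_offset):
        # Decide between a partial and a full resynchronization.
        first_available = self.master_offset - len(self.buffer)
        if first_available <= slave_offset <= self.master_offset:
            missing = self.buffer[slave_offset - first_available:]
            return ("+CONTINUE", bytes(missing))
        return ("+FULLRESYNC", None)  # too far behind: fall back to RDB transfer


# Usage: the slave processed 5 bytes, the link dropped, more bytes arrived.
backlog = ReplicationBacklog(size=16)
backlog.feed(b"SET a 1\n")       # 8 bytes of replication stream
reply, delta = backlog.psync(5)  # slave reconnects claiming offset 5
print(reply, delta)              # +CONTINUE b' 1\n' -- only the missing tail
```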
I love GitHub issues, it is one of the awesome things about GitHub IMHO: as simple as possible, but under the hood actually pretty full featured. One of the things I love most is labels. They are a truly powerful way to organize issues in a project-specific way. Unfortunately if an issue is a pull request, no labels can be attached. I wonder why. Also I would love the ability to merge against multiple branches instead of just the target one, directly from the web UI.
I assume you already read the AWS report[1] about the recent troubles. I think it is a very good argument you can use at work against design complexity, and in favor of designing systems at a complexity level where the analysis and prevention of failure modes is actually possible. [1] https://aws.amazon.com/message/680342/
From a comment on Hacker News (http://news.ycombinator.com/item?id=4705387): "Full disclosure: I work for an AWS competitor. While none of the specific AWS systemic failures may themselves be foreseeable, it is not true that issues of this nature cannot be anticipated: the architecture of their system (and in particular, their insistence on network storage for local data) allows for cascading failure modes in which single failures blossom to systemic ones. AWS is not the only entity to have made this mistake with respect to network storage in the cloud; I, too, was fooled.[1]"
Achievement unlocked: releasing a Redis version the same day your daughter was born ;-) It was a bad issue though: a bug prevented compilation on pretty old Linux systems that are still pretty widespread (RHEL 5 & similar). Redis 2.6.1 fixes just that issue and is available as usual at http://redis.io as a tar.gz, or at github/antirez/redis as a "2.6.1" tag.
I really believe both in the usefulness of Redis bit operations and in the fact that, in the future, our community should have documentation about Redis Patterns. So an article from CopperEgg describing a bit operations pattern is good for sure :) http://copperegg.com/redis-bit-operations-use-case-at-copperegg/
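For people who never played with bit operations, here is the kind of thing I mean, sketched in Python with redis-py. This is the classic per-day active users pattern, not necessarily the exact CopperEgg use case; the key names and the localhost instance are assumptions:

```python
# One bitmap per day, one bit per user: SETBIT to mark activity,
# BITCOUNT to count active users, BITOP to intersect days.
import redis

r = redis.Redis()  # assumes a Redis 2.6 server on localhost


def mark_active(user_id, day):
    # The user id is the bit offset inside the day's bitmap.
    r.setbit("active:%s" % day, user_id, 1)


def daily_actives(day):
    # Population count of the day's bitmap.
    return r.bitcount("active:%s" % day)


def retained(day_a, day_b):
    # Users active on both days: AND the two bitmaps and count the result.
    r.bitop("AND", "active:retained", "active:%s" % day_a, "active:%s" % day_b)
    return r.bitcount("active:retained")


mark_active(123, "2012-10-29")
mark_active(123, "2012-10-30")
mark_active(456, "2012-10-30")
print(daily_actives("2012-10-30"))           # 2
print(retained("2012-10-29", "2012-10-30"))  # 1
```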