Clustering Redis to maximize uptime and scale

At Recurly we use Redis to power website caching, queuing, and some application data storage. Like every system we run internally, it has to be stable, fail over automatically in case of failure, and be easy to scale when we need more capacity.

What is Redis?

Redis is an open source, BSD licensed, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets and sorted sets.
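To make that concrete, here is a quick redis-cli session showing a few of those types in roles similar to ours (the key names are purely illustrative):

    redis> SET page:/home "<cached html>"          # string: page cache entry
    redis> HSET user:42 plan "enterprise"          # hash: application data
    redis> LPUSH jobs "send_invoice:42"            # list: simple work queue
    redis> ZADD signups 1371686400 "customer:42"   # sorted set: time-ordered index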


On its own, Redis does not meet these requirements, but with some extra software we can get to where we want to be.

Core Solution

Initially we looked at Redis Cluster. It looks great, but our testing and the general consensus showed that it's not ready for production use just yet. We will definitely be keeping an eye on the project.

The solution we decided on integrates several existing tools: Redis, keepalived, nutcracker/twemproxy, sentinel, and smitty together give us the scalability, stability, and automatic failover we require.

Before I go into further detail on implementation, I think it's important to visualize what this setup looks like:

[Diagram: the Redis cluster in regular operation]

Application traffic first passes through a virtual IP address provided by keepalived, which gives us IP failover should we lose a proxy server.
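A minimal keepalived VRRP configuration for that virtual IP looks roughly like the sketch below; the interface name, router ID, and addresses are placeholders, not our production values:

    vrrp_instance redis_vip {
        state MASTER              # the standby proxy uses state BACKUP
        interface eth0
        virtual_router_id 51
        priority 100              # the backup gets a lower priority
        advert_int 1
        virtual_ipaddress {
            10.0.0.100            # the VIP that application traffic targets
        }
    }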

Traffic then passes to the twemproxy application. As the diagram shows, three traditional Redis master > slave setups sit behind twemproxy. Twemproxy knows about all the master servers in the cluster and automatically shards data between them, and the masters then replicate that data to their slaves. This means we hold three copies of the sharded data, split across all the Redis servers, at all times.
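As a rough illustration, a nutcracker/twemproxy configuration for three sharded masters could look like this (the addresses, shard names, and hashing choices are examples, not our exact settings):

    redis_pool:
      listen: 0.0.0.0:6379          # reached via the keepalived VIP above
      redis: true
      hash: fnv1a_64                # a hash of the key picks the shard
      distribution: ketama          # consistent hashing across the masters
      auto_eject_hosts: false       # failover is handled by sentinel/smitty
      servers:
        - 10.0.0.11:6379:1 shard1   # master of shard 1
        - 10.0.0.12:6379:1 shard2   # master of shard 2
        - 10.0.0.13:6379:1 shard3   # master of shard 3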

The rest of the setup is better explained in a failover scenario, shown in the professionally designed graphic below:

[Diagram: the Redis cluster during a failover]

If we lose a machine in the cluster, sentinel will notice and promote one of the slaves in the affected group to the master role; a more in-depth explanation can be found in the sentinel spec. Smitty monitors sentinel for master DOWN events and updates twemproxy with the new master, so we stay up and keep our data consistent should we lose a node.
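Sentinel itself has to be told which masters to watch. A sketch of the relevant sentinel.conf lines for one shard follows; the shard name, address, quorum, and timeouts here are illustrative rather than our exact values:

    sentinel monitor shard1 10.0.0.11 6379 2      # quorum of 2 sentinels
    sentinel down-after-milliseconds shard1 5000  # mark DOWN after 5s of silence
    sentinel failover-timeout shard1 60000
    sentinel parallel-syncs shard1 1              # resync slaves one at a time

Each shard gets its own monitor block, and smitty watches for the resulting master-change events to rewrite the twemproxy server list.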


Monitoring

Every solution we deploy at Recurly is heavily monitored. Statistics taken from the monitoring software are used to alert our operations team to potential problems, based either on active checks or on trends in time series data.

We use Nagios for active checks against every component in the cluster.
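An active check can be as simple as asking each Redis instance to answer PING. A hypothetical Nagios-style plugin, sketched in Python with redis-py (the argument handling and defaults are assumptions, not our actual check), might look like:

    #!/usr/bin/env python
    # Hypothetical Nagios plugin: exit 0 (OK) if Redis answers PING,
    # exit 2 (CRITICAL) otherwise. Requires the redis-py package.
    import sys
    import redis

    host = sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1"
    port = int(sys.argv[2]) if len(sys.argv) > 2 else 6379
    try:
        if redis.Redis(host=host, port=port, socket_timeout=5).ping():
            print("OK: redis on %s:%d answered PING" % (host, port))
            sys.exit(0)
    except redis.RedisError:
        pass
    print("CRITICAL: redis on %s:%d did not answer PING" % (host, port))
    sys.exit(2)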

For time series data we use Graphite, which allows us to spot trends and alert on various metrics gathered from the cluster.
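For example, a small collector script can ship a few fields from Redis INFO to Graphite's plaintext listener on each run from cron. The sketch below uses redis-py; the hostnames and metric paths are assumptions:

    #!/usr/bin/env python
    # Sketch: read a few fields from Redis INFO and push them to
    # Graphite's plaintext protocol listener (conventionally port 2003).
    import socket
    import time
    import redis

    info = redis.Redis(host="10.0.0.11", port=6379).info()
    now = int(time.time())
    lines = ""
    for field in ("connected_clients", "used_memory",
                  "instantaneous_ops_per_sec"):
        lines += "redis.shard1.%s %s %d\n" % (field, info[field], now)

    sock = socket.create_connection(("graphite.example.com", 2003))
    sock.sendall(lines.encode())
    sock.close()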


Data Persistence

By default, Redis stores all data in memory, but it offers two options for persistence if required: AOF and RDB. I won't go into too much detail, as the Redis documentation explains both well. We use AOF for persistence and backups due to the way RDB stops handling client requests while it runs.
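Enabling AOF and turning off RDB snapshots takes only a few lines of redis.conf. A sketch (the fsync policy shown is a typical choice, not necessarily ours):

    appendonly yes          # enable the append-only file
    appendfsync everysec    # fsync once per second: a speed/safety balance
    save ""                 # disable RDB snapshotting entirely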


Security

Redis provides the option of password authentication; however, twemproxy does not currently support it, and the password can be easily cracked in any case. Instead we opted for a network-based security approach: access to our Redis clusters is tightly controlled by firewall rules, and we deploy separate Redis clusters to different VLANs on our network.
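In iptables terms, the idea is simply to whitelist the application subnet on the Redis port and drop everything else. A sketch with placeholder addresses:

    # Allow Redis traffic only from the application subnet (example range).
    iptables -A INPUT -p tcp --dport 6379 -s 10.0.1.0/24 -j ACCEPT
    # Drop all other traffic to the Redis port.
    iptables -A INPUT -p tcp --dport 6379 -j DROP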


Conclusion

Redis is a very fast in-memory data store that we love. The techniques described above will let you scale it across multiple machines and handle outages gracefully.

