Sentinel is a Redis mechanism for managing replication and failover. This tutorial will guide you through the steps of setting Sentinel up to manage your Redis replication.
Sentinel is a Redis service which monitors a group (or “pod”) of Redis master/slaves to provide failover capability at the server level. For more specific information about how to use it see A Guide To Using Sentinel.
For this scenario we will assume the following pod configuration:

* 1 master
* 2 slaves
* 1 Sentinel constellation consisting of three sentinels
The goal is to have one of the slaves take over the role of master when the master becomes unavailable. We will be using Redis >= 2.8.
For this tutorial I will assume you have an appropriate version installed. You will need to deploy three instances of Redis. These can be three instances running on different ports on the same system, or three different systems (such as cloud servers or Docker containers). I will write for the former scenario: one server, three different ports.
As this tutorial is about Sentinel, we will skip non-Sentinel configuration details and configure the redis-server instances via options on the command line. As you will be running multiple commands which run in the foreground, you will need multiple terminal windows or a multiplexer such as Screen or Tmux.
In one terminal run redis-server --bind 0.0.0.0 --port 6500. This will be our initial master, running on port 6500 on all interfaces. Its output goes to standard out, where you can watch what happens.
In two more terminals run the following commands, one per terminal:

redis-server --bind 0.0.0.0 --port 6501
redis-server --bind 0.0.0.0 --port 6502

This will run our slave instances on ports 6501 and 6502. Naturally, in production these would all be different systems, and likely running on the default port.
Now we need to configure the slaves. The following loop will do it:

for x in 1 2; do
redis-cli -p 650$x slaveof 127.0.0.1 6500
done
Now we have told each of those instances to replicate from the master instance on port 6500. We could do this in a config file instead, specifying the listening port and the master there, and passing the config file on the command line when starting each instance.
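For example, a slave's config file might look like the following sketch (the filename is hypothetical; adjust the port per instance):

```conf
# slave-6501.conf (hypothetical filename)
bind 0.0.0.0
port 6501
slaveof 127.0.0.1 6500
```

You would then start the instance with redis-server slave-6501.conf instead of passing the options on the command line.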
Normally you would run each Sentinel on a different node, to prevent a node-level failure from taking out your failover management. For this tutorial we will run them all on the same node to become familiar with the process.
Sentinel requires either running redis-server as redis-sentinel (such as through a symlink) or passing the --sentinel option along with the sentinel config file on the command line.
For this portion of the tutorial we will run with a basic sentinel config file:
# The port that this sentinel instance will run on
port 26379
dir "/tmp"
sentinel monitor cache 127.0.0.1 6500 2
The important line here is the sentinel monitor line. With it we tell this sentinel instance to monitor the master Redis instance at IP 127.0.0.1, port 6500, with a quorum of 2. The quorum is the number of Sentinel instances that must agree the master is down before failover procedures begin. As a rule of thumb, this should be an odd number representing a majority of the Sentinels you run. The ‘cache’ entry is the name we assign to this group, or ‘pod’, of Redis servers. A given sentinel can manage multiple pods, hence the need to name each one; the name must be unique.
Create a file called sentinel-1.conf with the above contents. For sentinel-2.conf and sentinel-3.conf, increment the port to 26380 and 26381 respectively. As Sentinel rewrites its config file to reflect changes in the state of the pod, these files must be writable by the user running Sentinel. In this tutorial that is covered, since you create and own the files and run Sentinel from the command line. In a production setup you will likely run Sentinel as the redis user, so ensure the sentinel.conf file is writable by that user.
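Since the three config files differ only in the port, you can generate them with a short loop rather than editing by hand (a sketch; the filenames match those used above):

```shell
#!/bin/sh
# Generate sentinel-1.conf through sentinel-3.conf, identical except for the port.
for i in 1 2 3; do
  port=$((26378 + i))   # 26379, 26380, 26381
  cat > "sentinel-$i.conf" <<EOF
port $port
dir "/tmp"
sentinel monitor cache 127.0.0.1 6500 2
EOF
done
```
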
Now you will need three more terminals. Run the following commands, one per terminal as you did for the redis-server instances:
redis-server sentinel-1.conf --sentinel
redis-server sentinel-2.conf --sentinel
redis-server sentinel-3.conf --sentinel
With these commands successfully running you should have three sentinels monitoring your ‘cache’ pod. Notice we did not specify any slaves; adding a slave’s IP here would cause problems. If you re-use the pod name, Sentinel will consider it a duplicate and reject it. If you use a new name, you will be monitoring a slave as if it were a master, which is not what you want.
Sentinel discovers slaves on its own by querying the master. To see this, in a new terminal run redis-cli -p 26379 sentinel master cache and look for the “num-slaves” value; it should be “2”. You can then ask Sentinel about these slaves with redis-cli -p 26379 sentinel slaves cache. The output of this command is the configuration/state of each slave.
Now, to prove this all worked, we need to stop the master instance. There are a few ways to do this. We can stop the service as we would any other daemon, or run redis-cli -p 6500 debug segfault, which will cause the instance to crash. However, if you are following along exactly, you can simply go to the terminal running the instance on port 6500 and hit Ctrl-C.
Regardless of how you kill it, within a few seconds (30 by default) you will see each sentinel issue +sdown events, then an +odown, followed by the promotion of a slave to the new master and the reconfiguration of the other slave to point to the new master.
After a few minutes, bring up the original master and watch it get reconfigured as a slave of the new one. This is a rather nice feature of Sentinel, but it also points out something important. Because the failed master will likely come back up believing it is still a master, it will accept commands in the brief interval between coming up and being reconfigured. For this reason, among others, placing a load balancer in front which automatically directs traffic to whichever instance claims to be master is a bad idea.
If using a load balancer you will need to control the backend nodes it communicates with manually. You should not use a policy of a single “always the master” instance, rather let any slave be the master when promoted and only change for maintenance or when the new master fails.
You can let Sentinel modify your load balancer if you can control it via the command line.
Bring your pod back up, with all instances running.
For this portion we will assume you have some script or command available on the sentinel nodes which can change the configuration of your front-end load balancer. This will be implementation specific, so for this tutorial we will use the following script as a placeholder, naming it failover.sh and marking it executable.
#!/bin/bash
# We get from sentinel the following arguments:
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
# Currently, state is always "failover" and role will be either observer
# or leader
PODNAME=$1
ROLE=$2
STATE=$3
OLDIP=$4
OLDPORT=$5
NEWIP=$6
NEWPORT=$7
echo "[$PODNAME] Failover occurred from $OLDIP:$OLDPORT to $NEWIP:$NEWPORT" >>/tmp/failovers.log
if [ "$ROLE" = "leader" ] ; then
echo "[$PODNAME] As leader I should do something here"
fi
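Before wiring the script into Sentinel, you can smoke-test it by invoking it with the same seven arguments Sentinel would pass (the values below are made up; the snippet writes its own minimal copy of the script so it is self-contained):

```shell
#!/bin/sh
# Write a minimal copy of the failover script, then invoke it the way Sentinel
# would: <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
cat > failover.sh <<'EOF'
#!/bin/bash
PODNAME=$1
ROLE=$2
STATE=$3
OLDIP=$4
OLDPORT=$5
NEWIP=$6
NEWPORT=$7
echo "[$PODNAME] Failover occurred from $OLDIP:$OLDPORT to $NEWIP:$NEWPORT" >>/tmp/failovers.log
if [ "$ROLE" = "leader" ] ; then
  echo "[$PODNAME] As leader I should do something here"
fi
EOF
chmod +x failover.sh
./failover.sh cache leader failover 127.0.0.1 6500 127.0.0.1 6501
tail -n 1 /tmp/failovers.log
```

You should see the failover line appended to /tmp/failovers.log.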
To add this to our config, add the following line to each of your sentinel-N.conf files and restart them: sentinel client-reconfig-script cache /path/to/failover.sh. In yet another terminal, run touch /tmp/failovers.log && tail -f /tmp/failovers.log to watch the script’s results.
Proceed to kill the current master. You should see the lines show up in our temporary log file. You can use the $ROLE variable to ensure that only the current sentinel leader actually effects changes; otherwise your script will need to be idempotent, as each sentinel will call it individually.
In production this script would do things such as call an API on your load balancer to switch the backend node it talks to, raise an alert in your monitoring console, and whatever other actions you need or want. This script is only called when a failover has completed. For other notifications you would use the config directive sentinel notification-script; the details of what it will be passed are available in the default sentinel.conf file.
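A minimal notification script can simply log what Sentinel tells it. Sentinel invokes the notification script with two arguments, the event type and the event description; the sketch below (the name notify.sh is hypothetical) writes a copy of such a script and simulates one event so you can see the result:

```shell
#!/bin/sh
# notify.sh: Sentinel calls the notification script with two arguments:
# <event-type> <event-description>
cat > notify.sh <<'EOF'
#!/bin/bash
EVENTTYPE=$1
EVENTDESC=$2
echo "[$EVENTTYPE] $EVENTDESC" >>/tmp/sentinel-events.log
EOF
chmod +x notify.sh
# Simulate an event the way Sentinel would deliver it:
./notify.sh +sdown "master cache 127.0.0.1 6500"
```

You would enable it with sentinel notification-script cache /path/to/notify.sh; note that Sentinel requires the script to exist and be executable when it starts.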
Alternatively, you can write a daemon which subscribes, via the Pub/Sub mechanism, to the event channels you want to act on, but that is a different tutorial.
Sentinel’s ability to monitor Redis master/slave setups provides for a more highly available Redis deployment. You’ve now seen how to use Sentinel in your own configurations and a means of integrating it with your existing monitoring infrastructure. Armed with this knowledge, go forth and conquer your Redis M/S needs.