Redis – Part III

In Part II we looked at the two different ways Redis is set up in production, Cluster and Sentinel, and we started working toward configuring a Redis Sentinel setup.

We’re going to continue configuring Redis Sentinel using our already running redis1, redis2, and redis3 machines.

The next step in the process is to set up the Redis Sentinel services in Docker. We are going to pin our services to specific nodes to ensure we never schedule multiple Redis Sentinel instances on the same node.

This time, however, we need to provide a configuration to our Sentinel setup. There are several ways we could do that, but the best solution in terms of future maintainability is to build your own Docker image with the configuration baked in. Once you have done that you can use Docker Hub’s CI/CD pipeline to fully automate things. That topic could easily fill a full post of its own, so we’ll come back to it.

For now, using your favorite tool (I’m using VS Code), let’s get a new project going. If you’re using VS Code, create a directory where you like to store projects, open Code, select Open Folder, and choose your new folder. I then like to “Save Workspace” into the same folder so I can easily launch Code back into this project.

Once you have the workspace set up, create a new file called: Dockerfile

Notice the lack of file extension? You can name it something else, like Redis-Sentinel.Dockerfile, if you’d like.

Next create another file called redis-sentinel.conf and give it these contents:

#https://redis.io/topics/sentinel
#You can read the documentation at that link
port 26379
sentinel monitor mymaster 192.168.10.117 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1
  • The 1st line is the URL for the Sentinel docs, so you can get all smart
  • The 3rd line tells Redis to start on port 26379
  • The 4th line defines the master we want our Sentinel to monitor; it has the IP, port, and quorum configuration at the end. The quorum is the number of Sentinels that must agree in order to initiate a failover.
  • Next is how long a master must fail PING checks before being considered down
  • Next is how long we’ll wait before Sentinel will try to fail over the same master again
  • parallel-syncs sets the number of slaves that can be reconfigured to use the new master at the same time after a failover. The lower the number, the more time the failover process will take to complete; however, if the slaves are configured to serve stale data, you may not want all of them to re-synchronize with the master at the same time.
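To make the quorum concrete: the quorum only controls how many Sentinels must agree before the master is flagged as objectively down and a failover attempt begins; the failover must still be authorized by a majority of all known Sentinels. A tiny Python sketch (the helper names are my own, not part of Redis) of the arithmetic:

```python
# Hypothetical illustration of Sentinel's two thresholds (see the Sentinel docs).
# The quorum decides when ODOWN is reached; a majority of all Sentinels must
# still vote for a leader before the failover actually proceeds.

def reaches_odown(sdown_reports: int, quorum: int) -> bool:
    """Enough Sentinels agree the master is down to flag it objectively down."""
    return sdown_reports >= quorum

def majority(total_sentinels: int) -> int:
    """Votes required to authorize a failover."""
    return total_sentinels // 2 + 1

# With our 3 Sentinels and a quorum of 2:
print(reaches_odown(2, quorum=2))  # True -> a failover can be attempted
print(majority(3))                 # 2   -> 2 of 3 must vote for the leader
```

This is why quorum 2 with 3 Sentinels is a sensible default: losing one Sentinel still leaves enough for both the ODOWN agreement and the authorizing majority.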

Now update your Dockerfile with these contents:

FROM redis:5
COPY redis-sentinel.conf /usr/local/etc/redis/redis-sentinel.conf
#CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]

Notice a couple things here:

  • I specify redis:5 – using a specific version instead of latest is a must for production
  • We copy the configuration file to the container
  • The commented-out CMD is for my reference later when we create the containers. We’ll eventually update the redis-server installation to use this custom image as well, and I want to be able to use the same image for both.

I have also created a .dockerignore file with this in it:
.vagrant

This prevents the Vagrant files from being sent to the Docker daemon during the build.

Now that we have our Dockerfile and our redis-sentinel.conf file ready, we can build the image:

docker build -t wjdavis5/redis_sentinel_wordpress .

This isn’t a Docker tutorial, but in short, this command builds an image tagged “wjdavis5/redis_sentinel_wordpress” from the current directory (“.”).

Once it is built we can push it to Docker Hub:

docker push wjdavis5/redis_sentinel_wordpress:latest

Now we can move back over to our Docker host and create our services.

sudo docker service create --name redis_sentinel1 --constraint node.hostname==redis1 --hostname redis_sentinel1 --mode global --publish published=26379,target=26379,mode=host wjdavis5/redis_sentinel_wordpress:latest redis-sentinel /usr/local/etc/redis/redis-sentinel.conf

sudo docker service create --name redis_sentinel2 --constraint node.hostname==redis2 --hostname redis_sentinel2 --mode global --publish published=26379,target=26379,mode=host wjdavis5/redis_sentinel_wordpress:latest redis-sentinel /usr/local/etc/redis/redis-sentinel.conf

sudo docker service create --name redis_sentinel3 --constraint node.hostname==redis3 --hostname redis_sentinel3 --mode global --publish published=26379,target=26379,mode=host wjdavis5/redis_sentinel_wordpress:latest redis-sentinel /usr/local/etc/redis/redis-sentinel.conf

Now we should be able to connect to an instance of Redis Sentinel using the docker run command from before, only this time we need to specify the port with -p 26379:

vagrant@redis1:~$ sudo docker run -it redis:latest redis-cli -h 192.168.10.117 -p 26379
192.168.10.117:26379> info
# Server
redis_version:5.0.3
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:9f27eb593282148b
redis_mode:sentinel
os:Linux 4.4.0-131-generic x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:6.3.0
process_id:1
run_id:6f31f65979e629341da22a4b49dc820ebf2d2f2f
tcp_port:26379
uptime_in_seconds:216
uptime_in_days:0
hz:15
configured_hz:10
lru_clock:4492786
executable:/data/redis-sentinel
config_file:/usr/local/etc/redis/redis-sentinel.conf

# Clients
connected_clients:4
client_recent_max_input_buffer:2
client_recent_max_output_buffer:0
blocked_clients:0

# CPU
used_cpu_sys:0.276000
used_cpu_user:0.272000
used_cpu_sys_children:0.000000
used_cpu_user_children:0.000000

# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=192.168.10.117:6379,slaves=2,sentinels=4
192.168.10.117:26379>

Yeah, mine says 4 sentinels because I created an additional one while testing this out; you should see 3. The next thing we want to do is test failover. To do that we first want to tail the logs for redis-sentinel. We can do that with the docker logs command:

sudo docker ps|grep sentinel|awk '{print $1}'|xargs sudo docker logs -f

Now from another window connect to the master (should be redis1) and issue the shutdown command:

vagrant@redis1:~$ sudo docker run -it redis:latest redis-cli -h 192.168.10.117
192.168.10.117:6379> shutdown
not connected>

Now if you watch the redis-sentinel log you should see the failover initiate:

20 Jan 2019 15:01:07.934 * +sentinel-address-switch master mymaster 192.168.10.117 6379 ip 172.17.0.3 port 26379 for 08aa07e38473a554911f3aebfd720692b3b7c948
1:X 20 Jan 2019 15:08:36.236 # +sdown master mymaster 192.168.10.117 6379
1:X 20 Jan 2019 15:08:36.236 # +sdown master mymaster 192.168.10.117 6379
1:X 20 Jan 2019 15:08:36.326 # +odown master mymaster 192.168.10.117 6379 #quorum 4/2
1:X 20 Jan 2019 15:08:36.326 # +new-epoch 1
1:X 20 Jan 2019 15:08:36.326 # +try-failover master mymaster 192.168.10.117 6379
1:X 20 Jan 2019 15:08:36.330 # +vote-for-leader 08aa07e38473a554911f3aebfd720692b3b7c948 1
1:X 20 Jan 2019 15:08:36.330 # 63e82269c31d372eaee4d3bde15aea8a2c2e65f4 voted for 08aa07e38473a554911f3aebfd720692b3b7c948 1
1:X 20 Jan 2019 15:08:36.330 # d7b04497a0b905f692aea802979d030d5c73d0e9 voted for 08aa07e38473a554911f3aebfd720692b3b7c948 1
1:X 20 Jan 2019 15:08:36.330 # 08aa07e38473a554911f3aebfd720692b3b7c948 voted for 08aa07e38473a554911f3aebfd720692b3b7c948 1
1:X 20 Jan 2019 15:08:36.383 # +elected-leader master mymaster 192.168.10.117 6379
1:X 20 Jan 2019 15:08:36.383 # +failover-state-select-slave master mymaster 192.168.10.117 6379
1:X 20 Jan 2019 15:08:36.445 # +selected-slave slave 192.168.10.121:6379 192.168.10.121 6379 @ mymaster 192.168.10.117 6379
1:X 20 Jan 2019 15:08:36.445 * +failover-state-send-slaveof-noone slave 192.168.10.121:6379 192.168.10.121 6379 @ mymaster 192.168.10.117 6379
1:X 20 Jan 2019 15:08:36.516 * +failover-state-wait-promotion slave 192.168.10.121:6379 192.168.10.121 6379 @ mymaster 192.168.10.117 6379
1:X 20 Jan 2019 15:08:36.670 # +config-update-from sentinel 63e82269c31d372eaee4d3bde15aea8a2c2e65f4 172.17.0.3 26379 @ mymaster 192.168.10.117 6379
1:X 20 Jan 2019 15:08:36.670 # +switch-master mymaster 192.168.10.117 6379 192.168.10.118 6379

When we issued the shutdown command it should have terminated the Docker container that was running. Because we configured this as a Docker service, when the container terminates Docker is automagically going to reschedule the container to execute again. So at this point you should be able to reconnect to redis1, run an info command, and see that it is now a secondary (slave):

# Replication
role:slave
master_host:192.168.10.118
master_port:6379

Congratulations, you have now created a Redis Sentinel deployment on Docker Swarm. In the next post we’ll cover configuring Redis Cluster.

Redis – Part II

In Part I we covered some very basic steps to start up a Redis server and connect to it with the redis-cli tool. This is useful information for playing around in your dev environment, but it doesn’t help us much when it’s time to move to production. In this post I will start to cover the steps we took to deploy Redis to production, and keep it running smoothly.

Redis Sentinel

When it’s time to run in production there are generally two primary ways to offer high availability. The older way of doing this is Redis Sentinel. In this setup you have a Primary (also called the master) and 2 or more Secondaries (also called slaves). All writes must go to the primary; data is then replicated to the secondaries over the network.

You must also have at least 3 more instances of Redis running in sentinel mode. These sentinels monitor the primary. If they determine the master is unavailable, a new master will be promoted from one of the available secondaries, and all other secondaries will automatically be reconfigured to replicate from the new master.

There is a bit more to it than this, but that is a sufficient explanation for right now.

Redis Cluster

In cluster mode we shard reads/writes across multiple instances, and these multiple instances also have Secondaries.


Reads and writes are distributed across the primaries using a computed hash slot. Hash slots are pretty easy to understand if you review the Cluster Specification, but suffice it to say that a hash slot is computed from the key name, the hash slots are divided between the primaries, and it is up to the client to route to the correct instance.

Note: when I say client, I don’t mean your application, unless you plan to connect directly to Redis and speak the Redis protocol yourself. You’ll more likely be using a client library like StackExchange.Redis.
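Since routing depends entirely on this computation, here is a short Python sketch of the hash-slot function as the Cluster Specification describes it: CRC16 (XMODEM variant) of the key, modulo 16384, where only the substring inside a non-empty {hash tag} is hashed so that related keys can share a slot.

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the variant the Cluster Specification mandates."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: bytes) -> int:
    """HASH_SLOT = CRC16(key) mod 16384, honoring non-empty {hash tags}."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:          # only a non-empty tag replaces the key
            key = key[start + 1:end]
    return crc16(key) % 16384

print(crc16(b"123456789") == 0x31C3)  # True (reference vector from the spec)
print(hash_slot(b"{user1}.following") == hash_slot(b"{user1}.followers"))  # True
```

The hash-tag trick is how multi-key operations stay possible in cluster mode: force the keys into the same slot and they land on the same primary.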

Which One To Choose?

Generally speaking, I think you should just go ahead and choose Redis Cluster if you’re going to be setting this up in a production environment. It gives you the ability to scale horizontally when you need to, and honestly isn’t much more difficult to set up than Sentinel. But I’ll cover the setup of both.

Configure Redis Sentinel

I’m going to continue using Docker here, and we’ll assume you have 3 nodes called redis1, redis2, and redis3. If you’re following along on your local machine you can use the following Vagrantfile to get started:

Vagrant.configure("2") do |config|
  config.vm.define "redis1" do |redis1|
    redis1.vm.box = "bento/ubuntu-16.04"
    redis1.vm.box_version = "201812.27.0"
    redis1.vm.provision :shell, path: "bootstrap.sh", :args => "redis1"
  end

  config.vm.define "redis2" do |redis2|
    redis2.vm.box = "bento/ubuntu-16.04"
    redis2.vm.box_version = "201812.27.0"
    redis2.vm.provision :shell, path: "bootstrap.sh", :args => "redis2"
  end

  config.vm.define "redis3" do |redis3|
    redis3.vm.box = "bento/ubuntu-16.04"
    redis3.vm.box_version = "201812.27.0"
    redis3.vm.provision :shell, path: "bootstrap.sh", :args => "redis3"
  end

end

And here is the bootstrap.sh file mentioned in the above config:

#!/usr/bin/env bash

hostnamectl set-hostname $1
echo 127.0.0.1 $1 >> /etc/hosts
apt-get update
apt-get remove docker docker-engine docker.io containerd runc
apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg2 \
    software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

apt-get update
sudo apt-get install docker-ce  -y

Now when you run vagrant up, in a few moments you’ll have 3 machines running with Docker installed. I’ll be configuring a swarm, like I have in production. So next I need to init the swarm:

vagrant ssh redis1
#wait for connect
sudo docker swarm init
#copy the join command that gets output by the last command, it'll look like this
#sudo docker swarm join --token SWMTKN-1-2wwy0py488uvo6u0lhbpgqpvbhha1kd6w4k1t95uox9m0t4ln0-1fjjc15308hvndpdj8ui7lts9 192.168.10.114:2377

Now you’ll ssh into the other two machines and join the swarm with the command output in the previous step:

vagrant ssh redis2
sudo docker swarm join --token SWMTKN-1-2wwy0py488uvo6u0lhbpgqpvbhha1kd6w4k1t95uox9m0t4ln0-1fjjc15308hvndpdj8ui7lts9 192.168.10.114:2377

Repeat that step on redis3.

Now we have a running Docker swarm with redis1 as our manager. The next step is to create our Redis services. We’ll be pinning each Redis instance to a specific node. Here are the commands to create the Redis services:

sudo docker service create --name redis1 --constraint node.hostname==redis1 --hostname redis1 --mode global --publish published=6379,target=6379,mode=host redis:latest

sudo docker service create --name redis2 --constraint node.hostname==redis2 --hostname redis2 --mode global --publish published=6379,target=6379,mode=host redis:latest

sudo docker service create --name redis3 --constraint node.hostname==redis3 --hostname redis3 --mode global --publish published=6379,target=6379,mode=host redis:latest

We are pinning each instance of Redis to a node because we don’t want Docker to ever schedule the primary and secondaries on the same Docker host. That would remove some of the high availability we get from running multiple instances.

We now have 3 instances of Redis running. You can test connecting to them using the examples from Part I of this series:

docker run -it redis:latest redis-cli -h 192.168.10.117
#obviously update your ip address

The next step is to enlist redis2 and redis3 as slaves of redis1. To do that we’ll connect to each and run the slaveof command. First ssh into redis2 and connect to Redis using the docker command above. Then run:

slaveof 192.168.10.117 6379
#again, update your IP and port (if you changed the port)

Redis should respond with “OK”.
Repeat this step on redis3.

Once this is done you should be able to again connect to the instance of redis on redis1 and run the info replication command:

redis-cli info Replication

# Replication
role:master
connected_slaves:2
slave0:ip=192.168.10.118,port=6379,state=online,offset=322,lag=0
slave1:ip=192.168.10.121,port=6379,state=online,offset=322,lag=1
master_replid:1dcb7819abdce86eeee71021c89bdb075dab8943
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:322
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:322

Whew! Now we have 3 instances of redis running with 1 master and 2 slaves. In the next post we’ll work on getting redis-sentinel up and running to monitor them.

Redis – Part I

This will be the first in a multi-part write up of how I use redis. I will focus on a few key areas:

  • Configuring redis server
  • General design for storing and retrieving data
  • Language specific stuff using C# / StackExchange.Redis.

To get started you’ll need to download and install Redis in some form or fashion.

  1. You can go to the site and download it directly: https://redis.io/download
  2. Or you can download the source and build it: https://github.com/antirez/redis
  3. Or, my preferred way, is to install Docker, and run redis from there: https://hub.docker.com/_/redis

I’m not going to walk you through installing Docker in this guide, but it’s pretty easy.

We’ll get started by opening up your favorite command prompt and running:

docker run -it -p 6379:6379 redis:latest

This will download the image, run it interactively, and map the container’s default port to the host.

That’s it! Now you have an instance of Redis running that you can play with.

Now there are several ways for you to connect to, and play with, redis once you have it running:

  • A number of gui applications (learn these later if you really want to get good)
  • Install redis-cli locally
  • Run redis-cli from a docker container

Again we’re going to use Docker; it’s just the easiest way to get things going quickly. So, again, from your favorite command prompt type:

docker run -it redis:latest redis-cli -h 192.168.10.13

You’ll want to enter the ip address of the docker host where redis was started.

I’m greeted by the redis-cli prompt, connected and ready for commands.

Now that it is up and running, and we have a client connected, we can easily save our first entry:

192.168.10.131:6379> set Hello World
OK
192.168.10.131:6379> get Hello
“World”
192.168.10.131:6379>

That was pretty simple!
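Under the hood, redis-cli encoded that set Hello World using the Redis wire protocol, RESP: an array of bulk strings, each prefixed with its byte length. A minimal Python sketch of the encoding (the helper name is my own):

```python
def encode_command(*args: str) -> bytes:
    """Encode a command as a RESP array (*N) of bulk strings ($len)."""
    out = [f"*{len(args)}\r\n".encode()]
    for arg in args:
        data = arg.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

print(encode_command("SET", "Hello", "World"))
# b'*3\r\n$3\r\nSET\r\n$5\r\nHello\r\n$5\r\nWorld\r\n'
```

Every client library, from redis-cli to StackExchange.Redis, is ultimately writing bytes shaped like this over the socket.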

In the next article we’ll cover the different production setups, and dive into getting things set up for production.

High Speed Log4Net

Log4Net is a great logging extension for the .NET ecosystem that also supports .NET Standard / .NET Core (which you should be using if you aren’t).

Unless you really read up on the framework extensively, it can be easy to fall into some performance traps. I’ve found that in many cases these performance issues are caused by less-than-desirable appender configuration. For example, let’s say you have a FileAppender:

<appender name="FileAppender" type="log4net.Appender.FileAppender">
  <file value="log-file.txt" />
  <appendToFile value="true" />
  <layout type="log4net.Layout.SimpleLayout" />
</appender>

When you log (assuming no other appenders are configured) your application will call into log4net, which will attempt to get a file handle, write the entry to your log file once successful, and then release the lock. The point is that your application is going to block on this thread while completing all of those steps.

One solution that can be implemented easily is to just use the BufferingForwardingAppender.

You simply add this appender (which is a forwarding appender):

<appender name="BufferingForwardingAppender" type="log4net.Appender.BufferingForwardingAppender">
  <bufferSize value="100" />
  <appender-ref ref="FileAppender" />
  <evaluator type="log4net.Core.LevelEvaluator">
    <threshold value="WARN" />
  </evaluator>
</appender>
<appender name="FileAppender" type="log4net.Appender.FileAppender">
  <file value="log-file.txt" />
  <appendToFile value="true" />
  <layout type="log4net.Layout.SimpleLayout" />
</appender>

Now instead of writing directly to the FileAppender you will write to the BufferingForwardingAppender. Essentially this thread will drop your message in a queue. When that queue fills up (bufferSize), or if you log an event that trips the evaluator (WARN), all buffered messages will be written to the downstream appenders, in this case the FileAppender. The flush will block the calling thread that triggered it, but writing 100 entries to the log file in one action is much faster than writing 100 individual messages.
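The same buffering pattern shows up in other logging stacks, which makes it easy to demonstrate outside .NET. For instance, Python’s standard library MemoryHandler mirrors the BufferingForwardingAppender almost exactly: a capacity (bufferSize), a flush-triggering level (the WARN evaluator), and a wrapped target handler. A small sketch, not log4net itself, just the analogous behavior:

```python
import io
import logging
import logging.handlers

# The StringIO-backed StreamHandler stands in for the FileAppender.
stream = io.StringIO()
target = logging.StreamHandler(stream)

# capacity ~ bufferSize, flushLevel ~ the WARN evaluator threshold
buffered = logging.handlers.MemoryHandler(
    capacity=100, flushLevel=logging.WARNING, target=target)

log = logging.getLogger("buffering-demo")
log.setLevel(logging.INFO)
log.propagate = False
log.addHandler(buffered)

log.info("routine message")
print(repr(stream.getvalue()))   # '' -- the record is still sitting in the buffer

log.warning("something bad")     # WARN-level record triggers a flush of everything
print("routine message" in stream.getvalue())  # True -- buffered records written
```

Exactly as with log4net, the cost you pay is that buffered records are lost if the process dies before a flush; that is the trade for the reduced I/O.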
