Redis CPU Pinning

Or, How to Run Multiple Instances of Redis on One Machine.

Why would you even want to do something like this? Well, Redis is a single-threaded application, so if you have a server with 8 cores running Redis, only 1 of those cores will ever be used by Redis.

Yes, there are some cases, such as bgsave (which forks a child process), where this is not strictly true.

By running multiple instances on the same machine and pinning each instance to a specific CPU core, you can make better use of the cores and serve data more quickly.

To accomplish this I use:

taskset -c N

Here is an example from my init.d file that I use to run Redis on my Ubuntu 16.04 machine:

EXEC=/usr/bin/taskset
CLIEXEC=/usr/local/bin/redis-cli
PIDFILE=/var/run/redis_6380.pid
CONF="-c 1 /usr/local/bin/redis-server /etc/redis/6380.conf"
REDISPORT="6380"

It's pretty simple: instead of calling redis-server directly, you first call /usr/bin/taskset and then pass in the proper arguments. If you were to type the full command out, it would look like this:

taskset -c 0 redis-server /etc/redis/redis.conf
# this will use taskset to launch an instance of redis-server and pin it
# to core 0 on the server
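If you want to verify that the pin actually took effect, taskset can also report a process's affinity. A quick sanity check (this assumes Linux with util-linux's taskset installed):

```shell
# Launch a throwaway shell pinned to core 0 and have it print its own
# CPU affinity; the reported list should be just "0".
taskset -c 0 sh -c 'taskset -cp $$'
```

The same check works against a running instance: `taskset -cp $(cat /var/run/redis_6380.pid)`.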

The full file is below:

willd@myserver.local@ORDRedis1:~$ cat /etc/init.d/redis_6380
#!/bin/sh
#Configurations injected by install_server below....

EXEC=/usr/bin/taskset
CLIEXEC=/usr/local/bin/redis-cli
PIDFILE=/var/run/redis_6380.pid
CONF="-c 1 /usr/local/bin/redis-server /etc/redis/6380.conf"
REDISPORT="6380"
###############
# SysV Init Information
# chkconfig: - 58 74
# description: redis_6380 is the redis daemon.
### BEGIN INIT INFO
# Provides: redis_6380
# Required-Start: $network $local_fs $remote_fs
# Required-Stop: $network $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Should-Start: $syslog $named
# Should-Stop: $syslog $named
# Short-Description: start and stop redis_6380
# Description: Redis daemon
### END INIT INFO


case "$1" in
    start)
        if [ -f $PIDFILE ]
        then
            echo "$PIDFILE exists, process is already running or crashed"
        else
            echo "Starting Redis server..."
            $EXEC $CONF
        fi
        ;;
    stop)
        if [ ! -f $PIDFILE ]
        then
            echo "$PIDFILE does not exist, process is not running"
        else
            PID=$(cat $PIDFILE)
            echo "Stopping ..."
            $CLIEXEC -p $REDISPORT shutdown
            while [ -x /proc/${PID} ]
            do
                echo "Waiting for Redis to shutdown ..."
                sleep 1
            done
            echo "Redis stopped"
        fi
        ;;
    status)
        PID=$(cat $PIDFILE)
        if [ ! -x /proc/${PID} ]
        then
            echo 'Redis is not running'
        else
            echo "Redis is running ($PID)"
        fi
        ;;
    restart)
        $0 stop
        $0 start
        ;;
    *)
        echo "Please use start, stop, restart or status as first argument"
        ;;
esac
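One thing the script above takes for granted: each instance needs its own config file with a unique port and pidfile. A minimal sketch of what /etc/redis/6380.conf might contain (these values are illustrative, not my actual config):

```conf
port 6380
pidfile /var/run/redis_6380.pid
logfile /var/log/redis_6380.log
dir /var/lib/redis/6380
```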

Redis – Part II

In Part I we covered some very basic steps to start up a Redis server and connect to it with the redis-cli tool. That is useful information for playing around in your dev environment, but doesn't help us much when it's time to move to production. In this post I will start to cover the steps we took to deploy Redis to production and keep it running smoothly.

Redis Sentinel

When it's time to run in production, there are generally two primary ways to offer high availability. The older way of doing this is Redis Sentinel. In this setup you have a Primary (also called the master) and 2 or more Secondaries (also called slaves). All writes must go to the primary, and data is then replicated to the secondaries over the network.

You must also have at least 3 more instances of Redis running in sentinel mode. These sentinels monitor the primary. If they determine the primary is unavailable, a new primary is promoted from one of the available secondaries, and all other secondaries are automatically reconfigured to replicate from it.

There is a bit more to it than this, but that is a sufficient explanation for right now.
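To make that a little more concrete, each sentinel is started from its own config file. A minimal sketch (the master name, quorum of 2, and timeouts below are illustrative, not production-tuned values):

```conf
# Watch the primary; 2 sentinels must agree before a failover starts.
sentinel monitor mymaster 192.168.10.117 6379 2
# Consider the primary down after 5 seconds without a valid reply.
sentinel down-after-milliseconds mymaster 5000
# Give up on a failover attempt after 60 seconds.
sentinel failover-timeout mymaster 60000
```

You would then launch it with redis-server /path/to/sentinel.conf --sentinel.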

Redis Cluster

In cluster mode, reads and writes are sharded across multiple primaries, and each of those primaries has its own Secondaries.


Reads and writes are distributed across the primaries using a computed hash slot. Hash slots are pretty easy to understand if you want to review the Cluster Specification. But suffice it to say that a hash slot is computed from the key name (CRC16 of the key, modulo 16384), the hash slots are divided between the primaries, and it is up to the client to route each request to the correct instance.

Note: when I say client, I don't mean your application, unless you plan to connect directly to Redis and speak the Redis protocol yourself. You'll more likely be using a client library like StackExchange.Redis.
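For comparison with the Sentinel setup above, turning an instance into a cluster node is mostly a matter of configuration. A minimal sketch of the relevant redis.conf settings (values here are illustrative):

```conf
port 7000
# Run this instance as a cluster node.
cluster-enabled yes
# File where the node persists its view of the cluster (auto-managed).
cluster-config-file nodes-7000.conf
# Milliseconds a node may be unreachable before it is flagged as failing.
cluster-node-timeout 5000
```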

Which One To Choose?

Generally speaking, I think you should just go ahead and choose Redis Cluster if you're going to be setting this up in a production environment. It gives you the ability to scale horizontally when you need to, and honestly isn't much more difficult to set up than Sentinel. But I'll cover the setup of both.

Configure Redis Sentinel

I'm going to continue using Docker here, and we'll assume you have 3 nodes called redis1, redis2, and redis3. If you're following along on your local machine, you can use the following Vagrantfile to get started:

Vagrant.configure("2") do |config|
  config.vm.define "redis1" do |redis1|
    redis1.vm.box = "bento/ubuntu-16.04"
    redis1.vm.box_version = "201812.27.0"
    redis1.vm.provision :shell, path: "bootstrap.sh", :args => "redis1"
  end

  config.vm.define "redis2" do |redis2|
    redis2.vm.box = "bento/ubuntu-16.04"
    redis2.vm.box_version = "201812.27.0"
    redis2.vm.provision :shell, path: "bootstrap.sh", :args => "redis2"
  end

  config.vm.define "redis3" do |redis3|
    redis3.vm.box = "bento/ubuntu-16.04"
    redis3.vm.box_version = "201812.27.0"
    redis3.vm.provision :shell, path: "bootstrap.sh", :args => "redis3"
  end

end

And here is the bootstrap.sh file mentioned in the above config:

#!/usr/bin/env bash

hostnamectl set-hostname $1
echo 127.0.0.1 $1 >> /etc/hosts
apt-get update
apt-get remove -y docker docker-engine docker.io containerd runc
apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg2 \
    software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

apt-get update
sudo apt-get install docker-ce  -y

Now when you run vagrant up, in a few moments you'll have 3 machines running with Docker installed. I'll be configuring a swarm, like I have in production, so next I need to init the swarm.

vagrant ssh redis1
#wait for connect
sudo docker swarm init
#copy the join command that gets output by the last command, it'll look like this
#sudo docker swarm join --token SWMTKN-1-2wwy0py488uvo6u0lhbpgqpvbhha1kd6w4k1t95uox9m0t4ln0-1fjjc15308hvndpdj8ui7lts9 192.168.10.114:2377

Now you'll ssh into the other two nodes and join the swarm with the command from the previous step:

vagrant ssh redis2
sudo docker swarm join --token SWMTKN-1-2wwy0py488uvo6u0lhbpgqpvbhha1kd6w4k1t95uox9m0t4ln0-1fjjc15308hvndpdj8ui7lts9 192.168.10.114:2377

Repeat that step on redis3.

Now we have a running docker swarm with redis1 as our manager node. The next step is to create our Redis services. We'll be pinning each Redis instance to a specific node. Here are the commands to create the services:

sudo docker service create --name redis1 --constraint node.hostname==redis1 --hostname redis1 --mode global --publish published=6379,target=6379,mode=host redis:latest

sudo docker service create --name redis2 --constraint node.hostname==redis2 --hostname redis2 --mode global --publish published=6379,target=6379,mode=host redis:latest

sudo docker service create --name redis3 --constraint node.hostname==redis3 --hostname redis3 --mode global --publish published=6379,target=6379,mode=host redis:latest

We are pinning each instance of Redis to a node because we don't want docker to ever schedule the primary and secondaries on the same docker host. That would remove some of the high availability we get from running multiple instances.

We now have 3 instances of redis running. You can test connecting to them using the examples from part 1 of this series.

docker run -it redis:latest redis-cli -h 192.168.10.117
# obviously, update the IP address for your environment

The next step is to enlist redis2 and redis3 as slaves of redis1. To do that, we'll connect to each and run the slaveof command. First ssh into redis2, then connect to Redis using the docker command above, and run:

slaveof 192.168.10.117 6379
# again, update your IP (and the port, if you changed it)

Redis should respond with "OK".
Repeat this step on redis3.

Once this is done you should be able to again connect to the instance of redis on redis1 and run the info replication command:

redis-cli info replication

# Replication
role:master
connected_slaves:2
slave0:ip=192.168.10.118,port=6379,state=online,offset=322,lag=0
slave1:ip=192.168.10.121,port=6379,state=online,offset=322,lag=1
master_replid:1dcb7819abdce86eeee71021c89bdb075dab8943
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:322
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:322
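That INFO output is easy to script against for monitoring. As a small sketch, here is the connected_slaves field pulled out with awk; the output is captured in a shell variable so the snippet is self-contained, but in practice you would pipe `redis-cli info replication` straight in (note that real INFO lines are CRLF-terminated, so you may want to strip the trailing carriage return first):

```shell
# A captured slice of the INFO replication output.
info='role:master
connected_slaves:2
master_repl_offset:322'

# Split each line on ':' and print the connected_slaves value.
printf '%s\n' "$info" | awk -F: '/^connected_slaves/ {print $2}'
# prints 2
```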

Whew! Now we have 3 instances of Redis running, with 1 master and 2 slaves. In the next post we'll work on getting redis-sentinel up and running to monitor them.

Success as measured by time

I recently read an article on HBR.ORG titled "Stop Working All Those Hours", and it has really made me stop and think about how people, myself included, measure success. The general premise is that success in the workplace is often measured by the amount of time spent in the office. Stop and think about this for a moment: is it true?
Will spending 50+ hours a week in the office really make you a better employee? Or is it a perfect demonstration of not being as efficient as you could be? Metrics are an important factor in a business; you have to have some empirical data by which to compare employees against each other. But is hours worked a valid metric? Does the employee who came in on Saturday deserve recognition if they could have completed the task on Friday? Is the employee who leaves early to watch their children's Little League game less dedicated to his or her job?

Stop working so much!

That last question hits close to home for me, because I am frequently hard on myself for having to leave or miss work for any reason; at the same time, I refuse to be one of those parents who doesn't get to see their children grow up. If time spent truly is a valid metric for measuring employees, then my career progression is possibly doomed to be a slow and arduous process.

Admittedly, time spent is a metric we instantly gravitate towards. I do, however, believe that it is, at the very least, an unreliable picture of the whole. As leaders and managers, we should strive to identify metrics that are specific to each position. By doing so we will get a much better idea of how well our employees are actually performing.

As an employee, I feel it is important to closely evaluate your own efficiency. If you can make yourself more efficient, then perhaps you can spend a few of those hours focusing on your personal life as well. A couple more hours spent catering to yourself will go a long way toward ensuring your happiness and overall well-being.


Our Move to Dot Net Core

I work at Synovia Solutions LLC, creators of the Silverlining fleet management software and Here Comes The Bus. Our solution installs hardware devices on vehicles that then report back to us over cellular. During peak times we are processing about 3,000 messages per second over UDP.

Our current system includes a monolithic Windows service that handles pretty much all aspects of message processing. It's written in .NET (currently 4.6.1) and runs on several physical machines located in a local data center. It uses SQL Server as a backend data store.

When I was brought on board, one of my primary tasks was to migrate the existing queuing infrastructure, several SQL Server tables, into a new queuing solution. We chose RabbitMQ via the hosted provider CloudAMQP. This was a pretty new paradigm for me, as I had never worked with anything other than MSMQ (GAG!).

After the initial implementation of RabbitMQ was written, we discovered a showstopper. To explain that, I'll need to cover a bit more about how this all works.

You see, the devices on the vehicle communicate over UDP only, but once we receive a message and persist it, we have to send an ACK message back to the device. If the device doesn't receive this ACK within 30 seconds, it retransmits the same message. With our existing infrastructure already strained, we found ourselves falling behind on inbound messages several times; as both the number of incoming messages and the average processing time increased during peak hours, we hit critical mass. If we had 3k messages in the queue to persist, and persisting was taking upwards of 10ms each, devices would begin retransmitting messages, which at that point were duplicates, and our queue would snowball.

This problem was only made worse by the fact that if the vehicle turned off before all of its data was sent and ACK'd, the data would reside on the device until the next time the vehicle's ignition was turned on, at which point it would again try to send it out. This was usually during another peak time, and so the cycle continued.

When we introduced hosted RabbitMQ this problem got worse, because now we had at least a 25ms round trip from our data center to the AWS data center where CloudAMQP was hosted. We could have opted to host RabbitMQ ourselves, but lacking a dedicated sysadmin, that just wasn't in the cards.

It was around this time that Dot Net Core was in beta, and we were looking at migrating our 'Listener' infrastructure into AWS to eliminate the 25ms round trip and move forward with RabbitMQ.

We had the idea to take it another step and write a Listener microservice. At the time I was really torn between using NodeJs, Python, Java, or risking it and using the beta version of Dot Net Core. The main requirements were:

* Be cross-platform; specifically, it had to run on Linux (Ubuntu)
* Be really freakin' fast
* Accept messages, persist them, and ACK them
* Be stateless
* Scale automagically
* Live behind a load balancer

That's it: small and fast. That last one though, the load balancer; yeah, there wasn't much in the way of UDP load balancing when we started the project. AWS didn't support UDP, NGINX didn't, and the only option I found at the time was Loadbalancer.org. Once I was about halfway done, NGINX released their UDP load balancing. AWS still doesn't support it.

At the end of the day we went with the following stack:

* Loadbalancer.org for balancing
* Ubuntu OS
* AWS OpsWorks
* Dot Net Core
* RabbitMQ
* Redis (for syncing sequence numbers across all instances)

There is still a slight bottleneck when persisting to RabbitMQ, because we have to use publisher confirms to ensure the message is persisted before we can send the ACK. The average time from start to finish is between 5 and 7ms.

That may not sound like much of an improvement, but it doesn't tell the whole story either. That's the time to process a single message, on a single instance, on a single thread; about 200 messages per second.

But when I use a typical producer/consumer model with 25 processing threads, we can hit 5,000 messages per second, and do so without incurring any additional latency, because RabbitMQ is just awesome. At that rate, here is my CPU utilization (from DataDog) over the last 24 hours (chart not shown):

Needless to say, we aren't even scratching the surface of what this thing can do. Of course, we have a few instances running behind the LB.ORG appliance, so we can easily handle 30k messages per second.

All of this runs on T2.Medium instances with 2 vCPUs and 4GB of RAM, costing roughly $300 per year per instance to run 24/7. We could save even more money by utilizing Spot Instances for peak times, but we just don't need to right now.

At the end of the day, it has been a pretty awesome experience learning all of these new technologies. RabbitMQ is amazing. But I have to give an enormous shout-out to the **Dot Net Core** team and MSFT. What they have done is really going to shake shit up in the development world.

Gone are the days of going to a Hackathon and being laughed at for not rocking Ruby or NodeJs. C# is an incredibly powerful language and now that it is truly cross-platform I think we are going to start seeing a major paradigm shift in the open-source world.

Some will say, "yeah, Java has been doing that forever," and I get that. Java is great. But what Java lacked, at least in comparison to .NET / C#, was Visual Studio. VS is, in my humble opinion, hands down the best development environment in existence.

Seriously, couple VS with JetBrains ReSharper and you have a code-churning productivity machine. Now add in Docker for Windows, and I can prototype and hack on a level never before possible; a level that is, at the very least, equal to that of any other language. I would probably even say it is superior.


BentBox.co Security Concerns

Please note that as of this writing, the majority of the problems discussed below have been addressed by the BentBox.co team. I will point out that they were fairly responsive and thankful for the issues that I presented them.

However, some of these problems still exist on their site.

On or around June 25th, I discovered several security issues with the website BentBox.co. This website provides a platform for photographers and other artists to sell their work. I reached out to a well-known security researcher whose name I won't mention until I get permission.

Following that individual's guidance, I contacted the folks at BentBox.co and provided them with the details of my findings. Over the course of the next few weeks we emailed a few times.

Below are the details of my findings.

BentBox.co vulnerability

Overview

Cookies are used to store session information. The cookies that are set contain 3 pieces of information:

  1. PHPSessionId
  2. Adult Content filter setting
  3. User Id

It is possible to sign in with a valid user name and password to get a valid PHPSessionId and then edit the cookies stored locally to insert a different User Id in order to access another user’s private account information.

You can easily obtain any user's user id value by browsing to one of their boxes and looking at the page source:

var loggedinUserId="";var userId="";

It is trivially simple to gain access to any user's private information in this manner. In fact, the entire website could easily be compromised with a simple script used to harvest user ids.

What information is accessible?

From my initial research it appears that everything related to a user’s account is accessible, including:

  1. Private account settings
  2. Private messages that have been sent / received
  3. All boxes and their content
  4. Payment Information

Other Information

Another major issue is that the website does not use HTTPS by default. This means that every time a page is loaded, the cookies containing the PHPSessionId value are transmitted in clear text. This is a major problem because it allows for trivial session hijacking.

How to Fix?

Here are some of my suggestions on how to resolve this problem:

  1. Enable HTTPS by default for ALL pages
  2. Do not store the userId in the cookie; store only the session id
  3. Map session ids to the correct account in memory, server side
  4. Enforce access-control checks on every page load, verifying that the session is still active and valid for the account
  5. Avoid using the user's id anywhere in the page, though it appears this would require significant work and may not be feasible
  6. Expire sessions after a shorter period of time

Using Moq to override calls to App.config

The other day I was working on a new implementation in our product to redo logging. I'm taking us from a custom file writer to Log4Net wrapped in a Facade.

To make this transition a bit smoother, and to allow us to roll back to the old style if something breaks, I also implemented a Factory pattern to provide the correct logger based upon the current App.Config settings.

To clarify: we are using Ninject for DI, and usually I would use the DI container to inject the correct implementation. However, we are also using the NinjectModule interface to set up bindings at runtime, based upon a compiled assembly. So instead, I'm using DI to inject the factory, and it can provide the correct implementation.
I'm sure there will be countless opinions both ways here, but it's convenient and makes sense in our project.

I had sketched up my interfaces and was ready to write unit tests when I discovered that I wasn't sure how to get NUnit to read from my App.Config, so I jumped on DuckDuckGo and ended up finding this article on CoryFoy.com.

This definitely gave me the direction I needed, but then all of a sudden I had a revelation: I've been wanting to learn Moq, and I thought this was a great opportunity to try it out.

So here is the interface for my LogFactory:

public interface ILogFactory
{
    ILog GetLogger();
    ILog GetLogger(string name);
    string GetServiceName { get; }
    string GetLoggerType { get; }
}

And its implementation:

 

public class LogFactory : ILogFactory
{
    public virtual ILog GetLogger()
    {
        if (GetLoggerType != null && GetLoggerType.Equals("Log4Net", StringComparison.InvariantCultureIgnoreCase))
        {
            return new Log4NetLogger(GetServiceName);
        }
        else
        {
            return new LogHelper(GetServiceName);
        }
    }

    public virtual ILog GetLogger(string name)
    {
        if (GetLoggerType != null && GetLoggerType.Equals("Log4Net", StringComparison.InvariantCultureIgnoreCase))
        {
            return new Log4NetLogger(name);
        }
        else
        {
            return new LogHelper(name);
        }
    }

    public virtual string GetServiceName
    {
        get
        {
            return ConfigurationManager.AppSettings["ServiceName"] ?? "Service";
        }
    }

    public virtual string GetLoggerType
    {
        get
        {
            return ConfigurationManager.AppSettings["Logger"] ?? "LogHelper";
        }
    }
}

As you can see, we are currently dipping into the App.Config to grab the settings and return the correct logger.

To test the implementation, I needed to override these members (OK, they are properties, but still):

GetLoggerType
// and
GetServiceName

To accomplish this, I set up my unit tests (I am using NUnit) and used Moq to override the properties:

var mock = new Mock<LogFactory>();
mock.Setup(f => f.GetServiceName).Returns("My Test");
mock.Setup(f => f.GetLoggerType).Returns("Log4Net");
mock.Setup(f => f.GetLogger()).CallBase();
LogFactoryLog4Net = mock.Object;

Then in the test you just call the GetLogger() method and Moq takes care of the rest for you:

[Test()]
public void LogFactoryLog4NetGetLogger()
{
    var factory = LogFactoryLog4Net.GetLogger();
    var type = factory.GetType();
    Assert.AreEqual("Log4NetLogger", type.Name);
}

Here is the full unit test for reference:

 

[TestFixture()]
public class LogFactoryTests
{
    public ILogFactory LogFactoryLog4Net { get; set; }
    public ILogFactory LogFactoryLogHelper { get; set; }

    public ILogFactory LogFactoryDefault { get; set; }

    [SetUp]
    public void Init()
    {
        var mock = new Mock<LogFactory>();
        mock.Setup(f => f.GetServiceName).Returns("Service Test");
        mock.Setup(f => f.GetLoggerType).Returns("Log4Net");
        mock.Setup(f => f.GetLogger()).CallBase();
        LogFactoryLog4Net = mock.Object;

        var mock2 = new Mock<LogFactory>();
        mock2.Setup(f => f.GetServiceName).Returns("Service Test");
        mock2.Setup(f => f.GetLoggerType).Returns("LogHelper");
        mock2.Setup(f => f.GetLogger()).CallBase();
        LogFactoryLogHelper = mock2.Object;

        var mock3 = new Mock<LogFactory>();
        mock3.Setup(f => f.GetServiceName).Returns("Service Test");
        mock3.Setup(f => f.GetLoggerType).Returns("INVALID");
        mock3.Setup(f => f.GetLogger()).CallBase();
        LogFactoryDefault = mock3.Object;
    }

    [Test()]
    public void LogFactoryLog4NetGetLogger()
    {
        var factory = LogFactoryLog4Net.GetLogger();
        var type = factory.GetType();
        Assert.AreEqual("Log4NetLogger", type.Name);
    }

    [Test()]
    public void LogFactoryLogHelperGetLogger()
    {
        var factory = LogFactoryLogHelper.GetLogger();
        var type = factory.GetType();
        Assert.AreEqual("LogHelper", type.Name);
    }

    [Test()]
    public void LogFactoryDefaultGetLogger()
    {
        var factory = LogFactoryDefault.GetLogger();
        var type = factory.GetType();
        Assert.AreEqual("LogHelper", type.Name);
    }
}

Hope that helps someone, and thanks for reading!
