Running Redis with resilience in Linux containers on Windows — Part 1
We’ve gone through the installation of Docker Desktop and have some basic background information on registries, repositories, images and tags.
It’s now time for a non-trivial example: running a resilient Redis deployment in Linux containers on modern Windows.
In Part 1, we’ll get a single fully-configured Redis server container up and running.
Although not strictly necessary, since docker run will pull an image for us (much of the beauty of Docker and Docker Compose is being able to get up and running with a single command), let's pull the image ourselves from the Docker registry.
docker pull redis
For completeness, it is possible to search the Docker registry (only) from the command line.
docker search redis
We can also get the history of a local image.
docker history redis
Let's run a new Redis container with some simple options.
docker run --detach --name my-redis --publish 6379:6379 redis
- --detach: run the container in the background
- --name: assign a name to the container (my-redis)
- --publish: publish a container's port(s) to the host (6379 on both host and container)
- IMAGE_NAME: redis
Exciting times indeed! We can check that our container is now running.
docker ps
There are a number of informational commands (inspect, top, logs, stats and port) which can be run against the container.
# return low-level information about a container
docker inspect my-redis

# display the running processes of a container
docker top my-redis

# fetch the logs of a container
docker logs my-redis

# display a live stream of container(s) resource usage statistics
# can also be run continuously (without --no-stream)
docker stats --no-stream my-redis

# list port mappings or a specific mapping for the container
docker port my-redis
We can also start, stop, restart, pause, unpause, kill, and rm (remove) the container.
# stop one or more running containers
# waits for 10 seconds by default before killing
docker stop my-redis

# only running containers shown
docker ps

# use the "--all" option to show stopped containers as well
docker ps --all

# start one or more stopped containers
docker start my-redis

# restart one or more containers
docker restart my-redis

# pause all processes within one or more containers
docker pause my-redis

# shows as "Paused"
docker ps

# un-pause all processes within one or more containers
docker unpause my-redis

# kill one or more running containers
# doesn't wait for it to stop gracefully, just tears it down
docker kill my-redis

# remove one or more containers
# use the "--force" option if it is running
docker rm my-redis
Redis is persisting the data to disk (i.e. into a thin read/write layer in the container), so stopping/starting or restarting will preserve current state.
We can fire up (exec) the Redis CLI and run a few commands (if we've removed the container, we'll need to re-run it).
docker exec --interactive --tty my-redis redis-cli

> SET key "value"
> SAVE
> exit
Let's run a command against the running container to get a bash shell.
docker exec --interactive --tty my-redis /bin/bash
We'll arrive in the /data directory, which is where Redis, by default, persists its data as an RDB (Redis Database) file. This is something we'll change later.
Three directories are of particular interest when customising our Redis instances:
- /data: the directory where data is persisted
- /usr/local/etc/redis: the directory from/to which configuration is read and written
- /var/log/redis: the directory where logs are written
We're going to map each to a persistent volume on the Docker host using a bind mount. To this end, we've created the following directory structure in the root of the C: drive.
We’ve created an empty file named bar.txt
in each, and run a container.
docker run --detach --name my-redis --publish 6379:6379 --volume "C:\redis\conf:/usr/local/etc/redis:rw" --volume "C:\redis\data:/data:rw" --volume "C:\redis\logs:/var/log/redis:rw" redis
- --volume: bind mount a volume (a read/write volume for each of conf, data and logs)
The structure of the volume option is:
host_path:container_path:option(s)
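To make that structure concrete, here's a small shell sketch that splits a volume specification into its three fields. Note the host path here is Linux-style and purely illustrative: a Windows path such as C:\redis\data contains a colon of its own, which Docker handles specially.

```shell
#!/bin/sh
# Split a volume specification of the form host_path:container_path:option(s).
spec="/srv/redis/data:/data:rw"

host_path=$(echo "$spec" | cut -d':' -f1)
container_path=$(echo "$spec" | cut -d':' -f2)
options=$(echo "$spec" | cut -d':' -f3)

echo "host:      $host_path"
echo "container: $container_path"
echo "options:   $options"
```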
For awareness, a more explicit/expressive way to mount a volume is available and recommended.
docker run --detach --mount "type=bind,source=C:\redis\conf,target=/usr/local/etc/redis" --mount "type=bind,source=C:\redis\data,target=/data" --mount "type=bind,source=C:\redis\logs,target=/var/log/redis" --name my-redis --publish 6379:6379 redis
- --mount: bind mount a volume (a read/write volume for each of conf, data and logs)
Success can be verified by running a bash shell inside the container, writing a file (e.g. foo.txt
) and listing the contents of each directory (which should contain both foo.txt
and bar.txt
), moving through each directory in turn.
It’s time to configure Redis properly, including a password.
The redis.conf from Redis 6.2.6 is used as a base configuration file and can be found here (it's not embedded directly due to its size). Save it to our conf directory (C:\redis\conf) as redis-base.conf. Then save the following, to the same local folder, as redis.conf.
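The customised redis.conf itself isn't reproduced here, but given the directories we've bind-mounted and the password used with redis-benchmark later, a minimal sketch might look like the following (the include of the base file and the exact log file name are assumptions):

```conf
# Pull in the unmodified Redis 6.2.6 defaults saved alongside this file
include /usr/local/etc/redis/redis-base.conf

# Require clients to authenticate (the password used later with redis-benchmark)
requirepass Tby1YbsU9yS6RX3YmaXGpb1pXrjgBRgc

# Persist data into the bind-mounted /data directory
dir /data

# Write logs into the bind-mounted /var/log/redis directory
logfile /var/log/redis/redis-server.log
```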
Now, run a new container.
docker run --detach --name my-redis --publish 6379:6379 --volume "C:\redis\conf:/usr/local/etc/redis:rw" --volume "C:\redis\data:/data:rw" --volume "C:\redis\logs:/var/log/redis:rw" redis redis-server /usr/local/etc/redis/redis.conf
- COMMAND_1 = redis-server (as opposed to redis-sentinel)
- COMMAND_2 = /usr/local/etc/redis/redis.conf (container-based path to the Redis configuration file)
If we look at the Redis log file, we may observe the following warning:
WARNING overcommit_memory is set to 0! Background save may fail under low memory condition.
To fix this, we need to launch a WSL prompt.
Then run the following commands.
# run as root (or prefix each command with sudo)
echo "vm.overcommit_memory=1" >> /etc/sysctl.conf
sysctl vm.overcommit_memory=1
Then restart the container, checking the logs to verify the fix.
docker restart my-redis
Finally, for this article anyway, run the built-in Redis benchmark, authenticating (-a) with the password from our configuration.
docker exec my-redis redis-benchmark -a Tby1YbsU9yS6RX3YmaXGpb1pXrjgBRgc
Okay, we’re done for now; tune in shortly for Part 2 (by way of Part 1.5), which will expand on this deployment to deliver high-availability and greater resilience.
Next time…