Running background tasks on a schedule is a standard requirement of backend services. Getting set up used to be simple – you'd define your tasks in your server's crontab and call it a day. Let's look at how you can utilise cron while using Docker for deployment.
Containerising your services increases developer productivity. Simultaneously, it can leave you wondering how traditional sysadmin concerns map to Docker concepts. You've got several options when using cron with Docker containers, and we'll explore them below in order of suitability. Before continuing, make sure you've built a Docker image of your application.
Using the Host's Crontab
At its most basic, you can always utilise the cron installation of the host that's running your Docker Engine. Make sure cron is installed and then edit the system's crontab as normal.
You can use docker exec to run a command within an existing container:
*/5 * * * * docker exec example_app_container /example-scheduled-task.sh
This will only work if you can be sure of the container's name ahead of time. It's normally better to create a new container which exists solely to run the task:

*/5 * * * * docker run --rm example_app_image:latest /example-scheduled-task.sh
Every five minutes, your system's cron installation will create a new Docker container using your app's image. Docker will execute the /example-scheduled-task.sh script within the container. The container will be destroyed (--rm) once the script exits.
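As with any cron task, it's worth capturing the job's output somewhere so failures aren't silent. One way is to append it to a log file on the host – the log path here is an assumption for illustration:

```crontab
# Append both stdout and stderr of the containerised task to a host log file
*/5 * * * * docker run --rm example_app_image:latest /example-scheduled-task.sh >> /var/log/example-task.log 2>&1
```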
Using Cron Within Your Containers
Using the hostâ€™s
crontab breaks Dockerâ€™s containerisation as the scheduled tasks require manual setup on your system. Youâ€™ll need to ensure
cron is installed on each host you deploy to. While it can be useful in development, you should look to integrate
cron into your Dockerised services when possible.
Most popular Docker base images do not include the cron daemon by default. You can install it within your Dockerfile and then register your application's crontab.

First, create a new crontab file within your codebase:
*/5 * * * * /usr/bin/sh /example-scheduled-task.sh
Next, amend your Dockerfile to install cron and register your crontab – here's how you can do that with a Debian-based image:
RUN apt-get update && apt-get install -y cron
COPY example-crontab /etc/cron.d/example-crontab
RUN chmod 0644 /etc/cron.d/example-crontab && crontab /etc/cron.d/example-crontab
These steps install cron and copy our codebase's crontab into the /etc/cron.d directory. Next, we amend the permissions on our crontab to make sure it's accessible to cron. Finally, we use the crontab command to make the file known to the cron daemon.
To complete this setup, you'll need to amend your image's command or entrypoint to start the cron daemon when containers begin to run. You can't achieve this with a RUN stage in your Dockerfile because these are transient steps which don't persist beyond the image's build phase. The service would be started within the ephemeral container used to build the layer, not the final containers running the completed image.
If your container's only task is to run cron – which we'll discuss more below – you can add ENTRYPOINT ["cron", "-f"] to your Dockerfile to launch it as the foreground process. If you need to keep another process in the foreground, such as a web server, you should create a dedicated entrypoint script (e.g. ENTRYPOINT ["bash", "init.sh"]) and add service cron start as a command within that file.
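Such an entrypoint script could look like the following minimal sketch; the final command is a hypothetical stand-in for whatever foreground process your image actually runs:

```shell
#!/bin/bash
# init.sh - start cron as a background service, then hand the
# foreground over to the main application process.
set -e

service cron start   # launch the cron daemon in the background

# Replace with your real foreground process (assumed here); exec
# replaces the shell so the process receives Docker's stop signals.
exec python app.py
```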
Separating Cron From Your Application's Services
Implementing the setup described in the preceding section provides a more robust solution than relying on the host's crontab. Adding the cron daemon to the containers that serve your application ensures anyone consuming your Docker image will have scheduled tasks set up automatically.
This still results in a mixing of concerns, though. Your containers end up with two responsibilities – firstly, to provide the application's functionality, and secondly, to keep cron alive and run the scheduled tasks. Ideally, each container should provide one specific unit of functionality.
Wherever possible, you should run your cron tasks in a separate container to your application. If you're creating a web backend, that would mean one container to provide your web server and another which runs cron in the foreground.
Without this separation, you'll be unable to use an orchestrator like Docker Swarm or Kubernetes to run multiple replicas of your application. Each container would run its own cron daemon, causing scheduled tasks to run multiple times. This can be mitigated by using lock files bound into a shared Docker volume. Nonetheless, it's more maintainable to address the root problem and introduce a dedicated container for the cron daemon.
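If you do need that stopgap, flock(1) can take an exclusive lock on a file in the shared volume so that only one replica runs the task at a time; the volume mount path here is an assumption:

```crontab
# flock -n exits immediately if another replica already holds the lock,
# so overlapping runs are skipped rather than queued
*/5 * * * * flock -n /shared-data/example-task.lock /example-scheduled-task.sh
```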
Generally, you'll want both containers to be based on your application's Docker image. They'll each need connections to your service's Docker volumes and networks. This will ensure the cron container has an identical environment to the application container, with the only difference being the foreground process.
This is not a hard-and-fast rule – in some projects, your scheduled tasks might be trivial scripts which operate independently of your codebase. In that case, the cron container may use a minimal base image and do away with connections to unnecessary peripheral resources.
One way to get set up with a separate cron container would be to use docker-compose. You'd define the cron container as an extra service. You could use your application's base image, overriding the entrypoint to start the cron daemon. Using docker-compose also simplifies attaching the container to any shared volumes and networks it requires.
version: "3"

services:
  app:
    image: demo-image:latest
    volumes:
      - data:/app-data
  cron:
    image: demo-image:latest
    entrypoint: ["cron", "-f"]
    volumes:
      - data:/app-data

volumes:
  data:
Using the above example, one container serves our application using the default entrypoint in the image. Make sure this does not start the cron daemon! The second container overrides the image's entrypoint to run cron in the foreground. As long as the image still has cron installed and your crontab configured, you can use docker-compose up to bring up your application.
Using Kubernetes Cron Jobs
Finally, let's look at a simple example of running scheduled tasks within Kubernetes. Kubernetes comes with its own CronJob resource which you can use in your manifests.
You don't need to install cron in your image or set up specialised containers if you're using Kubernetes. Be aware that CronJob is a beta resource which may change in future Kubernetes releases.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cron
  namespace: my-namespace
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: my-container
              image: my-image:latest
              command: ["/bin/bash", "/my-cron-script.sh"]
          restartPolicy: OnFailure
Apply the above manifest to your cluster to create a new cron job which will run /my-cron-script.sh within your container every five minutes. The frequency is given as a regular cron definition to the schedule key in the resource's spec.
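Assuming the manifest is saved as my-cron.yaml (a hypothetical filename), you can apply it and then watch the Jobs it spawns on each scheduled run:

```shell
# Create the CronJob, then inspect it and the Jobs it creates
kubectl apply -f my-cron.yaml
kubectl get cronjob my-cron -n my-namespace
kubectl get jobs -n my-namespace --watch
```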
You can customise the concurrencyPolicy to control whether Kubernetes allows your jobs to overlap. It defaults to Allow but can be changed to Forbid (prevent new jobs from starting while one already exists) or Replace (terminate an existing job as soon as a new one starts).
Using Kubernetes's built-in resource is the recommended way to manage scheduled tasks within your clusters. You can easily access job logs and don't need to worry about preparing your containers for use with cron. You just need to produce a Docker image which contains everything your tasks need to run. Kubernetes will handle creating and destroying container instances on the schedule you specify.