This post describes a small, simple process for adding Continuous Deployment to projects where zero-downtime deployment is not required. With the following method, the containers need a few seconds to restart on each deployment, so it is not suitable for high-availability projects.
You will need a repository (GitHub, GitLab, Bitbucket, ...), a pipeline tool (Bitbucket Pipelines, CircleCI, GitHub Actions, GitLab CI, ...), an image registry (Docker Hub), and a server.
In this post, I use a Bitbucket repository, a Bitbucket pipeline and Docker Hub. You can of course adapt this to your own needs and tools!
Also, this is a simplified version. For sensitive projects, apply best practices and read more about Docker, security, CI/CD, etc.
Why it is worth the time to set up automated deployment for a project:
First, set up the variables you need from the settings of your repository:
$BITBUCKET_COMMIT is a variable provided by Bitbucket that contains the commit ID; I use it as a tag for the Docker image.
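For reference, the pipeline below relies on the following repository variables (define them in your repository settings; the names are the ones used in this post, adapt them as you like):

```shell
DOCKER_HUB_USER_TOKEN_NAME   # Docker Hub username associated with the token
DOCKER_HUB_TOKEN             # Docker Hub access token
IMAGE                        # full image name, e.g. user/project
DEPLOY_SERVER                # IP or hostname of the deployment server
```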
```yaml
# bitbucket-pipelines.yml
image: node:12

pipelines:
  branches:
    production:
      - step:
          name: Build & Push to registry
          script:
            - docker login --username $DOCKER_HUB_USER_TOKEN_NAME --password $DOCKER_HUB_TOKEN
            - docker build -t $IMAGE:$BITBUCKET_COMMIT -t $IMAGE:production .
            - docker push $IMAGE:$BITBUCKET_COMMIT
          caches:
            - docker
          services:
            - docker
      - step:
          name: Deploy to production
          deployment: production
          # Comment out this trigger to deploy automatically on production.
          # It is better to run tests before deploying.
          trigger: manual
          script:
            - echo " [+] Start deploy script on the server"
            - ssh root@$DEPLOY_SERVER "/docker/deploy.sh $BITBUCKET_COMMIT"

options:
  docker: true

definitions:
  services:
    docker:
      memory: 2048
```
In the step "Build & Push to registry", the pipeline first logs in to the registry using a token you generated on Docker Hub (go to Settings > Security). It then builds the image using the Dockerfile at the root of your repository and adds two tags to it (the commit ID and production). Finally, it pushes the image to the registry.
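The pipeline assumes a Dockerfile at the root of the repository. A minimal sketch for a Node 12 app (the file layout and start command are assumptions, adapt them to your project):

```dockerfile
# Dockerfile
FROM node:12
WORKDIR /app
# Install dependencies first to benefit from layer caching
COPY package*.json ./
RUN npm ci --production
COPY . .
CMD ["node", "index.js"]
```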
The following part listens for new commits on the production branch. You can replace it with master depending on your branch names.
```yaml
branches:
  production:
    - step:
```
```yaml
- step:
    name: Deploy to production
    deployment: production
    trigger: manual
```
The manual trigger is recommended: it prevents the few seconds of downtime from happening at a moment when you need the service to be highly available (during the day, for example).
```yaml
options:
  docker: true

definitions:
  services:
    docker:
      memory: 2048
```
The last part enables Docker and raises the default memory limit, preventing the pipeline from running out of memory.
The following script lives on the server, in the folder I specified in the pipeline (/docker):
Make it executable:
chmod +x deploy.sh
```bash
#!/bin/bash

if [ -z "$1" ]; then
  echo " [!] No argument supplied, this script expects a docker tag to run."
  exit 1
fi

tag=$1
image="user/project"

echo "[>] Starting deployment"

echo " [+] Remove containers, volumes and networks older than 1 week..."
docker system prune --force --filter "until=168h"

cd /docker

echo " [+] Bitbucket commit ID: $tag"
echo " [+] Pull image $image:$tag"
pull=$(docker pull $image:$tag)

# Check if docker pull returned an empty string
if [[ -z "$pull" ]]; then
  echo " [!] Failed to pull image with tag $tag"
  exit 1
fi

echo " [+] Start (or restart) containers: docker-compose up -d"
TAG=$tag docker-compose up -d

echo "[>] Deployment done."
```
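For the TAG variable set by the script to take effect, the docker-compose.yml on the server has to reference it in the image tag. A minimal sketch (the service name and port mapping are assumptions):

```yaml
# /docker/docker-compose.yml
version: "3"
services:
  app:
    # ${TAG} is substituted from the environment, e.g. TAG=$tag docker-compose up -d
    image: user/project:${TAG}
    restart: always
    ports:
      - "80:3000"
```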
Tip: if you want to debug this script, add `set -x` at the beginning to get detailed output of each command as it runs.
If you need to roll back to a previous Docker image, you can either rerun the deployment step of an earlier commit, or SSH to the server and run:

`TAG=abcdefg docker-compose up -d` (where abcdefg is the commit ID you want to roll back to)
Connect Bitbucket Pipelines to Slack to get notifications on events such as a failed pipeline.