Install Ghost on Docker (Ubuntu 18.04)


Why Docker? Docker is the world's leading software container platform.

Upgrading everything by hand is a pain, from Node.js to Ghost-CLI and everything else. Who has time for all that? On top of that, if I want to move my blog somewhere else, I have to do a thousand other things, including database migration/installation. Wouldn't it be easier to upgrade with one command and have something portable you can take with you anywhere? Even locally...
I spent the whole day today learning about Docker, and I found two ways to install Ghost with it. I'll post both.

2 Methods

As far as I've gotten into Docker, there are two kinds of Docker volumes: bind mounts and managed volumes.

When should you use a managed volume, and when is it appropriate to use a bind mount?

Bind mounts are great for local development. They provide convenience through the fact that changes to the host’s development environment and code are immediately reflected within the container, and files that the application creates in the container, like build artifacts or a log file, become available from the host.

However, bind mounts do come with some security concerns.


Bind mount volume:
A bind mount is set up at runtime: it maps a directory on the host machine to a directory in the container so the two can share files. When the container is removed, the host directory is not affected.
If the -v or --volume flag's value is a path, it is assumed to be a bind mount. If the host directory does not exist, it will be created.
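
For instance, the run command for a bind mount just takes a host path on the left side of -v. This little sketch only assembles and prints the command (the paths are illustrative) so you can inspect it before running anything:

```shell
#!/bin/sh
# A path on the left side of -v marks the volume as a bind mount.
host_dir=/var/www/myBlog/content        # directory on the host (illustrative)
container_dir=/var/lib/ghost/content    # Ghost's content directory in the image
cmd="docker run -d --name myBlog -p 3001:2368 -v ${host_dir}:${container_dir} ghost:latest"
echo "$cmd"
```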

Managed volume:
A managed volume doesn't have a user-specified source directory; it is created and stored by Docker itself. Managed volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure of the host machine, volumes are completely managed by Docker. Volumes have several advantages over bind mounts:

  • Volumes are easier to back up or migrate than bind mounts.
  • You can manage volumes using Docker CLI commands or the Docker API.
  • Volumes work on both Linux and Windows containers.
  • Volumes can be more safely shared among multiple containers.
  • Volume drivers allow you to store volumes on remote hosts or cloud providers, to encrypt the contents of volumes, or to add other functionality.
  • A new volume’s contents can be pre-populated by a container.

In addition, volumes are often a better choice than persisting data in a container’s writable layer, because using a volume does not increase the size of containers using it, and the volume’s contents exist outside the lifecycle of a given container.
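
By contrast, a managed (named) volume is requested with a name instead of a path on the left side of -v. Again, this is just a sketch that prints the commands for review; the volume name is one I made up:

```shell
#!/bin/sh
# A bare name on the left side of -v makes Docker create/use a managed
# volume, stored under /var/lib/docker/volumes/ on the host.
volume_name=ghost_content               # illustrative volume name
create_cmd="docker volume create ${volume_name}"
run_cmd="docker run -d --name myBlog -p 3001:2368 -v ${volume_name}:/var/lib/ghost/content ghost:latest"
echo "$create_cmd"
echo "$run_cmd"
```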

Docker images
They're used to configure and distribute application state. Think of an image as a template from which to create containers.

With a Docker image, we can quickly spin up containers with the same configuration. We can then share these images with our team, so we will all be running containers which all have the same configuration.

There are several ways to create Docker images, but the best way is to create a Dockerfile and use the docker build command.
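
As a sketch of that workflow, the snippet below writes a minimal Dockerfile that extends the ghost image (the theme path is a made-up example) and prints the build command without running it:

```shell
#!/bin/sh
# Write a minimal Dockerfile into a scratch directory; nothing is built here.
workdir=$(mktemp -d)
cat > "${workdir}/Dockerfile" <<'EOF'
FROM ghost:latest
# Bake a custom theme into the image (path is illustrative)
COPY ./my-theme /var/lib/ghost/content/themes/my-theme
EOF
echo "docker build -t my-ghost ${workdir}"
```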

How volumes work:
When you create a managed volume, its data lives in Docker's own storage area, and the host's directory layout isn't directly affected by its contents; to get files into it you either use the Dockerfile COPY instruction or interact with the volume's storage path on the host.
The source is mounted on top of whatever exists at that directory inside the container. As with any mount in Linux, whatever the parent filesystem had at that location is no longer visible while the mount is in place.
In short, mapping a volume is basically how you move data into (and persist data out of) the container.

Clean containers, images, volumes and networks

The docker system prune command will remove all stopped containers, all dangling images, and all unused networks:

docker system prune

Managing containers:

View active containers: docker ps
View the latest container you created: docker ps -l
View all containers: docker ps -as (a for all, s for size)
Interact with container: docker exec -it CONTAINER_ID bash
Start container: docker start container_ID/name
Stop container: docker stop container_ID/name
Restart container: docker restart container_ID/name
Remove container: docker rm name
Copy files from Container -> Local Host: docker cp <containerID>:/var/lib/ghost/config.production.json /var/www/myBlog
Copy files from Local Host -> Container: docker cp /host/local/path/file <containerID>:/file/path/in/container/file

Managing images:

List images: docker image ls
Inspect an image: docker image inspect ghost:latest
Remove one or more images: docker image rm 75835a67d134 2a4cca5ac898
Remove dangled and unused images: docker image prune
Remove all unused images: docker image prune -a

Managing Volumes:

List volumes: docker volume ls
View all volumes stored: ls /var/lib/docker/volumes/
Remove volumes: docker volume rm 4e12af8913dec784196db64293163
Cleaning up unused volumes: docker system prune --volumes
Remove all unused volumes: docker volume prune

Managing Network:

List all networks: docker network ls
Remove networks: docker network rm c520032c3d31
Remove all unused networks: docker network prune

Choose your weapon (Install method 1 or 2)

In this guide I'll go over how to install Ghost using both types of volumes (bind mount and managed). You are free to use whichever you want. I'll start with the simplest one, the one suited for dev (bind mount).

On install 1 I'll use a plain sudo docker run command, and on install 2 I'll use the docker-compose method. Both achieve the same result.

Before continuing make sure you have docker installed:

Server setup

Before proceeding below make sure your server is configured

Install docker

sudo apt update -y && sudo apt install -y apt-transport-https ca-certificates curl software-properties-common && curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - && sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" && sudo apt update && apt-cache policy docker-ce && sudo apt install -y docker-ce

You can check if it installed correctly: sudo systemctl status docker

Install docker-compose

We are going to install Docker Compose using the following:

sudo apt install -y python-dev libffi-dev libc-dev make && sudo curl -L "https://github.com/docker/compose/releases/download/<COMPOSE_VERSION>/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose && sudo chmod +x /usr/local/bin/docker-compose

Test the installation: docker-compose --version

Executing the Docker Command Without Sudo (Optional)

By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker's installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you'll get an output like this:

docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:
sudo usermod -aG docker ${USER}
To apply the new group membership, log out of the server and back in.


If you are doing a fresh install of Ghost with no previous data, skip this step.

Otherwise, do the backup below first, and once you finish the installation at the bottom, come back here to restore it.

If you are migrating servers, save these folders/files from your old install of Ghost:

  • /ghost/content. Here we are using SQLite, so the database will be included inside data/ghost.db; if you are coming from MySQL, don't worry about the DB, it'll be created for you.
  • config.production.json

Everything below should already be in the folder you just copied, but sometimes you may need to import it manually once Ghost is running, so it doesn't hurt to back it up separately:

  • In Settings, save your publication icon, publication logo, publication cover, and installed themes (these should be in the themes folder inside content, but check anyway). In the Labs section of Settings, export your content, redirects, and routes.
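
The copy step above can also be scripted. This is just a sketch, assuming a bind-mounted layout like the one used later in this guide; backup_ghost and the example paths are my own names:

```shell
#!/bin/sh
# backup_ghost SRC_DIR DEST_DIR: tar up a Ghost folder (content plus
# config.production.json) into a timestamped archive in DEST_DIR.
backup_ghost() {
    src=$1
    dest=$2
    stamp=$(date +%Y%m%d-%H%M%S)
    tar -czf "${dest}/ghost-backup-${stamp}.tar.gz" -C "${src}" .
}
# Usage (illustrative): backup_ghost /var/www/myBlog /home/ubuntu/backups
```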

Install Method 1: Bind-Mount volume

What am I using on this setup:

  • Docker, using the latest Ghost image from Docker Hub; it's maintained by the Docker community, and is therefore unofficial as far as Ghost is concerned.
  • SQLite database
  • Nginx as reverse proxy

Delete the sample Nginx configs:

sudo rm /etc/nginx/sites-available/default && sudo rm /etc/nginx/sites-enabled/default && sudo rm /etc/nginx/conf.d/default

Now let's create our Nginx config file:

sudo touch /etc/nginx/sites-available/myBlog.conf && cd /etc/nginx/sites-available/ && sudo ln -s /etc/nginx/sites-available/myBlog.conf /etc/nginx/sites-enabled/myBlog.conf

and paste the code below inside: sudo vim myBlog.conf

server {
    listen 80;
    server_name example.com;  # replace with your domain

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:3001;
    }

    location ~ /.well-known {
        allow all;
    }

    client_max_body_size 50m;
}

Now reload Nginx: sudo systemctl reload nginx

Install Ghost:
Let's first create the directories we'll need and apply the right permissions:

sudo mkdir -p /var/www/myBlog/content && cd /var/www/myBlog

Time to create the image and volumes:

Here is where you make sure you have your backup in those locations, for example, config.production.json and content

sudo docker run -d -e url=https://example.com --name myBlog -p 3001:2368 -v /var/www/myBlog/content:/var/lib/ghost/content -v /var/www/myBlog/config.production.json:/var/lib/ghost/config.production.json ghost:latest

The above command breaks down like this:

docker run is the main command that says we're going to run a command in a new container.
-d runs the container detached, in the background.
-e url= sets the url environment variable that Ghost uses as its public address.
-p 3001:2368 publishes the container's port 2368 on the host's port 3001.
--name says what follows is the name of the new container.
-v says what follows is to be a bind-mount volume (host path, then container path).
ghost:latest is the image to be used for the container.
myBlog is the name of the container.

Check if the container you created is active: sudo docker ps

Set the correct permissions

sudo chmod 775 -R /var/www/myBlog

I prefer to create a script to apply all the permissions at once:

mkdir /home/ubuntu/scripts && sudo vim /home/ubuntu/scripts/

Then copy the code below inside that file:

#!/bin/bash
# Set ownership for everything under /var/www/
# (the second chown would override the first, so keep only the one you need)
# chown -R www-data:www-data /var/www/
chown -R ubuntu:root /var/www/

# set files to 644 [except *.pl *.cgi *.sh]
find /var/www/ -type f -not -name "*.pl" -not -name "*.cgi" -not -name "*.sh" -print0 | xargs -0 chmod 0644

# set folders to 755
find /var/www/ -type d -print0 | xargs -0 chmod 0755

Now give execute rights to the file you created:

sudo chmod 0750 /home/ubuntu/scripts/

Run the script: cd ~/scripts && sudo ./

Install Method 2: Managed volume

Honestly, I found method 1 very useful, so I'll stick with it for now. If someone finds a decent guide on method 2, let me know; or if I use it in the future, I'll update it here :)
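
For anyone who wants to try it anyway, a managed-volume setup with docker-compose might look roughly like this. This is an untested sketch: the url value is a placeholder, and ghost_content is just a name I picked:

```yaml
version: "3"
services:
  ghost:
    image: ghost:latest
    container_name: myBlog
    ports:
      - "3001:2368"
    environment:
      url: https://example.com   # placeholder; use your blog's URL
    volumes:
      - ghost_content:/var/lib/ghost/content
    restart: always

volumes:
  ghost_content:
```

With this file saved as docker-compose.yml, running docker-compose up -d would start the container and create the managed volume in one go.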

Update Ghost to a newer version

Before attempting to change anything I suggest you backup your server.
You'll be surprised how simple it is 🤓

  1. Stop containers: docker stop myBlog
  2. Remove container: docker rm myBlog
  3. Look up your current image and remove it: docker image ls, then docker image rm <imageID>
  4. Clean containers, images, volumes and networks: docker system prune
  5. Time to install new ghost docker images:

sudo docker run -d -e url=https://example.com --name myBlog -p 3001:2368 -v /var/www/myBlog/content:/var/lib/ghost/content -v /var/www/myBlog/config.production.json:/var/lib/ghost/config.production.json ghost:latest

Here we are mapping the content folder as well as config.production.json. Make sure to apply the correct permissions after that; I recommend creating a script such as the permissions one above, so you can just run it every time you upgrade.
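
The five steps above can be collected into one helper. This sketch only prints the upgrade commands for review (docker pull stands in for the manual image lookup and removal in steps 3 and 4, and the url value is a placeholder):

```shell
#!/bin/sh
# Print the upgrade sequence for a bind-mounted Ghost container.
# Review the output, then run it by hand (or pipe it to sh).
upgrade_commands() {
    name=$1
    echo "docker stop ${name}"
    echo "docker rm ${name}"
    echo "docker pull ghost:latest"
    echo "docker system prune -f"
    echo "docker run -d -e url=https://example.com --name ${name} -p 3001:2368 -v /var/www/${name}/content:/var/lib/ghost/content -v /var/www/${name}/config.production.json:/var/lib/ghost/config.production.json ghost:latest"
}

upgrade_commands myBlog
```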

That's it! Simple update/upgrade! 🤓 No node.js updating or other stuff...