Automating a self-hosted Ghost blog platform


Technology has come a long way since I first touched HTML in the early 2000s. In searching for a platform to publish blog posts in recent years, I ran my own static site in Azure, dabbled on Medium, and even tried Hashnode. Ultimately I found that none of these offerings could match the convenience I desired while maintaining ownership over the content and the underlying infrastructure delivering it! Enter Ghost.

What is Ghost?

Ghost is a content management system (CMS) and blogging platform that is designed for bloggers, publishers, and content creators. It was created in 2013 as an alternative to WordPress, with a focus on simplicity, speed, and user experience.
Ghost is also open-source, built on Node.js and uses the Handlebars templating language, which allows for easy customization and theme development.
One of the key features of Ghost is its minimalist editor, which supports Markdown formatting. Ghost also includes features like built-in SEO optimization, social sharing, and content scheduling, making it a powerful platform for content creators.

Management and Hosting Options

It's quite reasonable to pay Ghost $9 to manage the platform for you, or even $5 to run it yourself on a DigitalOcean droplet. But I have a fairly robust and stable lab environment with compute to spare, so I made the time to roll up my sleeves and dig in.


I self-host many other services and want to deny communication between Ghost and my local network. I also didn't want to forward ports for HTTP/S. Another requirement was that the virtual machine could be brought up quickly and repeatably in the event something goes wrong. For this I turned to the infrastructure-as-code (IaC) tools Terraform and Ansible, along with Docker for containerized services.

What is Terraform?

Terraform is an open-source infrastructure-as-code tool developed by HashiCorp that allows users to define and manage infrastructure resources in a declarative manner, using configuration files. With Terraform, users can define infrastructure resources such as virtual machines, networks, storage, and security policies in a simple and consistent way, and then deploy and manage them across multiple cloud providers and on-premises data centers. Terraform provides a powerful and flexible way to automate infrastructure provisioning and management, enabling teams to increase efficiency, reduce errors, and achieve better consistency across their infrastructure.


What is Ansible?

Ansible is an open-source IT automation tool developed by Red Hat that allows users to automate deployment, configuration, and management of systems and applications. With Ansible, users can define tasks and playbooks in a simple and human-readable language, which are then executed on remote hosts using SSH or other remote communication protocols. Ansible supports a wide range of systems, including Linux, Windows, network devices, and cloud platforms. It provides a powerful and flexible way to automate routine tasks, reduce manual errors, and improve scalability and consistency of IT operations.

Planning - GitHub repositories for source control

I decided to split the project across two repositories. The first would handle Terraform code and Ansible playbooks. The second would handle the Docker compose files.

Here's a diagram of the workflow:



The Terraform code uses the vSphere provider to build out a VM. It places the Ghost VM in a particular vSphere datacenter and datastore, in a particular cluster, and on its own network. The VM is cloned from a generic Ubuntu template I maintain. Another option would be to leverage something like Packer to build the VM image from scratch.

provider "vsphere" {
  user                 = var.vsphere_user
  password             = var.vsphere_password
  vsphere_server       = var.vsphere_server
  allow_unverified_ssl = true
}

data "vsphere_datacenter" "datacenter" {
  name = var.vsphere_datacenter
}

data "vsphere_datastore" "datastore" {
  name          = var.vsphere_datastore
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_compute_cluster" "cluster" {
  name          = var.vsphere_cluster
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_network" "network" {
  name          = var.vm_network
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_virtual_machine" "template" {
  name          = var.vm_template
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

resource "vsphere_virtual_machine" "vm" {
  name             = var.vm_name
  resource_pool_id = data.vsphere_compute_cluster.cluster.resource_pool_id
  datastore_id     = data.vsphere_datastore.datastore.id
  num_cpus         = var.vm_cpus
  memory           = var.vm_memory
  guest_id         = data.vsphere_virtual_machine.template.guest_id
  scsi_type        = data.vsphere_virtual_machine.template.scsi_type

  network_interface {
    network_id   = data.vsphere_network.network.id
    adapter_type = data.vsphere_virtual_machine.template.network_interface_types[0]
  }

  disk {
    label            = "disk0"
    size             = data.vsphere_virtual_machine.template.disks.0.size
    thin_provisioned = data.vsphere_virtual_machine.template.disks.0.thin_provisioned
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id

    customize {
      linux_options {
        host_name = var.vm_name
        domain    = var.vm_domain
      }

      network_interface {
        ipv4_address = var.ipv4_address
        ipv4_netmask = var.ipv4_netmask
      }

      ipv4_gateway    = var.ipv4_gateway
      dns_server_list = [var.vm_dns_server]
    }
  }
}

These variables are defined in my variables file:

variable "vsphere_user" {
  description = "Username for vSphere API access"
  type        = string
}

variable "vsphere_password" {
  description = "Password for vSphere API access"
  type        = string
  sensitive   = true
}

variable "vsphere_server" {
  description = "vSphere server hostname or IP address"
  type        = string
}

variable "vsphere_datacenter" {
  description = "Name of the vSphere datacenter where the VM will be deployed"
  type        = string
}

variable "vsphere_datastore" {
  description = "Name of the vSphere datastore where template is"
  type        = string
}

variable "vsphere_cluster" {
  description = "Name of the vSphere compute cluster where the VM will be deployed"
  type        = string
}

variable "vm_template" {
  description = "Name of the vSphere VM template to use"
  type        = string
}

variable "vm_name" {
  description = "Name for the new VM"
  type        = string
}

variable "vm_cpus" {
  description = "Number of CPUs to allocate to the new VM"
  type        = number
}

variable "vm_memory" {
  description = "Amount of memory (in MB) to allocate to the new VM"
  type        = number
}

variable "vm_network" {
  description = "VM network"
  type        = string
}

variable "vm_folder" {
  description = "Name of the vSphere folder where the new VM will be created"
  type        = string
}

variable "vm_user" {
  description = "Name of the Virtual Machine user"
  type        = string
}

variable "vm_domain" {
  description = "Domain of the VM"
  type        = string
}

variable "ipv4_address" {
  description = "ipv4 address of the VM"
  type        = string
}

variable "ipv4_netmask" {
  description = "ipv4 netmask of the VM"
  type        = number
}

variable "ipv4_gateway" {
  description = "default gateway of VM"
  type        = string
}

variable "vm_dns_server" {
  description = "configured DNS server for VM to use"
  type        = string
}
I then specify the values for these variables in a terraform.tfvars file:

vsphere_user =
vsphere_password =
vsphere_server = 
vsphere_datastore =
vsphere_datacenter =
vsphere_cluster =
vm_template = 
vm_name = 
vm_network =
vm_cpus =
vm_memory =
vm_folder = 
vm_user =
vm_domain =
ipv4_address = 
ipv4_netmask = 
ipv4_gateway = 
vm_dns_server =

Then, all that's required to create the VM are some commands from the command line of my laptop.

# initialize terraform
terraform init

# plan the deployment and see what running Terraform will do
terraform plan

# apply the code
terraform apply

The one item missing from my Terraform code is handling SSH key pair generation. For this I run a quick bash script to make sure the key pair is on my laptop and ready for Ansible to send the public key to the new VM. I've also been looking at HashiCorp Vault for secrets management, and storing SSH keys for my lab environment may be a good use case. Anyway, on to Ansible.
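That helper script isn't shown in the repo snippets here, so this is just a minimal sketch of the idea (the key path, comment string, and echo messages are my own placeholders, not the actual values):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical key location - adjust to taste.
KEY_PATH="${KEY_PATH:-./ghost_lab_ed25519}"

# Generate an ed25519 key pair only if one isn't already present,
# so the script is safe to re-run before every terraform apply.
if [ ! -f "${KEY_PATH}" ]; then
  ssh-keygen -t ed25519 -N "" -C "ghost-lab" -f "${KEY_PATH}"
  echo "generated key pair at ${KEY_PATH}"
else
  echo "key pair already present at ${KEY_PATH}"
fi
```

Ansible can then push the `.pub` half of the pair into the new VM's authorized_keys.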


A single Ansible playbook does a lot of the heavy lifting here, and I'll highlight the main components. Here's the full playbook:

- name: Configure ghost VM
  hosts: ghost_vm
  become: true
  roles:
    - role: monolithprojects.github_actions_runner
  tasks:
    - name: Set PST time zone
      community.general.timezone:
        name: America/Los_Angeles

    - name: Update and upgrade apt packages
      ansible.builtin.apt:
        upgrade: true
        update_cache: true
        cache_valid_time: 86400

    - name: Install required system packages
      ansible.builtin.apt:
        pkg:
          - nfs-common
          - apt-transport-https
          - ca-certificates
          - curl
          - software-properties-common
          - python3-pip
          - virtualenv
          - python3-setuptools
        state: latest
        update_cache: true

    - name: Add Docker GPG apt Key
      ansible.builtin.apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add Docker Repository
      ansible.builtin.apt_repository:
        repo: deb https://download.docker.com/linux/ubuntu focal stable
        state: present

    - name: Update apt and install docker-ce
      ansible.builtin.apt:
        name: docker-ce
        state: latest
        update_cache: true

    - name: Ensure group "docker" exists with correct gid
      ansible.builtin.group:
        name: docker
        state: present
        gid: 999

    - name: Create 1 of 3 docker network with custom IPAM config
      community.docker.docker_network:
        name: "{{ docker_network_1 }}"
        ipam_config:
          - subnet:

    - name: Add the user to the docker group
      ansible.builtin.user:
        name: "{{ vm_user }}"
        groups: "{{ vm_group }}"
        append: true

    - name: Run Portainer container
      community.docker.docker_container:
        name: portainer_agent
        image: portainer/agent:2.17.0
        detach: true
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - /var/lib/docker/volumes:/var/lib/docker/volumes
        restart_policy: always
        published_ports:
          - "9001:9001"

    - name: Create Cloudflare tunnel container
      community.docker.docker_container:
        name: cloudflare_tunnel
        image: cloudflare/cloudflared
        detach: true
        restart_policy: always
        command: tunnel --no-autoupdate run --token {{ cloudflare_tunnel_token }}
        networks:
          - name: proxy
            aliases:
              - cloudflare
              - cloudflare_tunnel
              - cf

    - name: Create multiple directories in one task
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        mode: '755'
      loop:
        - /nfs/ghost-backup

    - name: Mount ghost backup NAS share # this is where container backups live
      ansible.posix.mount:
        src: "{{ nfs_mount }}"
        path: /nfs/ghost-backup
        opts: rw
        state: mounted
        fstype: nfs

    - name: Get updated files from git repository
      ansible.builtin.git:
        repo: "https://{{ access_token }}{{ github_account }}/{{ github_repo }}.git"
        dest: /home/{{ vm_user }}/{{ github_repo }}
        version: HEAD

    - name: Recursively change ownership of a directory
      ansible.builtin.file:
        path: "{{ repo_path }}"
        state: directory
        recurse: true
        owner: "{{ vm_user }}"

    - name: Touch acme.json for traefik and set permissions chmod 600
      ansible.builtin.file:
        path: "{{ traefik_path }}/acme.json"
        state: touch
        mode: '600'

    - name: Modify permission if file already exists
      ansible.builtin.file:
        path: "{{ traefik_path }}/acme.json"
        owner: "{{ vm_user }}"
        group: root
        mode: '0600'

    - name: Extend LV
      community.general.lvol:
        vg: ubuntu-vg
        lv: ubuntu-lv
        size: "+100%FREE"
        resizefs: true

Some notable items this playbook configures:

  • monolithprojects.github_actions_runner - this role configures a self-hosted GitHub Actions runner for the second repository, which holds the Docker Compose files for the Ghost application. When I update the Docker repository I want to trigger additional actions - like backing up the database volume, pulling the updated files, and restarting containers.
  • apt packages, users, groups, Docker installation
  • mount NFS share for automated Docker volume backups
  • Portainer container - remote management in a GUI if needed. I find myself using this for quick looks at logs
  • Cloudflare tunnel container - requests are destined for Cloudflare's edge and then tunneled onto my Docker network, where the destination is the IP address:port of the Ghost service. No need to forward ports!
  • pulls the most recent version of the Ghost Docker Compose repo

The more I use it the more impressed I am with the Cloudflare tunnel service. I'm a huge fan of not opening ports on my firewall. Here's a quick diagram providing an overview of the Cloudflare tunnel service:

I did need to manually create the tunnel on Cloudflare's site and grab the tunnel's token value. The GUI was straightforward though.

An item for further consideration will be to terminate HTTPS between Ghost and my own reverse proxy, Traefik, so that Cloudflare would not have access to unencrypted data. An important item for sure, but I decided to table that for now.

Ansible pulls variable values from an inventory file encrypted with Ansible Vault, so important secrets are not exposed.
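Creating and maintaining that encrypted inventory is just a couple of commands (the file name here is illustrative rather than my exact setup):

```shell
# Encrypt the plaintext inventory in place; Ansible Vault prompts for a password
ansible-vault encrypt inventory

# Edit the encrypted file later without leaving plaintext on disk
ansible-vault edit inventory

# View the decrypted contents read-only
ansible-vault view inventory
```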

Running the playbook is a single command:

ansible-playbook -i inventory playbook.yml --ask-vault-pass


The Docker side of this project is straightforward. A single docker-compose.yml file defines the container configurations for the automated Docker volume backup container, Ghost, MySQL, and the Traefik reverse proxy (for local access, and to keep data encrypted across Cloudflare's servers).

version: '3.3'

services:
  ghost:
    container_name: ghost
    image: ghost:5
    depends_on:
      - ghost-db
    restart: always
    environment:
      database__client: mysql
      database__connection__host: ghost-db
      database__connection__user: ghost
      database__connection__password: "${MYSQL_PASSWORD}"
      database__connection__database: ghost
    volumes:
      - ghost-data:/var/lib/ghost/content
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.ghost.entrypoints=http"
      - "traefik.http.routers.ghost.rule=Host(`ghost.${MY_DOMAIN}`)"
      - "traefik.http.middlewares.ghost-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.ghost.middlewares=ghost-https-redirect"
      - "traefik.http.routers.ghost-secure.entrypoints=https"
      - "traefik.http.routers.ghost-secure.rule=Host(`ghost.${MY_DOMAIN}`)"
      - "traefik.http.routers.ghost-secure.tls=true"
      - "traefik.http.routers.ghost-secure.tls.certresolver=cloudflare"
      - "traefik.http.routers.ghost-secure.service=ghost"
      # (two additional labels omitted)

  ghost-db:
    container_name: ghost-db
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_USER: ghost
      MYSQL_DATABASE: ghost
      MYSQL_PASSWORD: "${MYSQL_PASSWORD}"
      # required by the mysql image
      MYSQL_ROOT_PASSWORD: "${MYSQL_ROOT_PASSWORD}"
    cap_add:
      - SYS_NICE
    volumes:
      - ghost-mysql-data:/var/lib/mysql

volumes:
  ghost-data:
  ghost-mysql-data:

networks:
  proxy:
    external: true


To run:

docker compose up -d

That's it! I can run the terraform apply command to ensure the virtual infrastructure is configured, and then the Ansible playbook to ensure the VM is configured to my specifications.

If you're reading this, then the process has worked and we're live :p

Further Considerations:

  • Terraform to handle SSH key pair generation
  • Ansible to create firewall policies, subnets, etc. on the Fortigate
  • Ansible to check whether the Ghost Docker volumes exist, and if not, restore from the most recent backup
  • Ansible to further harden the VM to align with best practices
  • HashiCorp Vault or another secrets management solution to provide the various secrets used throughout both repositories, including .env files for Docker containers
  • Pass the Ghost container a configuration file for customization