Self-Hosting Jenkins CI/CD with Docker: A $6/Month Alternative¶
GitHub Actions is fantastic. It's a popular choice for Continuous Integration—push code, run tests, merge the PR. But as any experienced engineer eventually discovers, it has its limits.
Maybe you have a complex web scraper that runs for 4 hours. Maybe you need to run model post-training jobs that exceed the 6-hour timeout. Maybe you have a scheduled task that requires a persistent file system. Or maybe you're just tired of debugging opaque YAML files that fail silently on obscure timeouts.
I found myself needing to supplement my GitHub Actions workflow with something more robust for scheduled operations. I wanted the Hybrid Strategy: let GitHub handle the PR checks, but move the heavy, long-running, and custom scheduled jobs to a dedicated server I control.
In this post, I'll walk through how I deployed a production-ready, containerized Jenkins server on DigitalOcean for the price of a fancy coffee.
The Hybrid CI/CD Philosophy¶
This isn't about replacing GitHub Actions—it's about complementing it. Use GitHub Actions for what it does best: PR checks, testing, and short-lived workflows. Use self-hosted Jenkins for:
- Stateful jobs that need to remember data from the last run
- Heavy cron jobs that would eat through free-tier minutes
- Long-running processes like model post-training or data pipelines that exceed GitHub's 6-hour timeout
- Debugging flexibility where you can SSH in to investigate issues
The Strategy: Why Self-Host?¶
1. The GitHub Actions Gap¶
GitHub Actions is designed to be ephemeral. You get a clean container, you do your work, and it vanishes. This is great for testing code, but less ideal for:
- Stateful Jobs: Pipelines that need to remember data from the last run
- Heavy Cron Jobs: Running a scraper every hour on GHA can eat through free-tier minutes rapidly
- Debugging: When a GHA job hangs, you can't SSH in to see what's wrong
2. The Economics of the $6 Server¶
I used DigitalOcean for this setup, but this applies to any VPS provider (Hetzner, Linode, AWS EC2). For roughly $6/month, you get a dedicated Linux environment running 24/7. Unlike serverless or per-minute billing, this is a flat rate. You can run hundreds of pipelines a month without worrying about overage charges.
The Architecture: Docker-outside-Docker (DooD)¶
The most critical technical decision in this setup was how to handle builds. I don't want to run tools directly on the Jenkins server (that creates a "snowflake" server that is hard to maintain). I want every job to run in its own isolated container.
To achieve this, I used a pattern called Docker-outside-Docker (DooD).
Instead of running a Docker Daemon inside Jenkins (which is slow and insecure), I simply mount the host's Docker socket (/var/run/docker.sock) into the Jenkins container.
- The Illusion: Jenkins thinks it has Docker installed
- The Reality: When Jenkins says "run this container," it's actually telling the Host OS to run it. The container lives on the host, right next to Jenkins, not inside it
- The Benefit: Much faster, less caching overhead, and keeps the architecture simple
DooD vs DinD
Docker-in-Docker (DinD) runs a separate Docker daemon inside the Jenkins container. This is slower, uses more resources, and creates nested filesystem layers that complicate caching.
Docker-outside-Docker (DooD) mounts the host's Docker socket, making containers siblings rather than children. This is faster, simpler, and how most production systems work.
The Setup¶
All configuration files discussed here are available in the jenkins-config repository (linked in the Deployment section below).
Prerequisites¶
- A VPS: I used a DigitalOcean Droplet with the "Docker on Ubuntu" 1-Click image
- Specs: 1GB RAM is sufficient if you configure Swap (see "Gotchas" below)
- Domain: A domain pointed to your server's Reserved IP (for stability)
Step 1: The Infrastructure (Docker Compose)¶
I use Docker Compose to define the entire stack: Jenkins, Caddy (for HTTPS), and the networking wiring.
The key to the DooD setup is in the volumes and permissions:
```yaml
services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    restart: unless-stopped
    volumes:
      # The Magic: Give Jenkins access to the Host's Docker Daemon
      - /var/run/docker.sock:/var/run/docker.sock
      - jenkins_home:/var/jenkins_home
      - ./jenkins.yaml:/var/jenkins_home/casc_config/jenkins.yaml
    environment:
      - JAVA_OPTS=-Djenkins.install.runSetupWizard=false
      - CASC_JENKINS_CONFIG=/var/jenkins_home/casc_config/jenkins.yaml
    # The Fix: Jenkins runs as a specific user, so we must give it
    # permission to use the docker group (usually GID 999 or 998)
    group_add:
      - "999" # Docker group ID from host
```
The critical insight: you must find your host's Docker group ID and add it to the container:
```bash
# On your VPS, check the Docker group ID
getent group docker
# Output: docker:x:999:
# Then update docker-compose.yml with that ID
```
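If you'd rather not hardcode the GID, you can parse it out of the `getent` output instead. A minimal sketch (the `DOCKER_GID` variable name is my own; on a real host you'd feed it the live `getent group docker` output):

```bash
# Parse the GID (third colon-separated field) out of a group entry.
entry="docker:x:999:"                       # sample `getent group docker` output
DOCKER_GID=$(printf '%s' "$entry" | cut -d: -f3)
echo "$DOCKER_GID"                          # prints 999
```

Compose supports variable substitution, so you could export `DOCKER_GID` before running `docker compose up` and reference it as `group_add: ["${DOCKER_GID}"]`.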
Step 2: Configuration as Code (JCasC)¶
I do not configure Jenkins via the UI. If the server burns down, I don't want to click through 50 settings screens to rebuild it.
I use JCasC (Jenkins Configuration as Code) to define everything—users, security, and executors—in text:
```yaml
jenkins:
  systemMessage: "Personal CI/CD: Managed by JCasC."
  # Set to 1 to prevent the small server from crashing under load
  numExecutors: 1
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: ${ADMIN_USER}
          password: ${ADMIN_PASSWORD}
  authorizationStrategy:
    loggedInUsersCanDoAnything:
      allowAnonymousRead: false
unclassified:
  location:
    url: ${JENKINS_URL}
```
Notice the use of environment variables (${ADMIN_PASSWORD}, ${JENKINS_URL}). These are injected from a .env file, keeping secrets out of git:
```bash
ADMIN_USER=admin
ADMIN_PASSWORD=your_secure_password_here
JENKINS_URL=https://jenkins.yourdomain.com
DOMAIN_NAME=jenkins.yourdomain.com
```
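One wiring detail worth calling out: JCasC resolves `${ADMIN_USER}` and friends from the container's environment, so the `.env` values have to actually reach the Jenkins process. A sketch of one way to do that in `docker-compose.yml` (Compose reads `.env` for its own substitution automatically, but an explicit `env_file` makes the variables visible inside the container):

```yaml
services:
  jenkins:
    # Make the .env values available inside the container so JCasC
    # can resolve ${ADMIN_USER}, ${ADMIN_PASSWORD}, and ${JENKINS_URL}
    env_file:
      - .env
```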
Step 3: Automatic HTTPS with Caddy¶
I didn't want to deal with Certbot cron jobs or renewing certificates. I placed Caddy in front of Jenkins. Caddy automatically provisions and renews Let's Encrypt certificates for your domain.
```yaml
caddy:
  image: caddy:latest
  container_name: caddy
  restart: unless-stopped
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./Caddyfile:/etc/caddy/Caddyfile
    - caddy_data:/data
    - caddy_config:/config
```
That's it. Caddy handles TLS certificates, renewal, and HTTPS redirection automatically.
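The Caddyfile itself is tiny. Here's a minimal sketch of what it can look like, assuming Jenkins is listening on its default port 8080 and that both containers share a Docker network:

```
# Caddy matches the site address, provisions a certificate for it,
# and proxies everything to the Jenkins container over the internal
# Docker network.
jenkins.yourdomain.com {
    reverse_proxy jenkins:8080
}
```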
The "Gotchas" (Lessons Learned)¶
It wasn't smooth sailing. Here are the hurdles I hit so you don't have to.
1. The "Permission Denied" Socket Error¶
What happened: Even though I mounted the Docker socket, Jenkins crashed with a "permission denied" error on /var/run/docker.sock whenever it tried to launch an agent.
Root Cause: The user inside the container (jenkins) didn't have permission to touch the socket, which is owned by root on the host.
Fix: Find the host's Docker group ID and pass it into the container via group_add in Docker Compose, as shown in the compose file above.
2. The 1GB RAM Trap (OOM Killer)¶
What happened: Jenkins is a Java application. It loves RAM. On a $6 Droplet (1GB RAM), it quickly ate the available memory and crashed silently.
Root Cause: The Linux OOM (Out Of Memory) Killer terminated Jenkins to protect the system.
Fix: Adding a 2GB Swap File is mandatory. This gives the OS a safety net when memory pressure spikes during a build:
```bash
# Create 2GB swap file
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it permanent
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```
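Swap keeps the box alive, but you can also reduce how often you hit it by capping the JVM heap. A hedged tweak to the compose environment (the 512m figure is my assumption for a 1GB host, not a benchmarked value):

```yaml
services:
  jenkins:
    environment:
      # Cap the heap so Jenkins leaves headroom for the OS and builds
      - JAVA_OPTS=-Djenkins.install.runSetupWizard=false -Xmx512m
```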
3. The Executor Deadlock¶
What happened: I initially set numExecutors: 0 on the master to follow enterprise best practices (offload all builds to agents). But the master couldn't even spin up the "flyweight" process needed to launch a Docker container—it was deadlocked.
Root Cause: In a single-server setup, the master is the host. With 0 executors, it can't perform the coordination work to start Docker agents.
Fix: Set numExecutors: 1 in jenkins.yaml. The master handles coordination, while actual builds run in ephemeral Docker containers.
Pipeline Best Practices¶
Always Use Docker Agents¶
The entire point of this setup is isolation. Your pipelines should always specify a Docker agent:
```groovy
pipeline {
    agent {
        docker {
            image 'python:3.9'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'pip install pytest'
                sh 'pytest'
            }
        }
    }
}
```
❌ Don't do this:
```groovy
pipeline {
    agent any  // Runs directly on the Jenkins controller
    stages {
        stage('Test') {
            steps {
                sh 'pytest'  // Fails if pytest isn't installed on the controller
            }
        }
    }
}
```
Configuration as Code Philosophy¶
Traditional approach: Click through Jenkins UI, configure plugins, set up credentials. Problem: Server dies → configuration dies → you forget what you clicked.
My approach: Everything in jenkins.yaml and docker-compose.yml.
Benefit: Entire infrastructure can be recreated with git clone + docker compose up.
Immutable Infrastructure
The Jenkins container itself is disposable. The only things that persist are:
- `jenkins_home` volume (job history, build artifacts)
- Configuration files in Git
You can destroy and recreate the container without losing work.
The Hybrid Strategy in Action¶
Since all configuration lives in Git, you can use GitHub Actions to deploy changes to your Jenkins server. This is the hybrid approach at its best:
- GitHub Actions: Lightweight deployment (SSH in, pull changes, restart container)
- Jenkins: Heavy workloads (scrapers, model training, long-running jobs)
A simple workflow could SSH into your VPS, pull the latest config, and run docker compose up -d --force-recreate. This way, pushing to your jenkins-config repo automatically updates your production Jenkins instance.
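As a sketch, such a workflow might look like the following (hypothetical file; it assumes the community appleboy/ssh-action and SSH credentials stored as repository secrets):

```yaml
# .github/workflows/deploy.yml (hypothetical)
name: Deploy Jenkins config
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Pull config and restart Jenkins on the VPS
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          script: |
            cd ~/jenkins-config
            git pull
            docker compose up -d --force-recreate
```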
Deployment¶
Once configured, deployment is trivial:
```bash
# Clone the repository
git clone https://github.com/fliden/jenkins-config.git
cd jenkins-config

# Create your .env file
cp .env.example .env
# Edit .env with your domain and password

# Check your Docker group ID
getent group docker
# Update docker-compose.yml with the correct GID

# Deploy
docker compose up -d --build

# Check logs
docker compose logs -f jenkins
```
Wait 30-60 seconds for Caddy to provision the TLS certificate, then access Jenkins at your domain.
Best Practices Adopted¶
1. Secrets Management¶
Passwords and domains are stored in an .env file (gitignored) and injected into the container at runtime. Never hardcode secrets in configuration files.
2. Immutable Infrastructure¶
The Jenkins container itself is disposable. The only thing that persists is the jenkins_home volume and the Git repo.
3. Portability¶
By using environment variables for the URL (${JENKINS_URL}), the exact same setup works on:
- `localhost` for testing
- `staging.example.com` for staging
- `jenkins.example.com` for production
4. Automation Over Clicks¶
Zero manual UI configuration. If you need to change a setting, edit jenkins.yaml and restart the container. Jenkins automatically reloads the configuration.
Economics: The Real Cost¶
DigitalOcean $6/month Droplet:
- 1 vCPU
- 1GB RAM + 2GB Swap
- 25GB SSD
- 1TB Transfer
What you get:
- Unlimited pipeline runs (no per-minute billing)
- Persistent storage for artifacts
- SSH access for debugging
- Full control over the environment
Trade-offs:
- You're responsible for server maintenance
- You need to configure backups for `jenkins_home`
For context: paid GitHub Actions minutes run about $0.008 each on Linux runners, so 1,000 extra minutes cost roughly $8/month. If you're running heavy or frequent jobs, self-hosting pays for itself almost immediately.
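To make the break-even concrete, a quick back-of-envelope calculation (assuming the commonly cited $0.008/minute Linux runner rate; check current GitHub pricing):

```bash
# One 4-hour job per day for a 30-day month on GitHub Actions:
minutes=$((4 * 60 * 30))        # 7200 billable minutes
echo "$minutes minutes"

# At $0.008/minute, compute the cost in cents (0.8 cents per minute)
# to keep the arithmetic in integers:
cents=$((minutes * 8 / 10))
echo "\$$((cents / 100)).$((cents % 100)) per month"   # $57.60
```

That's nearly ten times the flat $6 Droplet price, for a single daily job.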
Conclusion¶
By combining the low cost of a VPS with the flexibility of Docker, I created a powerful CI/CD system that complements GitHub Actions perfectly. I now have a place to run heavy, stateful jobs without worrying about timeouts or billable minutes—all fully managed by code.
The key insights:
- DooD over DinD: Mount the host socket for speed and simplicity
- JCasC: Configuration as code ensures reproducibility
- Caddy: Automatic HTTPS with zero maintenance
- Swap: Mandatory for small servers running Java applications
- numExecutors: 1: Required for single-server Docker agent setups
This setup has been running reliably for months, handling everything from web scrapers to scheduled data pipelines. It's the infrastructure you build once and forget about.
Check out the repo at github.com/fliden/jenkins-config, fork it, and reclaim control of your automation.