By DevOps Engineer | April 30, 2025 | 12 min read
Image: Jenkins and GitHub Actions logos side by side
Introduction
As developers and DevOps engineers, we often gravitate towards convenient, cloud-based CI/CD solutions. GitHub Actions has emerged as a popular choice due to its tight integration with GitHub repositories and relatively gentle learning curve. However, there comes a point when the limitations of cloud-based solutions start to chafe, and you begin to consider alternatives that offer more control, customization, and independence.
That's exactly where I found myself a few weeks ago. After using GitHub Actions for several projects, I decided to migrate a web application to a self-hosted Jenkins server running on an old laptop. What initially seemed like a straightforward task turned into a multi-faceted learning experience that stretched my DevOps skills and taught me valuable lessons about CI/CD infrastructure.
In this article, I'll walk you through my journey, the challenges I faced, and the solutions I discovered. Whether you're considering a similar migration or simply curious about the differences between these two popular CI/CD platforms, I hope my experience provides some useful insights.
Why Leave GitHub Actions?
Before diving into the technical details, let's address the fundamental question: Why switch from GitHub Actions to Jenkins in the first place?
GitHub Actions offers several advantages:
- Tight GitHub Integration: It's built directly into the GitHub platform
- Zero Infrastructure Management: GitHub handles all the infrastructure
- Simple YAML Configuration: Relatively easy to learn and implement
- Free for Public Repositories: unlimited minutes for public repos, plus 2,000 minutes/month for private repos on the Free plan
However, there are several compelling reasons to consider Jenkins:
Complete Control
With Jenkins, you own and control the entire CI/CD infrastructure. This means no unexpected changes to pricing models, no usage limits, and no dependency on third-party services for critical infrastructure.
Cost Effectiveness at Scale
While GitHub Actions offers free minutes for public repositories, costs can add up quickly for private repositories or larger teams. With Jenkins, you pay for the hardware once, then run as many builds as you want.
Rich Plugin Ecosystem
Jenkins has been around since 2011 (evolving from Hudson, which dates back to 2005), and its plugin ecosystem is vast and mature. There's a plugin for almost every tool, service, or workflow you might need.
Customization Options
Jenkins allows for deep customization of your build environment, security policies, and integration points. If you can script it, you can automate it with Jenkins.
Privacy and Security
For sensitive projects, keeping your build infrastructure in-house can be a significant security advantage, especially if you work with proprietary code or in regulated industries.
With these benefits in mind, I decided to take the plunge and set up a Jenkins server on an old Ubuntu laptop that had been collecting dust.
Setting Up the Jenkins Server
The Installation Process
I started with a clean Ubuntu installation on my old laptop. Installing Jenkins itself is fairly straightforward, but I encountered my first significant hurdle almost immediately.
After following the standard installation steps, I attempted to start the Jenkins service only to be greeted with this error:
Running with Java 11, which is older than the minimum required version (Java 17).
Supported Java versions are: [17, 21]
This was my first lesson: Always check software requirements before installation. Jenkins had moved beyond Java 11 and now required at minimum Java 17.
Future-Proofing with Java 21
While I could have simply installed Java 17 to meet the minimum requirement, I noticed the warning Jenkins shows for Java 17 installations:
Java 17 end of life in Jenkins
You are running Jenkins on Java 17, support for which will end on or after Mar 31, 2026.
Rather than setting myself up for another upgrade in the near future, I decided to install the newest supported LTS release, Java 21. Jenkins follows a "2+2+2" Java support model:
- Supporting new Java LTS releases for 2 years after release
- Requiring them as the minimum version for the next 2 years
- Dropping support 2 years before upstream end-of-life
By installing Java 21, I would be well-positioned for long-term support without needing frequent upgrades.
sudo apt install openjdk-21-jdk
sudo update-alternatives --config java
After configuring Jenkins to use Java 21, the service started successfully. I could access the Jenkins dashboard at http://localhost:8080 and begin the initial setup process.
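For reference, here's one way to point the Jenkins systemd service at the new JDK; a minimal sketch assuming Ubuntu's default package layout (the JAVA_HOME path may differ on your system):
# Tell the Jenkins systemd unit which JDK to use
sudo mkdir -p /etc/systemd/system/jenkins.service.d
sudo tee /etc/systemd/system/jenkins.service.d/override.conf <<'EOF'
[Service]
Environment="JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64"
EOF
# Reload systemd and restart Jenkins with the new Java
sudo systemctl daemon-reload
sudo systemctl restart jenkins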
Managing Credentials Securely
Security was a top priority for my Jenkins setup. My GitHub Actions workflow contained several sensitive pieces of information:
- GitHub Personal Access Token
- Docker registry credentials
- SSH keys for deployment
- API keys for various services
In GitHub Actions, these are stored as repository or organization secrets. Jenkins offers several options for credential management, with varying levels of security:
Global vs. Folder-Scoped Credentials
Initially, I stored all my credentials at the global level, accessible to all Jenkins jobs. However, I soon realized this wasn't the most secure approach. After researching best practices, I reorganized my credentials to be scoped at the folder level, limiting access to only the jobs that needed them.
This meant creating a dedicated folder for my web project and moving all the credentials to that folder's scope. The process was straightforward:
1. Create a folder in Jenkins (Dashboard → New Item → Folder)
2. Navigate into the folder and select "Credentials" from the sidebar
3. Add credentials at the folder level
4. Create pipeline jobs within this folder
This approach follows the principle of least privilege, ensuring credentials are only accessible to the jobs that require them.
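Jobs inside the folder then reference these credentials by ID exactly as they would global ones, with Jenkins resolving the nearest scope. A quick sketch using the withCredentials step (the credential ID and repository URL are placeholders):
withCredentials([string(credentialsId: 'github-pat', variable: 'GITHUB_TOKEN')]) {
    // The token is masked in the build log and only exists inside this block
    sh 'git ls-remote https://$GITHUB_TOKEN@github.com/username/project-web.git'
}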
Credential Types and Binding
Jenkins supports various credential types, and choosing the right one for each use case is important:
- Secret Text: For API tokens and simple secrets
- Username with Password: For services requiring both pieces of information
- SSH Username with Private Key: For SSH authentication
- Certificate: For PKCS#12 certificates
- File: For credential files
In my pipeline, I used the credentials() binding in the environment block to make these credentials available as environment variables:
environment {
REGISTRY = "ghcr.io"
IMAGE_NAME = "username/project-web"
FULL_IMAGE_NAME = "${REGISTRY}/${IMAGE_NAME}"
GITHUB_ACTOR = credentials('github-username')
GITHUB_TOKEN = credentials('github-pat')
// Additional credentials...
}
This approach ensures that credentials are securely encrypted at rest and only decrypted when needed during pipeline execution.
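One detail worth knowing: for a Username with Password credential, the credentials() helper also exposes the two parts as separate variables with _USR and _PSW suffixes, which is exactly what you want for registry logins. A sketch (the credential ID is a placeholder):
environment {
    // 'ghcr-creds' is assumed to be a Username with Password credential
    GHCR = credentials('ghcr-creds')
}
// ...later, inside a stage:
steps {
    // Jenkins derives GHCR_USR and GHCR_PSW automatically from the binding
    sh 'echo "$GHCR_PSW" | docker login ghcr.io -u "$GHCR_USR" --password-stdin'
}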
The Node.js Version Challenge
One of the most frustrating challenges I encountered was maintaining a consistent Node.js environment across pipeline stages. My web application required Node.js 22 and used pnpm for package management, but the Jenkins server only had the Node.js that ships in Ubuntu's default repositories (v12.22.9).
My first attempt to resolve this was simple but unsuccessful:
stage('Build and Test') {
steps {
sh 'npm install -g pnpm@10.2.0'
sh 'pnpm install'
sh 'pnpm lint'
}
}
This failed with two critical errors:
npm WARN EBADENGINE Unsupported engine {
package: 'pnpm@10.2.0',
required: { node: '>=18.12' },
current: { node: 'v12.22.9', npm: '8.5.1' }
}
npm ERR! code EACCES
npm ERR! syscall mkdir
npm ERR! path /usr/local/lib/node_modules
npm ERR! errno -13
I was facing both a Node.js version incompatibility and permission issues. After several attempts, I found that using Node Version Manager (NVM) was the most effective solution:
stage('Build and Test') {
steps {
sh '''
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
nvm install 22
nvm use 22
node -v
npm install -g pnpm@10.2.0
pnpm install
pnpm lint
'''
}
}
This approach worked for a single stage, but I quickly discovered another issue: the Node.js environment didn't persist between stages. Each stage runs in a fresh shell environment, resetting any PATH or environment changes from previous stages.
I tried several solutions:
- Using the NodeJS Plugin: This Jenkins plugin manages Node.js installations and makes them available across stages (see the sketch after this list).
- Setting Global Environment Variables: Defining PATH and NVM_DIR at the pipeline level.
- Using a Single Multi-Line Shell Script: Performing all Node.js operations in a single stage.
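For completeness, here's roughly what the NodeJS plugin option looks like; the installation name ('node-22' below) is whatever you called it under Manage Jenkins → Tools:
pipeline {
    agent any
    tools {
        // Must match a NodeJS installation configured in Manage Jenkins → Tools
        nodejs 'node-22'
    }
    stages {
        stage('Build and Test') {
            steps {
                sh 'node -v' // the plugin prepends its Node.js to PATH in every stage
            }
        }
    }
}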
Ultimately, I found that explicitly sourcing the NVM script in each stage that needed Node.js was the most reliable approach for my setup:
stage('Build and Push Image') {
steps {
sh '''
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
nvm use 22
node -v
# Docker build commands...
'''
}
}
While this involves some repetition, it ensures a consistent Node.js environment throughout the pipeline.
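If the copy-paste bothers you, one way to tidy it up (a sketch, not what I ultimately shipped) is a small Groovy helper defined above the pipeline block in the Jenkinsfile:
// Runs a shell snippet with NVM's Node.js 22 activated first
def nvmSh(String script) {
    sh """
        export NVM_DIR="\$HOME/.nvm"
        [ -s "\$NVM_DIR/nvm.sh" ] && . "\$NVM_DIR/nvm.sh"
        nvm use 22
        ${script}
    """
}
// A stage's steps then shrink to, for example:
// nvmSh 'pnpm install && pnpm lint'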
Structuring the Jenkins Pipeline
With credentials and Node.js issues resolved, I could focus on structuring my pipeline to replicate the functionality of my GitHub Actions workflow. The key stages I needed were:
- Checkout: Retrieving the code from GitHub
- Build and Test: Installing dependencies and running linting
- Build and Push Image: Creating and publishing a Docker image
- Deploy: Deploying the application to a production server
Here's the high-level structure I arrived at:
pipeline {
agent any
environment {
// Registry and image name variables
// Credentials references
}
triggers {
pollSCM('H/5 * * * *') // Poll repository every 5 minutes
}
stages {
stage('Checkout') {
steps {
checkout scm
}
}
stage('Build and Test') {
steps {
// Set up Node.js environment
// Install dependencies with pnpm
// Run linting
}
}
stage('Build and Push Image') {
steps {
// Login to container registry
// Build Docker image with build args
// Push image to registry
}
}
stage('Deploy') {
steps {
// Prepare deployment files
// Use SSH to deploy to remote server
// Configure Docker on remote server
// Pull and run containers
}
}
}
post {
always {
cleanWs() // Clean workspace after build
}
}
}
Configuring Repository Polling
One significant difference between GitHub Actions and Jenkins is how they detect changes to trigger builds. GitHub Actions integrates directly with GitHub webhooks, but since I didn't want to expose my Jenkins server to the internet, I opted for repository polling instead.
Jenkins uses a cron-like syntax for polling configuration:
triggers {
pollSCM('H/5 * * * *') // Poll every 5 minutes
}
The H in the cron expression stands for "hash" and helps distribute load by slightly randomizing the exact polling time. This is a Jenkins-specific enhancement to standard cron syntax.
I found that polling every 5 minutes was sufficient for my needs. Jenkins supports polling frequencies as low as once per minute (* * * * *), but more frequent polling can increase load on both Jenkins and the Git server.
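A few schedule examples illustrate how the hash behaves (each job gets a stable, hash-derived offset instead of every job firing at minute zero):
triggers {
    // 'H/5 * * * *' → every 5 minutes, at a job-specific offset
    // 'H * * * *'   → once an hour, at a job-specific minute
    // 'H H * * *'   → once a day, at a job-specific hour and minute
    pollSCM('H/5 * * * *')
}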
Conditional Execution
Another challenging aspect was implementing conditional execution based on which files had changed. In GitHub Actions, this is straightforward with the paths filter:
on:
push:
branches: [ "main" ]
paths:
- 'web/**'
- '.github/workflows/**'
In Jenkins, I initially tried to use the changeset directive:
stage('Build and Push Image') {
when {
branch 'main'
anyOf {
changeset "web/**"
changeset ".jenkins/**"
}
}
steps {
// Steps...
}
}
However, I encountered issues with this approach, particularly with branch detection. The branch condition compares against the BRANCH_NAME environment variable, which Jenkins only sets automatically for multibranch pipeline jobs; in my standalone pipeline job it was empty, so stages were skipped unexpectedly.
After debugging, I found a more reliable approach using an expression block with direct Git commands:
when {
expression {
def changes = sh(script: 'git diff --name-only HEAD~1 || echo ""', returnStdout: true).trim()
return changes.contains('web/') || changes.contains('.jenkins/')
}
}
This approach gives more explicit control over how changes are detected and evaluated.
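One caveat: HEAD~1 only inspects the most recent commit, so a push containing several commits can slip past the filter between polls. A slightly more robust variant (a sketch relying on the Git plugin's GIT_PREVIOUS_SUCCESSFUL_COMMIT variable, which is unset on the first build) diffs against the last successfully built commit:
when {
    expression {
        // Fall back to the previous commit when no successful build exists yet
        def base = env.GIT_PREVIOUS_SUCCESSFUL_COMMIT ?: 'HEAD~1'
        def changes = sh(script: "git diff --name-only ${base} HEAD || echo ''", returnStdout: true).trim()
        return changes.contains('web/') || changes.contains('.jenkins/')
    }
}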
Deployment Strategy
The final piece of the puzzle was setting up a secure deployment process. In my GitHub Actions workflow, I used the appleboy/scp-action and appleboy/ssh-action actions to copy files to the server and execute deployment commands.
In Jenkins, I needed to implement this functionality from scratch. The sshagent step proved invaluable here:
stage('Deploy') {
steps {
// Prepare deployment files
sh 'mkdir -p /tmp/web-deploy'
sh 'cp web/docker-compose.yml /tmp/web-deploy/'
// Create .env file
sh '''
cat > /tmp/web-deploy/.env << EOL
GITHUB_REPOSITORY=${FULL_IMAGE_NAME}
TAG=latest
EOL
'''
// Deploy using SSH
sshagent(['server-ssh-key']) {
sh '''
# Create deployment directory
ssh -o StrictHostKeyChecking=no $SERVER_USERNAME@$SERVER_HOST "mkdir -p ~/web-deploy"
# Copy files to server
scp -o StrictHostKeyChecking=no /tmp/web-deploy/* $SERVER_USERNAME@$SERVER_HOST:~/web-deploy/
# Setup authentication and deploy
ssh -o StrictHostKeyChecking=no $SERVER_USERNAME@$SERVER_HOST "cd ~/web-deploy && docker-compose pull && docker-compose up -d --force-recreate"
'''
}
}
}
The sshagent step loads the referenced key into an ssh-agent for the duration of the block, making it available to ssh and scp commands without exposing the key in the script. One caveat: StrictHostKeyChecking=no is convenient but skips host key verification; pre-populating the Jenkins user's known_hosts file is the safer choice.
Security Considerations
Throughout this process, security remained a paramount concern. If you're setting up your own Jenkins server, here are some key security practices I implemented:
- Limited Network Exposure: I kept my Jenkins server on my local network, not exposed to the internet.
- Scoped Credentials: As mentioned earlier, I used folder-scoped credentials to limit credential access.
- Regular Updates: I enabled automatic security updates for both the OS and Jenkins.
- Proper User Permissions: Jenkins runs as a dedicated user with limited system access.
- Secure Credential Storage: Jenkins encrypts credentials at rest using its secure credential store.
If you do need to expose Jenkins to the internet, additional measures become necessary:
- Setting up HTTPS with a valid certificate
- Implementing a reverse proxy like Nginx
- Configuring proper firewall rules (see the ufw sketch after this list)
- Using IP filtering to restrict access
- Enabling two-factor authentication
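As a concrete example of the firewall point above, here's a minimal ufw setup for the reverse-proxy scenario; the subnet is a placeholder for your local network, and SSH is allowed first so you don't lock yourself out:
# Deny inbound by default; expose only HTTPS via the reverse proxy
sudo ufw default deny incoming
sudo ufw allow OpenSSH
sudo ufw allow 443/tcp
# Keep Jenkins' own port reachable only from the local subnet
sudo ufw allow from 192.168.1.0/24 to any port 8080 proto tcp
sudo ufw enable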
Lessons Learned
This migration journey taught me several valuable lessons about CI/CD infrastructure:
1. Version Management is Crucial
From Java versions to Node.js environments, keeping track of required versions and dependencies is essential for a stable build environment. Always check requirements before installation and consider future support timelines when making version choices.
2. Credentials Require Careful Planning
Take time to properly scope and organize credentials. Following the principle of least privilege helps minimize security risks.
3. Environment Persistence Needs Attention
Unlike GitHub Actions, where each step inherits the environment from previous steps, Jenkins stages often run in isolated environments. Plan for environment persistence between stages, especially for tools like Node.js.
4. Conditional Logic Differs Significantly
The approach to conditional execution (running steps based on branch, changes, etc.) is fundamentally different between GitHub Actions and Jenkins. Understanding these differences is essential for proper pipeline design.
5. Shell Scripts are Your Friends
While Jenkins offers many specialized steps and plugins, sometimes a well-crafted shell script is the most reliable and flexible solution, especially for complex tasks.
Conclusion
Migrating from GitHub Actions to Jenkins was more challenging than I initially anticipated, but immensely rewarding. Although the pipeline I implemented is relatively simple compared to what Jenkins is capable of, I now have the knowledge and foundation to convert all of my GitHub Actions workflows to Jenkins and to implement more complex pipelines in the future.
This journey reinforced several key DevOps principles:
- Infrastructure as Code: The entire pipeline configuration lives in version control
- Least Privilege: Credentials and permissions are tightly scoped
- Automation First: Every manual step is an opportunity for automation
- Continuous Learning: DevOps is a field of constant evolution and learning
Is Jenkins the right choice for everyone? Definitely not. GitHub Actions offers simplicity and integration that's perfect for many projects. But if you value control, customization, and independence from cloud services, Jenkins remains a powerful and flexible option in your DevOps toolkit.
The most important takeaway is to choose the CI/CD solution that aligns with your specific needs, team capabilities, and organizational constraints. Sometimes that means embracing the convenience of cloud-based solutions like GitHub Actions; other times, it means rolling up your sleeves and diving into the world of self-hosted CI/CD with Jenkins.
What CI/CD system do you prefer for your projects? Have you migrated between different CI/CD platforms? Share your experiences in the comments below!
About the Author: DevOps Engineer with a passion for automation, infrastructure as code, and continuous improvement. When not configuring pipelines, can be found hiking trails and experimenting with home automation projects.