Everyone loves GitLab CI and Kubernetes.
GitLab CI (Continuous Integration) is a popular tool for building and testing the software that developers write. It helps teams build code faster, catch errors earlier, and ship with more confidence.
Kubernetes, commonly shortened to K8s, is a portable, extensible, open-source platform for managing containerized workloads and services. Companies of all sizes use it every day to automate deploying, scaling, and managing applications in containers.
The purpose of this post is to show how you can bolt on the Continuous Delivery (CD) piece to build a full CI/CD pipeline that deploys your applications to Kubernetes. Before we get too far, though, we need to talk about Helm, which is an important part of the puzzle.
Helm calls itself "the package manager for Kubernetes", and that's a pretty accurate description. Helm is a versatile, sturdy tool DevOps engineers can use to define Kubernetes configuration files as templates, perform variable substitution to keep deployments to our clusters consistent, and supply different variables for different environments.
It's certainly the right solution to the problem we're covering here.
First off, a few prerequisites. You'll need all of this hammered out before you start the project: a GitLab account with a repository, a Kubernetes cluster on AWS with the ALB ingress controller installed, a GitLab runner that can deploy to that cluster, an ACM certificate, and a domain managed in Route53. The official GitLab, Helm, and Kubernetes docs can help if you get stuck.
Or, you could always get in touch with us and we could talk about your project together.
With those boxes checked, we can get started. First, create a new repository in GitLab to use for this example. Once you've done that, we can start creating our files.
By the end, our folder/file structure is going to look like this:
.
├── chart/
│   ├── Chart.yaml
│   ├── values.yaml
│   └── templates/
│       ├── deployment.yaml
│       ├── service.yaml
│       ├── ingress.yaml
│       └── configmap.yaml
└── .gitlab-ci.yml
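One file in that tree we shouldn't skip is chart/Chart.yaml; Helm won't treat the directory as a chart without it. A minimal sketch, assuming Helm 3 (the description and version numbers are placeholders you can change):
apiVersion: v2
name: my-first-app
description: A simple chart for our first Helm deployment
version: 0.1.0
appVersion: "1.0.0"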
Next is chart/values.yaml, which holds the variables Helm substitutes into our templates:
applicationName: my-first-app
certArn: your-certificate-arn
domain: your-domain-name
subnets: your-subnets
securityGroups: your-security-groups
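This values file is also where the "different variables for different environments" idea from the Helm section comes to life: you can layer an extra values file per environment on top of this one. A sketch of what that could look like, where values-staging.yaml is a hypothetical override file rather than part of this post's setup:
helm upgrade my-first-app ./chart --install --create-namespace \
  --values=./chart/values.yaml \
  --values=./chart/values-staging.yaml \
  --namespace my-first-app-staging
Later values files win, so values-staging.yaml only needs to contain the keys that differ, such as domain or subnets.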
Now for the templates, starting with chart/templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.applicationName }}
  namespace: {{ .Values.applicationName }}
spec:
  replicas: 2
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: {{ .Values.applicationName }}
  template:
    metadata:
      labels:
        app: {{ .Values.applicationName }}
    spec:
      containers:
        - name: {{ .Values.applicationName }}
          imagePullPolicy: Always
          image: nginx:1.19.4
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /usr/share/nginx/html/index.html
              name: nginx-conf
              subPath: index.html
      volumes:
        - name: nginx-conf
          configMap:
            name: {{ .Values.applicationName }}-configmap
This is the configuration file that defines our deployment. You can see there are a few lines with {{ .Values.applicationName }}. This is how we use a variable defined in our values file inside the chart's templates.
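If you'd like to see exactly what Helm produces after substitution, without touching the cluster at all, helm template prints the rendered manifests to stdout. A quick preview, assuming Helm 3 is installed locally:
helm template my-first-app ./chart --values ./chart/values.yaml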
Next, chart/templates/configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.applicationName }}-configmap
  namespace: {{ .Values.applicationName }}
data:
  index.html: |
    <html>
      <head>
        <title>My first Helm deployment!</title>
      </head>
      <body>
        <h1>My first Helm deployment!</h1>
        <p>Thanks for checking out my first Helm deployment.</p>
      </body>
    </html>
This config map just defines a simple index page that we'll display for our app.
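Once everything is deployed (we'll get there shortly), you can eyeball this page without going through the load balancer at all. A quick spot check, assuming kubectl is pointed at the cluster:
kubectl -n my-first-app port-forward deploy/my-first-app 8080:80
curl http://localhost:8080/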
Then comes chart/templates/service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.applicationName }}
  namespace: {{ .Values.applicationName }}
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  type: NodePort
  selector:
    app: {{ .Values.applicationName }}
The NodePort type matters here: it's how the ALB that our ingress creates reaches the pods through the cluster's nodes.
The last template is chart/templates/ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.applicationName }}
  namespace: {{ .Values.applicationName }}
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/subnets: {{ .Values.subnets }}
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/security-groups: {{ .Values.securityGroups }}
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: {{ .Values.certArn }}
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
    - host: {{ .Values.applicationName }}.{{ .Values.domain }}
      http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: {{ .Values.applicationName }}
              servicePort: 80
Note the two path blocks: the first sends plain-HTTP traffic to the ssl-redirect action defined in the annotations, bouncing everything to HTTPS, and the second routes the actual requests to our service.
That covers the chart. The last file is .gitlab-ci.yml at the root of the repository:
stages:
  - deploy

variables:
  DOCKER_HOST: tcp://localhost:2375/
  DOCKER_DRIVER: overlay2
  APP_NAME: my-first-app

deploy:
  stage: deploy
  image: alpine/helm:3.2.1
  script:
    - helm upgrade ${APP_NAME} ./chart --install --create-namespace --values=./chart/values.yaml --namespace ${APP_NAME}
  rules:
    - if: $CI_COMMIT_BRANCH == 'master'
      when: always
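Before committing, you can run the same command from your workstation with --dry-run appended to confirm the chart renders and the release would apply cleanly. A quick sanity check, assuming your local kubeconfig points at the same cluster:
helm upgrade my-first-app ./chart --install --create-namespace --values=./chart/values.yaml --namespace my-first-app --dry-run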
Well, after you have all the files defined and your infrastructure meets the prerequisites, there's not much left to do.
If you commit these files, GitLab will interpret your .gitlab-ci.yml file and initiate a pipeline. Our pipeline has just one stage and one job (deploy). It spins up a container in the cluster from the alpine/helm:3.2.1 image and runs our script command, which does all of the heavy lifting: creating every resource required in our namespace and starting our application.
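Once the job goes green, you can confirm what Helm created, again assuming kubectl access to the cluster; the ADDRESS column on the ingress is the ALB hostname you'll want for the next step:
kubectl get pods,svc,ingress -n my-first-app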
If you create a DNS record in Route53 like my-first-app.my-domain.com with an A (alias) record pointing at the load balancer the ingress controller created, you'll see the index page we defined in the configmap!