Leveraging Hardened Docker Images: How We Stay Secure and Eliminate Image Maintenance Overhead

Alex Podobnik · Mar 26, 2026

Why Hardened Images?

The Default Is a Liability 

When developers choose base images freely, the result is always the same: a fragmented estate of ubuntu:latest, python:3.9, node:18, and similar variants sitting months or years behind current releases. Each one accumulates CVEs silently. No one owns the update cycle, and security debt compounds.

At NextLink Labs, we recognized that base image maintenance is undifferentiated work: it doesn't make our clients' infrastructure better, it just keeps the lights on. The answer was to stop doing it ourselves.

What We Use Instead 

We consume hardened, minimal base images from maintained upstream sources rather than building our own. This gives us three things simultaneously: a smaller attack surface, a reduced CVE burden, and zero internal maintenance overhead.

Docker maintains and publishes a library of hardened images through Docker Official Images and Docker Verified Publishers on Docker Hub. For higher-assurance workloads, Docker also offers Docker Hardened Images (DHI), a commercially supported catalog of minimal, distroless-style images built to strict CIS benchmarks, with signed provenance and guaranteed patch SLAs. These sit alongside community-maintained sources.

What "hardened" means in practice 

Hardened images differ from standard base images in several concrete ways:

1. Non-root USER directive with explicit UID/GID, eliminating one of the most common container misconfigurations.

2. Read-only filesystem at the Dockerfile layer, with writable paths declared explicitly.

3. Package managers and shells removed, or absent entirely in distroless variants, eliminating whole categories of post-exploitation tooling.

4. SUID/SGID bits stripped from binaries.

5. Multi-stage build patterns enforced by the base image design, so build-time secrets cannot leak into image layers.

The net effect is that a hardened base image starts closer to CIS Docker Benchmark Level 1 compliance than a standard image ends up after manual hardening.
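Some of these properties can be spot-checked locally. The sketch below scans a `tar -tv` listing of an exported container filesystem for setuid/setgid bits; the Docker invocation and image name in the trailing comment are illustrative, not a prescribed workflow.

```shell
# has_setid_bits reads a `tar -tv` listing on stdin and fails (non-zero exit)
# if any entry carries a setuid/setgid bit (an 's' or 'S' in the mode string).
has_setid_bits() {
  awk '$1 ~ /[sS]/ { print "setid binary: " $NF; found = 1 }
       END { exit found }'
}

# Intended use against a real image (requires Docker; image name illustrative):
#   cid=$(docker create docker/dhi-node:20-runtime)
#   docker export "$cid" | tar -tv | has_setid_bits && echo "no setid binaries"
#   docker rm "$cid" >/dev/null
```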

How We Integrate Them

We treat base image selection as an engineering standard, not a developer preference. In practice, this means migrating away from general-purpose images like node:20-slim to Docker Hardened Images using a multi-stage build pattern: a build stage for compilation and dependency installation, and a minimal runtime stage with no shell, no package manager, and a non-root user by default.

Getting Started 

Pick your runtime from the approved image catalog, swap your FROM directive to the appropriate DHI base, and split your Dockerfile into build and runtime stages if it isn't already. Test the build locally, then push; the pipeline handles the rest.

If your service has any of the following, flag it with the platform team before migrating:

- Writes to the local filesystem at runtime
- Runs scripts that require a shell (bash, sh)
- Uses a runtime not yet in the approved catalog
- Already has a Dockerfile with more than one FROM (complex multi-stage builds may need review)

For straightforward stateless services the process typically takes under two hours. Once merged, the nightly pipeline automatically picks up future base image updates; no further action is needed from the service team.
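A rough pre-flight check for those review conditions can be automated. The sketch below greps a Dockerfile for the patterns most likely to need platform-team attention; the heuristics are illustrative, not our actual review criteria.

```shell
# migration_flags prints a review reason for each risky pattern found in a
# Dockerfile; empty output suggests a straightforward migration.
migration_flags() {
  df="$1"
  froms=$(grep -ciE '^FROM ' "$df" || true)
  if [ "$froms" -gt 1 ]; then
    # Existing multi-stage builds may need restructuring around DHI stages.
    echo "already multi-stage ($froms FROM lines): review"
  fi
  if grep -qiE '^(CMD|ENTRYPOINT).*(bash|[" /]sh)' "$df"; then
    # Distroless runtime stages ship no shell at all.
    echo "entrypoint needs a shell: review"
  fi
  if grep -qiE '^VOLUME ' "$df"; then
    echo "declares a writable volume: review"
  fi
}
```

Running `migration_flags Dockerfile` in a pre-commit hook or CI step surfaces the flags before anyone starts the migration.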

Full image catalog: https://hub.docker.com/hardened-images/catalog

Dockerfile Configuration Example 

The following example shows a Node.js service before and after DHI migration:

Before (Current Dockerfile):

FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]


After (DHI Multi-Stage):

# Build stage - has npm and build tools
FROM docker/dhi-node:20-dev AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
# Runtime stage - minimal, no shell
FROM docker/dhi-node:20-runtime
WORKDIR /app
COPY --from=builder --chown=1001:1001 /app/node_modules ./node_modules
COPY --chown=1001:1001 . .
USER 1001
EXPOSE 3000
CMD ["node", "server.js"]


Key changes: split into builder and runtime stages, -dev variant for build tooling, -runtime variant for production, explicit non-root USER 1001, and proper file ownership via --chown.

Once the Dockerfile is updated, our CI pipeline takes over. All FROM directives are validated against an approved image allowlist before any application code is compiled. Every built image is scanned with Docker Scout. Builds fail on CRITICAL CVEs and a signed SBOM is attached to every pipeline run. In Kubernetes environments, OPA/Kyverno admission controllers ensure only signed, DHI-based images can reach production namespaces, with Falco providing runtime behavioral monitoring as a final layer.
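The allowlist check itself can be as simple as parsing FROM lines before the build runs. The sketch below approximates it in shell; the `docker/dhi-` prefix stands in for whatever the approved list actually contains, and stage aliases are skipped so multi-stage builds pass.

```shell
# check_from_allowlist fails if any FROM in the given Dockerfile references a
# base outside the approved prefix; "docker/dhi-" is a placeholder allowlist.
check_from_allowlist() {
  awk -v allowed="docker/dhi-" '
    toupper($1) == "FROM" {
      img = $2
      # Record stage aliases (FROM ... AS name) so later FROM <alias> passes.
      if (toupper($3) == "AS") aliases[$4] = 1
      if (img in aliases) next
      if (index(img, allowed) != 1) { print "disallowed base: " img; bad = 1 }
    }
    END { exit bad }
  ' "$1"
}
```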

Pipeline Configuration Example 

The following GitLab CI job definition illustrates what this implementation looks like:

variables:
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  DHI_REGISTRY: docker.io/docker

stages:
  - build
  - test
  - security
  - deploy

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG

security:
  stage: security
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker pull $IMAGE_TAG
    # every FROM in the Dockerfile must reference an approved DHI base
    - '! grep -iE "^FROM " Dockerfile | grep -iv "docker/dhi-"'
    - docker scout cves $IMAGE_TAG --exit-code --only-severity critical
    - docker scout sbom $IMAGE_TAG --format spdx > sbom.json
  artifacts:
    reports:
      sbom: sbom.json
  allow_failure: false


Debugging 

Since no shell is available inside the container, debugging works from the outside when needed:

# Kubernetes
kubectl debug -it <pod-name> --image=busybox --target=<container-name>

# Docker Desktop / Engine
docker debug <container-name>


The Maintenance Model

The key advantage of using Docker Hardened Images is that Docker handles the CVE patching cycle for us. When a vulnerability is disclosed in OpenSSL, glibc, or any other base image dependency, Docker patches and republishes the affected DHI image. We rebuild our application layer on top of the new digest, which means no internal triage and no manual base image updates.

We automate this with a nightly pipeline that scans our current base image digests against updated CVE feeds. When new HIGH or CRITICAL CVEs appear in a base we use, the pipeline triggers an automated application image rebuild and opens a merge request for review. Mean base image age stays in the low double digits of days, rather than months.
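The core of that nightly job is a digest comparison. The sketch below shows the decision in isolation; the pinned digest would come from our build metadata, and the `docker buildx imagetools inspect` invocation in the comment is one way to fetch the current one (image name illustrative).

```shell
# base_is_stale succeeds when the pinned base digest no longer matches the
# digest published upstream, i.e. a rebuild should be triggered.
base_is_stale() {
  pinned="$1"    # digest recorded at the last rebuild
  current="$2"   # digest the registry reports today
  [ -n "$pinned" ] && [ -n "$current" ] && [ "$pinned" != "$current" ]
}

# Fetching the current digest might look like:
#   docker buildx imagetools inspect docker/dhi-node:20-runtime \
#     --format '{{.Manifest.Digest}}'
# A mismatch triggers the automated rebuild and opens a merge request.
```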

The tradeoff is straightforward: we give up free-form base image customization, and in return we get a continuously patched foundation without dedicating any engineering time to maintaining it.

What this avoids 

Without DHI, the typical failure mode is silent accumulation. A developer picks python:3.9 in 2022 and two years later it's still in production carrying dozens of unpatched CVEs because no one owns the update cycle and touching it risks breaking something. Multiply that across hundreds of images and the security posture degrades without anyone noticing.

DHI breaks that cycle at the source. Because Docker maintains and refreshes the base images, the base is never the problem. Engineering attention stays where it belongs: application dependencies.

Summary

Hardened base images are not a security project; they are an engineering efficiency decision. By consuming well-maintained, minimal upstream bases rather than maintaining our own, NextLink Labs reduces its CVE exposure, satisfies compliance requirements as a side effect of normal build processes, and frees engineering time for work that actually differentiates our client delivery.

Alex Podobnik

Author at NextLink Labs