Featured Case Study
CI/CD Migration • DevOps Transformation
GitLab Self-Hosted Modernization
Containerizing & Upgrading a Legacy GitLab Instance to 18.9.1
Published: April 10, 2026
Key Results
Migration Scope
- 9 upgrade hops
- From GitLab 16.11.10 to 18.9.1, stopping at every required background migration checkpoint.
Operational Execution
- ~4 hours total migration window
- Planned, structured downtime
Data Integrity & Validation
- 100% data integrity
- Zero data loss across all hops, including registry, LFS, artifacts, and secrets.
Deployment & Future Scalability
- Minutes to deploy
- Future upgrades now take a single image tag change and a container restart.
Executive Summary
A growing technology client operated a legacy GitLab EE instance (v16.11.10) installed directly on a bare-metal host. As the team scaled, maintaining an uncontainerised GitLab deployment was creating real friction: manual upgrades, inconsistent environments, slow disaster recovery, and tightly coupled SSL and database configuration were all slowing the engineering team down.
NextLink Labs led the full migration to a Docker Compose-based deployment, executed a nine-hop major version upgrade from 16.11.10 to 18.9.1, and resolved a series of complex infrastructure challenges along the way. The result is a modernised, reproducible GitLab environment that can be upgraded, backed up, and recovered in a fraction of the time previously required.
Client Context
The client is a mid-size technology organization with a growing engineering team that relies on GitLab for source code management, CI/CD pipelines, container registry, and internal package distribution.
Their environment included:
GitLab EE 16.11.10 on a bare-metal Linux host
PostgreSQL RDS database
Container registry backed by AWS S3
LDAP authentication against Active Directory
SMTP delivery via AWS SES
GitLab Pages with access control enabled
Custom SSL certificates for the main domain and Pages domain
The Challenge
Operational brittleness: GitLab was tightly coupled to the host OS, which was out of date and no longer receiving security updates. Upgrades required careful manual coordination, and any configuration change risked destabilising adjacent services. There was no clean way to test changes before applying them to production. Compounding this, the database was running on an older PostgreSQL version on Amazon RDS, introducing an additional compatibility constraint that had to be resolved before the upgrade chain could begin.
Significant upgrade debt: The instance was two major versions behind the current GitLab release. GitLab's upgrade requirements mandate stopping at specific background migration checkpoints; skipping any checkpoint risks database inconsistency and is unsupported. With nine mandatory stops between 16.11.10 and 18.9.1, the upgrade path was complex and non-trivial to execute safely.
Recovery complexity: Backup and restore procedures were manual, lacked automation, and had never been tested end-to-end. Without a containerised environment, rollback meant reverting the entire host OS state. There was no off-host backup destination, meaning a failure of the host itself would have taken the backups with it.
Our Approach
Phase 1: Containerisation Design
Before touching the upgrade path, NextLink Labs redesigned the deployment architecture. A Docker Compose configuration was written to externalise all stateful concerns — configuration, logs, data, and SSL certificates — into host volume mounts. This made the container itself fully disposable; the GitLab version could be changed with a single image tag update.
Key decisions included externally mounted read-only SSL certificate volumes to decouple cert rotation from the container lifecycle, and explicit port mappings for HTTP, HTTPS, SSH, and the container registry.
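A minimal sketch of the kind of Compose file described above; the image tag, hostname, host paths, and port numbers here are illustrative assumptions rather than the client's actual values.

```shell
# Write an illustrative docker-compose.yml. All stateful concerns live in
# host volume mounts, so the container itself is disposable and an upgrade
# is a one-line image tag change.
cat > docker-compose.yml <<'EOF'
services:
  gitlab:
    image: gitlab/gitlab-ee:18.9.1-ee.0   # the single line changed per upgrade hop
    hostname: gitlab.example.com          # placeholder hostname
    restart: unless-stopped
    ports:
      - "80:80"      # HTTP
      - "443:443"    # HTTPS
      - "2222:22"    # Git over SSH
      - "5050:5050"  # container registry
    volumes:
      - /srv/gitlab/config:/etc/gitlab     # gitlab.rb and secrets
      - /srv/gitlab/logs:/var/log/gitlab
      - /srv/gitlab/data:/var/opt/gitlab   # repositories, uploads, backups
      - /etc/gitlab-ssl:/etc/gitlab/ssl:ro # certs mounted read-only
EOF
```

Mounting the certificates read-only means certs can be rotated on the host without touching the container lifecycle.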
Phase 2: Pre-Migration Database Remediation
Before beginning the upgrade chain, NextLink Labs audited and corrected database object ownership. A comprehensive SQL remediation script was developed to reassign all schemas, tables, sequences, views, materialised views, and stored functions to the correct application user.
A verification query confirmed zero ownership violations before any upgrade hop was initiated. For resilience on subsequent hops, a PostgreSQL event trigger approach was also implemented to automatically correct ownership on newly created objects during migrations — eliminating the root cause rather than just the symptom.
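A simplified sketch of the remediation approach described above, written to a file for review before applying. The role name `gitlab` is an assumption, and the real script covered more object types (views, materialised views, stored functions) than shown here.

```shell
# Emit a cut-down version of the ownership-remediation SQL. The first two
# queries generate ALTER ... OWNER statements for mis-owned objects; the
# event trigger re-applies ownership to objects created during migrations.
cat > remediate_ownership.sql <<'EOF'
-- Generate fixes for tables and sequences not owned by the app role.
SELECT format('ALTER TABLE %I.%I OWNER TO gitlab;', schemaname, tablename)
  FROM pg_tables
 WHERE schemaname = 'public' AND tableowner <> 'gitlab';

SELECT format('ALTER SEQUENCE %I.%I OWNER TO gitlab;', schemaname, sequencename)
  FROM pg_sequences
 WHERE schemaname = 'public' AND sequenceowner <> 'gitlab';

-- Event trigger: correct ownership of newly created objects automatically.
CREATE OR REPLACE FUNCTION fix_object_owner() RETURNS event_trigger AS $$
DECLARE obj record;
BEGIN
  FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands()
             WHERE command_tag LIKE 'CREATE %'
  LOOP
    EXECUTE format('ALTER %s %s OWNER TO gitlab',
                   obj.object_type, obj.object_identity);
  END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER fix_owner_on_create
  ON ddl_command_end EXECUTE FUNCTION fix_object_owner();
EOF
# Review, then apply with something like: psql "$DATABASE_URL" -f remediate_ownership.sql
```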
Phase 3: Nine-Hop Upgrade Execution
Each upgrade hop followed a consistent protocol:
Full backup before every hop (tarball + secrets file)
Image tag updated in docker-compose.yml to the next stop version
Container pulled and restarted; logs monitored until startup confirmed
Background migration count verified at zero before proceeding
Health checks run after every hop
Functional verification: login, repository access, CI runner connectivity
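The per-hop protocol above can be sketched as a small script. The container name, file paths, and the example version numbers are illustrative assumptions, and the exact background-migration check varies by GitLab version.

```shell
#!/usr/bin/env sh
set -eu

# Bump the pinned image tag in a Compose file to the next stop version.
bump_tag() {  # usage: bump_tag <compose-file> <new-version>
  sed -i "s|gitlab/gitlab-ee:.*|gitlab/gitlab-ee:$2-ee.0|" "$1"
}

# One upgrade hop (requires Docker; defined but not invoked in this demo).
upgrade_hop() {  # usage: upgrade_hop <new-version>
  docker exec gitlab gitlab-backup create              # full backup before the hop
  cp /srv/gitlab/config/gitlab-secrets.json /srv/backups/
  bump_tag docker-compose.yml "$1"
  docker compose pull && docker compose up -d
  # One version-dependent way to confirm background migrations are at zero
  # before proceeding (the Admin Area monitoring page also shows this):
  docker exec gitlab gitlab-rails runner \
    'puts Gitlab::BackgroundMigration.remaining'
}

# Demo of the tag bump alone, safe to run anywhere:
printf 'image: gitlab/gitlab-ee:16.11.10-ee.0\n' > demo-compose.yml
bump_tag demo-compose.yml 17.3.7
cat demo-compose.yml
```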
The container registry was already running on the system; when we migrated to v2 metadata, we took the opportunity to introduce S3 as the storage backend for container images, avoiding object duplication in the process.
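In the Omnibus package, registry storage is configured through gitlab.rb; the change involved resembles the fragment below, with the bucket name and region as placeholders. In practice the fragment is appended to /etc/gitlab/gitlab.rb and applied with gitlab-ctl reconfigure.

```shell
# Write an illustrative gitlab.rb fragment pointing registry storage at S3.
cat > registry-s3.rb <<'EOF'
registry['storage'] = {
  's3' => {
    'bucket' => 'example-gitlab-registry',  # placeholder bucket
    'region' => 'us-east-1'                 # placeholder region
  }
}
EOF
```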
Results & Business Impact
Operational velocity
Future GitLab upgrades now require updating a single line in the Compose file, pulling the new image, and restarting the container. What previously required host-level coordination and significant downtime is now a routine, low-risk operation.
Environment reproducibility
The entire GitLab deployment is defined in a single composable configuration file. New environments for testing upgrades or disaster recovery can be spun up from the same config within minutes.
Resilience and recoverability
The containerised architecture cleanly separates application state from the container runtime. Rollback is now deterministic: revert the image tag, restore the backup, restore the secrets file, and reconfigure. Recovery time went from hours to under 30 minutes. Backups are now automated and shipped directly to S3, giving the client a reliable, hands-off recovery point without depending on local disk or manual operator action.
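One way to implement the automated, S3-shipped backups described above; the script location, container name, and bucket are assumptions, and GitLab can also upload backups directly via its own backup_upload settings.

```shell
# Write a nightly backup script: create a GitLab backup, then ship the
# tarballs and the secrets file (which is never included in the tarball)
# off-host to S3.
cat > gitlab-backup.sh <<'EOF'
#!/bin/sh
set -eu
docker exec gitlab gitlab-backup create
aws s3 sync /srv/gitlab/data/backups s3://example-gitlab-backups/backups/
aws s3 cp /srv/gitlab/config/gitlab-secrets.json \
          s3://example-gitlab-backups/secrets/
EOF
chmod +x gitlab-backup.sh

# Example crontab entry: run nightly at 02:00.
echo '0 2 * * * /usr/local/bin/gitlab-backup.sh'
```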
Eliminated upgrade debt
The client is now on the current GitLab EE release with a clear, documented procedure for staying current. The nine-hop upgrade surfaced and resolved all latent database ownership issues, CI/CD configuration deprecations, and registry metadata inconsistencies — leaving a clean, well-understood foundation.
Improved security posture
Externalised secrets, TLS-verified LDAP connections, properly scoped database permissions, and a containerised runtime collectively improved the client's security posture compared to the previous bare-metal setup.
Why NextLink Labs
GitLab migrations and infrastructure modernisation are core NextLink Labs competencies. Our team has executed GitLab migrations across a range of scales — from single-instance Docker deployments to large-scale GitLab.com migrations involving hundreds of projects and groups.
We bring a structured, audit-driven approach to infrastructure work: every change is documented, every hop is validated, and rollback paths are defined before work begins. We don't just execute migrations — we leave clients with the documentation, runbooks, and institutional knowledge to operate confidently going forward.
Ready to modernise your GitLab infrastructure?
Project Details
Industry
Biotechnology
Company Size
500-1000 employees
Project Duration
4 weeks
Team Size
8 engineers
Services Provided
DevOps Transformation, Infrastructure Modernisation, GitLab Migration
Related Case Studies

Managed the complete migration of 15 legacy applications to a secure AWS environment, allowing the client's internal team to focus on innovation. This transition reduced system incidents by 60% and directly contributed to an 8% boost in sales conversion.

Orchestrated a secure transition to a hybrid cloud environment for a government agency, ensuring strict NIST regulatory compliance while increasing operational agility and system reliability across public service applications.

Rebuilt and mapped complex legacy infrastructure to meet strict GDPR, PII, and PHI regulations. This modernization provided full architectural visibility and enabled the client to successfully pass critical security and compliance audits.
Ready to Create Your Success Story?
Let's discuss how we can help you achieve similar results for your organization.
