How to Generate Terraform Modules with Claude

Alex Podobnik · Apr 24, 2026

Most LLM-generated Terraform is bad. You ask for a module, you get something that looks right, and then terraform validate lights up with errors. Provider blocks inside child modules, every variable typed as string, no tags, no outputs.

But the problem usually isn't the tool. It's the prompt. With some upfront work you can get scaffolds that are close to merge-ready. Here's how we do it at NextLink Labs.

Use Claude Code

You can use the chat interface for this, but Claude Code is better. It reads your existing modules, picks up on conventions, and writes files directly into your repo. No copy-pasting. Just cd into your Terraform repo root and go.

Tell It How You Write Terraform

Before asking for any resources, lay out your conventions. Most people skip this and then wonder why the output doesn't match their style.

If you're using Claude Code, the best way to do this is with a CLAUDE.md file at your repo root. Claude Code reads it automatically at the start of every session, so you set your conventions once and they stick across every future interaction. No re-prompting.

Here's what ours looks like:

# Terraform Conventions
- Targeting Terraform >= 1.14 with the AWS provider ~> 6.0
- No provider blocks inside child modules
- All resources must include a `tags` argument that merges a
  required `var.tags` input with resource-specific tags
- Use `locals` for computed values and name construction, not
  inline expressions
- Variables should use specific types (objects, lists) rather than
  bare strings where appropriate
- Include validation blocks on variables where there are known
  constraints
- Every module needs: main.tf, variables.tf, outputs.tf,
  versions.tf
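As a concrete illustration of the tags and locals conventions above, here's the shape we expect resources to take (the resource, variable, and local names are just placeholders):

```hcl
locals {
  # Name construction lives in locals, not inline expressions
  cluster_name = "${var.name_prefix}-eks"
}

resource "aws_cloudwatch_log_group" "control_plane" {
  name              = "/aws/eks/${local.cluster_name}/cluster"
  retention_in_days = 30

  # Required var.tags input merged with resource-specific tags
  tags = merge(var.tags, {
    Name = local.cluster_name
  })
}
```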


This kills about 80% of the cleanup you'd otherwise do. You can also add project-specific context in there, like naming conventions or which AWS accounts map to which environments. The more you put in CLAUDE.md, the less you repeat yourself in prompts.

Describe the Boundary

Don't just say "create an EKS module." Tell it what goes in, what comes out, and what the module owns.

Create a module for an EKS cluster with the following boundary:

Inputs:
- VPC ID and private subnet IDs (the networking module handles VPC creation)
- Cluster name and Kubernetes version
- Node group configuration (instance types, scaling limits, disk size)
- Tags map

Outputs:
- Cluster endpoint and certificate authority
- OIDC provider ARN (needed by the IAM module for IRSA)
- Cluster security group ID
- Node group IAM role ARN

The module should manage:
- The EKS cluster itself
- A managed node group (single, not multiple)
- The cluster IAM role and its policy attachments
- The node group IAM role and its policy attachments
- CloudWatch log group for control plane logging

Networking lives in another module. IRSA lives in another module.
This one does the cluster and its immediate dependencies.
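Translated into variables.tf, that boundary might look something like this (names and object shape are illustrative, not a fixed schema):

```hcl
variable "vpc_id" {
  description = "VPC created by the networking module"
  type        = string
}

variable "private_subnet_ids" {
  description = "Private subnets for the node group"
  type        = list(string)
}

variable "node_group" {
  description = "Managed node group configuration"
  type = object({
    instance_types = list(string)
    min_size       = number
    max_size       = number
    disk_size      = number
  })
}

variable "tags" {
  description = "Tags applied to every resource in the module"
  type        = map(string)
}
```

Typed object inputs like `node_group` are exactly what the "specific types" convention in CLAUDE.md is asking for, and they make `terraform plan` fail loudly when a caller passes the wrong shape.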


If you're vague about boundaries, Claude fills the gaps. Sometimes that's fine. Other times your EKS module comes back owning resources that belong to your networking or IAM modules.

Review It in Order

Go through the output file by file.

versions.tf to confirm provider and Terraform version constraints are right.


variables.tf is where most problems hide. Check for loose types, missing validation blocks, weird defaults, and copy-paste descriptions.


outputs.tf to make sure everything you asked for is there and actually points to the right resource. Claude likes to reference aws_eks_cluster.this when the resource is named aws_eks_cluster.main.
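For example, if the cluster resource is named aws_eks_cluster.main, every output has to follow suit (resource names here are illustrative):

```hcl
output "cluster_endpoint" {
  description = "EKS API server endpoint"
  value       = aws_eks_cluster.main.endpoint
}

output "oidc_provider_arn" {
  description = "OIDC provider ARN, consumed by the IAM module for IRSA"
  value       = aws_iam_openid_connect_provider.main.arn
}
```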


main.tf last. By now you've caught the structural issues. You're just scanning for sane config, correct tag merging, and no circular references.


Fix, Don't Regenerate

You will find issues. Don't start over. Just tell Claude what's wrong.

Two issues:

1. The node group IAM role is missing the
   AmazonSSMManagedInstanceCore policy attachment. We need that for
   Session Manager access.

2. The cluster_version variable should have a validation block that
   only allows versions matching the pattern "1.XX".
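The second fix should come back as something like this (a sketch; tighten the regex to whatever versions you actually support):

```hcl
variable "cluster_version" {
  description = "Kubernetes version for the EKS cluster"
  type        = string

  validation {
    condition     = can(regex("^1\\.[0-9]{2}$", var.cluster_version))
    error_message = "cluster_version must match the pattern \"1.XX\", e.g. \"1.31\"."
  }
}
```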


Same as code review. You don't rewrite a whole PR over a missing policy attachment.

Validate and Plan

terraform fmt -recursive
terraform validate
terraform plan -var-file=example.tfvars


fmt fixes formatting. validate catches syntax problems. plan is the one that actually matters because it'll surface bad ARN formats, broken references, and missing data sources.

In Claude Code you can have it run these and fix whatever breaks. That feedback loop is fast.

Automate It in CI

Running these checks locally is fine while you're iterating. But once the module is in a merge request, you want the same checks running automatically. Here's a basic GitLab CI pipeline that does that:

stages:
  - lint
  - validate
  - plan

fmt:
  stage: lint
  image: hashicorp/terraform:1.14
  script:
    - terraform fmt -check -recursive
  rules:
    - changes:
      - "**/*.tf"

validate:
  stage: validate
  image: hashicorp/terraform:1.14
  script:
    - terraform init -backend=false
    - terraform validate
  rules:
    - changes:
      - "**/*.tf"

plan:
  stage: plan
  image: hashicorp/terraform:1.14
  script:
    - terraform init
    - terraform plan -out=tfplan
  artifacts:
    paths:
      - tfplan
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'


Nothing too complicated. Three stages: fmt checks that formatting is clean, validate catches syntax issues without needing a backend, and plan runs on merge requests so reviewers can see what the module actually produces. The -backend=false flag on validate is important because you don't want your validate stage needing cloud credentials.

In practice you'll probably want to add your AWS credentials as CI/CD variables and maybe cache the .terraform directory so init isn't slow on every run. But this covers the basics and catches the same things you'd catch locally.
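A minimal sketch of that caching, keyed per branch so each merge request reuses its own provider downloads (adjust paths and key to your runner setup):

```yaml
# Shared across jobs on the same branch; speeds up `terraform init`
cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - .terraform/
```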

Gotchas

- Data sources vs resources. Claude sometimes uses a data block to look up something the module should be creating. If a value crosses the module boundary, it should be a variable or an output, not a data source.

- Hardcoded ARN partitions. Watch for literal arn:aws: prefixes instead of data.aws_partition.current.partition.

- Broad IAM policies. Claude reaches for AmazonEKSClusterPolicy and similar managed policies without thinking about least privilege. Fine for scaffolding, but tighten them before production.

- Missing depends_on. Implicit dependencies through resource references are usually correct, but AWS has spots where you need explicit ones, like IAM policy attachments completing before a node group can use the role.
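For the node group case, the explicit dependency looks like this (resource names and attachment list are illustrative):

```hcl
resource "aws_eks_node_group" "main" {
  cluster_name  = aws_eks_cluster.main.name
  node_role_arn = aws_iam_role.node.arn
  subnet_ids    = var.private_subnet_ids

  scaling_config {
    desired_size = 2
    min_size     = 1
    max_size     = 4
  }

  # Ensure the role's policies are attached before EKS tries to use them;
  # without this, node group creation can race the attachments and fail.
  depends_on = [
    aws_iam_role_policy_attachment.node_worker,
    aws_iam_role_policy_attachment.node_cni,
    aws_iam_role_policy_attachment.node_ecr,
  ]
}
```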

Wrapping Up

This doesn't replace knowing Terraform. You still need to know what resources you need and where to draw module boundaries. What it saves you is the 30 to 60 minutes of boilerplate writing that doesn't require much thought.

We've been doing this internally at NextLink and it's cut our time to first review pretty significantly. The generated code is never perfect, but a 90% scaffold that needs some cleanup beats an empty main.tf.

Be specific about conventions. Describe boundaries, not just resources. And run terraform plan before you trust anything.

Alex Podobnik

Author at NextLink Labs