Managing AWS Organizations and SCPs with Infrastructure as Code

Alex Podobnik · Apr 22, 2026

Most AWS environments grow in the same direction: one account becomes two, two becomes five, and eventually someone is manually clicking through the console trying to remember which account has which guardrails. We've seen this pattern repeatedly, both in our own infrastructure at NextLink Labs and across the client engagements we run. AWS Organizations with Service Control Policies (SCPs) is the right fix. Terraform is the right tool to manage it.

This post covers the account structure we use, the Terraform patterns behind it, and the SCP logic you need to lock things down without breaking your teams.

Why Organizations & Terraform?

AWS Organizations sits above your individual accounts and gives you three things worth caring about: consolidated billing, Service Control Policies, and delegated administration for security services like GuardDuty, Security Hub, and Config.

SCPs are a ceiling, not a floor. An SCP that allows s3:* does not grant anyone S3 access. It just means S3 access is not blocked at the organization level. IAM still has to permit it. SCPs can only restrict permissions, not grant them.
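The ceiling-versus-floor distinction is easier to see as code. Here is a rough sketch of the evaluation model in Python; it is a simplified model that ignores Resource, Condition, and NotAction, which real SCP evaluation also considers:

```python
from fnmatch import fnmatch

def scp_allows(action, statements):
    # Simplified model: the action must match some Allow statement
    # and must not match any Deny statement.
    allowed = any(s["Effect"] == "Allow" and fnmatch(action, s["Action"])
                  for s in statements)
    denied = any(s["Effect"] == "Deny" and fnmatch(action, s["Action"])
                 for s in statements)
    return allowed and not denied

def effective_permission(action, iam_grants, scp_statements):
    # An SCP never grants anything: access requires an IAM allow
    # AND the action clearing the SCP ceiling.
    return iam_grants and scp_allows(action, scp_statements)
```

With the default FullAWSAccess SCP (a single `{"Effect": "Allow", "Action": "*"}` statement), `effective_permission` collapses to whatever IAM says, which is the starting state of every new org.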

Terraform is what keeps this manageable over time. Without it, SCPs drift. Someone adds an exception through the console, nobody documents it, and six months later you're debugging a broken deployment because an SCP was silently blocking an API call. We treat any console-based SCP change as immediate technical debt.

Account Structure

Before writing a line of Terraform, you need an OU hierarchy. Here is what ours looks like at NextLink, and the same structure is what we bring to new client engagements as a starting point:

Root
├── DevOps
├── Internal Developments
├── Partner
├── Sandbox
└── Security

DevOps holds tooling accounts for CI/CD, infrastructure automation, and anything the engineering team runs internally.

Internal Developments is where we run our own product and R&D workloads, kept separate from client-facing infrastructure.

Partner contains accounts tied to AWS partner program activity, including Partner Central.

Sandbox holds accounts for experimentation. A spend cap SCP is attached here so things don't get out of hand.

Security is locked down. Logs from CloudTrail, Config, and VPC flow logs aggregate here, and nobody, including admins, can delete them.

Terraform Structure and the Module We Use

We keep Organizations Terraform in a dedicated root module, separate from workload infrastructure. It runs from the management account with elevated permissions, and changes here have a blast radius across every account in the org.

Rather than building all of this from scratch every time, we have an internal module that handles the OU structure, account creation, SCP attachment, and the backend configuration in one pass. It cuts the setup time significantly on new client engagements and ensures we are not reinventing the same patterns each time. The directory layout it produces looks like this:

aws-organizations/
├── main.tf
├── variables.tf
├── outputs.tf
├── modules/
│   ├── organizational-unit/
│   └── scp/
└── policies/
    ├── deny-root-usage.json
    ├── deny-region-lockdown.json
    ├── deny-leave-org.json
    ├── sandbox-spend-cap.json
    └── production-guardrails.json

 

Keeping SCP JSON in separate files rather than inline HEREDOCs makes diffs readable and lets you lint the JSON independently.

Core Terraform Resources

The Organization 

resource "aws_organizations_organization" "this" {
  aws_service_access_principals = [
    "cloudtrail.amazonaws.com",
    "config.amazonaws.com",
    "guardduty.amazonaws.com",
    "securityhub.amazonaws.com",
    "sso.amazonaws.com",
    "ram.amazonaws.com",
  ]

  # ALL is required for SCPs; the policy type must also be enabled
  feature_set          = "ALL"
  enabled_policy_types = ["SERVICE_CONTROL_POLICY"]
}

 

feature_set = "ALL" is required to use SCPs. If you are importing an existing org that was created with consolidated billing only, enabling ALL features requires acceptance from each member account. Plan for that change window.

Organizational Units 

resource "aws_organizations_organizational_unit" "workloads" {
  name = "Workloads"
  parent_id = aws_organizations_organization.this.roots[0].id
}

resource "aws_organizations_organizational_unit" "production" {
  name = "Production"
  parent_id = aws_organizations_organizational_unit.workloads.id
}

resource "aws_organizations_organizational_unit" "non_production" {
  name = "Non-Production"
  parent_id = aws_organizations_organizational_unit.workloads.id
}

 

Member Accounts 

resource "aws_organizations_account" "production_app" {
  name = "prod-app"
  email = "aws+prod-app@yourcompany.com"
  parent_id = aws_organizations_organizational_unit.production.id

  # Prevent Terraform from closing the account on destroy
  close_on_deletion = false

  lifecycle {
    # Email cannot be changed after creation
    ignore_changes = [email]
  }
}

 

AWS account creation is eventually consistent and takes a few minutes. If you create accounts and attach SCPs in the same apply, add depends_on chains or split it into two stages.

If your org, OUs, or accounts already exist and were created manually, you will need to import them into Terraform state before managing them. The commands follow the same pattern but the IDs are not obvious if you have not done this before.

# Import the organization itself
terraform import aws_organizations_organization.this <organization_id>

# Import an OU (use the OU ID, not the name)
terraform import aws_organizations_organizational_unit.security <ou_id>

# Import a member account
terraform import aws_organizations_account.production_app <account_id>

 

You can find the org ID, OU IDs, and account IDs in the AWS console under AWS Organizations, or by running aws organizations describe-organization and aws organizations list-organizational-units-for-parent. Run the imports before doing anything else, otherwise Terraform will try to create resources that already exist and fail.

SCP Patterns That Matter

1. Deny Root Usage 

No workload should ever use the root user. This SCP enforces that at the org level:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyRootUser",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "aws:PrincipalArn": "arn:aws:iam::*:root"
        }
      }
    }
  ]
}

 

Attach this at the OU level across the org. The management account is always exempt from SCPs, so the root access you may legitimately need there for billing operations is unaffected.

2. Region Lockdown 

Restrict accounts to the regions you actually use. This prevents accidental deployments to unmonitored regions and reduces your exposure from resource-based policies:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyNonApprovedRegions",
      "Effect": "Deny",
      "NotAction": [
        "a4b:*", "acm:*", "aws-marketplace-management:*",
        "aws-marketplace:*", "budgets:*", "ce:*",
        "cloudfront:*", "config:*", "cur:*",
        "directconnect:*", "ec2:DescribeRegions",
        "globalaccelerator:*", "health:*", "iam:*",
        "kms:*", "organizations:*", "route53:*",
        "route53domains:*", "s3:GetAccountPublic*",
        "s3:ListAllMyBuckets", "s3:PutAccountPublic*",
        "shield:*", "sts:*", "support:*",
        "trustedadvisor:*", "waf:*", "wafv2:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": [
            "us-east-1",
            "us-west-2",
            "eu-west-1"
          ]
        }
      }
    }
  ]
}

 

The NotAction list is critical here. Global services like IAM, Route 53, CloudFront, and billing have no region concept, so if you block them with a region condition you will break your accounts in ways that are confusing to debug. The list above covers the standard set; review it against any other global services you are using.
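Because the double negative (Deny plus NotAction plus StringNotEquals) is easy to get backwards, a small Python model helps when reasoning about it. This sketch uses an abbreviated exemption list; the full policy above is the source of truth:

```python
from fnmatch import fnmatch

APPROVED_REGIONS = {"us-east-1", "us-west-2", "eu-west-1"}
# Abbreviated version of the NotAction list above
GLOBAL_EXEMPTIONS = ["iam:*", "sts:*", "kms:*", "route53:*",
                     "cloudfront:*", "organizations:*", "support:*"]

def region_scp_denies(action, region):
    # The Deny fires only when BOTH hold: the action matches nothing
    # in the NotAction list, and the requested region is not approved.
    exempt = any(fnmatch(action, pattern) for pattern in GLOBAL_EXEMPTIONS)
    return not exempt and region not in APPROVED_REGIONS
```

Note the shape: a forgotten global service means `exempt` stays False, so the Deny fires on calls that have nothing to do with regional workloads.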

3. Deny Leaving the Organization 

This one prevents an account from being removed from your org. Pulling a compromised account out of the organization, which strips every SCP off it at once, is a common attacker technique:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyLeaveOrganization",
      "Effect": "Deny",
      "Action": [
        "organizations:LeaveOrganization"
      ],
      "Resource": "*"
    }
  ]
}

 

4. Protect Security Tooling 

On the Security and Infrastructure OUs, deny any action that would disable your visibility into what is happening:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ProtectSecurityServices",
      "Effect": "Deny",
      "Action": [
        "cloudtrail:DeleteTrail",
        "cloudtrail:StopLogging",
        "cloudtrail:UpdateTrail",
        "config:DeleteConfigRule",
        "config:DeleteConfigurationRecorder",
        "config:DeleteDeliveryChannel",
        "config:StopConfigurationRecorder",
        "guardduty:DeleteDetector",
        "guardduty:DisassociateFromMasterAccount",
        "guardduty:StopMonitoringMembers",
        "securityhub:DisableSecurityHub"
      ],
      "Resource": "*",
      "Condition": {
        "ArnNotLike": {
          "aws:PrincipalArn": [
            "arn:aws:iam::*:role/SecurityBreakGlassRole"
          ]
        }
      }
    }
  ]
}

 

The condition carves out a break-glass role so you are not completely locked out when you need to make a legitimate change. That role should require MFA and every assumption should be logged.
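The carveout logic can be modeled the same way. A minimal sketch, assuming the simplified case where only the Action and aws:PrincipalArn matter:

```python
from fnmatch import fnmatch

PROTECTED_ACTIONS = {
    "cloudtrail:StopLogging", "cloudtrail:DeleteTrail",
    "guardduty:DeleteDetector", "securityhub:DisableSecurityHub",
}
BREAK_GLASS = "arn:aws:iam::*:role/SecurityBreakGlassRole"

def security_scp_denies(action, principal_arn):
    # Deny protected actions unless the caller's aws:PrincipalArn
    # matches the break-glass role pattern (the ArnNotLike carveout).
    return action in PROTECTED_ACTIONS and not fnmatch(principal_arn, BREAK_GLASS)
```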

5. Sandbox Spend Control 

For sandbox accounts, we block expensive instance types at the policy level rather than relying on developers to self-police:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyExpensiveInstances",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringNotLike": {
          "ec2:InstanceType": [
            "t3.*", "t4g.*", "m5.large", "m5.xlarge"
          ]
        }
      }
    },
    {
      "Sid": "DenyRDSLargeInstances",
      "Effect": "Deny",
      "Action": "rds:CreateDBInstance",
      "Resource": "arn:aws:rds:*:*:db:*",
      "Condition": {
        "StringNotLike": {
          "rds:DatabaseClass": [
            "db.t3.*", "db.t4g.*"
          ]
        }
      }
    }
  ]
}

 

Attaching SCPs in Terraform

resource "aws_organizations_policy" "deny_root" {
  name = "deny-root-usage"
  content = file("${path.module}/policies/deny-root-usage.json")
  type = "SERVICE_CONTROL_POLICY"
}

resource "aws_organizations_policy_attachment" "deny_root_workloads" {
  policy_id = aws_organizations_policy.deny_root.id
  target_id = aws_organizations_organizational_unit.workloads.id
}

resource "aws_organizations_policy_attachment" "deny_root_sandbox" {
  policy_id = aws_organizations_policy.deny_root.id
  target_id = aws_organizations_organizational_unit.sandbox.id
}

 

SCPs can attach to OUs or to individual accounts, but OU-level attachment is almost always the right choice because new accounts inherit policies automatically when they are moved into the OU.

Testing SCPs Before Applying

SCPs are not forgiving. A misconfigured policy attached to the root can break all accounts at the same time, which is a bad situation to be in on a Friday afternoon.

Before attaching anything to a production OU, we follow a consistent process:

1. Run aws accessanalyzer validate-policy with --policy-type SERVICE_CONTROL_POLICY against the SCP JSON to catch syntax and semantic issues before apply.

2. Test on an isolated sandbox account. Create a throwaway account, attach the SCP, and verify it behaves as expected.

3. Use the IAM Policy Simulator against a sandbox principal. It does not evaluate SCPs directly, but supplying the SCP as a permissions boundary approximates the same restrictive filter.

4. Run aws organizations list-policies-for-target for the account and each parent OU to see every SCP in the inheritance path. Note that describe-effective-policy does not support SCPs, so the effective permission set, the intersection of all levels, is something you have to reason through yourself.

For the region lockdown SCP specifically, test from an account with active workloads and confirm that us-east-1 global service calls (IAM, STS, etc.) still work before rolling it out more broadly.
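The intersection reasoning can be sketched in Python. This is a simplified model (Action patterns only, no Conditions), but it captures the rule that matters: an action must clear the SCPs at every level of the path independently:

```python
from fnmatch import fnmatch

def level_allows(action, statements):
    # One level allows the action if some Allow matches it and no
    # Deny matches it (Action patterns only; Conditions omitted).
    allow = any(s["Effect"] == "Allow" and fnmatch(action, s["Action"])
                for s in statements)
    deny = any(s["Effect"] == "Deny" and fnmatch(action, s["Action"])
               for s in statements)
    return allow and not deny

def path_allows(action, levels):
    # SCPs are evaluated at the root, every intermediate OU, and the
    # account itself; the action must clear every level.
    return all(level_allows(action, level) for level in levels)
```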

It is also worth setting up an EventBridge rule to alert when any SCP is modified outside of Terraform. CloudTrail logs every organizations: API call, so catching a console change is straightforward.

resource "aws_cloudwatch_event_rule" "scp_changes" {
  name = "detect-scp-changes"
  description = "Fires when an SCP is created, updated, or deleted"
  event_pattern = jsonencode({
    source = ["aws.organizations"]
    detail-type = ["AWS API Call via CloudTrail"]
    detail = {
      eventName = [
        "CreatePolicy", "UpdatePolicy", "DeletePolicy",
        "AttachPolicy", "DetachPolicy"
      ]
    }
  })
}

resource "aws_cloudwatch_event_target" "scp_changes_sns" {
  rule = aws_cloudwatch_event_rule.scp_changes.name
  arn = aws_sns_topic.alerts.arn
}

 

This rule lives in the management account and catches any SCP modification regardless of how it was made. If something changes outside of a pipeline run, you will know about it.
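To sanity-check which CloudTrail events will fire the rule without deploying it, you can model EventBridge's matching in a unit test. This is a minimal matcher covering only the exact-string-in-list and nested-object cases this particular pattern uses, not EventBridge's full matching semantics:

```python
PATTERN = {
    "source": ["aws.organizations"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventName": [
            "CreatePolicy", "UpdatePolicy", "DeletePolicy",
            "AttachPolicy", "DetachPolicy",
        ]
    },
}

def rule_matches(event, pattern=PATTERN):
    # Minimal EventBridge-style matcher: every key in the pattern must
    # exist in the event; dict values recurse, list values mean "the
    # event value must equal one of these".
    for key, expected in pattern.items():
        if key not in event:
            return False
        if isinstance(expected, dict):
            if not isinstance(event[key], dict) or not rule_matches(event[key], expected):
                return False
        elif event[key] not in expected:
            return False
    return True
```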

State and Access Considerations

The management account Terraform state should live in an S3 backend in the management account itself, with DynamoDB locking. Do not store it in a member account. If that account is compromised or an SCP change locks you out, you lose state access.

terraform {
  backend "s3" {
    bucket = "yourcompany-tfstate-management"
    key = "organizations/terraform.tfstate"
    region = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt = true
  }
}

 

The IAM role running this Terraform needs organizations:* and the ability to assume roles in member accounts if you are managing account-level resources. Keep this role separate from your standard admin roles and log every assumption.

Running This in GitLab CI

We manage the Organizations Terraform through a dedicated GitLab CI pipeline rather than running it locally. Any change to SCPs or account structure goes through a merge request and requires sign-off from both a tech lead and a senior engineer before it is applied. Given that a bad change here can affect every account in the org simultaneously, a second and third set of eyes is not optional.

The pipeline itself is straightforward. On every push, it runs terraform plan and posts the output as an MR comment so reviewers can see exactly what will change without having to run anything locally. The apply job is manual and gated behind MR approval, so it cannot run until the required reviewers have signed off.

stages:
  - validate
  - plan
  - apply

variables:
  TF_ROOT: "aws-organizations"

validate:
  stage: validate
  script:
    - cd $TF_ROOT
    - terraform init -backend=false
    - terraform validate

plan:
  stage: plan
  script:
    - cd $TF_ROOT
    - terraform init
    - terraform plan -out=plan.tfplan
  artifacts:
    paths:
      - $TF_ROOT/plan.tfplan

apply:
  stage: apply
  script:
    - cd $TF_ROOT
    - terraform init
    - terraform apply plan.tfplan
  when: manual
  rules:
    - if: $CI_MERGE_REQUEST_APPROVED

 

The CI runner assumes an IAM role with the permissions needed to manage Organizations resources. Credentials are passed in as CI/CD variables and never stored in the repository. We also have branch protection enabled on main so that direct pushes are blocked entirely. Every change has a paper trail, which matters when you need to explain why a permission is blocked or when something breaks after a deployment.
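If you script the plan-as-comment step yourself rather than using an integration, it is a single call to GitLab's merge request Notes API. A sketch using only the standard library; the endpoint shape and PRIVATE-TOKEN header follow GitLab's API v4 conventions, and the comment formatting is illustrative:

```python
import json
import urllib.request

def build_plan_note_request(gitlab_url, project_id, mr_iid, token, plan_text):
    # Build (but do not send) the POST that adds the plan output as an
    # MR comment: POST /api/v4/projects/:id/merge_requests/:iid/notes
    payload = json.dumps({"body": "```\n" + plan_text + "\n```"}).encode()
    return urllib.request.Request(
        f"{gitlab_url}/api/v4/projects/{project_id}/merge_requests/{mr_iid}/notes",
        data=payload,
        headers={"PRIVATE-TOKEN": token, "Content-Type": "application/json"},
        method="POST",
    )
```

Sending it is one `urllib.request.urlopen(req)` call from the plan job, with the token injected as a masked CI/CD variable.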

Common Mistakes

Removing FullAWSAccess from root. When you enable SCPs, AWS attaches a FullAWSAccess policy to the root by default. If you remove it without replacing it, every account in your org loses all permissions immediately. Leave it at the root and use deny-based SCPs at the OU level.

An incomplete NotAction list on the region lockdown. If your list is missing global services, you will start seeing IAM and STS failures that look like account-level permission issues. The errors are not obvious and the debugging path is slow. Use a complete list from the start.

Running workloads in the management account. SCPs do not apply to the management account, by design. It is always exempt. This means it has no guardrails, which is exactly why nothing should run there.

Blocking Service-Linked Roles. Some AWS services create SLRs automatically. If your SCP blocks iam:CreateRole broadly, those services will fail silently and the error messages will not point you to the SCP. Scope IAM restrictions carefully.
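The SLR pitfall is easy to catch mechanically. Here is a sketch of a check you could add to the same CI lint stage, flagging any Deny pattern broad enough to catch iam:CreateServiceLinkedRole:

```python
import json
from fnmatch import fnmatch

SLR_ACTION = "iam:CreateServiceLinkedRole"

def deny_action_patterns(statements):
    # Yield every action pattern appearing in a Deny statement;
    # Action can be a string or a list in policy JSON.
    for stmt in statements:
        if stmt.get("Effect") != "Deny":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        yield from actions

def blocks_service_linked_roles(policy_text):
    # True if any Deny pattern is broad enough to catch the call AWS
    # services make when they create service-linked roles.
    doc = json.loads(policy_text)
    return any(fnmatch(SLR_ACTION, pattern)
               for pattern in deny_action_patterns(doc.get("Statement", [])))
```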

Wrapping Up

Get the OU structure right before you start writing resources, because reorganizing it later means moving accounts around and re-testing SCP inheritance. Everything else is easier to change.

The parts that take the most time in practice are the region lockdown NotAction list and the break-glass carveouts on the security tooling SCP. Both need real testing against live workloads before you roll them to production OUs.

If you are starting from an existing multi-account setup without Organizations, you will need to accept a features invitation from each member account before SCPs are available. Give account owners a heads up and plan for a change window.

Need Help?

If you are dealing with a messy multi-account setup, starting from scratch, or just want someone to do this properly the first time, we can help. This is work we do regularly, both for our own infrastructure and for clients across a range of AWS environments.

Most companies we talk to are one bad SCP change away from a production outage they can't roll back. If that sounds familiar, let's talk.

Alex Podobnik

Author at NextLink Labs