How We Built a Weekly AWS Cost Digest Powered by AI

Alex Podobnik · Mar 19, 2026

Stop reading raw Cost Explorer graphs. Let Claude write your FinOps briefing.

The Problem With AWS Cost Explorer

Every Monday morning, someone on your team opens AWS Cost Explorer, squints at a bar chart, and tries to figure out why the bill went up. They spend 20 minutes clicking through service breakdowns, correlating dates with deployments, and writing a Slack message that says something like: "EC2 is up this week, probably the new EKS nodes."

That process is manual, inconsistent, and usually incomplete. The person doing it has to hold a lot of context in their head. They need to know which deployments happened, which teams changed what, what the baseline looks like. And if they're wrong about the root cause, no one catches it until the next month's bill.

We built a script that automates this entirely. It pulls 4 weeks of cost data from AWS Cost Explorer, feeds it to Claude with a FinOps-focused prompt, and produces a clear digest every Monday morning, complete with root cause hypotheses, anomaly flags, and recommended actions.

Engineering teams running this report save 30-60 minutes of manual analysis per week and catch cost anomalies 5-7 days earlier than they would through manual review.

What the Script Does

At a high level, the pipeline looks like this:

AWS Cost Explorer API → Python script → Claude API → Markdown digest (Slack)

Each weekly run does five things: 

1. Pulls 4 weeks of AWS service cost data: broken down by service, with week-over-week deltas pre-computed.

2. Fetches AWS-native anomaly detections: AWS Cost Anomaly Detection runs in the background in most accounts and flags statistical outliers.

3. Breaks down spend by environment tag: prod vs. staging vs. dev, so you can see where money is actually going.

4. Sends all of it to Claude: with a system prompt that instructs it to behave as a senior FinOps engineer, hypothesizing causes rather than just describing numbers.

5. Outputs structured Markdown: saved as a file, optionally posted to Slack, and archived as a GitLab pipeline artifact.
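The environment breakdown in step 3 comes from Cost Explorer's tag grouping (GroupBy with Type TAG and Key environment). A minimal sketch of collapsing that response into a per-environment total; parse_tag_groups is an illustrative helper, not a name from the original script:

```python
def parse_tag_groups(response, tag_key='environment'):
    """Collapse a GroupBy-TAG Cost Explorer response into {env: cost_usd}."""
    breakdown = {}
    for result in response.get('ResultsByTime', []):
        for group in result.get('Groups', []):
            # Tag group keys look like 'environment$prod'; an empty
            # value after the '$' means the resource is untagged.
            value = group['Keys'][0].split('$', 1)[1] or 'untagged'
            amount = float(group['Metrics']['UnblendedCost']['Amount'])
            breakdown[value] = breakdown.get(value, 0.0) + amount
    return breakdown
```

In the sample run above, every dollar landed in the untagged bucket, which is itself a useful governance signal.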


How It Works

Step 1: Pulling the Data 

We use boto3 to call the AWS Cost Explorer API. The core function pulls cost grouped by service for any date range:

def get_cost_by_service(ce_client, start, end):
    """Return {service_name: total_cost_usd} for the given date range."""
    response = ce_client.get_cost_and_usage(
        TimePeriod={'Start': start, 'End': end},
        Granularity='MONTHLY',
        Metrics=['UnblendedCost'],
        GroupBy=[{'Type': 'DIMENSION', 'Key': 'SERVICE'}]
    )
    costs = {}
    for result in response['ResultsByTime']:
        for group in result['Groups']:
            service = group['Keys'][0]
            amount = float(group['Metrics']['UnblendedCost']['Amount'])
            costs[service] = costs.get(service, 0.0) + amount
    return costs

 

We run this function across four weekly windows to build a trend, then compute week-over-week deltas for every service. Anything below $5 is filtered out as noise.
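Generating the four windows is plain date arithmetic. A sketch, assuming Monday-anchored 7-day windows with an exclusive end date (which is how Cost Explorer treats End):

```python
from datetime import date, timedelta

def weekly_windows(end: date, weeks: int = 4):
    """Return [(start, end), ...] ISO date pairs for the most recent
    `weeks` 7-day windows, oldest first. End dates are exclusive."""
    windows = []
    for i in range(weeks, 0, -1):
        start = end - timedelta(days=7 * i)
        stop = start + timedelta(days=7)
        windows.append((start.isoformat(), stop.isoformat()))
    return windows
```

For a report date of 2026-03-11 this yields the same four windows shown in the payload below, from "2026-02-11 → 2026-02-18" through "2026-03-04 → 2026-03-11".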

Note: Cost Explorer data has a 24-48 hour lag. Running this on Monday morning gives you clean data through the end of last week, which is why the GitLab schedule fires at 08:00 UTC on Mondays. 

Step 2: Fetching Anomalies 

AWS Cost Anomaly Detection is a free service that runs statistical anomaly detection on your spend in the background. Most teams never look at it. We pull it programmatically and feed it into the digest:

response = ce_client.get_anomalies(
    DateInterval={'StartDate': str(start), 'EndDate': str(end)},
    TotalImpact={'NumericOperator': 'GREATER_THAN', 'StartValue': 5.0}
)

 

Each anomaly includes the affected service, region, dollar impact, and detection date, all of which go into the prompt.
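Flattening each anomaly into a compact record keeps the prompt small. A sketch; the field names follow the GetAnomalies response shape, but treat the exact keys as an assumption to verify against your boto3 version:

```python
def summarize_anomalies(response):
    """Reduce GetAnomalies output to the fields the digest prompt needs."""
    records = []
    for anomaly in response.get('Anomalies', []):
        # Use the first root cause if AWS attributed one.
        root = (anomaly.get('RootCauses') or [{}])[0]
        records.append({
            'service': root.get('Service'),
            'region': root.get('Region'),
            'impact_usd': anomaly.get('Impact', {}).get('TotalImpact'),
            'detected': anomaly.get('AnomalyStartDate'),
        })
    return records
```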

Step 3: Building the Prompt Payload  

This is the most important part. Rather than sending raw API responses to Claude, we assemble a clean JSON payload with pre-computed deltas. 

{
    "report_date": "2026-03-11",
    "period": "2026-03-04 → 2026-03-11",
    "total_current_week_usd": 258.8,
    "total_prev_week_usd": 253.55,
    "total_wow_delta_pct": 2.1,
    "top_services_wow": {
      "Savings Plans for AWS Compute usage": { "current_usd": 84.0, "prev_usd": 84.0, "delta_pct": 0.0 },
      "EC2 - Other": { "current_usd": 60.82, "prev_usd": 64.96, "delta_pct": -6.4 },
      "Amazon Virtual Private Cloud": { "current_usd": 44.31, "prev_usd": 40.93, "delta_pct": 8.3 },
      "Amazon Elastic Load Balancing": { "current_usd": 24.87, "prev_usd": 21.65, "delta_pct": 14.9 },
      "Amazon Relational Database Service": { "current_usd": 21.86, "prev_usd": 19.52, "delta_pct": 12.0 },
      "AmazonCloudWatch": { "current_usd": 8.48, "prev_usd": 4.66, "delta_pct": 82.0 },
      "Amazon ElastiCache": { "current_usd": 7.34, "prev_usd": 4.95, "delta_pct": 48.5 }
    },
    "weekly_trend": [
      {
        "week": "2026-02-11 → 2026-02-18",
        "costs": {
          "AWS Config": 0.042, "AWS Cost Explorer": 0.36, "AWS Key Management Service": 1.5,
          "AWS Secrets Manager": 1.30, "AWS WAF": 2.0, "Amazon DynamoDB": 0.00001,
          "Amazon EC2 Container Registry (ECR)": 0.018, "EC2 - Other": 34.49,
          "Amazon Elastic Load Balancing": 15.12, "Amazon Relational Database Service": 13.78,
          "Amazon Route 53": 0.044, "Amazon Simple Queue Service": 0.113,
          "Amazon Simple Storage Service": 0.398, "Amazon Virtual Private Cloud": 33.75,
          "AmazonCloudWatch": 3.13, "Savings Plans for AWS Compute usage": 84.0
        }
      },
      {
        "week": "2026-02-18 → 2026-02-25",
        "costs": {
          "AWS Config": 0.084, "AWS Key Management Service": 1.54, "AWS Secrets Manager": 1.63,
          "AWS WAF": 2.0, "Amazon ElastiCache": 2.79, "EC2 - Other": 94.85,
          "Amazon Elastic Container Service": 6.81, "Amazon Elastic Load Balancing": 18.81,
          "Amazon Relational Database Service": 17.30, "Amazon Route 53": 0.547,
          "Amazon Simple Queue Service": 0.132, "Amazon Simple Storage Service": 0.465,
          "Amazon Virtual Private Cloud": 37.55, "AmazonCloudWatch": 5.55,
          "Savings Plans for AWS Compute usage": 84.0
        }
      },
      {
        "week": "2026-02-25 → 2026-03-04",
        "costs": {
          "AWS Config": 0.042, "AWS Key Management Service": 1.44, "AWS Secrets Manager": 1.62,
          "AWS WAF": 1.92, "Amazon DynamoDB": 0.000153, "Amazon ElastiCache": 4.95,
          "EC2 - Other": 64.96, "Amazon Elastic Container Service": 2.28,
          "Amazon Elastic Load Balancing": 21.65, "Amazon Relational Database Service": 19.52,
          "Amazon Route 53": 4.05, "Amazon Simple Queue Service": 0.075,
          "Amazon Simple Storage Service": 0.445, "Amazon Virtual Private Cloud": 40.93,
          "AmazonCloudWatch": 4.66, "Savings Plans for AWS Compute usage": 84.0, "Tax": 0.98
        }
      },
      {
        "week": "2026-03-04 → 2026-03-11",
        "costs": {
          "AWS Config": 0.042, "AWS Cost Explorer": 0.14, "AWS Key Management Service": 1.36,
          "AWS Secrets Manager": 1.65, "AWS WAF": 1.81, "Amazon DynamoDB": 0.000374,
          "Amazon EC2 Container Registry (ECR)": 0.109, "Amazon ElastiCache": 7.34,
          "EC2 - Other": 60.82, "Amazon Elastic Container Service": 1.0,
          "Amazon Elastic Load Balancing": 24.87, "Amazon Relational Database Service": 21.86,
          "Amazon Route 53": 0.569, "Amazon Simple Storage Service": 0.438,
          "Amazon Virtual Private Cloud": 44.31, "AmazonCloudWatch": 8.48,
          "Savings Plans for AWS Compute usage": 84.0
        }
      }
    ],
    "aws_anomalies": [],
    "environment_breakdown": {
      "untagged": 258.80
    }
}

 

The payload includes:

Total spend this week vs. last week
Top 10 services by current spend with WoW deltas
The 4-week weekly trend per service
AWS anomaly detections
The environment tag breakdown

Pre-computing deltas in Python rather than asking Claude to calculate them from raw numbers produces more reliable, consistent output.
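The delta pre-computation is only a few lines. A sketch using the $5 noise floor mentioned earlier; wow_deltas is an illustrative name, not from the original script:

```python
def wow_deltas(current: dict, previous: dict, noise_floor: float = 5.0):
    """Compute week-over-week deltas per service, dropping services
    below the noise floor. Inputs are {service: cost_usd} dicts."""
    deltas = {}
    for service, cur in current.items():
        if cur < noise_floor:
            continue  # filter sub-$5 services as noise
        prev = previous.get(service, 0.0)
        # None signals brand-new spend with no prior-week baseline.
        pct = round((cur - prev) / prev * 100, 1) if prev else None
        deltas[service] = {'current_usd': cur, 'prev_usd': prev, 'delta_pct': pct}
    return deltas
```

Running this on the CloudWatch numbers above ($8.48 vs. $4.66) reproduces the +82.0% delta in the payload.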

 

Step 4: The System Prompt  

The system prompt is where most of the value lives. We instruct Claude to behave as a senior FinOps engineer, not a data summarizer:

SYSTEM_PROMPT = '''
You are a senior FinOps engineer. Your job is to analyze AWS cost data and produce a clear, actionable weekly spend digest.

Be specific with dollar amounts. Use plain English, not jargon. Avoid hedging — give your best hypothesis even with limited data. Do not hallucinate services or numbers not present in the data.
'''

 

The "avoid hedging" instruction is critical. Without it, Claude tends to write things like "this could potentially be related to..." which is useless for an engineering team. We want a concrete hypothesis, even if it's wrong. A wrong hypothesis that someone can validate is more useful than a non-answer. 

Prompt tip: "Do not hallucinate services or numbers not present in the data" is an explicit guardrail. Without it, Claude occasionally invents plausible-sounding but fabricated explanations for cost movements. 
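Wiring the payload and system prompt into a Claude request is a standard Messages API call. A sketch; the model name and max_tokens are assumptions, and build_user_message is an illustrative helper:

```python
import json

def build_user_message(payload: dict) -> str:
    """Serialize the pre-computed cost payload as the user turn."""
    return ("Here is this week's AWS cost data as JSON. "
            "Write the weekly digest in Markdown.\n\n"
            + json.dumps(payload, indent=2))

def run_digest(payload: dict, system_prompt: str) -> str:
    import anthropic  # pip install anthropic
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-5",  # assumption: use whichever model you prefer
        max_tokens=2000,
        system=system_prompt,
        messages=[{"role": "user", "content": build_user_message(payload)}],
    )
    return message.content[0].text
```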

Step 5: Output and Delivery  

The digest is saved as a dated Markdown file. If SLACK_WEBHOOK_URL is set, it's posted automatically. In GitLab CI, it's saved as a pipeline artifact with a 90-day retention window, giving you a searchable archive of every weekly digest.
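Slack delivery needs nothing beyond a POST to the incoming webhook. A standard-library sketch; the truncation limit and message shape are assumptions, since a single webhook message has a practical size cap:

```python
import json
import os
import urllib.request

def slack_payload(digest_md: str, max_chars: int = 3500) -> dict:
    """Truncate the digest to fit one Slack message and wrap it for the webhook."""
    text = digest_md[:max_chars]
    if len(digest_md) > max_chars:
        text += "\n_…truncated, full digest in the pipeline artifact_"
    return {"text": text}

def post_to_slack(digest_md: str) -> None:
    url = os.environ.get("SLACK_WEBHOOK_URL")
    if not url:
        return  # Slack delivery is optional
    req = urllib.request.Request(
        url,
        data=json.dumps(slack_payload(digest_md)).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```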

The Business Value

1. Speed of detection 

Manual cost review typically happens once a week at best, often once a month. Automated digests running every Monday mean anomalies are surfaced within days of occurring, not weeks. At enterprise cloud spend levels ($50k+/month), catching a misconfiguration a week earlier routinely saves thousands of dollars.

2. Consistency 

Human reviewers have good weeks and bad weeks. They miss things when they're busy, focus on the services they know best, and skip the "boring" ones. Claude reviews every service, every week, with the same level of attention.

3. Root cause hypotheses 

This is the feature that surprises people most. Raw cost data tells you what changed. The LLM layer hypothesizes why. "EC2 up 34%" becomes "likely tied to your EKS node group scale-out on Tuesday, check whether your HPA scale-down trigger is configured." That hypothesis takes an engineer from an hour of investigation to a 5-minute confirmation. 

4. Cross-team communication 

Finance teams, engineering managers, and CTOs all need cost visibility but none of them want to read Cost Explorer graphs. A plain digest that arrives in Slack on Monday morning becomes a shared artifact that the whole organization can act on.

Running It Yourself

Prerequisites

Python 3.12+
AWS credentials with ce:GetCostAndUsage and ce:GetAnomalies permissions
Anthropic API key (console.anthropic.com)
Optional: Slack webhook for automated posting

Local run

pip install boto3 anthropic

export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export ANTHROPIC_API_KEY=...
export SLACK_WEBHOOK_URL=... # optional

python cost_digest.py

 

Done. The digest ran successfully and was saved to ./aws-cost-digest-2026-03-11.md.

Quick summary:
- Total this week: $258.80 (+2.1% WoW)
- Biggest movers: CloudWatch +82%, ElastiCache +49%, ELB +15%
- No AWS anomaly alerts triggered
- 100% of spend is untagged – flagged as a governance gap

Top action items from Claude: investigate the CloudWatch spike, implement resource tagging, and audit ElastiCache/ELB for orphaned resources.

 

GitLab scheduled pipeline

The repo includes a .gitlab-ci.yml that runs every Monday at 08:00 UTC. Add your credentials as masked CI/CD variables, create a pipeline schedule, and you're done.

stages:
  - digest

aws-cost-digest:
  stage: digest
  image: python:3.12-slim
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
    - if: $CI_PIPELINE_SOURCE == "web"
  before_script:
    - pip install --quiet boto3 anthropic
  script:
    - python cost_digest.py
  artifacts:
    paths:
      - aws-cost-digest-*.md
    expire_in: 90 days
  variables:
    AWS_DEFAULT_REGION: us-east-1

Extending the Script

A few directions worth exploring depending on your environment:

Multi-account AWS Organizations: add GroupBy: LINKED_ACCOUNT to break costs across accounts
Regional breakdown: add GroupBy: REGION to catch unexpected cross-region data transfer costs
Terraform change correlation: cross-reference digest dates against your Terraform CI pipeline runs to auto-suggest which deployment caused a spike
Historical trend RAG: store digests in S3 and give Claude access to previous weeks for longer-horizon trend analysis
Email delivery: swap Slack for AWS SES or SendGrid if your team prefers email

The full source code for this script is available on GitLab.

Alex Podobnik

Author at NextLink Labs