How To Connect Claude Code to Your Rails Models with MCP Servers

Colin Soleim · Apr 22, 2026

If you're using Claude Code (Anthropic's CLI coding assistant), you've probably hit this wall: your AI pair-programmer can read your code, run your tests, and even refactor entire modules, but it has no idea what's actually in your database. It can't tell you how many active students are in the system, what the most common appointment statuses are, or whether that migration you wrote six months ago left orphaned records behind.

We solved this by building a lightweight MCP (Model Context Protocol) server that gives Claude Code safe, read-only SQL access to our Rails application's PostgreSQL database. The whole thing is about 100 lines of Node.js and 60 lines of Ruby. This post walks through what we built, why we made the design decisions we did, and how you can do the same for your own Rails app.

What Is MCP and Why Should You Care?

MCP is an open protocol that lets AI assistants like Claude Code interact with external tools and data sources. Think of it as a standardized way to give your AI access to things beyond the codebase, like databases, APIs, documentation, and deployment systems.

Without MCP, when you ask Claude Code "which students have appointments this week?", it would have to read through your models, guess at the schema, and suggest a query you'd then have to run yourself. With our MCP server in place, it can just run the query directly and give you the answer.

The protocol works through a simple client-server model. Claude Code launches the MCP server as a subprocess, communicates over stdio, and calls tools the server exposes. In our case, those tools are list_tables, describe_table, and execute_query.
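Under the hood, the stdio transport carries JSON-RPC 2.0 messages. A tools/call request for execute_query looks roughly like the following (the message shape follows the MCP specification; the SQL string is just an illustration):

```ruby
require "json"

# Illustrative JSON-RPC 2.0 request that Claude Code writes to the MCP
# server's stdin to invoke the execute_query tool. The SQL is made up.
request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "execute_query",
    arguments: { sql: "SELECT COUNT(*) FROM students" }
  }
}

puts JSON.generate(request)
```

The server replies with a matching JSON-RPC response whose result carries the tool's content blocks.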

The Architecture

Our setup has two pieces:

1. A Node.js MCP server that Claude Code launches and communicates with via stdio.
2. Three Rails API endpoints that execute the actual database operations.
The Node server doesn't touch the database directly. It's a thin proxy that forwards requests to Rails, which handles authentication, enforces read-only transactions, and applies timeouts. This keeps all the safety logic in one place (your Rails app) rather than splitting it across two runtimes.

Claude Code --(stdio)--> Node MCP Server --(HTTP)--> Rails API --> PostgreSQL

The Rails Side

We added a single controller with three actions. No models, no serializers, no service objects. Just a controller that talks to the database and returns JSON.

class Api::McpController < ActionController::API
  before_action :authenticate_mcp_key

  def query
    sql = params[:sql].to_s.strip
    return render json: { error: "Missing sql parameter" },
      status: :bad_request if sql.blank?

    result = execute_readonly(sql)
    render json: {
      columns: result.columns,
      rows: result.rows,
      count: result.rows.size
    }
  rescue ActiveRecord::StatementInvalid => e
    render json: { error: e.message }, status: :unprocessable_entity
  end

  def tables
    result = ActiveRecord::Base.connection.select_all(<<~SQL)
      SELECT table_name
      FROM information_schema.tables
      WHERE table_schema = 'public'
        AND table_type = 'BASE TABLE'
      ORDER BY table_name
    SQL
    render json: { tables: result.rows.flatten }
  end

  def describe
    table = params[:table].to_s.strip
    return render json: { error: "Missing table parameter" },
      status: :bad_request if table.blank?

    unless table.match?(/\A[a-z_][a-z0-9_]*\z/)
      return render json: { error: "Invalid table name" },
        status: :bad_request
    end

    columns = ActiveRecord::Base.connection.select_all(<<~SQL)
      SELECT column_name, data_type, is_nullable, column_default
      FROM information_schema.columns
      WHERE table_schema = 'public'
        AND table_name = #{ActiveRecord::Base.connection.quote(table)}
      ORDER BY ordinal_position
    SQL

    render json: { table: table, columns: columns.to_a }
  end

  private

  def authenticate_mcp_key
    expected = ENV["MCP_API_KEY"]
    return render json: { error: "MCP_API_KEY not configured on server" },
      status: :service_unavailable if expected.blank?

    provided = request.headers["X-MCP-Key"] || params[:api_key]
    unless ActiveSupport::SecurityUtils.secure_compare(
      provided.to_s, expected
    )
      return render json: { error: "Invalid or missing API key" },
        status: :unauthorized
    end
  end

  def execute_readonly(sql)
    ActiveRecord::Base.transaction do
      # SET LOCAL is a no-op outside a transaction, so both settings
      # are applied after the transaction begins.
      ActiveRecord::Base.connection.execute("SET TRANSACTION READ ONLY")
      ActiveRecord::Base.connection.execute(
        "SET LOCAL statement_timeout = '30s'"
      )
      ActiveRecord::Base.connection.select_all(sql)
    end
  end
end

 

There are a few design decisions here that are worth explaining.

Read-Only Transactions 

The execute_readonly method wraps every query in a PostgreSQL transaction with SET TRANSACTION READ ONLY. This is enforced at the database level, so even if someone crafts a DROP TABLE statement and sends it through, Postgres rejects it with "cannot execute DROP TABLE in a read-only transaction". We're not relying on regex parsing or SQL keyword filtering, which are easy to get wrong.

Statement Timeout 

We set a 30-second timeout with SET LOCAL statement_timeout. The LOCAL qualifier scopes the setting to the current transaction, so it doesn't affect other connections; note that SET LOCAL is a no-op outside a transaction, so it needs to run after the transaction begins. This prevents a runaway SELECT * FROM appointments on a table with millions of rows from tying up a database connection for the full timeout window.

API Key Authentication 

We use a shared secret passed via the X-MCP-Key header. The comparison uses ActiveSupport::SecurityUtils.secure_compare to prevent timing attacks. This might seem like overkill for a local dev tool, but security habits are easier to maintain than to retrofit. This same controller could serve a production monitoring use case down the road.
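The idea behind secure_compare can be sketched in plain Ruby. This mirrors the classic constant-time comparison pattern; ActiveSupport's actual implementation may differ, so use the real method in a Rails app:

```ruby
# Constant-time string comparison: XOR every byte pair and OR the
# results together, so the loop always runs to completion and the
# timing doesn't reveal where the strings first differ.
# (Sketch only; prefer ActiveSupport::SecurityUtils.secure_compare.)
def secure_compare(a, b)
  return false unless a.bytesize == b.bytesize

  result = 0
  a.bytes.zip(b.bytes) { |x, y| result |= x ^ y }
  result.zero?
end

puts secure_compare("s3cret", "s3cret") # true
puts secure_compare("s3cret", "s3creX") # false
```

A naive == comparison returns as soon as it finds a mismatched byte, which lets an attacker recover the key one byte at a time by measuring response times.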

Table Name Validation 

The describe action validates table names against /\A[a-z_][a-z0-9_]*\z/. We're already using parameterized queries via ActiveRecord::Base.connection.quote, but explicit validation on input that becomes part of a query is a good habit to have.
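A quick check of what that pattern accepts and rejects (the constant name here is illustrative):

```ruby
# Same pattern as in the controller: must start with a lowercase letter
# or underscore, followed only by lowercase letters, digits, underscores.
TABLE_NAME_PATTERN = /\A[a-z_][a-z0-9_]*\z/

puts "appointments".match?(TABLE_NAME_PATTERN)            # true
puts "goal_notes".match?(TABLE_NAME_PATTERN)              # true
puts "users; DROP TABLE users".match?(TABLE_NAME_PATTERN) # false
puts "1st_table".match?(TABLE_NAME_PATTERN)               # false
```

The \A and \z anchors matter: ^ and $ match at newlines in Ruby, so an attacker could smuggle extra content past a ^...$ pattern with an embedded newline.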

The routes are minimal:

namespace :api do
  scope :mcp do
    post "query", to: "mcp#query"
    get "tables", to: "mcp#tables"
    get "describe/:table", to: "mcp#describe"
  end
end

 

We're using scope instead of namespace here. This keeps the URL structure clean (/api/mcp/query) without requiring a nested mcp/ directory for the controller.
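For comparison, here is roughly what the namespace version would look like. The controller and action names below are illustrative; the point is that namespace adds a module to the expected controller path:

```ruby
namespace :api do
  namespace :mcp do
    # Rails would now expect Api::Mcp::QueriesController,
    # defined in app/controllers/api/mcp/queries_controller.rb
    post "query", to: "queries#create"
  end
end
```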

The Node MCP Server

The MCP server uses the official @modelcontextprotocol/sdk package and exposes three tools that map directly to our Rails endpoints:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const API_URL = process.env.MCP_API_URL || "http://localhost:3000";
const API_KEY = process.env.MCP_API_KEY;

if (!API_KEY) {
  console.error("MCP_API_KEY environment variable is required");
  process.exit(1);
}

async function apiRequest(path, options = {}) {
  const url = `${API_URL}/api/mcp${path}`;
  const res = await fetch(url, {
    ...options,
    headers: {
      "Content-Type": "application/json",
      "X-MCP-Key": API_KEY,
      ...options.headers,
    },
  });

  const data = await res.json();
  if (!res.ok) {
    throw new Error(data.error || `HTTP ${res.status}`);
  }
  return data;
}

const server = new McpServer({
  name: "rails-db",
  version: "1.0.0",
});

 

Each tool definition is a few lines: a name, description, Zod schema for inputs, and a handler that calls the Rails API and formats the response.

server.tool(
  "execute_query",
  "Run a read-only SQL query against the Rails database.",
  { sql: z.string().describe("The SELECT SQL query to execute") },
  async ({ sql }) => {
    const data = await apiRequest("/query", {
      method: "POST",
      body: JSON.stringify({ sql }),
    });

    const header = data.columns.join(" | ");
    const rows = data.rows
      .map((row) =>
        row.map((v) => (v === null ? "NULL" : String(v))).join(" | ")
      )
      .join("\n");

    return {
      content: [
        {
          type: "text",
          text: `${header}\n${"—".repeat(header.length)}\n${rows}\n\n(${data.count} rows)`,
        },
      ],
    };
  }
);

 

One thing we learned building this: response formatting matters more than you'd expect. Claude Code is a text-based tool, so we format query results as pipe-delimited tables rather than returning raw JSON. This makes it much easier for the AI to parse and reason about the data.
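To illustrate, here is the same pipe-delimited formatting sketched in Ruby (format_result is a hypothetical helper for this post, not part of either codebase):

```ruby
# Render query results as a pipe-delimited text table:
# header row, a separator line, then one line per data row,
# with SQL NULLs rendered explicitly.
def format_result(columns, rows)
  header = columns.join(" | ")
  separator = "-" * header.length
  body = rows.map do |row|
    row.map { |v| v.nil? ? "NULL" : v.to_s }.join(" | ")
  end
  ([header, separator] + body).join("\n")
end

puts format_result(%w[id status], [[1, "active"], [2, nil]])
```

Rendering NULL explicitly (rather than as an empty string) avoids a subtle failure mode where the model can't distinguish a missing value from an empty one.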

The full server is about 100 lines including the list_tables and describe_table tools, which follow the same pattern.

Wiring It Up

Claude Code discovers MCP servers through a .mcp.json file in your project root:

{
  "mcpServers": {
    "rails-db": {
      "command": "/opt/homebrew/bin/node",
      "args": ["mcp-server/index.js"],
      "env": {
        "MCP_API_KEY": "your-secret-key-here",
        "MCP_API_URL": "http://localhost:3000"
      }
    }
  }
}

 

We recommend using absolute paths for the Node binary. Claude Code doesn't always inherit your shell's PATH, so node might not resolve. We use the Homebrew path explicitly. The env block passes secrets directly to the MCP server process, which keeps them out of your shell environment.

After adding this file, restart Claude Code and your Rails server. Claude will automatically launch the MCP server and make the three tools available.

What This Enables

Once connected, you can ask Claude Code questions that span your codebase and your data:

- "How many students have active referral services but no appointments in the last 30 days?"
- "Show me the schema for the appointments table and explain what each status means based on the code"
- "Are there any orphaned goal_notes records where the referenced appointment has been soft-deleted?"

The AI can introspect the schema, run exploratory queries, and correlate what it finds with the application code, all without you switching to a separate database client.

We've found this particularly useful for debugging data issues. For example, when a student was showing zero sessions on the dashboard, we asked Claude to check the cache, query the underlying data, and trace the discrepancy. It found the problem in about 30 seconds. It's also been helpful for onboarding new developers — Claude can explain not just what the code does but what the data actually looks like, which gives people a much better mental model of the system. After running migrations, we also use it to verify the data landed correctly.

Hardening for Production

The implementation above is designed for local development. If you're pointing this at a staging or production database, even a read replica, you'll want to add protections beyond the API key. Here's what we would recommend.

Lock Down by IP Address 

The simplest hardening step is restricting which IPs can reach the MCP endpoints. In Rails, you can do this directly in the controller with a before_action:

ALLOWED_IPS = %w[1.1.1.1].freeze

before_action :restrict_ip

def restrict_ip
  unless ALLOWED_IPS.include?(request.remote_ip)
    render json: { error: "Forbidden" }, status: :forbidden
  end
end

 

If you're running behind nginx or a load balancer, you can also enforce this at the proxy level:

location /api/mcp/ {
  allow 127.0.0.1;
  allow 10.0.1.0/24;
  deny all;

  proxy_pass http://rails_backend;
}

 

Rate Limiting 

A shared API key with no rate limit means a single misconfigured client could hammer your database. Rack::Attack makes this easy to set up:

# config/initializers/rack_attack.rb
Rack::Attack.throttle("mcp/query", limit: 30, period: 60) do |req|
  req.ip if req.path.start_with?("/api/mcp")
end

 

Thirty queries per minute is generous for an AI assistant exploring your schema. If you're seeing bursts above that, something is probably looping, and you want to catch it before it saturates your connection pool.

Query Auditing 

In production, you need to be able to see what's being run. Log every query that comes through the MCP controller, including who sent it and how long it took:

def execute_readonly(sql)
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  result = ActiveRecord::Base.transaction do
    # As before, SET LOCAL must run inside the transaction to take effect.
    ActiveRecord::Base.connection.execute("SET TRANSACTION READ ONLY")
    ActiveRecord::Base.connection.execute(
      "SET LOCAL statement_timeout = '30s'"
    )
    ActiveRecord::Base.connection.select_all(sql)
  end
  duration = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start

  Rails.logger.info(
    "[MCP] ip=#{request.remote_ip} rows=#{result.rows.size} " \
    "duration=#{duration.round(3)}s sql=#{sql.truncate(500)}"
  )
  result
end

 

Row Limits 

Our implementation doesn't cap result sizes, which is fine locally but dangerous in production. A SELECT * FROM appointments on a table with millions of rows will blow up Claude's context window and potentially lock a database connection for the full 30-second timeout. We recommend wrapping incoming queries with a hard limit:

def execute_readonly(sql)
  # Strip a trailing semicolon, which would otherwise break the subquery wrap.
  inner = sql.strip.chomp(";")
  capped_sql = "SELECT * FROM (#{inner}) AS _mcp_limited LIMIT 1000"
  # ... rest of the method
end

 

Credential Rotation and Scoping 

A static API key that lives forever in a .env file is a liability. We recommend rotating it regularly (openssl rand -hex 32 takes two seconds), using different keys per environment so a leaked dev key doesn't affect production, and considering short-lived JWTs with a 1-hour expiry for production use if you want to eliminate the risk of long-lived secrets in config files.
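If you'd rather generate keys from Ruby than shell out, SecureRandom produces the same entropy as openssl rand -hex 32:

```ruby
require "securerandom"

# 32 random bytes, hex-encoded into a 64-character string —
# equivalent to `openssl rand -hex 32`.
key = SecureRandom.hex(32)
puts key.length # 64
```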

Restrict to a Read Replica 

If your infrastructure supports it, point the MCP controller at a read replica rather than your primary database. This gives you physical isolation, so even if something goes wrong, your primary is untouched:

def execute_readonly(sql)
  ActiveRecord::Base.connected_to(role: :reading) do
    # ... execute query against replica
  end
end

 

Rails 6+ has built-in support for multiple database roles, so this is usually a configuration change rather than an architectural one.

Keep .mcp.json Out of Version Control 

Your .mcp.json file contains the API key in plaintext. Make sure it's in your .gitignore. If you need to share the MCP configuration with your team, use a template file (.mcp.json.example) with placeholder values and document the setup process in your README.

Getting Started

If you want to add this to your own Rails app:

1. Create the controller. Copy the McpController above, adjusting the authentication mechanism if needed.

2. Add the routes. Three lines in your routes file.

3. Set up the Node server. Run npm init, install @modelcontextprotocol/sdk, and write your tool definitions.

4. Configure .mcp.json. Point it at your Node server with the correct environment variables.

5. Generate an API key. openssl rand -hex 32 gives you a solid shared secret.

6. Restart Claude Code. It picks up MCP servers on launch.

The whole setup takes about 30 minutes. The first time Claude answers a data question without you opening psql, you'll wish you'd set this up sooner.

Wrapping Up

MCP closes the gap between your AI assistant's understanding of your code and what's actually in your database. The implementation is simple: a thin Node proxy and a locked-down Rails controller. We get useful capabilities without introducing a lot of complexity or risk.

The Model Context Protocol is still relatively new, and the ecosystem is growing quickly. We expect to see more integrations emerge as the tooling matures. For now, read-only database access is a great starting point that makes Claude Code significantly more useful for everyday development work.

Colin Soleim

Author at NextLink Labs