Building a Custom MCP Server for Postgres Schema Introspection
Build a custom MCP server for Postgres schema introspection so Claude Code can read your database structure without shipping raw credentials.
You ask Claude Code to write a query against your production Postgres and it cheerfully hallucinates a column that does not exist. You paste the schema into the chat, it works for one question, and on the next prompt the model has already forgotten half of it. The problem is not the model — it is that the model has no durable, structured view of your database. A custom MCP server for Postgres schema introspection solves that in about 80 lines of Python, without ever handing the model a raw connection string.
This guide walks through building one end to end, why it beats pasting schemas into the chat, and where the trade-offs are.
TL;DR
- MCP (Model Context Protocol) is Anthropic’s open standard for connecting tools and data sources to LLM clients like Claude Code.
- A schema introspection server exposes tools like `list_tables`, `describe_table`, and `list_foreign_keys` — not a raw SQL runner.
- You keep the DB credentials on the server side; the model only sees structured metadata.
- Setup is roughly 80 lines of Python using the `mcp` SDK and `psycopg`, plus a two-line Claude Code config entry.
Why Schema Introspection, Not a Raw SQL Tool
The tempting first version of a Postgres MCP server exposes a single `run_sql` tool. It works for five minutes, then you remember the LLM can now `DROP TABLE` in production.
Introspection-only servers avoid that entire class of problem:
- Read-only by design — the server never exposes write or DDL access.
- Scoped surface — `list_tables`, `describe_table`, `list_foreign_keys`, and `sample_rows` (with a hard row cap) are enough for the model to write correct queries.
- Credential isolation — the connection string lives in the server’s environment. The client never sees it.
- Deterministic output — schema metadata changes rarely, so MCP responses cache well inside a session.
The model writes the SQL. A human — or a separate gated tool — runs it.
Deep Dive: Building the Server
Project layout
```
pg-schema-mcp/
├── pyproject.toml
├── server.py
└── .env
```
`pyproject.toml` dependencies:

```toml
[project]
name = "pg-schema-mcp"
version = "0.1.0"
dependencies = [
    "mcp>=1.0.0",
    "psycopg[binary]>=3.2",
    "python-dotenv>=1.0",
]
```
`server.py`:

```python
import os

import psycopg
from dotenv import load_dotenv
from mcp.server.fastmcp import FastMCP

load_dotenv()
DSN = os.environ["POSTGRES_DSN"]

mcp = FastMCP("pg-schema")


def query(sql: str, params: tuple = ()) -> list[dict]:
    """Run a parameterized read-only query and return rows as dicts."""
    with psycopg.connect(DSN, autocommit=True) as conn:
        with conn.cursor() as cur:
            cur.execute(sql, params)
            cols = [c.name for c in cur.description]
            return [dict(zip(cols, row)) for row in cur.fetchall()]


@mcp.tool()
def list_tables(schema: str = "public") -> list[dict]:
    """List tables in a schema with row estimates."""
    return query(
        """
        SELECT c.relname AS table_name, c.reltuples::bigint AS row_estimate
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relkind = 'r' AND n.nspname = %s
        ORDER BY c.relname
        """,
        (schema,),
    )


@mcp.tool()
def describe_table(table: str, schema: str = "public") -> list[dict]:
    """Return column name, type, nullability, and default for a table."""
    return query(
        """
        SELECT column_name, data_type, is_nullable, column_default
        FROM information_schema.columns
        WHERE table_schema = %s AND table_name = %s
        ORDER BY ordinal_position
        """,
        (schema, table),
    )


@mcp.tool()
def list_foreign_keys(schema: str = "public") -> list[dict]:
    """List foreign key relationships in a schema."""
    return query(
        """
        SELECT
            tc.table_name AS from_table,
            kcu.column_name AS from_column,
            ccu.table_name AS to_table,
            ccu.column_name AS to_column
        FROM information_schema.table_constraints tc
        JOIN information_schema.key_column_usage kcu
          ON tc.constraint_name = kcu.constraint_name
        JOIN information_schema.constraint_column_usage ccu
          ON ccu.constraint_name = tc.constraint_name
        WHERE tc.constraint_type = 'FOREIGN KEY' AND tc.table_schema = %s
        """,
        (schema,),
    )


if __name__ == "__main__":
    mcp.run()
```
Note the absence of a generic `run_sql` tool. Every query is parameterized and scoped to the system catalogs and `information_schema` views.
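If you later add the `sample_rows` tool mentioned earlier, two details need care: table names cannot be bound as query parameters, and the row cap must live in code the model cannot override. Here is a minimal sketch of the query builder (the function and constant names are ours, not part of the MCP SDK; wire it into an `@mcp.tool()` that passes the result to `query()`):

```python
import re

MAX_SAMPLE_ROWS = 20  # hard cap; a tool caller cannot raise it

# Conservative identifier whitelist: unquoted lowercase Postgres names only.
_IDENT = re.compile(r"^[a-z_][a-z0-9_]*$")


def build_sample_query(table: str, schema: str = "public", limit: int = 10) -> str:
    """Build a row-capped SELECT for a sample_rows tool.

    Identifiers can't be sent as bound parameters, so instead of
    interpolating them blindly we validate against a strict pattern
    and quote them ourselves.
    """
    if not (_IDENT.match(schema) and _IDENT.match(table)):
        raise ValueError(f"invalid identifier: {schema}.{table}")
    capped = max(1, min(limit, MAX_SAMPLE_ROWS))
    return f'SELECT * FROM "{schema}"."{table}" LIMIT {capped}'
```

The clamp means even a prompt-injected `limit=1000000` returns at most 20 rows.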
Wiring Claude Code
In `~/.claude/mcp.json` (or the project-scoped MCP config):
```json
{
  "mcpServers": {
    "pg-schema": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/pg-schema-mcp", "python", "server.py"],
      "env": { "POSTGRES_DSN": "postgresql://readonly_user:***@host/db" }
    }
  }
}
```
Use a read-only role on the DB side. The MCP tool surface should not be the only layer of defense.
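Provisioning that role might look like the following (role, database, and schema names are placeholders; the `SELECT` grants matter only if you add a `sample_rows` tool, since the catalog views used above are readable by any role):

```sql
CREATE ROLE readonly_user LOGIN PASSWORD '***';
GRANT CONNECT ON DATABASE mydb TO readonly_user;
GRANT USAGE ON SCHEMA public TO readonly_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;
-- Cover tables created after the grant runs:
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly_user;
```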
Once restarted, Claude Code exposes `list_tables`, `describe_table`, and `list_foreign_keys` as callable tools. Ask “what tables are in the billing schema and how do they relate?” and the model runs the introspection tools and writes accurate SQL on the first try.
Pros & Cons
| | Pros | Cons |
|---|---|---|
| Accuracy | Model sees real column names and types — no hallucinated fields | Needs schema refresh if DDL changes mid-session |
| Security | Credentials never leave the server; no write path exposed | You must trust the MCP host (Claude Code) to call tools honestly |
| Maintenance | ~80 lines of code, standard libraries | One more process to run and keep updated |
| Portability | Works with any MCP client — Claude Code, Cursor, custom hosts | The MCP spec is still evolving; expect minor shifts |
Who Should Use This
- Backend engineers using Claude Code against real databases, not toy ones.
- Teams where DB credentials must never touch a chat context.
- Data teams that already have read-only roles and want the LLM to author SQL without guessing.
- Anyone already burned by a model inventing a `users.email_verified_at` column that does not exist.
If you are still running queries by hand and pasting results into the chat, this is the upgrade. If you only ever touch SQLite locally, this is overkill — use the built-in file tools.
For a related piece on automating the rest of your AI-assisted workflow, see Claude Code hooks for pre-commit lint automation.
FAQ
Do I need to write a full MCP server, or can I use an existing one?
Community Postgres MCP servers exist, but most expose raw SQL. If you care about the read-only property, audit what the server is actually doing or write a slim one like the example above.
How does MCP differ from Claude Code tools or function calling?
MCP is a protocol for exposing tools to any compatible client. Function calling is model-specific. Claude Code tools are the host implementation. MCP sits in the middle — portable across clients.
Does this work with managed Postgres (RDS, Supabase, Neon)?
Yes. Any Postgres that speaks the standard protocol and lets you create a read-only role will work.
Can I cache the schema inside the server?
Yes — wrap each query with a short TTL cache (60–300 s). Schemas rarely change mid-session, and caching cuts tool-call latency noticeably.
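A minimal TTL memoizer needs nothing beyond the standard library (the decorator name and default are ours); apply it to `query` or to the individual tool functions:

```python
import time
from functools import wraps


def ttl_cache(seconds: float = 120.0):
    """Memoize a function's results for `seconds`.

    Keyed on positional args only — fine for the introspection tools,
    which take simple string arguments.
    """
    def decorator(fn):
        store: dict = {}

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[0] < seconds:
                return hit[1]  # fresh enough: serve the cached value
            value = fn(*args)
            store[args] = (now, value)
            return value

        return wrapper

    return decorator
```

Invalidation is deliberately crude: if you run DDL mid-session, wait out the TTL or restart the server.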
What about other databases — MySQL, BigQuery, Snowflake?
Same pattern. Swap `psycopg` for the relevant driver and use that engine’s information schema. The MCP surface stays identical.
Where is the official MCP documentation?
See Anthropic’s docs at modelcontextprotocol.io for the spec and SDKs.
Bottom Line
A custom MCP server for Postgres schema introspection turns Claude Code from a confident guesser into an informed author — without handing the model production credentials or write access.
Ship the read-only version first. Add sampled-row and index-stat tools later only when you actually need them.