MCP for Enterprise: SSO, Audit Trails, and Gateway Patterns

The Model Context Protocol (MCP) crossed 110 million monthly downloads in early 2026. What started as Anthropic's open standard for connecting AI agents to external tools has become the connective tissue of the agentic automation ecosystem, with contributions from OpenAI, Google, and hundreds of enterprise vendors.
But adoption at scale exposed a gap. Most MCP tutorials teach you how to build a server and connect a client. They assume a single developer, a single API key, and a trusted environment. Enterprise reality looks different: hundreds of MCP servers across teams, shared credentials, no centralized governance, and no audit trail for what agents do with the tools they access.
The 2026 MCP roadmap addresses this directly, prioritizing enterprise readiness alongside transport evolution, agent communication, and governance maturation. This tutorial walks you through the three pillars of enterprise MCP: SSO-integrated authentication, structured audit trails, and gateway proxy patterns.
Prerequisites
Before starting, ensure you have:
- Working knowledge of MCP (server and client basics — see our Build an MCP Server in TypeScript tutorial)
- Node.js 20+ and TypeScript
- An OAuth 2.0 / OIDC identity provider (Keycloak, Entra ID, Auth0, or similar)
- Basic understanding of reverse proxies (Nginx, Caddy, or cloud API gateways)
- Docker for running local infrastructure
What You Will Build
A production-ready MCP deployment with:
- SSO-integrated authentication — agents and users authenticate through your existing identity provider
- Structured audit logging — every tool invocation is logged with who, what, when, and outcome
- A gateway proxy — centralized control plane for rate limiting, policy enforcement, and authorization propagation
Part 1: SSO Integration for MCP Servers
The Problem with API Keys
Most MCP server examples use static API keys or bearer tokens. In production, this creates three problems:
- No identity context: the server knows a valid key was used, but not who used it or what they should be allowed to do
- Key sprawl: each team manages its own keys, with no rotation policy or central revocation
- No SSO integration: users authenticate separately for MCP tools even though they already have enterprise credentials
OAuth 2.1 in MCP
The MCP specification added OAuth 2.1 support in June 2025. Instead of each MCP client managing its own credentials, access is brokered through your organization's existing identity layer: SSO in, scoped tokens out, IT stays in the loop.
Here is how to wire it up:
Step 1: Configure Your Identity Provider
Create an OAuth application in your IdP. This example uses Keycloak, but the pattern applies to Entra ID, Auth0, or Okta.
// Keycloak client configuration
// realm: "enterprise"
// client_id: "mcp-gateway"
// client_secret: stored in vault
// Valid redirect URIs: https://mcp-gateway.internal/callback
// Scopes: openid, profile, mcp:tools:read, mcp:tools:execute
Define custom scopes that map to MCP permissions:
- mcp:tools:read — list and describe available tools
- mcp:tools:execute — invoke tools
- mcp:resources:read — read MCP resources
- mcp:admin — manage server configuration
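With the scopes registered, a service can obtain a scoped token through the client_credentials grant. A minimal sketch against the Keycloak realm above (the path is Keycloak's standard token endpoint; the secret would come from your vault, and other IdPs use an equivalent endpoint):

```typescript
// Exchange client credentials for an access token carrying MCP scopes.
const tokenEndpoint =
  "https://keycloak.internal/realms/enterprise/protocol/openid-connect/token";

function buildTokenRequest(clientSecret: string): URLSearchParams {
  return new URLSearchParams({
    grant_type: "client_credentials",
    client_id: "mcp-gateway",
    client_secret: clientSecret, // from your vault, never hard-coded
    scope: "mcp:tools:read mcp:tools:execute",
  });
}

async function fetchScopedToken(clientSecret: string): Promise<string> {
  const res = await fetch(tokenEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: buildTokenRequest(clientSecret),
  });
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  const data = (await res.json()) as { access_token: string };
  return data.access_token;
}
```

The returned token carries only the scopes the IdP granted, which is what the middleware in Step 2 validates.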
Step 2: Implement the Auth Middleware
Create an authentication middleware that validates OAuth tokens before any MCP request reaches your server:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import jwt from "jsonwebtoken";
import jwksClient from "jwks-rsa";
const client = jwksClient({
jwksUri: "https://keycloak.internal/realms/enterprise/protocol/openid-connect/certs",
cache: true,
rateLimit: true,
});
function getSigningKey(kid: string): Promise<string> {
return new Promise((resolve, reject) => {
client.getSigningKey(kid, (err, key) => {
if (err) reject(err);
else resolve(key!.getPublicKey());
});
});
}
interface McpTokenPayload {
sub: string;
email: string;
realm_access: { roles: string[] };
scope: string;
iat: number;
exp: number;
}
async function validateToken(token: string): Promise<McpTokenPayload> {
const decoded = jwt.decode(token, { complete: true });
if (!decoded?.header.kid) throw new Error("Invalid token header");
const signingKey = await getSigningKey(decoded.header.kid);
return jwt.verify(token, signingKey, {
algorithms: ["RS256"], // pin the algorithm; never accept "none"
audience: "mcp-gateway",
issuer: "https://keycloak.internal/realms/enterprise",
}) as unknown as McpTokenPayload;
}
function hasScope(payload: McpTokenPayload, required: string): boolean {
return payload.scope.split(" ").includes(required);
}
Step 3: Scope-Based Tool Authorization
Wrap your MCP tool handlers with authorization checks:
import { z } from "zod";
const server = new McpServer({
name: "enterprise-tools",
version: "1.0.0",
});
server.tool(
"query_database",
"Run a read-only SQL query against the analytics database",
{ query: z.string().describe("SQL SELECT query") },
async (args, extra) => {
const token = extra.meta?.authToken as string;
const payload = await validateToken(token);
if (!hasScope(payload, "mcp:tools:execute")) {
return {
content: [{ type: "text", text: "Forbidden: missing mcp:tools:execute scope" }],
isError: true,
};
}
const result = await executeReadOnlyQuery(args.query);
// Log the invocation (the auditLog helper is covered in Part 2)
await auditLog({
timestamp: new Date().toISOString(),
requestId: `req_${Date.now()}`,
userId: payload.email,
toolName: "query_database",
toolArgs: args,
outcome: "success",
});
return { content: [{ type: "text", text: JSON.stringify(result) }] };
}
);
Part 2: Structured Audit Trails
Why Audit Logging Matters Now
The EU AI Act requires automatic logging and traceability for high-risk AI systems, with key obligations taking effect on August 2, 2026. SOX requires audit logs retained for at least seven years. Even without regulatory pressure, knowing what your AI agents did, with which tools, and what happened is basic operational hygiene.
What to Log
Every audit entry should capture:
| Field | Description | Example |
|---|---|---|
| timestamp | ISO 8601 with timezone | 2026-04-23T08:15:32.441Z |
| requestId | Unique trace ID | req_a7b3c9d2 |
| userId | Authenticated identity | jane@company.com |
| agentId | Which AI agent made the call | support-bot-v2 |
| toolName | MCP tool invoked | query_database |
| toolArgs | Parameters, with secrets redacted | { "query": "SELECT …" } |
| outcome | success / error / denied | success |
| responseSummary | Truncated result | 42 rows returned |
| durationMs | Execution time in milliseconds | 234 |
| clientIp | Source address | 10.0.1.15 |
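Put together, one line of the resulting JSONL stream might look like this (values illustrative; field names follow the AuditEntry interface defined in Step 4):

```json
{"timestamp":"2026-04-23T08:15:32.441Z","requestId":"req_a7b3c9d2","userId":"jane@company.com","agentId":"support-bot-v2","toolName":"query_database","toolArgs":{"query":"SELECT count(*) FROM orders"},"outcome":"success","responseSummary":"42 rows returned","durationMs":234,"clientIp":"10.0.1.15"}
```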
Step 4: Implement the Audit Logger
import { createLogger, format, transports } from "winston";
interface AuditEntry {
timestamp: string;
requestId: string;
userId: string;
agentId?: string;
toolName: string;
toolArgs: Record<string, unknown>;
outcome: "success" | "error" | "denied";
responseSummary?: string;
durationMs?: number;
clientIp?: string;
}
const auditLogger = createLogger({
level: "info",
format: format.combine(
format.timestamp(),
format.json()
),
defaultMeta: { service: "mcp-audit" },
transports: [
new transports.File({
filename: "/var/log/mcp/audit.jsonl",
maxsize: 100 * 1024 * 1024, // 100MB rotation
maxFiles: 365,
}),
],
});
function sanitizeArgs(args: Record<string, unknown>): Record<string, unknown> {
const sensitive = ["password", "secret", "token", "key", "credential"];
const sanitized = { ...args };
for (const key of Object.keys(sanitized)) {
if (sensitive.some((s) => key.toLowerCase().includes(s))) {
sanitized[key] = "[REDACTED]";
}
}
return sanitized;
}
async function auditLog(entry: AuditEntry): Promise<void> {
const sanitized = {
...entry,
toolArgs: sanitizeArgs(entry.toolArgs),
};
auditLogger.info("tool_invocation", sanitized);
}
Step 5: Wrap All Tool Handlers
Create a higher-order function that automatically wraps every tool with audit logging:
import { randomUUID } from "crypto";
type ToolHandler = (
args: Record<string, unknown>,
extra: Record<string, unknown>
) => Promise<{ content: Array<{ type: string; text: string }>; isError?: boolean }>;
function withAudit(toolName: string, handler: ToolHandler): ToolHandler {
return async (args, extra) => {
const requestId = `req_${randomUUID().slice(0, 8)}`;
const token = extra.meta?.authToken as string;
const startTime = Date.now();
let payload: McpTokenPayload | null = null;
try {
payload = await validateToken(token);
} catch {
await auditLog({
timestamp: new Date().toISOString(),
requestId,
userId: "unknown",
toolName,
toolArgs: args,
outcome: "denied",
responseSummary: "Authentication failed",
});
return {
content: [{ type: "text", text: "Authentication required" }],
isError: true,
};
}
try {
const result = await handler(args, extra);
await auditLog({
timestamp: new Date().toISOString(),
requestId,
userId: payload.email,
toolName,
toolArgs: args,
outcome: result.isError ? "error" : "success",
responseSummary: result.content[0]?.text?.slice(0, 200),
durationMs: Date.now() - startTime,
});
return result;
} catch (err) {
await auditLog({
timestamp: new Date().toISOString(),
requestId,
userId: payload.email,
toolName,
toolArgs: args,
outcome: "error",
responseSummary: String(err),
durationMs: Date.now() - startTime,
});
throw err;
}
};
}
Querying Audit Logs
Since logs are in JSONL format, you can query them with standard tools:
# All denied requests in the last 24 hours
cat /var/log/mcp/audit.jsonl | \
jq 'select(.outcome == "denied" and
(.timestamp | sub("\\.[0-9]+Z$"; "Z") | fromdateiso8601) > (now - 86400))'
# Tool usage by user
cat /var/log/mcp/audit.jsonl | \
jq -r '.userId' | sort | uniq -c | sort -rn
# Average response time per tool
cat /var/log/mcp/audit.jsonl | \
jq -r '[.toolName, .durationMs] | @tsv' | \
datamash -g 1 mean 2 median 2 max 2
Part 3: Gateway Proxy Pattern
Why a Gateway
In production, clients should not connect directly to individual MCP servers. A gateway intermediary provides:
- Authorization propagation: downstream servers know what the original client was authorized to do
- Rate limiting: prevent runaway agents from overwhelming backend services
- Policy enforcement: centralized rules for what tools are available to which roles
- Service discovery: clients connect to one endpoint; the gateway routes to the right server
- TLS termination: manage certificates centrally
Architecture Overview
┌─────────────┐ ┌──────────────────┐ ┌─────────────────┐
│ AI Agent │────▶│ MCP Gateway │────▶│ MCP Server A │
│ (Client) │ │ │ │ (DB Tools) │
└─────────────┘ │ - Auth (OAuth) │ └─────────────────┘
│ - Rate Limit │
┌─────────────┐ │ - Audit Log │ ┌─────────────────┐
│ AI Agent │────▶│ - Routing │────▶│ MCP Server B │
│ (Client) │ │ - Policy │ │ (File Tools) │
└─────────────┘ └──────────────────┘ └─────────────────┘
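From an agent's point of view, the whole fleet sits behind a single endpoint. A hypothetical tool invocation as it travels through the gateway (the host, token, and tool name are illustrative; the JSON-RPC envelope follows MCP's HTTP transport):

```typescript
// Build a tools/call request routed through the gateway. The gateway
// validates the token, enforces policy, and proxies to the right backend.
// Host, token, and tool name here are placeholders.
function buildToolCall(
  server: string,
  tool: string,
  args: Record<string, unknown>,
  token: string
) {
  return {
    url: `https://mcp-gateway.internal/mcp/${server}`,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      // JSON-RPC 2.0 envelope per MCP's HTTP transport
      body: JSON.stringify({
        jsonrpc: "2.0",
        id: 1,
        method: "tools/call",
        params: { name: tool, arguments: args },
      }),
    },
  };
}

// Usage: const req = buildToolCall("database", "query_database",
//   { query: "SELECT 1" }, accessToken); then fetch(req.url, req.init)
```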
Step 6: Build the Gateway
Here is a minimal MCP gateway using Express that handles auth, routing, and audit:
import express from "express";
import * as httpProxy from "http-proxy-middleware";
import rateLimit from "express-rate-limit";
const app = express();
// Auth middleware: validate token, extract identity, set headers.
// Registered first so the rate limiter below can key on x-user-id.
app.use(async (req, res, next) => {
const authHeader = req.headers.authorization;
if (!authHeader?.startsWith("Bearer ")) {
return res.status(401).json({ error: "Missing authorization" });
}
try {
const payload = await validateToken(authHeader.slice(7));
req.headers["x-user-id"] = payload.email;
req.headers["x-user-roles"] = payload.realm_access.roles.join(",");
req.headers["x-user-scopes"] = payload.scope;
next();
} catch {
return res.status(401).json({ error: "Invalid token" });
}
});
// Rate limiting per authenticated user
const limiter = rateLimit({
windowMs: 60 * 1000,
max: 100,
keyGenerator: (req) => (req.headers["x-user-id"] as string) || req.ip || "anonymous",
message: { error: "Rate limit exceeded. Try again in 60 seconds." },
});
app.use(limiter);
// Policy enforcement: check role-based access to MCP servers
const serverPolicies: Record<string, string[]> = {
"/mcp/database": ["db-admin", "data-analyst"],
"/mcp/files": ["developer", "admin"],
"/mcp/deploy": ["admin", "devops"],
};
app.use("/mcp/:server", (req, res, next) => {
const serverPath = `/mcp/${req.params.server}`;
const userRoles = (req.headers["x-user-roles"] as string)?.split(",") || [];
const allowedRoles = serverPolicies[serverPath];
if (allowedRoles && !userRoles.some((r) => allowedRoles.includes(r))) {
auditLog({
timestamp: new Date().toISOString(),
requestId: `gw_${Date.now()}`,
userId: req.headers["x-user-id"] as string,
toolName: `gateway:${req.params.server}`,
toolArgs: {},
outcome: "denied",
responseSummary: `Role check failed. Required: ${allowedRoles.join(",")}`,
});
return res.status(403).json({ error: "Insufficient permissions" });
}
next();
);
Step 7: Server Registry and Routing
interface McpServerEntry {
name: string;
url: string;
healthCheck: string;
allowedRoles: string[];
rateLimit: number;
description: string;
}
const serverRegistry: McpServerEntry[] = [
{
name: "database",
url: "http://mcp-db.internal:3001",
healthCheck: "/health",
allowedRoles: ["db-admin", "data-analyst"],
rateLimit: 50,
description: "Read-only database query tools",
},
{
name: "files",
url: "http://mcp-files.internal:3002",
healthCheck: "/health",
allowedRoles: ["developer", "admin"],
rateLimit: 100,
description: "File system and document tools",
},
{
name: "deploy",
url: "http://mcp-deploy.internal:3003",
healthCheck: "/health",
allowedRoles: ["admin", "devops"],
rateLimit: 10,
description: "Deployment and infrastructure tools",
},
];
// Dynamic routing based on registry
for (const server of serverRegistry) {
app.use(
`/mcp/${server.name}`,
httpProxy.createProxyMiddleware({
target: server.url,
changeOrigin: true,
pathRewrite: { [`^/mcp/${server.name}`]: "" },
// Identity headers (x-user-id, x-user-roles, x-user-scopes) set by the
// auth middleware are forwarded to the downstream server by default.
// The server trusts the gateway — no second auth needed.
})
);
}
app.listen(4000, () => {
console.log("MCP Gateway running on :4000");
});
Step 8: Governance-as-Code
Define your MCP policies as declarative configuration that lives in version control:
# mcp-policies.yaml
version: "1.0"
policies:
- name: "database-access"
servers: ["database"]
roles: ["db-admin", "data-analyst"]
scopes: ["mcp:tools:execute"]
rate_limit:
requests_per_minute: 50
tools:
allowed: ["query_database", "list_tables", "describe_table"]
denied: ["drop_table", "truncate"]
- name: "deployment-access"
servers: ["deploy"]
roles: ["admin", "devops"]
scopes: ["mcp:tools:execute", "mcp:admin"]
rate_limit:
requests_per_minute: 10
tools:
allowed: ["deploy_staging", "rollback"]
denied: ["deploy_production"]
approval_required:
- tool: "deploy_production"
approvers: ["platform-lead"]
- name: "default-deny"
servers: ["*"]
roles: ["*"]
effect: "deny"
message: "No policy matches this request. Contact platform team."
Load and enforce these policies at the gateway:
import { readFileSync } from "fs";
import YAML from "yaml";
interface Policy {
name: string;
servers: string[];
roles: string[];
scopes: string[];
rate_limit: { requests_per_minute: number };
tools?: { allowed?: string[]; denied?: string[] };
approval_required?: Array<{ tool: string; approvers: string[] }>;
effect?: string;
}
function loadPolicies(path: string): Policy[] {
const raw = readFileSync(path, "utf-8");
return YAML.parse(raw).policies;
}
function evaluatePolicy(
policies: Policy[],
server: string,
tool: string,
userRoles: string[]
): { allowed: boolean; reason: string } {
for (const policy of policies) {
const serverMatch =
policy.servers.includes(server) || policy.servers.includes("*");
const roleMatch =
userRoles.some((r) => policy.roles.includes(r)) ||
policy.roles.includes("*");
if (!serverMatch || !roleMatch) continue;
if (policy.effect === "deny") {
return { allowed: false, reason: policy.name };
}
if (policy.tools?.denied?.includes(tool)) {
return { allowed: false, reason: `${policy.name}: tool denied` };
}
if (policy.tools?.allowed && !policy.tools.allowed.includes(tool)) {
return { allowed: false, reason: `${policy.name}: tool not in allowlist` };
}
return { allowed: true, reason: policy.name };
}
return { allowed: false, reason: "no matching policy" };
}
Deployment Checklist
Before going to production:
- TLS everywhere — gateway to client and gateway to MCP servers
- Token rotation — configure short-lived access tokens (15 minutes) with refresh tokens
- Log retention — set retention policies aligned with your compliance requirements (7 years for SOX)
- Health checks — monitor MCP server health from the gateway, remove unhealthy backends
- Secret management — store OAuth client secrets and signing keys in a vault, never in config files
- Load testing — validate rate limits behave correctly under concurrent agent load
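The health-check item can be sketched as a periodic probe against each registry entry. A minimal version, assuming the entry shape from Step 7 (name, url, healthCheck):

```typescript
// Probe each backend's health endpoint so the router can skip unhealthy
// servers. A sketch; endpoints and the 2s timeout are illustrative.
interface HealthStatus {
  name: string;
  healthy: boolean;
}

async function checkHealth(
  servers: Array<{ name: string; url: string; healthCheck: string }>
): Promise<HealthStatus[]> {
  return Promise.all(
    servers.map(async (s) => {
      try {
        // Abort slow probes so one hung backend does not stall the sweep
        const res = await fetch(`${s.url}${s.healthCheck}`, {
          signal: AbortSignal.timeout(2000),
        });
        return { name: s.name, healthy: res.ok };
      } catch {
        return { name: s.name, healthy: false };
      }
    })
  );
}

// Usage: run on an interval and drop unhealthy entries from routing
// setInterval(() => checkHealth(serverRegistry).then(updateRoutes), 15_000);
```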
Troubleshooting
Token validation fails with "kid not found": Your JWKS cache may be stale. Restart the gateway or reduce the cache TTL after IdP key rotation.
Gateway returns 502 to agents: The downstream MCP server is unreachable. Check health endpoints and network policies between the gateway and server pods.
Audit logs missing entries: Ensure the audit logger flushes synchronously before the response is sent. Async loggers can lose entries during process restarts.
Agents get rate-limited too aggressively: Tune per-user limits based on actual usage patterns. Start permissive and tighten after collecting a week of baseline data.
Next Steps
- Explore Google A2A Protocol for inter-agent communication patterns
- Review our Build an MCP Server in TypeScript tutorial for server-side fundamentals
- Build an MCP Client in TypeScript to test against your gateway
Conclusion
Enterprise MCP is not about adding more tools to your AI agents. It is about governing the tools they already have. SSO integration gives you identity context. Audit trails give you visibility. Gateway patterns give you control. Together, they turn a collection of unmanaged MCP servers into governed, compliant, production infrastructure.
If your organization is scaling MCP adoption and needs help implementing these patterns, our MCP Integration service covers architecture design, gateway deployment, and policy configuration — from first server to full governance.