OAuth2-Proxy - Securing MCP Servers on Kubernetes Before Hackers Find Them First
MCP servers like DBHub expose databases, filesystems, and code execution over HTTP with zero authentication. Learn how to deploy OAuth2-Proxy on Kubernetes to add SSO, group-based access control, and session management to any MCP server without changing a single line of code.
The Model Context Protocol is everywhere. In less than two years since Anthropic announced it, MCP has become the standard way AI agents connect to external tools: databases, filesystems, code execution environments, Kubernetes clusters. OpenAI adopted it. The Linux Foundation governs it. There are over 5,000 community MCP servers and counting.
Here’s the problem: many of the self-hosted ones have zero authentication.
To be clear, not all MCP servers have this problem. Vendor-managed MCP servers like Atlassian’s (Jira, Confluence), Slack, or Datadog handle authentication on their side through API tokens and OAuth flows baked into their platform. The problem is specifically with self-hosted MCP servers: the ones you deploy yourself on your own infrastructure. Servers like DBHub for databases, filesystem servers, Kubernetes MCP servers, or code execution environments. When you run these in HTTP transport mode, anyone who can reach the endpoint can use them. No API key. No token validation. No user identity.
The MCP protocol itself makes authentication optional, and most self-hosted implementations skip it entirely. Trend Micro researchers found 492 MCP servers exposed on the internet with no client authentication or traffic encryption. Over 30 CVEs were filed against MCP servers in the first 60 days of 2026 alone.
This is where OAuth2-Proxy comes in. It’s a lightweight reverse proxy that adds OAuth2/OIDC authentication in front of any HTTP service, including MCP servers, without touching a single line of the application’s code. In this post, I’ll walk through why self-hosted MCP servers are a security liability, how OAuth2-Proxy works, and how to deploy it on Kubernetes to protect servers like DBHub with corporate SSO.
Who Should Read This?#
This post is for:
- Platform Engineers deploying MCP servers for AI-powered developer tools and internal platforms
- SREs responsible for securing AI infrastructure running on Kubernetes
- Security Engineers evaluating the attack surface of MCP deployments
- DevOps Teams self-hosting MCP servers like DBHub, filesystem servers, or code execution environments
If you’re running MCP servers in HTTP mode on any network, even behind a VPN, and haven’t added authentication, this post is for you.
The MCP Authentication Gap#
What is MCP?#
The Model Context Protocol (MCP) is an open protocol for connecting LLM applications to external data sources and tools. Think of it as a USB-C port for AI: a standardized interface that any AI client (Claude Desktop, Cursor, VS Code Copilot) can use to connect to any tool server (databases, APIs, filesystems).
MCP servers expose three primitives:
| Primitive | Purpose | Example |
|---|---|---|
| Tools | Functions the LLM can invoke | execute_sql, search_files, kubectl_get |
| Resources | Data the LLM can read | Database schemas, file contents, API docs |
| Prompts | Pre-built prompt templates | Query builders, code generators |
MCP supports two transport mechanisms:
- stdio: Local only. The client launches the server as a subprocess and communicates via stdin/stdout. Inherently secure, no network exposure.
- Streamable HTTP: Network-accessible. A single HTTP endpoint handling JSON-RPC 2.0 messages over POST/GET with optional SSE streaming. This is where the security problem lives.
Vendor-Managed vs Self-Hosted MCP Servers#
Not all MCP servers are created equal when it comes to security. There are three distinct categories:
| Category | Examples | Auth Handling |
|---|---|---|
| Vendor-managed | Atlassian (Jira, Confluence), Slack, Datadog, PagerDuty | Authentication is handled by the vendor’s platform. You connect using API tokens or OAuth credentials issued by the service. The vendor controls access, rate limits, and audit logging. |
| Self-hosted open-source | DBHub, Filesystem, PostgreSQL, Kubernetes MCP, Puppeteer, Desktop Commander | You deploy and operate the server. Authentication is your responsibility. Most ship with zero auth out of the box. |
| Custom internal | Company-built MCP servers wrapping internal APIs, databases, deployment pipelines, or business logic | You build and operate the server. Unless your team explicitly implements auth (and most don’t in v1), these are wide open. |
This post focuses on the last two categories. When Atlassian ships an MCP server for Jira, they enforce authentication through their existing API token and OAuth infrastructure. That’s their problem, not yours.
But when you deploy DBHub to connect AI agents to your PostgreSQL database, or when your platform team builds a custom MCP server that wraps internal APIs (think: an onboarding tool that provisions cloud accounts, a deployment server that triggers CI/CD pipelines, or a data gateway that queries internal data warehouses), there is nothing between the agent and your systems unless you put it there. Custom internal MCP servers are especially risky because they tend to have broad access to internal resources and are often built quickly without a security review.
The Problem: Self-Hosted HTTP Transport with No Auth#
When a self-hosted MCP server runs in HTTP transport mode, it exposes a network endpoint that accepts JSON-RPC requests. Let’s look at DBHub as a concrete example.
DBHub is a popular MCP server by Bytebase that provides a unified interface to PostgreSQL, MySQL, MariaDB, SQL Server, and SQLite. It exposes two MCP tools: execute_sql and search_objects. Running it in HTTP mode is a single command:
docker run --rm --init --name dbhub --publish 8080:8080 \
bytebase/dbhub --transport http --port 8080 \
--dsn "postgres://user:password@db-host:5432/production?sslmode=require"
That’s it. Port 8080 is now accepting unauthenticated requests that can execute arbitrary SQL against your database. Anyone who can reach this port (a compromised pod in the same cluster, a misconfigured ingress, a port-forward left open) has full database access.
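To see how low the bar is, here’s a sketch of the single request an attacker would need. The JSON-RPC 2.0 `tools/call` envelope and the `execute_sql` tool name come from the MCP spec and DBHub’s tool list; the host, the `/mcp` path, and the `sql` argument name are illustrative assumptions, so check your server’s actual endpoint. The snippet only builds and prints the payload rather than sending it:

```python
import json

# Hypothetical endpoint; streamable HTTP servers conventionally serve a
# single path such as /mcp -- confirm against your server's docs.
url = "http://dbhub.internal:8080/mcp"

# One JSON-RPC 2.0 message is all it takes: no API key, no token, no identity.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "execute_sql",
        # Argument name is illustrative; DBHub's actual schema may differ.
        "arguments": {"sql": "SELECT * FROM users LIMIT 10"},
    },
}

# e.g. requests.post(url, json=payload) -- intentionally not executed here.
print(json.dumps(payload, indent=2))
```

Nothing in that payload identifies the caller, which is exactly the problem: the server has no way to distinguish your CI pipeline from a compromised pod.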
DBHub is not unique. This is the default for nearly every self-hosted MCP server running in HTTP mode:
| MCP Server | What It Exposes | Auth Built In? |
|---|---|---|
| DBHub (Bytebase) | SQL execution on production databases | No |
| Filesystem (official) | Read/write access to host filesystem | No |
| PostgreSQL (official) | Direct database queries | No |
| GitHub (official) | Repository management, PRs, issues | Token-based (but no user auth) |
| Desktop Commander | Terminal access, process management | No |
| Kubernetes MCP | Full cluster API access | No |
| Puppeteer | Browser automation on internal apps | No |
| Your internal MCP | Company APIs, deployment pipelines, data warehouses | Up to you (usually no) |
The Numbers Are Alarming#
The security research paints a grim picture:
- 492 MCP servers discovered exposed on the internet with no authentication (Trend Micro)
- 30+ CVEs filed against MCP servers in January-February 2026
- 88% of MCP servers require credentials to access backend services, but 53% use insecure static secrets
- Only 8.5% implement OAuth, the rest rely on nothing or basic API keys
- ~74% of exposed servers run on major cloud providers
- At least 8 of the exposed servers directly managed cloud resources (create/delete VMs, storage, etc.)
- CVE-2025-6514: mcp-remote OAuth proxy vulnerability, CVSS 9.6, affecting 437K+ downloads
Why This Matters: An unauthenticated MCP server isn’t just an information disclosure risk. It’s a lateral movement goldmine. A compromised MCP server with database access gives an attacker everything: schemas, data, credentials stored in tables, and a pivot point to every system that database connects to.
Why Not Just Implement MCP Auth?#
The MCP specification does include an authorization draft that describes OAuth 2.1 integration. But there are practical problems:
- It’s optional. The spec says servers “SHOULD” implement auth when using HTTP transport. Most don’t.
- It’s complex. Full implementation requires OAuth 2.1 + Protected Resource Metadata (RFC 9728) + PKCE + Resource Indicators (RFC 8707).
- It’s per-server. Each MCP server would need its own auth implementation, no centralization.
- Most servers haven’t adopted it. Community and open-source MCP servers overwhelmingly skip auth.
- Enterprise criticism. The spec has been called “a mess” for conflating resource server and authorization server roles.
The pragmatic solution? Don’t wait for MCP servers to implement auth. Put an auth proxy in front of them.
What is OAuth2-Proxy?#
OAuth2-Proxy is a CNCF Sandbox project that provides authentication and session management as a reverse proxy. Originally created at Bitly in 2014, it’s now maintained by a dedicated community with contributions from engineers at Microsoft, OpenAI, Adobe, and Morgan Stanley.
Here’s how it works:
- A user or agent requests a protected resource
- OAuth2-Proxy checks for a valid session cookie
- If no session exists, the user is redirected to the configured identity provider (Google, GitHub, Keycloak, Entra ID, etc.)
- After successful login, the provider returns an authorization code to OAuth2-Proxy’s callback endpoint
- OAuth2-Proxy exchanges the code for tokens, validates the user against configured rules (email domain, group membership, allowed users), creates a session cookie, and forwards the request upstream
- Subsequent requests include the session cookie, no re-authentication until the session expires
The key insight: the upstream MCP server never handles authentication. By the time a request reaches DBHub or any other MCP server, it has already been validated. OAuth2-Proxy also forwards user identity via HTTP headers (X-Auth-Request-User, X-Auth-Request-Email, X-Auth-Request-Groups), enabling audit logging of who executed what.
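As a sketch of what those headers enable, an upstream service (or a thin shim in front of it) could turn them into an audit log. The header names match what OAuth2-Proxy injects with `--set-xauthrequest`; the `audit_entry` helper and its wiring are hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("mcp.audit")

# Headers injected by OAuth2-Proxy when --set-xauthrequest is enabled.
IDENTITY_HEADERS = (
    "X-Auth-Request-User",
    "X-Auth-Request-Email",
    "X-Auth-Request-Groups",
)

def audit_entry(headers: dict, method: str, tool: str) -> dict:
    """Build an audit record from the identity headers OAuth2-Proxy injects.

    Sketch only: a real MCP server would call this from its request
    handler; the handler wiring is left out here.
    """
    identity = {h: headers.get(h, "<missing>") for h in IDENTITY_HEADERS}
    entry = {"method": method, "tool": tool, **identity}
    audit.info("%s", entry)
    return entry

# Example request headers as forwarded by the proxy (values illustrative):
entry = audit_entry(
    {"X-Auth-Request-User": "alice", "X-Auth-Request-Email": "alice@example.com"},
    method="tools/call",
    tool="execute_sql",
)
```

With this in place, every tool invocation is attributable to a specific authenticated user rather than an anonymous network peer.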
sequenceDiagram
participant User as AI Agent / User
participant Proxy as OAuth2-Proxy
participant IdP as Identity Provider
participant MCP as MCP Server
User->>Proxy: 1. MCP Request
Proxy->>Proxy: Check session cookie
alt No valid session
Proxy->>User: 2. Redirect to IdP
User->>IdP: 3. Login
IdP->>Proxy: 4. Authorization code
Proxy->>IdP: 5. Exchange code for tokens
IdP->>Proxy: 6. ID token + access token
Proxy->>Proxy: Validate user (email/group)
Proxy->>User: 7. Set session cookie
end
User->>Proxy: 8. Request with session cookie
Proxy->>MCP: 9. Forward + identity headers
MCP->>Proxy: 10. Response
Proxy->>User: 11. Response
OAuth2-Proxy at a Glance#
| Attribute | Detail |
|---|---|
| Latest Version | v7.15.1 (March 2026) |
| Language | Go |
| License | MIT |
| CNCF Status | Sandbox (October 2025) |
| GitHub Stars | 14,100+ |
| Contributors | 435+ |
| Monthly Downloads | ~30 million (quay.io) |
| Container Base | Distroless (since v7.6.0) |
| Supported Providers | 20 (Google, GitHub, GitLab, Entra ID, Keycloak OIDC, generic OIDC, and more) |
Why OAuth2-Proxy for MCP Servers?#
There are other auth proxies like Authelia, Pomerium, and Keycloak. Here’s why OAuth2-Proxy is the best fit for MCP:
| Aspect | OAuth2-Proxy | Authelia | Pomerium | Keycloak |
|---|---|---|---|---|
| Primary Role | Auth reverse proxy | Auth gateway + MFA | Identity-aware proxy | Full IdP + auth server |
| Resource Footprint | Light (~50MB) | Very light (<100MB) | Medium | Heavy (512MB+ JVM) |
| Built-in MFA | No (delegated to IdP) | Yes (TOTP, WebAuthn) | No (delegated to IdP) | Yes (built-in) |
| Complexity | Low | Medium | Medium-High | High |
| Setup Time | Minutes | Hours | Hours | Days |
| CNCF Status | Sandbox | None | None | Incubating |
| Best For | Quick drop-in auth | Self-hosted MFA + ACLs | Enterprise zero-trust | Full identity platform |
Bottom line: MCP servers need authentication now, not after a multi-week Keycloak rollout. OAuth2-Proxy gives you SSO in front of any MCP server in minutes, with zero code changes. If you later need fine-grained per-tool authorization or built-in MFA, layer in Authelia or OPA.
What OAuth2-Proxy is NOT#
- Not an identity provider: It doesn’t store users or credentials. It delegates to external IdPs like Google, GitHub, or Keycloak.
- Not a fine-grained authorization engine: It does binary allow/deny based on email, domain, or group. For per-tool MCP access control, you’ll need additional policy layers (OPA, Authelia).
- Not an MCP-aware gateway: It proxies HTTP traffic generically. It doesn’t understand MCP JSON-RPC messages or tool invocations. It just ensures the caller is authenticated before any request reaches the MCP server.
- Not designed for machine-to-machine auth: It’s built around browser-based OAuth flows. For service-to-service MCP calls, you’ll need bearer token validation at the API gateway level.
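For context on what gateway-level bearer validation involves, here’s a minimal sketch of inspecting a JWT’s claims. It deliberately skips signature verification, which a real gateway must perform against the IdP’s JWKS using a proper JWT library; the `inspect_bearer_token` helper and the example token are hypothetical:

```python
import base64
import json
import time

def b64url_decode(part: str) -> bytes:
    # JWT segments are base64url without padding; re-add padding first.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def inspect_bearer_token(authorization_header: str) -> dict:
    """Extract and sanity-check claims from a Bearer JWT.

    NOTE: this sketch does NOT verify the signature. A real gateway must
    validate it against the IdP's JWKS before trusting any claim.
    """
    scheme, _, token = authorization_header.partition(" ")
    if scheme != "Bearer" or not token:
        raise ValueError("missing Bearer token")
    try:
        _header, payload, _sig = token.split(".")
    except ValueError:
        raise ValueError("not a JWT")
    claims = json.loads(b64url_decode(payload))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims

# Illustration with a hypothetical unsigned token (never accept alg=none
# in production):
claims = {"sub": "ci-bot", "exp": int(time.time()) + 300}
fake = ".".join(
    base64.urlsafe_b64encode(json.dumps(p).encode()).decode().rstrip("=")
    for p in ({"alg": "none"}, claims)
) + "."
print(inspect_bearer_token(f"Bearer {fake}")["sub"])  # → ci-bot
```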
Deployment Patterns for MCP on Kubernetes#
Pattern 1: Sidecar Proxy (Per-Server Isolation)#
OAuth2-Proxy runs as a sidecar in the same Pod as the MCP server. All traffic enters through the proxy on the exposed port, which forwards authenticated requests to the MCP server on localhost.
sequenceDiagram
box Pod: dbhub-secured
participant Proxy as OAuth2-Proxy :4180
participant MCP as DBHub :8080
end
participant User as AI Agent / User
participant IdP as Identity Provider
User->>Proxy: MCP request
alt No valid session
Proxy->>User: Redirect to IdP
User->>IdP: Login
IdP->>Proxy: Authorization code
Proxy->>IdP: Exchange for tokens
IdP->>Proxy: Tokens
Proxy->>User: Set session cookie
end
User->>Proxy: Request with cookie
Proxy->>MCP: Forward to localhost:8080
MCP->>Proxy: Response
Proxy->>User: Response
When to use: When each MCP server needs its own auth configuration (different allowed groups, different IdP settings), or when you need strict Pod-level isolation.
Pattern 2: Gateway API External Auth (Centralized)#
The gateway data plane (Envoy Gateway, Istio, Traefik) makes an ext-auth call to OAuth2-Proxy on every incoming request. One OAuth2-Proxy Deployment protects all MCP servers in the cluster.
Note: The community `ingress-nginx` controller was retired in March 2026 due to unfixable security vulnerabilities. The examples in this post use the Kubernetes Gateway API with Envoy Gateway as the data plane. The ext-auth pattern works similarly with Istio (via `AuthorizationPolicy`) and Traefik (via `ForwardAuth` middleware).
sequenceDiagram
participant User as AI Agent / User
participant GW as Envoy Gateway
participant Proxy as OAuth2-Proxy
participant IdP as Identity Provider
participant MCP as MCP Server (DBHub, FS, K8s)
User->>GW: MCP request
GW->>Proxy: ext-auth /oauth2/auth
alt Valid session
Proxy->>GW: 200 OK + identity headers
GW->>MCP: Forward request + X-Auth-Request-User/Email
MCP->>GW: Response
GW->>User: Response
else No valid session
Proxy->>GW: 401 Unauthorized
GW->>User: Redirect to /oauth2/start
User->>Proxy: Start OAuth flow
Proxy->>User: Redirect to IdP
User->>IdP: Login
IdP->>Proxy: Authorization code
Proxy->>IdP: Exchange for tokens
IdP->>Proxy: Tokens
Proxy->>User: Set session cookie + redirect back
User->>GW: Retry original request with cookie
GW->>Proxy: ext-auth /oauth2/auth
Proxy->>GW: 200 OK
GW->>MCP: Forward request
MCP->>GW: Response
GW->>User: Response
end
When to use: Production Kubernetes with multiple MCP servers sharing the same auth requirements. Most scalable pattern with a single OAuth2-Proxy instance and single OIDC client configuration.
Pattern 3: Standalone Gateway#
OAuth2-Proxy as a dedicated Deployment with its own Service, sitting in front of a single MCP server. Traffic flows: Client → OAuth2-Proxy Service → MCP Service.
When to use: When you don’t have an ingress controller, or for protecting a single high-value MCP server (e.g., a production database gateway).
Practical Examples#
Example 1: Securing DBHub with OAuth2-Proxy Sidecar#
This example deploys DBHub as an MCP server for PostgreSQL with OAuth2-Proxy as a sidecar, using GitHub organization-based authentication.
Prerequisites:
- A GitHub OAuth App (Settings → Developer Settings → OAuth Apps)
- Callback URL: `https://dbhub.example.com/oauth2/callback`
- Envoy Gateway installed (or another Gateway API implementation)
Step 1: Create Secrets
apiVersion: v1
kind: Secret
metadata:
name: oauth2-proxy-secret
namespace: mcp-servers
type: Opaque
stringData:
client-id: "your-github-oauth-app-client-id"
client-secret: "your-github-oauth-app-client-secret"
cookie-secret: "output-of-openssl-rand-base64-32"
---
apiVersion: v1
kind: Secret
metadata:
name: dbhub-secret
namespace: mcp-servers
type: Opaque
stringData:
dsn: "postgres://readonly_user:password@pg.database.svc.cluster.local:5432/appdb?sslmode=require"
Tip: Generate the cookie secret with `openssl rand -base64 32`. Never reuse secrets across environments.
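If you prefer generating the secret programmatically, this Python sketch is equivalent to the `openssl` command. OAuth2-Proxy derives an AES key from the value, so it should decode to 16, 24, or 32 random bytes:

```python
import base64
import secrets

def generate_cookie_secret(num_bytes: int = 32) -> str:
    """Generate a cookie secret, equivalent to `openssl rand -base64 32`.

    OAuth2-Proxy derives an AES key from this value, so use 16, 24, or
    32 random bytes.
    """
    return base64.urlsafe_b64encode(secrets.token_bytes(num_bytes)).decode()

secret = generate_cookie_secret()
print(secret)  # a 44-character base64 string
```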
Step 2: Deploy DBHub + OAuth2-Proxy Sidecar
apiVersion: apps/v1
kind: Deployment
metadata:
name: dbhub
namespace: mcp-servers
labels:
app.kubernetes.io/name: dbhub
app.kubernetes.io/version: "latest"
app.kubernetes.io/managed-by: kubectl
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: dbhub
template:
metadata:
labels:
app.kubernetes.io/name: dbhub
spec:
containers:
# OAuth2-Proxy sidecar: the only externally accessible container
- name: oauth2-proxy
image: quay.io/oauth2-proxy/oauth2-proxy:v7.15.1
args:
- --provider=github
- --github-org=your-org-name
- --email-domain=*
- --upstream=http://localhost:8080
- --http-address=0.0.0.0:4180
- --reverse-proxy=true
- --set-xauthrequest=true
- --cookie-secure=true
- --cookie-samesite=lax
- --cookie-expire=8h
- --cookie-refresh=1h
- --session-store-type=redis
- --redis-connection-url=redis://redis.mcp-servers.svc.cluster.local:6379
- --silence-ping-logging=true
- --skip-auth-route=^/healthz$
env:
- name: OAUTH2_PROXY_CLIENT_ID
valueFrom:
secretKeyRef:
name: oauth2-proxy-secret
key: client-id
- name: OAUTH2_PROXY_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: oauth2-proxy-secret
key: client-secret
- name: OAUTH2_PROXY_COOKIE_SECRET
valueFrom:
secretKeyRef:
name: oauth2-proxy-secret
key: cookie-secret
ports:
- containerPort: 4180
name: http
protocol: TCP
livenessProbe:
httpGet:
path: /ping
port: http
initialDelaySeconds: 5
periodSeconds: 10
readinessProbe:
httpGet:
path: /ready
port: http
initialDelaySeconds: 5
periodSeconds: 10
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
cpu: 100m
memory: 64Mi
securityContext:
runAsNonRoot: true
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
# DBHub MCP server: only accessible via localhost
- name: dbhub
image: bytebase/dbhub:latest
args:
- --transport
- http
- --port
- "8080"
env:
- name: DSN
valueFrom:
secretKeyRef:
name: dbhub-secret
key: dsn
ports:
- containerPort: 8080
name: mcp
protocol: TCP
resources:
requests:
cpu: 10m
memory: 64Mi
limits:
cpu: 200m
memory: 128Mi
securityContext:
runAsNonRoot: true
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
---
apiVersion: v1
kind: Service
metadata:
name: dbhub
namespace: mcp-servers
spec:
selector:
app.kubernetes.io/name: dbhub
ports:
- port: 4180
targetPort: http
protocol: TCP
name: http
Step 3: Expose via Gateway API + HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: dbhub
namespace: mcp-servers
spec:
parentRefs:
- name: main-gateway
namespace: envoy-gateway-system
hostnames:
- dbhub.example.com
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- name: dbhub
port: 4180
timeouts:
request: 3600s
backendRequest: 3600s
Note: The extended timeouts (`3600s`) are important for MCP servers. SSE connections and long-running queries need more than the default timeout.
Result: Users hitting dbhub.example.com are redirected to GitHub for login. Only members of your-org-name can access the MCP server. The user’s email is forwarded via X-Auth-Request-Email, enabling audit logging of who executed which SQL queries.
Example 2: Centralized Auth for Multiple MCP Servers#
When you’re running several MCP servers, a centralized OAuth2-Proxy with Gateway API external auth is more efficient. One proxy instance, one OIDC client, one set of credentials.
Step 1: Deploy OAuth2-Proxy as a Shared Service
apiVersion: apps/v1
kind: Deployment
metadata:
name: oauth2-proxy
namespace: mcp-auth
spec:
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: oauth2-proxy
template:
metadata:
labels:
app.kubernetes.io/name: oauth2-proxy
app.kubernetes.io/managed-by: kubectl
spec:
containers:
- name: oauth2-proxy
image: quay.io/oauth2-proxy/oauth2-proxy:v7.15.1
args:
- --provider=keycloak-oidc
- --oidc-issuer-url=https://keycloak.example.com/realms/platform
- --email-domain=example.com
- --allowed-group=/mcp-users
- --upstream=static://200
- --http-address=0.0.0.0:4180
- --reverse-proxy=true
- --set-xauthrequest=true
- --cookie-secure=true
- --cookie-expire=8h
- --cookie-refresh=1h
- --session-store-type=redis
- --redis-connection-url=redis://redis.mcp-auth.svc.cluster.local:6379
- --silence-ping-logging=true
- --skip-auth-route=^/healthz$
- --skip-auth-route=^/readyz$
env:
- name: OAUTH2_PROXY_CLIENT_ID
valueFrom:
secretKeyRef:
name: oauth2-proxy-secret
key: client-id
- name: OAUTH2_PROXY_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: oauth2-proxy-secret
key: client-secret
- name: OAUTH2_PROXY_COOKIE_SECRET
valueFrom:
secretKeyRef:
name: oauth2-proxy-secret
key: cookie-secret
ports:
- containerPort: 4180
name: http
livenessProbe:
httpGet:
path: /ping
port: http
readinessProbe:
httpGet:
path: /ready
port: http
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
cpu: 100m
memory: 64Mi
securityContext:
runAsNonRoot: true
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
---
apiVersion: v1
kind: Service
metadata:
name: oauth2-proxy
namespace: mcp-auth
spec:
selector:
app.kubernetes.io/name: oauth2-proxy
ports:
- port: 4180
targetPort: http
protocol: TCP
name: http
Step 2: Create a SecurityPolicy for ext-auth
With Envoy Gateway, you create a SecurityPolicy that tells Envoy to call OAuth2-Proxy before forwarding any request. This policy can target an individual HTTPRoute or an entire Gateway:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: oauth2-proxy-auth
namespace: mcp-servers
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: dbhub
  extAuth:
    headersToExtAuth:
    - cookie
    - authorization
    http:
      backendRefs:
      - name: oauth2-proxy
        namespace: mcp-auth
        port: 4180
      path: /oauth2/auth
      headersToBackend:
      - x-auth-request-user
      - x-auth-request-email
      - x-auth-request-groups
Step 3: Create HTTPRoutes for each MCP server
Now any MCP server just needs an HTTPRoute. The SecurityPolicy applies auth automatically:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: dbhub
namespace: mcp-servers
spec:
parentRefs:
- name: main-gateway
namespace: envoy-gateway-system
hostnames:
- dbhub.example.com
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- name: dbhub
port: 8080
timeouts:
request: 3600s
backendRequest: 3600s
To protect another MCP server, just create another HTTPRoute and reference the same SecurityPolicy (or target the Gateway itself to protect all routes):
# Target the Gateway to protect ALL HTTPRoutes behind it
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: oauth2-proxy-auth-global
namespace: envoy-gateway-system
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: main-gateway
  extAuth:
    headersToExtAuth:
    - cookie
    - authorization
    http:
      backendRefs:
      - name: oauth2-proxy
        namespace: mcp-auth
        port: 4180
      path: /oauth2/auth
      headersToBackend:
      - x-auth-request-user
      - x-auth-request-email
      - x-auth-request-groups
Result: One OAuth2-Proxy Deployment in mcp-auth namespace protects every MCP server in the cluster. All users must authenticate via Keycloak and belong to the /mcp-users group. Session state is shared across replicas via Redis.
Example 3: Helm Chart Quick Start#
For the fastest path to production, use the official Helm chart:
# Add the Helm repository
helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
helm repo update
Create a values.yaml:
config:
existingSecret: oauth2-proxy-secret
extraArgs:
provider: github
github-org: your-org
email-domain: "*"
set-xauthrequest: "true"
reverse-proxy: "true"
cookie-secure: "true"
cookie-expire: "8h"
cookie-refresh: "1h"
session-store-type: redis
redis-connection-url: redis://redis.mcp-auth.svc.cluster.local:6379
silence-ping-logging: "true"
skip-auth-route: "^/healthz$"
replicaCount: 2
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
cpu: 100m
memory: 64Mi
podSecurityContext:
runAsNonRoot: true
securityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
helm install oauth2-proxy oauth2-proxy/oauth2-proxy \
--namespace mcp-auth \
--create-namespace \
-f values.yaml
Hardening MCP Deployments: Beyond Authentication#
OAuth2-Proxy solves the authentication problem, but a production MCP deployment needs defense in depth. Here’s the full security stack:
Network Isolation#
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: dbhub-isolation
namespace: mcp-servers
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: dbhub
policyTypes:
- Ingress
- Egress
ingress:
# Only allow traffic from Envoy Gateway
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: envoy-gateway-system
ports:
- port: 4180
protocol: TCP
egress:
# Allow DNS resolution
- to:
- namespaceSelector: {}
ports:
- port: 53
protocol: UDP
- port: 53
protocol: TCP
# Allow connection to the database
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: database
ports:
- port: 5432
protocol: TCP
# Allow OAuth2-Proxy to reach the IdP (external)
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
ports:
- port: 443
protocol: TCP
Security Layers Summary#
| Layer | What It Does | Tool |
|---|---|---|
| Authentication | Verify user identity via SSO | OAuth2-Proxy |
| Group-based access | Restrict to specific teams/orgs | OAuth2-Proxy --allowed-group |
| Network isolation | Limit pod-to-pod traffic | Kubernetes NetworkPolicies |
| Database access | Read-only credentials, row limits | DBHub --readonly, database RBAC |
| Pod security | Non-root, read-only FS, dropped capabilities | Pod Security Standards |
| TLS | Encrypt all traffic in transit | Ingress TLS termination + cert-manager |
| Audit logging | Track who did what | OAuth2-Proxy identity headers + application logs |
| Session management | Expire sessions, refresh tokens | OAuth2-Proxy --cookie-expire + Redis |
MCP-Specific Considerations#
When deploying MCP servers behind OAuth2-Proxy, keep these MCP-specific details in mind:
- SSE connections need long timeouts. MCP’s Streamable HTTP transport uses Server-Sent Events for streaming responses. Set `proxy-read-timeout` and `proxy-send-timeout` (or the equivalent HTTPRoute `timeouts`) to at least 3600 seconds.
- Session affinity may be required. If the MCP server maintains session state (via the `Mcp-Session-Id` header), configure session affinity on your Service or gateway to route requests to the same backend.
- Health endpoints must bypass auth. Use `--skip-auth-route=^/healthz$` to ensure Kubernetes probes don’t trigger authentication redirects.
- Use read-only database credentials. When deploying DBHub for AI agents, the database user should have `SELECT` privileges only. Never connect with a user that can `DROP`, `DELETE`, or `ALTER`.
- Separate namespaces per sensitivity tier. Production database MCP servers should not share a namespace with development or internal-tool MCP servers.
OAuth2-Proxy: Pros and Cons#
Pros#
| Advantage | Description |
|---|---|
| Zero code changes | Protect any MCP server without modifying its source code |
| Extremely lightweight | Single Go binary, ~50MB memory. Minimal overhead next to any MCP server |
| 20 identity providers | Native support for Google, GitHub, Entra ID, Keycloak, and any OIDC-compliant IdP |
| Battle-tested at scale | 12 years of production use. Adopted by Microsoft, OpenAI, Adobe, Morgan Stanley |
| CNCF Sandbox | Community-governed, vendor-neutral. Long-term sustainability |
| Identity header injection | Forwards X-Auth-Request-User/Email/Groups for audit logging |
| Config validation | --config-test flag catches misconfigurations before deployment |
| Distroless images | Minimal attack surface with distroless container base since v7.6.0 |
Cons#
| Limitation | Description |
|---|---|
| Auth only, not authz | Binary allow/deny. Cannot restrict which MCP tools a user can invoke |
| No built-in MFA | Relies entirely on the upstream IdP for multi-factor authentication |
| Cookie size limits | Default cookie sessions can exceed 4KB. Redis required for production |
| Single provider only | Cannot configure multiple IdPs simultaneously |
| Browser-first design | OAuth2 redirect flow works for browser users, not headless AI agents |
| No MCP awareness | Doesn’t understand JSON-RPC messages. It’s a generic HTTP proxy |
| No admin UI | All configuration via flags, env vars, or config file |
Troubleshooting#
| Issue | Symptoms | Resolution |
|---|---|---|
| Redirect loop | Browser cycles between app and IdP | Check --redirect-url matches OAuth app’s callback URL exactly. Verify --cookie-domain if using subdomains |
| 403 after login | User authenticates but gets forbidden | Verify --email-domain, --allowed-group includes the user. Check IdP returns email claim in the token |
| 502 from gateway | ext-auth call fails | Ensure OAuth2-Proxy Service is reachable from the gateway namespace. Check SecurityPolicy backendRefs uses correct service name and namespace |
| Cookie too large | Browser rejects session cookie | Switch to --session-store-type=redis. Large OIDC tokens exceed 4KB cookie limit |
| SSE connection drops | MCP streaming responses cut off | Increase proxy-read-timeout and proxy-send-timeout on the Ingress to 3600+ seconds |
| MCP session lost | Agent re-authenticates mid-conversation | Enable session affinity if MCP server uses Mcp-Session-Id. Verify Redis session storage is working |
| Health probes failing | Pods restarting, readiness failures | Add --skip-auth-route=^/healthz$. Ensure --http-address binds to 0.0.0.0, not 127.0.0.1 |
Hands-On Demo Repository#
I’ve built a complete demo that deploys DBHub behind OAuth2-Proxy on a local Kubernetes cluster:
srekubecraft-demo/oauth2-proxy-mcp
What’s Included#
| Path | Purpose |
|---|---|
| kubernetes/cluster/ | Kind config: 3-node cluster with Cilium CNI |
| kubernetes/oauth2-proxy/ | Namespace and secret template |
| kubernetes/dbhub/ | DBHub + OAuth2-Proxy sidecar deployment |
| kubernetes/redis/ | Redis for session storage |
| kubernetes/postgres/ | PostgreSQL with sample data and read-only user |
| kubernetes/policies/ | NetworkPolicies for pod isolation |
Prerequisites#
You need a GitHub OAuth App. Create one at GitHub Developer Settings:
- Homepage URL: `http://dbhub.localhost`
- Callback URL: `http://dbhub.localhost/oauth2/callback`
Quick Start#
The demo uses Taskfile for automation:
git clone https://github.com/nicknikolakakis/srekubecraft-demo.git
cd srekubecraft-demo/oauth2-proxy-mcp
# Full setup (will prompt for GitHub OAuth credentials)
task setup
# Start port-forward
sudo kubectl -n mcp-servers port-forward svc/dbhub-sidecar 80:4180
# Open in browser
open http://dbhub.localhost
What You’ll See#
- Open `http://dbhub.localhost` and you get a “Sign in with GitHub” page. OAuth2-Proxy blocks all unauthenticated MCP requests.
- Click sign in, authorize the GitHub OAuth App, and you land on the DBHub web UI with full access to the `execute_sql` and `search_objects` MCP tools.
- Run `SELECT name, department FROM employees` to query the sample database through the authenticated MCP endpoint.
- Apply NetworkPolicies with `task demo:netpol` to block direct pod-to-pod access, so the only path to the MCP server is through OAuth2-Proxy.
Cleanup#
task clean
Conclusion#
MCP servers are becoming the backbone of AI-powered workflows, connecting agents to databases, filesystems, Kubernetes clusters, and cloud APIs. But the protocol’s optional authentication model means most servers ship with no auth at all. In a world where 492 exposed MCP servers were found on the public internet and 30+ CVEs dropped in two months, “we’ll add auth later” is a risk you can’t afford.
OAuth2-Proxy is the fastest way to close this gap:
- Minutes to deploy - drop it in front of any MCP server as a sidecar or centralized gateway
- Zero code changes - DBHub, filesystem servers, Kubernetes MCP servers, all protected without touching their source
- Corporate SSO - restrict access to your GitHub org, Keycloak group, or Google Workspace domain
- Audit trail - identity headers tell you exactly who executed which query or tool invocation
- CNCF-backed - community-governed, battle-tested by organizations running it at scale for over a decade
The MCP ecosystem is moving fast. Authentication shouldn’t be the thing that gets left behind. Start with OAuth2-Proxy and NetworkPolicies, use read-only database credentials, and treat every MCP server like what it is: a privileged access point to your infrastructure.
If you found this useful, you might also enjoy my related posts on Kubernetes security and platform tooling:
- Kubernetes Security with Kubescape
- External Secrets Operator - Secure Secrets Management for Kubernetes
- Admission Controller Policies
- Kubernetes Multi-Tenancy
