OAuth2-Proxy - Securing MCP Servers on Kubernetes Before Hackers Find Them First

// MCP servers like DBHub expose databases, filesystems, and code execution over HTTP with zero authentication. Learn how to deploy OAuth2-Proxy on Kubernetes to add SSO, group-based access control, and session management to any MCP server without changing a single line of code.


The Model Context Protocol is everywhere. In less than two years since Anthropic announced it, MCP has become the standard way AI agents connect to external tools: databases, filesystems, code execution environments, Kubernetes clusters. OpenAI adopted it. The Linux Foundation governs it. There are over 5,000 community MCP servers and counting.

Here’s the problem: many of the self-hosted ones have zero authentication.

To be clear, not all MCP servers have this problem. Vendor-managed MCP servers like Atlassian’s (Jira, Confluence), Slack, or Datadog handle authentication on their side through API tokens and OAuth flows baked into their platform. The problem is specifically with self-hosted MCP servers: the ones you deploy yourself on your own infrastructure. Servers like DBHub for databases, filesystem servers, Kubernetes MCP servers, or code execution environments. When you run these in HTTP transport mode, anyone who can reach the endpoint can use them. No API key. No token validation. No user identity.

The MCP protocol itself makes authentication optional, and most self-hosted implementations skip it entirely. Trend Micro researchers found 492 MCP servers exposed on the internet with no client authentication or traffic encryption. Over 30 CVEs were filed against MCP servers in the first 60 days of 2026 alone.

This is where OAuth2-Proxy comes in. It’s a lightweight reverse proxy that adds OAuth2/OIDC authentication in front of any HTTP service, including MCP servers, without touching a single line of the application’s code. In this post, I’ll walk through why self-hosted MCP servers are a security liability, how OAuth2-Proxy works, and how to deploy it on Kubernetes to protect servers like DBHub with corporate SSO.

Who Should Read This?#

This post is for:

  • Platform Engineers deploying MCP servers for AI-powered developer tools and internal platforms
  • SREs responsible for securing AI infrastructure running on Kubernetes
  • Security Engineers evaluating the attack surface of MCP deployments
  • DevOps Teams self-hosting MCP servers like DBHub, filesystem servers, or code execution environments

If you’re running MCP servers in HTTP mode on any network, even behind a VPN, and haven’t added authentication, this post is for you.

The MCP Authentication Gap#

What is MCP?#

The Model Context Protocol (MCP) is an open protocol for connecting LLM applications to external data sources and tools. Think of it as a USB-C port for AI: a standardized interface that any AI client (Claude Desktop, Cursor, VS Code Copilot) can use to connect to any tool server (databases, APIs, filesystems).

MCP servers expose three primitives:

| Primitive | Purpose | Example |
| --- | --- | --- |
| Tools | Functions the LLM can invoke | `execute_sql`, `search_files`, `kubectl_get` |
| Resources | Data the LLM can read | Database schemas, file contents, API docs |
| Prompts | Pre-built prompt templates | Query builders, code generators |

MCP supports two transport mechanisms:

  • stdio: Local only. The client launches the server as a subprocess and communicates via stdin/stdout. Inherently secure, no network exposure.
  • Streamable HTTP: Network-accessible. A single HTTP endpoint handling JSON-RPC 2.0 messages over POST/GET with optional SSE streaming. This is where the security problem lives.
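Concretely, a Streamable HTTP request is just a JSON-RPC 2.0 message POSTed to a single endpoint. A minimal sketch (the host and `/mcp` path are assumptions for illustration):

```shell
# One JSON-RPC message that initializes an MCP session over Streamable HTTP.
INIT='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"demo","version":"0.1"}}}'

# Sent with plain curl -- note there is no credential anywhere:
#   curl -s -X POST http://mcp.example.internal:8080/mcp \
#     -H 'Content-Type: application/json' \
#     -H 'Accept: application/json, text/event-stream' \
#     -d "$INIT"
echo "$INIT"
```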

Vendor-Managed vs Self-Hosted MCP Servers#

Not all MCP servers are created equal when it comes to security. There are three distinct categories:

| Category | Examples | Auth Handling |
| --- | --- | --- |
| Vendor-managed | Atlassian (Jira, Confluence), Slack, Datadog, PagerDuty | Authentication is handled by the vendor’s platform. You connect using API tokens or OAuth credentials issued by the service. The vendor controls access, rate limits, and audit logging. |
| Self-hosted open-source | DBHub, Filesystem, PostgreSQL, Kubernetes MCP, Puppeteer, Desktop Commander | You deploy and operate the server. Authentication is your responsibility. Most ship with zero auth out of the box. |
| Custom internal | Company-built MCP servers wrapping internal APIs, databases, deployment pipelines, or business logic | You build and operate the server. Unless your team explicitly implements auth (and most don’t in v1), these are wide open. |

This post focuses on the last two categories. When Atlassian ships an MCP server for Jira, they enforce authentication through their existing API token and OAuth infrastructure. That’s their problem, not yours.

But when you deploy DBHub to connect AI agents to your PostgreSQL database, or when your platform team builds a custom MCP server that wraps internal APIs (think: an onboarding tool that provisions cloud accounts, a deployment server that triggers CI/CD pipelines, or a data gateway that queries internal data warehouses), there is nothing between the agent and your systems unless you put it there. Custom internal MCP servers are especially risky because they tend to have broad access to internal resources and are often built quickly without a security review.

The Problem: Self-Hosted HTTP Transport with No Auth#

When a self-hosted MCP server runs in HTTP transport mode, it exposes a network endpoint that accepts JSON-RPC requests. Let’s look at DBHub as a concrete example.

DBHub is a popular MCP server by Bytebase that provides a unified interface to PostgreSQL, MySQL, MariaDB, SQL Server, and SQLite. It exposes two MCP tools: execute_sql and search_objects. Running it in HTTP mode is a single command:

docker run --rm --init --name dbhub --publish 8080:8080 \
  bytebase/dbhub --transport http --port 8080 \
  --dsn "postgres://user:password@db-host:5432/production?sslmode=require"

That’s it. Port 8080 is now accepting unauthenticated requests that can execute arbitrary SQL against your database. Anyone who can reach this port (a compromised pod in the same cluster, a misconfigured ingress, a port-forward left open) has full database access.
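To make that concrete, here is roughly what an attacker's request looks like. A sketch: the host is hypothetical, and the `execute_sql` argument shape follows DBHub's tool schema as I understand it, so treat the details as assumptions:

```shell
# A single unauthenticated JSON-RPC call invoking DBHub's execute_sql tool.
PAYLOAD='{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"execute_sql","arguments":{"sql":"SELECT usename, passwd FROM pg_shadow"}}}'

# No token, no cookie, no identity -- only network reachability is required:
#   curl -s -X POST http://exposed-host:8080/mcp \
#     -H 'Content-Type: application/json' \
#     -H 'Accept: application/json, text/event-stream' \
#     -d "$PAYLOAD"
echo "$PAYLOAD"
```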

DBHub is not unique. This is the default for nearly every self-hosted MCP server running in HTTP mode:

| MCP Server | What It Exposes | Auth Built In? |
| --- | --- | --- |
| DBHub (Bytebase) | SQL execution on production databases | No |
| Filesystem (official) | Read/write access to host filesystem | No |
| PostgreSQL (official) | Direct database queries | No |
| GitHub (official) | Repository management, PRs, issues | Token-based (but no user auth) |
| Desktop Commander | Terminal access, process management | No |
| Kubernetes MCP | Full cluster API access | No |
| Puppeteer | Browser automation on internal apps | No |
| Your internal MCP | Company APIs, deployment pipelines, data warehouses | Up to you (usually no) |

The Numbers Are Alarming#

The security research paints a grim picture:

  • 492 MCP servers discovered exposed on the internet with no authentication (Trend Micro)
  • 30+ CVEs filed against MCP servers in January-February 2026
  • 88% of MCP servers require credentials to access backend services, yet 53% store them as insecure static secrets
  • Only 8.5% implement OAuth; the rest rely on nothing or basic API keys
  • ~74% of exposed servers run on major cloud providers
  • At least 8 of the exposed servers directly managed cloud resources (create/delete VMs, storage, etc.)
  • CVE-2025-6514: mcp-remote OAuth proxy vulnerability, CVSS 9.6, affecting 437K+ downloads

Why This Matters: An unauthenticated MCP server isn’t just an information disclosure risk. It’s a lateral movement goldmine. A compromised MCP server with database access gives an attacker everything: schemas, data, credentials stored in tables, and a pivot point to every system that database connects to.

Why Not Just Implement MCP Auth?#

The MCP specification does include an authorization draft that describes OAuth 2.1 integration. But there are practical problems:

  1. It’s optional. The spec says servers “SHOULD” implement auth when using HTTP transport. Most don’t.
  2. It’s complex. Full implementation requires OAuth 2.1 + Protected Resource Metadata (RFC 9728) + PKCE + Resource Indicators (RFC 8707).
  3. It’s per-server. Each MCP server would need its own auth implementation, no centralization.
  4. Most servers haven’t adopted it. Community and open-source MCP servers overwhelmingly skip auth.
  5. Enterprise criticism. The spec has been called “a mess” for conflating resource server and authorization server roles.

The pragmatic solution? Don’t wait for MCP servers to implement auth. Put an auth proxy in front of them.

What is OAuth2-Proxy?#

OAuth2-Proxy is a CNCF Sandbox project that provides authentication and session management as a reverse proxy. Originally created at Bitly in 2014, it’s now maintained by a dedicated community with contributions from engineers at Microsoft, OpenAI, Adobe, and Morgan Stanley.

Here’s how it works:

  1. A user or agent requests a protected resource
  2. OAuth2-Proxy checks for a valid session cookie
  3. If no session exists, the user is redirected to the configured identity provider (Google, GitHub, Keycloak, Entra ID, etc.)
  4. After successful login, the provider returns an authorization code to OAuth2-Proxy’s callback endpoint
  5. OAuth2-Proxy exchanges the code for tokens, validates the user against configured rules (email domain, group membership, allowed users), creates a session cookie, and forwards the request upstream
  6. Subsequent requests include the session cookie, no re-authentication until the session expires

The key insight: the upstream MCP server never handles authentication. By the time a request reaches DBHub or any other MCP server, it has already been validated. OAuth2-Proxy also forwards user identity via HTTP headers (X-Auth-Request-User, X-Auth-Request-Email, X-Auth-Request-Groups), enabling audit logging of who executed what.
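For example, after a successful login the upstream MCP server might receive a request shaped like this (header values and the `/mcp` path are illustrative):

```http
POST /mcp HTTP/1.1
Host: dbhub.mcp-servers.svc
Cookie: _oauth2_proxy=<session>
X-Auth-Request-User: jdoe
X-Auth-Request-Email: jdoe@example.com
X-Auth-Request-Groups: mcp-users
```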

sequenceDiagram
    participant User as AI Agent / User
    participant Proxy as OAuth2-Proxy
    participant IdP as Identity Provider
    participant MCP as MCP Server

    User->>Proxy: 1. MCP Request
    Proxy->>Proxy: Check session cookie
    alt No valid session
        Proxy->>User: 2. Redirect to IdP
        User->>IdP: 3. Login
        IdP->>Proxy: 4. Authorization code
        Proxy->>IdP: 5. Exchange code for tokens
        IdP->>Proxy: 6. ID token + access token
        Proxy->>Proxy: Validate user (email/group)
        Proxy->>User: 7. Set session cookie
    end
    User->>Proxy: 8. Request with session cookie
    Proxy->>MCP: 9. Forward + identity headers
    MCP->>Proxy: 10. Response
    Proxy->>User: 11. Response

OAuth2-Proxy at a Glance#

| Attribute | Detail |
| --- | --- |
| Latest Version | v7.15.1 (March 2026) |
| Language | Go |
| License | MIT |
| CNCF Status | Sandbox (October 2025) |
| GitHub Stars | 14,100+ |
| Contributors | 435+ |
| Monthly Downloads | ~30 million (quay.io) |
| Container Base | Distroless (since v7.6.0) |
| Supported Providers | 20 (Google, GitHub, GitLab, Entra ID, Keycloak OIDC, generic OIDC, and more) |

Why OAuth2-Proxy for MCP Servers?#

There are other auth proxies like Authelia, Pomerium, and Keycloak. Here’s why OAuth2-Proxy is the best fit for MCP:

| Aspect | OAuth2-Proxy | Authelia | Pomerium | Keycloak |
| --- | --- | --- | --- | --- |
| Primary Role | Auth reverse proxy | Auth gateway + MFA | Identity-aware proxy | Full IdP + auth server |
| Resource Footprint | Light (~50MB) | Very light (<100MB) | Medium | Heavy (512MB+ JVM) |
| Built-in MFA | No (delegated to IdP) | Yes (TOTP, WebAuthn) | No (delegated to IdP) | Yes (built-in) |
| Complexity | Low | Medium | Medium-High | High |
| Setup Time | Minutes | Hours | Hours | Days |
| CNCF Status | Sandbox | None | None | Incubating |
| Best For | Quick drop-in auth | Self-hosted MFA + ACLs | Enterprise zero-trust | Full identity platform |

Bottom line: MCP servers need authentication now, not after a multi-week Keycloak rollout. OAuth2-Proxy gives you SSO in front of any MCP server in minutes, with zero code changes. If you later need fine-grained per-tool authorization or built-in MFA, layer in Authelia or OPA.

What OAuth2-Proxy is NOT#

  • Not an identity provider: It doesn’t store users or credentials. It delegates to external IdPs like Google, GitHub, or Keycloak.
  • Not a fine-grained authorization engine: It does binary allow/deny based on email, domain, or group. For per-tool MCP access control, you’ll need additional policy layers (OPA, Authelia).
  • Not an MCP-aware gateway: It proxies HTTP traffic generically. It doesn’t understand MCP JSON-RPC messages or tool invocations. It just ensures the caller is authenticated before any request reaches the MCP server.
  • Not designed for machine-to-machine auth: It’s built around browser-based OAuth flows. For service-to-service MCP calls, you’ll need bearer token validation at the API gateway level.
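That said, OAuth2-Proxy can validate JWT bearer tokens itself for agents that already hold one, via its `--skip-jwt-bearer-tokens` option. A sketch of the extra flags (the issuer URL and audience are assumptions for your IdP):

```yaml
# Additional oauth2-proxy args: accept "Authorization: Bearer <jwt>"
# from non-browser clients, validated against the issuer's JWKS.
- --skip-jwt-bearer-tokens=true
# Format is issuer-url=audience
- --extra-jwt-issuers=https://keycloak.example.com/realms/platform=mcp-clients
```

This covers service accounts that can mint tokens from your IdP; fully anonymous machine callers still need gateway-level token validation.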

Deployment Patterns for MCP on Kubernetes#

Pattern 1: Sidecar Proxy (Per-Server Isolation)#

OAuth2-Proxy runs as a sidecar in the same Pod as the MCP server. All traffic enters through the proxy on the exposed port, which forwards authenticated requests to the MCP server on localhost.

sequenceDiagram
    box Pod: dbhub-secured
        participant Proxy as OAuth2-Proxy :4180
        participant MCP as DBHub :8080
    end
    participant User as AI Agent / User
    participant IdP as Identity Provider

    User->>Proxy: MCP request
    alt No valid session
        Proxy->>User: Redirect to IdP
        User->>IdP: Login
        IdP->>Proxy: Authorization code
        Proxy->>IdP: Exchange for tokens
        IdP->>Proxy: Tokens
        Proxy->>User: Set session cookie
    end
    User->>Proxy: Request with cookie
    Proxy->>MCP: Forward to localhost:8080
    MCP->>Proxy: Response
    Proxy->>User: Response

When to use: When each MCP server needs its own auth configuration (different allowed groups, different IdP settings), or when you need strict Pod-level isolation.

Pattern 2: Gateway API External Auth (Centralized)#

The gateway data plane (Envoy Gateway, Istio, Traefik) makes an ext-auth call to OAuth2-Proxy on every incoming request. One OAuth2-Proxy Deployment protects all MCP servers in the cluster.

Note: The community ingress-nginx controller was retired in March 2026 due to unfixable security vulnerabilities. The examples in this post use the Kubernetes Gateway API with Envoy Gateway as the data plane. The ext-auth pattern works similarly with Istio (via AuthorizationPolicy) and Traefik (via ForwardAuth middleware).

sequenceDiagram
    participant User as AI Agent / User
    participant GW as Envoy Gateway
    participant Proxy as OAuth2-Proxy
    participant IdP as Identity Provider
    participant MCP as MCP Server (DBHub, FS, K8s)

    User->>GW: MCP request
    GW->>Proxy: ext-auth /oauth2/auth
    alt Valid session
        Proxy->>GW: 200 OK + identity headers
        GW->>MCP: Forward request + X-Auth-Request-User/Email
        MCP->>GW: Response
        GW->>User: Response
    else No valid session
        Proxy->>GW: 401 Unauthorized
        GW->>User: Redirect to /oauth2/start
        User->>Proxy: Start OAuth flow
        Proxy->>User: Redirect to IdP
        User->>IdP: Login
        IdP->>Proxy: Authorization code
        Proxy->>IdP: Exchange for tokens
        IdP->>Proxy: Tokens
        Proxy->>User: Set session cookie + redirect back
        User->>GW: Retry original request with cookie
        GW->>Proxy: ext-auth /oauth2/auth
        Proxy->>GW: 200 OK
        GW->>MCP: Forward request
        MCP->>GW: Response
        GW->>User: Response
    end

When to use: Production Kubernetes with multiple MCP servers sharing the same auth requirements. Most scalable pattern with a single OAuth2-Proxy instance and single OIDC client configuration.

Pattern 3: Standalone Gateway#

OAuth2-Proxy as a dedicated Deployment with its own Service, sitting in front of a single MCP server. Traffic flows: Client → OAuth2-Proxy Service → MCP Service.

When to use: When you don’t have an ingress controller, or for protecting a single high-value MCP server (e.g., a production database gateway).

Practical Examples#

Example 1: Securing DBHub with OAuth2-Proxy Sidecar#

This example deploys DBHub as an MCP server for PostgreSQL with OAuth2-Proxy as a sidecar, using GitHub organization-based authentication.

Prerequisites:

  • A GitHub OAuth App (Settings → Developer Settings → OAuth Apps)
  • Callback URL: https://dbhub.example.com/oauth2/callback
  • Envoy Gateway installed (or another Gateway API implementation)

Step 1: Create Secrets

apiVersion: v1
kind: Secret
metadata:
  name: oauth2-proxy-secret
  namespace: mcp-servers
type: Opaque
stringData:
  client-id: "your-github-oauth-app-client-id"
  client-secret: "your-github-oauth-app-client-secret"
  cookie-secret: "output-of-openssl-rand-base64-32"
---
apiVersion: v1
kind: Secret
metadata:
  name: dbhub-secret
  namespace: mcp-servers
type: Opaque
stringData:
  dsn: "postgres://readonly_user:password@pg.database.svc.cluster.local:5432/appdb?sslmode=require"

Tip: Generate the cookie secret with openssl rand -base64 32. Never reuse secrets across environments.
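The secret creation can also be done imperatively, keeping the values out of YAML files. A sketch (the commented `kubectl` command mirrors the manifest above; env var names are assumptions):

```shell
# 32 random bytes, base64-encoded. The tr step makes it URL-safe,
# which oauth2-proxy's docs recommend for cookie secrets.
COOKIE_SECRET=$(openssl rand -base64 32 | tr -- '+/' '-_')

# Imperative equivalent of the Secret manifest above:
#   kubectl -n mcp-servers create secret generic oauth2-proxy-secret \
#     --from-literal=client-id="$GITHUB_CLIENT_ID" \
#     --from-literal=client-secret="$GITHUB_CLIENT_SECRET" \
#     --from-literal=cookie-secret="$COOKIE_SECRET"
echo "${#COOKIE_SECRET}"
```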

Step 2: Deploy DBHub + OAuth2-Proxy Sidecar

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dbhub
  namespace: mcp-servers
  labels:
    app.kubernetes.io/name: dbhub
    app.kubernetes.io/version: "latest"
    app.kubernetes.io/managed-by: kubectl
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: dbhub
  template:
    metadata:
      labels:
        app.kubernetes.io/name: dbhub
    spec:
      containers:
        # OAuth2-Proxy sidecar: the only externally accessible container
        - name: oauth2-proxy
          image: quay.io/oauth2-proxy/oauth2-proxy:v7.15.1
          args:
            - --provider=github
            - --github-org=your-org-name
            - --email-domain=*
            - --upstream=http://localhost:8080
            - --http-address=0.0.0.0:4180
            - --reverse-proxy=true
            - --set-xauthrequest=true
            - --cookie-secure=true
            - --cookie-samesite=lax
            - --cookie-expire=8h
            - --cookie-refresh=1h
            - --session-store-type=redis
            - --redis-connection-url=redis://redis.mcp-servers.svc.cluster.local:6379
            - --silence-ping-logging=true
            - --skip-auth-route=^/healthz$
          env:
            - name: OAUTH2_PROXY_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: oauth2-proxy-secret
                  key: client-id
            - name: OAUTH2_PROXY_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: oauth2-proxy-secret
                  key: client-secret
            - name: OAUTH2_PROXY_COOKIE_SECRET
              valueFrom:
                secretKeyRef:
                  name: oauth2-proxy-secret
                  key: cookie-secret
          ports:
            - containerPort: 4180
              name: http
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /ping
              port: http
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: http
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              cpu: 10m
              memory: 32Mi
            limits:
              cpu: 100m
              memory: 64Mi
          securityContext:
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL

        # DBHub MCP server: only accessible via localhost
        - name: dbhub
          image: bytebase/dbhub:latest
          args:
            - --transport
            - http
            - --port
            - "8080"
          env:
            - name: DSN
              valueFrom:
                secretKeyRef:
                  name: dbhub-secret
                  key: dsn
          ports:
            - containerPort: 8080
              name: mcp
              protocol: TCP
          resources:
            requests:
              cpu: 10m
              memory: 64Mi
            limits:
              cpu: 200m
              memory: 128Mi
          securityContext:
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
---
apiVersion: v1
kind: Service
metadata:
  name: dbhub
  namespace: mcp-servers
spec:
  selector:
    app.kubernetes.io/name: dbhub
  ports:
    - port: 4180
      targetPort: http
      protocol: TCP
      name: http

Step 3: Expose via Gateway API + HTTPRoute

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: dbhub
  namespace: mcp-servers
spec:
  parentRefs:
    - name: main-gateway
      namespace: envoy-gateway-system
  hostnames:
    - dbhub.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: dbhub
          port: 4180
      timeouts:
        request: 3600s
        backendRequest: 3600s

Note: The extended timeouts (3600s) are important for MCP servers. SSE connections and long-running queries need more than the default timeout.

Result: Users hitting dbhub.example.com are redirected to GitHub for login. Only members of your-org-name can access the MCP server. The user’s email is forwarded via X-Auth-Request-Email, enabling audit logging of who executed which SQL queries.
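A quick smoke test of the finished setup might look like this (hostname from the example above; the curl commands are a sketch and `_oauth2_proxy` is the proxy's default cookie name):

```shell
HOST="https://dbhub.example.com"

# An unauthenticated request must NOT reach DBHub -- expect a 403 or a
# redirect into the OAuth flow:
#   curl -sI "$HOST/" | grep -iE '^(HTTP|location)'
#
# With a valid session cookie (copied from the browser after login),
# the same MCP request should succeed:
#   curl -s "$HOST/" -H 'Content-Type: application/json' \
#     --cookie "_oauth2_proxy=<session>" \
#     -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
echo "$HOST"
```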

Example 2: Centralized Auth for Multiple MCP Servers#

When you’re running several MCP servers, a centralized OAuth2-Proxy with Gateway API external auth is more efficient. One proxy instance, one OIDC client, one set of credentials.

Step 1: Deploy OAuth2-Proxy as a Shared Service

apiVersion: apps/v1
kind: Deployment
metadata:
  name: oauth2-proxy
  namespace: mcp-auth
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: oauth2-proxy
  template:
    metadata:
      labels:
        app.kubernetes.io/name: oauth2-proxy
        app.kubernetes.io/managed-by: kubectl
    spec:
      containers:
        - name: oauth2-proxy
          image: quay.io/oauth2-proxy/oauth2-proxy:v7.15.1
          args:
            - --provider=keycloak-oidc
            - --oidc-issuer-url=https://keycloak.example.com/realms/platform
            - --email-domain=example.com
            - --allowed-group=/mcp-users
            - --upstream=static://200
            - --http-address=0.0.0.0:4180
            - --reverse-proxy=true
            - --set-xauthrequest=true
            - --cookie-secure=true
            - --cookie-expire=8h
            - --cookie-refresh=1h
            - --session-store-type=redis
            - --redis-connection-url=redis://redis.mcp-auth.svc.cluster.local:6379
            - --silence-ping-logging=true
            - --skip-auth-route=^/healthz$
            - --skip-auth-route=^/readyz$
          env:
            - name: OAUTH2_PROXY_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: oauth2-proxy-secret
                  key: client-id
            - name: OAUTH2_PROXY_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: oauth2-proxy-secret
                  key: client-secret
            - name: OAUTH2_PROXY_COOKIE_SECRET
              valueFrom:
                secretKeyRef:
                  name: oauth2-proxy-secret
                  key: cookie-secret
          ports:
            - containerPort: 4180
              name: http
          livenessProbe:
            httpGet:
              path: /ping
              port: http
          readinessProbe:
            httpGet:
              path: /ready
              port: http
          resources:
            requests:
              cpu: 10m
              memory: 32Mi
            limits:
              cpu: 100m
              memory: 64Mi
          securityContext:
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
---
apiVersion: v1
kind: Service
metadata:
  name: oauth2-proxy
  namespace: mcp-auth
spec:
  selector:
    app.kubernetes.io/name: oauth2-proxy
  ports:
    - port: 4180
      targetPort: http
      protocol: TCP
      name: http

Step 2: Create a SecurityPolicy for ext-auth

With Envoy Gateway, you create a SecurityPolicy that tells Envoy to call OAuth2-Proxy before forwarding any request. This policy can target an individual HTTPRoute or an entire Gateway:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: oauth2-proxy-auth
  namespace: mcp-servers
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: dbhub
  extAuth:
    http:
      backendRefs:
        - name: oauth2-proxy
          namespace: mcp-auth
          port: 4180
      path: /oauth2/auth
      headersToBackend:
        - cookie
        - authorization

Step 3: Create HTTPRoutes for each MCP server

Now any MCP server just needs an HTTPRoute. The SecurityPolicy applies auth automatically:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: dbhub
  namespace: mcp-servers
spec:
  parentRefs:
    - name: main-gateway
      namespace: envoy-gateway-system
  hostnames:
    - dbhub.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: dbhub
          port: 8080
      timeouts:
        request: 3600s
        backendRequest: 3600s

To protect another MCP server, just create another HTTPRoute and reference the same SecurityPolicy (or target the Gateway itself to protect all routes):

# Target the Gateway to protect ALL HTTPRoutes behind it
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: oauth2-proxy-auth-global
  namespace: envoy-gateway-system
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: main-gateway
  extAuth:
    http:
      backendRefs:
        - name: oauth2-proxy
          namespace: mcp-auth
          port: 4180
      path: /oauth2/auth
      headersToBackend:
        - cookie
        - authorization

Result: One OAuth2-Proxy Deployment in mcp-auth namespace protects every MCP server in the cluster. All users must authenticate via Keycloak and belong to the /mcp-users group. Session state is shared across replicas via Redis.

Example 3: Helm Chart Quick Start#

For the fastest path to production, use the official Helm chart:

# Add the Helm repository
helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
helm repo update

Create a values.yaml:

config:
  existingSecret: oauth2-proxy-secret

extraArgs:
  provider: github
  github-org: your-org
  email-domain: "*"
  set-xauthrequest: "true"
  reverse-proxy: "true"
  cookie-secure: "true"
  cookie-expire: "8h"
  cookie-refresh: "1h"
  session-store-type: redis
  redis-connection-url: redis://redis.mcp-auth.svc.cluster.local:6379
  silence-ping-logging: "true"
  skip-auth-route: "^/healthz$"

replicaCount: 2

resources:
  requests:
    cpu: 10m
    memory: 32Mi
  limits:
    cpu: 100m
    memory: 64Mi

podSecurityContext:
  runAsNonRoot: true

securityContext:
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
Then install the chart:

helm install oauth2-proxy oauth2-proxy/oauth2-proxy \
  --namespace mcp-auth \
  --create-namespace \
  -f values.yaml

Hardening MCP Deployments: Beyond Authentication#

OAuth2-Proxy solves the authentication problem, but a production MCP deployment needs defense in depth. Here’s the full security stack:

Network Isolation#

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dbhub-isolation
  namespace: mcp-servers
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: dbhub
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Only allow traffic from Envoy Gateway
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: envoy-gateway-system
      ports:
        - port: 4180
          protocol: TCP
  egress:
    # Allow DNS resolution
    - to:
        - namespaceSelector: {}
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
    # Allow connection to the database
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: database
      ports:
        - port: 5432
          protocol: TCP
    # Allow OAuth2-Proxy to reach the IdP (external)
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
      ports:
        - port: 443
          protocol: TCP

Security Layers Summary#

| Layer | What It Does | Tool |
| --- | --- | --- |
| Authentication | Verify user identity via SSO | OAuth2-Proxy |
| Group-based access | Restrict to specific teams/orgs | OAuth2-Proxy `--allowed-group` |
| Network isolation | Limit pod-to-pod traffic | Kubernetes NetworkPolicies |
| Database access | Read-only credentials, row limits | DBHub `--readonly`, database RBAC |
| Pod security | Non-root, read-only FS, dropped capabilities | Pod Security Standards |
| TLS | Encrypt all traffic in transit | Ingress TLS termination + cert-manager |
| Audit logging | Track who did what | OAuth2-Proxy identity headers + application logs |
| Session management | Expire sessions, refresh tokens | OAuth2-Proxy `--cookie-expire` + Redis |

MCP-Specific Considerations#

When deploying MCP servers behind OAuth2-Proxy, keep these MCP-specific details in mind:

  • SSE connections need long timeouts. MCP’s Streamable HTTP transport uses Server-Sent Events for streaming responses. Set proxy-read-timeout and proxy-send-timeout to at least 3600 seconds on your Ingress.
  • Session affinity may be required. If the MCP server maintains session state (via Mcp-Session-Id header), configure session affinity on your Service or Ingress to route requests to the same backend.
  • Health endpoints must bypass auth. Use --skip-auth-route=^/healthz$ to ensure Kubernetes probes don’t trigger authentication redirects.
  • Use read-only database credentials. When deploying DBHub for AI agents, the database user should have SELECT privileges only. Never connect with a user that can DROP, DELETE, or ALTER.
  • Separate namespaces per sensitivity tier. Production database MCP servers should not share a namespace with development or internal-tool MCP servers.
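The read-only credential advice can be enforced in Postgres itself. A sketch (role, password, database, and schema names are illustrative; run it with `psql` against your database as an admin):

```shell
# Write a minimal read-only role definition for the DBHub DSN user.
# Apply with e.g.: psql "$ADMIN_DSN" -f readonly.sql
cat > readonly.sql <<'SQL'
CREATE ROLE readonly_user LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE appdb TO readonly_user;
GRANT USAGE ON SCHEMA public TO readonly_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly_user;
SQL
cat readonly.sql
```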

OAuth2-Proxy: Pros and Cons#

Pros#

| Advantage | Description |
| --- | --- |
| Zero code changes | Protect any MCP server without modifying its source code |
| Extremely lightweight | Single Go binary, ~50MB memory. Minimal overhead next to any MCP server |
| 20 identity providers | Native support for Google, GitHub, Entra ID, Keycloak, and any OIDC-compliant IdP |
| Battle-tested at scale | 12 years of production use. Adopted by Microsoft, OpenAI, Adobe, Morgan Stanley |
| CNCF Sandbox | Community-governed, vendor-neutral. Long-term sustainability |
| Identity header injection | Forwards `X-Auth-Request-User`/`Email`/`Groups` for audit logging |
| Config validation | `--config-test` flag catches misconfigurations before deployment |
| Distroless images | Minimal attack surface with distroless container base since v7.6.0 |

### Cons

| Limitation | Description |
| --- | --- |
| Auth only, not authz | Binary allow/deny. Cannot restrict which MCP tools a user can invoke |
| No built-in MFA | Relies entirely on the upstream IdP for multi-factor authentication |
| Cookie size limits | Default cookie sessions can exceed 4 KB. Redis is required for production |
| Single provider only | Cannot configure multiple IdPs simultaneously |
| Browser-first design | The OAuth2 redirect flow works for browser users, not headless AI agents |
| No MCP awareness | Doesn't understand JSON-RPC messages. It's a generic HTTP proxy |
| No admin UI | All configuration is via flags, environment variables, or a config file |
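
On the browser-first limitation: one partial escape hatch for headless agents is OAuth2-Proxy's `--skip-jwt-bearer-tokens` flag, which accepts IdP-issued JWTs sent as a `Authorization: Bearer` header instead of requiring the redirect flow. A sketch of the container args — the issuer and audience values here are placeholders, and your IdP must be able to mint tokens for the agent out of band:

```yaml
# Hypothetical args fragment: lets a headless agent authenticate with
#   Authorization: Bearer <IdP-issued JWT>
# instead of completing the browser redirect flow.
args:
  - --skip-jwt-bearer-tokens=true
  - --extra-jwt-issuers=https://idp.example.com=mcp-audience  # placeholder issuer=audience
```

This shifts the problem to token issuance and rotation on the agent side, so it mitigates rather than removes the browser-first design constraint.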

## Troubleshooting

| Issue | Symptoms | Resolution |
| --- | --- | --- |
| Redirect loop | Browser cycles between app and IdP | Check that `--redirect-url` matches the OAuth app's callback URL exactly. Verify `--cookie-domain` if using subdomains |
| 403 after login | User authenticates but gets forbidden | Verify that `--email-domain` and `--allowed-group` include the user. Check that the IdP returns an email claim in the token |
| 502 from gateway | ext-auth call fails | Ensure the OAuth2-Proxy Service is reachable from the gateway namespace. Check that the SecurityPolicy `backendRefs` uses the correct service name and namespace |
| Cookie too large | Browser rejects the session cookie | Switch to `--session-store-type=redis`. Large OIDC tokens exceed the 4 KB cookie limit |
| SSE connection drops | MCP streaming responses cut off | Increase `proxy-read-timeout` and `proxy-send-timeout` on the Ingress to 3600+ seconds |
| MCP session lost | Agent re-authenticates mid-conversation | Enable session affinity if the MCP server uses `Mcp-Session-Id`. Verify Redis session storage is working |
| Health probes failing | Pods restarting, readiness failures | Add `--skip-auth-route=^/healthz$`. Ensure `--http-address` binds to `0.0.0.0`, not `127.0.0.1` |
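
Several of these fixes land in OAuth2-Proxy's startup flags. A sketch of the relevant container args, assuming a Redis Service in the same namespace (the connection URL is a placeholder):

```yaml
# Flag sketch matching the troubleshooting fixes above; values are placeholders.
args:
  - --http-address=0.0.0.0:4180          # bind all interfaces so kubelet probes can reach the pod
  - --session-store-type=redis           # avoid the ~4 KB browser cookie limit
  - --redis-connection-url=redis://redis.mcp-servers.svc:6379
  - --skip-auth-route=^/healthz$         # let health probes bypass the auth redirect
```

Running with `--config-test` first will catch typos in these flags before the pod ever serves traffic.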

## Hands-On Demo Repository

I’ve built a complete demo that deploys DBHub behind OAuth2-Proxy on a local Kubernetes cluster:

srekubecraft-demo/oauth2-proxy-mcp

### What's Included

| Path | Purpose |
| --- | --- |
| `kubernetes/cluster/` | Kind config: 3-node cluster with Cilium CNI |
| `kubernetes/oauth2-proxy/` | Namespace and secret template |
| `kubernetes/dbhub/` | DBHub + OAuth2-Proxy sidecar deployment |
| `kubernetes/redis/` | Redis for session storage |
| `kubernetes/postgres/` | PostgreSQL with sample data and a read-only user |
| `kubernetes/policies/` | NetworkPolicies for pod isolation |

### Prerequisites

You need a GitHub OAuth App. Create one at GitHub Developer Settings:

- Homepage URL: `http://dbhub.localhost`
- Callback URL: `http://dbhub.localhost/oauth2/callback`

### Quick Start

The demo uses Taskfile for automation:

```shell
git clone https://github.com/nicknikolakakis/srekubecraft-demo.git
cd srekubecraft-demo/oauth2-proxy-mcp

# Full setup (will prompt for GitHub OAuth credentials)
task setup

# Start port-forward (sudo is needed to bind the privileged port 80)
sudo kubectl -n mcp-servers port-forward svc/dbhub-sidecar 80:4180

# Open in browser
open http://dbhub.localhost
```

### What You'll See

1. Open http://dbhub.localhost and you get a "Sign in with GitHub" page. OAuth2-Proxy blocks all unauthenticated MCP requests.
2. Click sign in, authorize the GitHub OAuth App, and you land on the DBHub web UI with full access to the `execute_sql` and `search_objects` MCP tools.
3. Run `SELECT name, department FROM employees` to query the sample database through the authenticated MCP endpoint.
4. Apply NetworkPolicies with `task demo:netpol` to block direct pod-to-pod access, so the only path to the MCP server is through OAuth2-Proxy.
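
The NetworkPolicy applied in step 4 can be sketched roughly as follows. The pod labels and port are assumptions based on the demo layout, not the repository's exact manifests:

```yaml
# Sketch: only the OAuth2-Proxy port (4180) accepts ingress traffic,
# so DBHub's own listener is unreachable except through the sidecar.
# Labels and ports are assumptions, not the demo's exact manifests.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dbhub-proxy-only
  namespace: mcp-servers
spec:
  podSelector:
    matchLabels:
      app: dbhub
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - port: 4180
          protocol: TCP
```

Because the proxy runs as a sidecar in the same pod, its localhost connection to DBHub is unaffected; the policy only stops other pods from dialing the MCP server's port directly.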

### Cleanup

```shell
task clean
```

## Conclusion

MCP servers are becoming the backbone of AI-powered workflows, connecting agents to databases, filesystems, Kubernetes clusters, and cloud APIs. But the protocol’s optional authentication model means most servers ship with no auth at all. In a world where 492 exposed MCP servers were found on the public internet and 30+ CVEs dropped in two months, “we’ll add auth later” is a risk you can’t afford.

OAuth2-Proxy is the fastest way to close this gap:

- **Minutes to deploy** - drop it in front of any MCP server as a sidecar or centralized gateway
- **Zero code changes** - DBHub, filesystem servers, Kubernetes MCP servers, all protected without touching their source
- **Corporate SSO** - restrict access to your GitHub org, Keycloak group, or Google Workspace domain
- **Audit trail** - identity headers tell you exactly who executed which query or tool invocation
- **CNCF-backed** - community-governed, battle-tested by organizations running it at scale for over a decade

The MCP ecosystem is moving fast. Authentication shouldn’t be the thing that gets left behind. Start with OAuth2-Proxy and NetworkPolicies, use read-only database credentials, and treat every MCP server like what it is: a privileged access point to your infrastructure.

