# Dapr - Building a Safe Platform for Citizen Developer Apps on Kubernetes
~/posts/dapr.md · 20 min · 4245 words


// How to use Dapr, Flux, Harbor, and a shared Helm chart to safely host apps built by non-technical citizen developers using LLMs on Kubernetes.


Something shifted in 2025. People who had never written a line of code started building working applications. Not toy scripts - real Flask APIs, Express dashboards, data pipelines. The catalyst was LLMs like Claude, ChatGPT, and Copilot. A marketing analyst could describe what they needed in plain English and walk away with a functioning Python app. An operations manager could build an internal dashboard without filing a Jira ticket to engineering.

This is genuinely exciting. But if you’re the platform or SRE team responsible for running infrastructure, you probably felt the other side of that excitement: a Slack message asking “hey, can you deploy this app I built with Claude?” followed by a Python file with hardcoded API keys, no health checks, and a requirements.txt that pins nothing.

The demand is real. Non-technical teams across organizations are producing applications that need to run somewhere. And “somewhere” increasingly means Kubernetes - because that’s where your platform already is. The question isn’t whether to support these citizen developers. It’s how to support them without compromising the security, reliability, and operational standards your platform depends on.

This post walks through one answer to that question: a dedicated Kubernetes cluster built with Dapr as the application runtime, backed by an entire CNCF stack that abstracts away the infrastructure complexity citizen developers shouldn’t need to think about.

## Who Should Read This?

This post is for:

  • Platform Engineers building internal developer platforms who need to support non-technical app creators
  • SREs managing Kubernetes clusters who want to safely host citizen developer workloads without compromising production standards
  • Engineering Managers evaluating how to enable business teams to self-serve while maintaining governance
  • DevOps Teams looking at Dapr as a runtime abstraction layer for multi-tenant app hosting

## TL;DR

**Problem:** LLMs are turning non-technical employees into app developers. Those apps need hosting, but the people who built them can’t manage secrets, networking, or Kubernetes.
**Solution:** A dedicated cluster with Dapr as the runtime - apps call `localhost:3500` for state, secrets, and service invocation. A shared Helm chart enforces security, isolation, and resource limits. Flux + Harbor + Cilium + Prometheus handle GitOps delivery, supply chain, networking, and observability.
**Result:** The developer provides an image and a values file. The platform provides everything else. A full demo repo is linked below.


## The Citizen Developer Problem

Let’s be specific about what goes wrong when non-technical users build apps with LLMs and hand them to your platform team:

They don’t know what they don’t know. A citizen developer building a message classifier with Claude will get working Python code. But that code will:

  • Import redis directly and hardcode redis://localhost:6379 as the connection string
  • Store API keys in environment variables (or worse, in the source code)
  • Have no concept of health checks, readiness probes, or graceful shutdown
  • Assume it’s the only thing running on the machine
  • Have no idea that it needs network policies, resource limits, or mTLS

They can’t describe infrastructure requirements. Ask a citizen developer “what secrets management solution do you need?” and you’ll get a blank stare. They don’t know what a secret store is. They don’t know that their Slack bot token shouldn’t be in a .env file committed to Git. They don’t know that their app needs to talk to Redis through a sidecar proxy with mutual TLS.

They can’t predict operational concerns. Resource contention, noisy-neighbor effects, blast radius, certificate rotation, secret syncing, drift detection - none of these concepts exist in their mental model. And they shouldn’t have to. These are platform concerns.

The traditional answer is “teach them Kubernetes.” But that defeats the purpose. The whole point of citizen developers using LLMs is that they don’t need to learn infrastructure. The platform needs to meet them where they are.

## What the Platform Needs to Provide

  1. Apps get secrets without knowing how - call an API, the platform routes it to Vault.
  2. Apps store state without managing databases - save data, the platform handles Redis.
  3. Apps communicate without service discovery - call another app by name, the platform handles DNS, mTLS, and cross-namespace routing.
  4. Apps are isolated by default - network policies, resource quotas, and dedicated nodes prevent one app from affecting another.
  5. Everything is auditable and recoverable - GitOps ensures the platform can be rebuilt from scratch.

This is exactly what Dapr enables.

## What is Dapr?

Dapr (Distributed Application Runtime) is a CNCF incubating project that provides distributed system building blocks as simple HTTP/gRPC APIs. It runs as a sidecar container next to your application and exposes a local API on localhost:3500.

The key insight: your application code talks to localhost:3500 for everything - state, secrets, service-to-service calls, pub/sub, bindings. Dapr translates those calls into the actual infrastructure behind the scenes. It simplifies the app-facing surface, but it does not remove the need for platform engineering discipline underneath.
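To make that concrete, here is a minimal sketch (not taken from the demo repo) of what "everything through `localhost:3500`" looks like from Python. The component names `statestore` and `secretstore` match the ones used later in this post; the URL paths are Dapr's documented v1.0 HTTP API.

```python
import requests

# The Dapr sidecar's local HTTP API - identical for every app, every language
DAPR_BASE = "http://localhost:3500/v1.0"

def state_url(store: str) -> str:
    # POST a [{"key": ..., "value": ...}] list here to save state
    return f"{DAPR_BASE}/state/{store}"

def secret_url(store: str, key: str) -> str:
    # GET this URL to read one named secret
    return f"{DAPR_BASE}/secrets/{store}/{key}"

def save_state(key: str, value: dict) -> None:
    # Dapr translates this into Redis/PostgreSQL/etc. - the app never knows which
    requests.post(state_url("statestore"), json=[{"key": key, "value": value}]).raise_for_status()

def read_secret(key: str) -> dict:
    # Dapr resolves this against Vault/AWS Secrets Manager/etc.
    resp = requests.get(secret_url("secretstore", key))
    resp.raise_for_status()
    return resp.json()
```

Notice what is absent: no Redis client, no Vault SDK, no credentials. Swapping the backing store is a platform-side change that never touches this code.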

### Dapr Building Blocks

| Building Block | What the App Sees | What Dapr Handles |
|---|---|---|
| State Store | `POST localhost:3500/v1.0/state/statestore` | Redis, PostgreSQL, CosmosDB, DynamoDB |
| Secret Store | `GET localhost:3500/v1.0/secrets/secretstore/key` | Vault, AWS Secrets Manager, Azure Key Vault |
| Service Invocation | `POST localhost:3500/v1.0/invoke/app-id/method/endpoint` | DNS resolution, mTLS, load balancing, retries |
| Pub/Sub | `POST localhost:3500/v1.0/publish/pubsub/topic` | Redis Streams, Kafka, RabbitMQ, NATS |
| Bindings | `POST localhost:3500/v1.0/bindings/slack` | Slack, Twilio, SendGrid, HTTP webhooks |

### Why Dapr for Citizen Developers?

| Aspect | Without Dapr | With Dapr |
|---|---|---|
| State management | `import redis; r = redis.Redis(host='...')` | `requests.post('http://localhost:3500/v1.0/state/statestore', ...)` |
| Secrets | `os.environ['API_KEY']` or hardcoded | `requests.get('http://localhost:3500/v1.0/secrets/secretstore/api-key')` |
| Service calls | `requests.get('http://app1.app1-ns.svc.cluster.local:8080/...')` | `requests.post('http://localhost:3500/v1.0/invoke/app1.app1-ns/method/...')` |
| Infrastructure knowledge | Must understand Redis, Vault, K8s DNS, mTLS | Only needs to know `localhost:3500` |
| Security posture | Developer manages credentials | Platform manages everything via sidecar |

### What Dapr is NOT

  • Not a service mesh - Dapr removes the need for developers to understand infrastructure. A service mesh (Cilium, Istio) enforces how that infrastructure behaves - traffic policies, rate limits, circuit breaking. They solve different problems at different layers. You’ll likely need both.
  • Not a CI/CD tool - Dapr doesn’t build, test, or deploy your app. That’s what Flux, Argo CD, or GitHub Actions are for.
  • Not a secrets manager - Dapr connects to secrets managers (Vault, AWS SM). It doesn’t store secrets itself.
  • Not limited to Kubernetes - Dapr runs anywhere (VMs, bare metal, serverless), but Kubernetes is its strongest integration.

## Architecture - A Guardrailed Runtime for Citizen-Built Apps

Seven CNCF projects working together. Every component plays a specific role:

flowchart TB
    subgraph cluster["Kubernetes Cluster (Kind / GKE / EKS)"]
        subgraph system["System Node Pool"]
            flux["Flux CD<br/>(GitOps)"]
            dapr_cp["Dapr Control Plane<br/>(Operator, Sentry, Injector)"]
            harbor["Harbor<br/>(OCI Registry)"]
            vault["Vault<br/>(Secret Backend)"]
            eso["External Secrets<br/>Operator"]
            redis["Redis<br/>(State Backend)"]
            prom["Prometheus + Grafana<br/>(Monitoring)"]
        end

        subgraph app1_pool["App1 Node Pool (Dedicated)"]
            app1["app1-ns"]
            app1_dapr["Dapr Sidecar"]
            app1 --- app1_dapr
        end

        subgraph app2_pool["App2 Node Pool (Dedicated)"]
            app2["app2-ns"]
            app2_dapr["Dapr Sidecar"]
            app2 --- app2_dapr
        end
    end

    cilium["Cilium CNI<br/>(eBPF Networking)"]
    cilium -.->|network policies| cluster

    flux -->|"HelmRelease (OCI)"| harbor
    harbor -->|chart + images| app1_pool
    harbor -->|chart + images| app2_pool
    dapr_cp -.->|sidecar injection| app1_dapr
    dapr_cp -.->|sidecar injection| app2_dapr
    eso -->|sync secrets| vault
    eso -->|K8s Secrets| app1
    eso -->|K8s Secrets| app2
    app1_dapr -->|state| redis
    app1_dapr -->|secrets| vault
    app2_dapr -.->|"service invocation<br/>(cross-namespace)"| app1_dapr
    prom -.->|scrape| dapr_cp

### The CNCF Stack

| Component | CNCF Status | Role in This Platform |
|---|---|---|
| Kubernetes | Graduated | Container orchestration - the foundation |
| Dapr | Incubating | Application runtime - abstracts state, secrets, service calls |
| Cilium | Graduated | eBPF-based CNI - network policies, kube-proxy replacement |
| Flux CD | Graduated | GitOps - manages all HelmReleases, drift detection |
| Harbor | Graduated | OCI registry - stores Helm charts and container images |
| Prometheus | Graduated | Monitoring - metrics collection, alerting, Grafana dashboards |
| Helm | Graduated | Package manager - shared app chart, infrastructure releases |

Supporting (non-CNCF but ecosystem-standard):

| Component | Role |
|---|---|
| External Secrets Operator | Syncs Vault secrets to Kubernetes Secrets |
| HashiCorp Vault | Secret storage backend (KV v2) |

Production mapping: In this demo, Kind simulates a managed Kubernetes cluster (GKE, EKS, AKS). Harbor simulates your organization’s container registry (ECR, GAR, ACR). Vault simulates your secrets management platform. The architecture and patterns are identical - only the backing services change.

An honest note on complexity: This is not a lightweight setup. You are trading developer simplicity for platform complexity. Seven CNCF projects, coordinated upgrades across Dapr, ESO, and Flux, Vault token lifecycle management, Dapr component CRDs per app, network policy debugging across namespaces - all of this lands on the platform team. The bet is that absorbing this complexity centrally is cheaper than letting 50 teams each solve secrets, state, and networking independently (and incorrectly). If you have 2-3 apps, this stack is overkill. If you have 20+, the investment pays for itself.

## Core Concepts - How It All Fits Together

### 1. The Shared Helm Chart

The platform team maintains a single Helm chart (charts/app/) that encodes all operational requirements. Citizen developers never see Kubernetes manifests - they provide a values.yaml and the chart handles everything:

# charts/app/values.yaml - Platform team defaults
image:
  repository: ""
  tag: "latest"
  pullPolicy: IfNotPresent

port: 8080

# Dedicated nodepool scheduling
nodepool: ""

# Safe resource defaults
resources:
  requests:
    cpu: 100m
    memory: 64Mi
  limits:
    cpu: 500m
    memory: 256Mi

# Dapr sidecar (enabled by default for all apps)
dapr:
  enabled: true
  appProtocol: http
  config: dapr-config
  logLevel: info
  sidecar:
    cpuRequest: 100m
    memoryRequest: 64Mi
    cpuLimit: 300m
    memoryLimit: 128Mi

# Building blocks (citizen developer enables what they need)
statestore:
  enabled: false
secretstore:
  enabled: false
pubsub:
  enabled: false
binding:
  enabled: false

# Network isolation (deny-all baseline)
networkPolicy:
  enabled: true
  allowIngressFrom: []
  allowEgressTo: []

The chart generates:

  • Deployment with Dapr sidecar annotations, health probes, security context (non-root, read-only filesystem), and node affinity
  • Service for internal routing
  • NetworkPolicy with deny-all baseline plus explicit allow rules for Dapr, DNS, Redis, Vault, and cross-namespace communication
  • ResourceQuota per namespace (CPU, memory, pod count limits)
  • Dapr Components (state store, secret store, pub/sub, bindings) based on what the citizen developer enables
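For illustration, here is roughly what one of those generated Dapr Components could look like - a Redis state store scoped to app1's namespace. The field names follow Dapr's Component CRD; the host and component name are the demo's, but the exact manifest the chart emits is an assumption.

```yaml
# Sketch of a chart-generated Dapr Component (assumed output, not copied from the repo)
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore            # the name apps use in localhost:3500/v1.0/state/<name>
  namespace: app1-ns
spec:
  type: state.redis           # swap to state.postgresql etc. without touching app code
  version: v1
  metadata:
    - name: redisHost
      value: redis.redis-system.svc.cluster.local:6379
```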

### 2. What the Citizen Developer Provides

A citizen developer who built a message classifier with Claude provides exactly this:

# apps/app1/values.yaml - This is ALL the citizen developer writes
image:
  repository: localhost:30000/platform/dapr-app1
  tag: latest

port: 8080

nodepool: app1

# Allow app2 to call this app via Dapr service invocation
networkPolicy:
  enabled: true
  allowIngressFrom:
    - app2-ns

# Save classifications to Redis (via Dapr API)
statestore:
  enabled: true
  redis:
    host: "redis.redis-system.svc.cluster.local:6379"

# Read secrets from Vault (via Dapr API)
secretstore:
  enabled: true
  vault:
    address: "http://vault.vault-system.svc.cluster.local:8200"
    secretsPath: "dapr/app1"
    tokenSecret:
      name: "vault-access"
      key: "token"

They don’t write a Deployment, a Service, a NetworkPolicy, a ResourceQuota, or any Dapr component YAML. They say “I need state storage and secrets” and the platform provides it.

Note: In practice, even this values file would be generated through a self-service portal or a conversation with the platform team. The citizen developer would say “I need to save data and read API keys” and the platform team would translate that into the values above.

### 3. Dapr Sidecar Injection

The shared Helm chart adds Dapr annotations to every Deployment. When the pod starts, the Dapr sidecar injector (a mutating admission webhook) automatically injects the daprd sidecar container:

# Generated by the shared Helm chart (citizen developer never sees this)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: app1-ns
spec:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "app1"
        dapr.io/app-port: "8080"
        dapr.io/app-protocol: "http"
        dapr.io/config: "dapr-config"
        dapr.io/log-level: "info"
        dapr.io/sidecar-cpu-request: "100m"
        dapr.io/sidecar-memory-request: "64Mi"
        dapr.io/sidecar-cpu-limit: "300m"
        dapr.io/sidecar-memory-limit: "128Mi"
    spec:
      containers:
        - name: app1
          image: localhost:30000/platform/dapr-app1:latest
          ports:
            - containerPort: 8080
          # Platform-enforced security
          securityContext:
            runAsUser: 10001
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false

The result is a pod with two containers: the citizen developer’s app and the Dapr sidecar. The app talks to localhost:3500, and the sidecar handles Redis, Vault, mTLS, and service discovery.
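The `dapr.io/config: "dapr-config"` annotation above points at a Dapr Configuration resource owned by the platform team. A minimal sketch of what it might contain - the specific mTLS and metrics settings here are assumptions, not copied from the demo:

```yaml
# Assumed Dapr Configuration (referenced by dapr.io/config annotation)
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: dapr-config
  namespace: app1-ns
spec:
  mtls:
    enabled: true             # Dapr Sentry issues and rotates workload certificates
    workloadCertTTL: "24h"
  metric:
    enabled: true             # expose sidecar metrics for Prometheus to scrape
```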

### 4. Network Isolation

Every app gets a deny-all NetworkPolicy baseline. The shared chart explicitly opens only the ports each app needs:

# Generated NetworkPolicy (citizen developer never writes this)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app1-network-policy
  namespace: app1-ns
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: app1
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allow from same namespace (Dapr sidecar <-> app)
    - from:
        - podSelector: {}
    # Allow from app2-ns for cross-namespace Dapr calls
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: app2-ns
      ports:
        - port: 50002    # Dapr internal gRPC port
  egress:
    # DNS resolution
    - to:
        - namespaceSelector: {}
      ports:
        - port: 53
          protocol: UDP
    # Dapr control plane
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: dapr-system
    # Redis state store
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: redis-system
      ports:
        - port: 6379
    # Vault secret store
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: vault-system
      ports:
        - port: 8200

A citizen developer’s app can’t reach anything it wasn’t explicitly granted access to. If app1 doesn’t declare allowEgressTo: [app2-ns], it simply cannot call app2 - even via Dapr.

### What a Failure Looks Like

Here’s a real scenario from testing this demo: a citizen developer’s app was crashing on startup with readOnlyRootFilesystem: true because gunicorn writes to /tmp by default. The app worked fine locally but failed in the cluster.

The fix was a one-line addition to the shared Helm chart - an emptyDir volume mounted at /tmp. But the point is: the citizen developer didn’t debug this. They didn’t know what a read-only filesystem was, or why their app couldn’t write to /tmp. The platform team caught it, fixed it in the chart, and every future app inherited the fix automatically.
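For reference, the chart-level fix looked roughly like this (names assumed) - a pod-spec fragment that gives every app a writable `/tmp` while keeping the root filesystem read-only:

```yaml
# Pod spec fragment: writable /tmp despite readOnlyRootFilesystem (sketch)
containers:
  - name: app
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
      - name: tmp
        mountPath: /tmp       # gunicorn and friends can write temp files here
volumes:
  - name: tmp
    emptyDir: {}              # scratch space, discarded when the pod dies
```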

This is the pattern at work. When failures happen (and they will), they surface at the platform layer where the team with Kubernetes expertise handles them - not in the citizen developer’s code where they’d be confused and stuck.

### 5. Secrets Flow

Secrets never touch the citizen developer’s code. The flow is:

flowchart LR
    subgraph vault["HashiCorp Vault"]
        v1["secret/dapr/app1<br/>slack-bot-token<br/>anthropic-api-key"]
    end

    subgraph eso["External Secrets Operator"]
        css["ClusterSecretStore<br/>(vault-backend)"]
        es["ExternalSecret<br/>(refreshInterval: 1m)"]
    end

    subgraph k8s["Kubernetes"]
        secret["K8s Secret<br/>(app1-secrets)"]
        dapr_sc["Dapr Secret Store<br/>Component"]
    end

    subgraph app["Citizen Developer App"]
        code["requests.get(<br/>'localhost:3500/v1.0/<br/>secrets/secretstore/api-key')"]
    end

    vault --> css --> es --> secret
    secret --> dapr_sc --> code
  1. Platform team seeds secrets into Vault
  2. ESO re-syncs them into Kubernetes Secrets every 60 seconds (the `refreshInterval`), so rotations in Vault propagate automatically
  3. Dapr’s secret store component reads from the K8s Secret
  4. The citizen developer’s app calls localhost:3500/v1.0/secrets/secretstore/key-name

No .env files. No hardcoded credentials. No kubectl create secret in a README.
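Step 2 of that flow is driven by an ExternalSecret resource the platform team defines. A sketch of what it could look like - the store and path names are the demo's, but the exact manifest is assumed:

```yaml
# Assumed ExternalSecret wiring Vault -> K8s Secret (sketch, not from the repo)
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app1-secrets
  namespace: app1-ns
spec:
  refreshInterval: 1m           # re-sync from Vault every minute
  secretStoreRef:
    name: vault-backend         # the ClusterSecretStore from the diagram
    kind: ClusterSecretStore
  target:
    name: app1-secrets          # resulting Kubernetes Secret
  dataFrom:
    - extract:
        key: dapr/app1          # Vault KV path holding all of app1's keys
```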

### 6. GitOps Delivery with Flux

Everything is managed by Flux - 7 HelmReleases total:

flowchart LR
    subgraph flux["Flux CD"]
        src["HelmRepositories<br/>(5 sources)"]
        hr_infra["Infrastructure HelmReleases"]
        hr_apps["App HelmReleases"]
    end

    subgraph infra["Infrastructure (5)"]
        ms["metrics-server"]
        prom["kube-prometheus-stack"]
        eso_hr["external-secrets"]
        dapr_hr["dapr"]
        harbor_hr["harbor"]
    end

    subgraph apps["Apps from Harbor OCI (2)"]
        app1_hr["app1<br/>(chart: oci://harbor/platform/app)"]
        app2_hr["app2<br/>(chart: oci://harbor/platform/app)"]
    end

    src --> hr_infra --> infra
    src --> hr_apps --> apps
    harbor_hr -.->|"OCI chart source"| apps

The app HelmReleases pull the shared chart from Harbor as an OCI artifact:

# kubernetes/flux/app1-release.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: app1
  namespace: app1-ns
spec:
  interval: 5m
  chart:
    spec:
      chart: app
      version: ">=0.1.0"
      sourceRef:
        kind: HelmRepository
        name: harbor-platform
        namespace: flux-system
  values:
    image:
      repository: localhost:30000/platform/dapr-app1
      tag: latest
      pullPolicy: IfNotPresent
    port: 8080
    nodepool: app1
    networkPolicy:
      enabled: true
      allowIngressFrom:
        - app2-ns
    statestore:
      enabled: true
      redis:
        host: "redis.redis-system.svc.cluster.local:6379"
    secretstore:
      enabled: true
      vault:
        address: "http://vault.vault-system.svc.cluster.local:8200"
        secretsPath: "dapr/app1"
        tokenSecret:
          name: "vault-access"
          key: "token"

The Harbor OCI source tells Flux where to find the shared chart:

# In kubernetes/flux/sources.yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: harbor-platform
  namespace: flux-system
spec:
  type: oci
  interval: 5m
  url: oci://harbor.harbor-system.svc.cluster.local/platform
  insecure: true    # HTTP for demo (use TLS in production)

If someone manually modifies an app deployment or deletes a Dapr component, Flux detects the drift and restores it within 5 minutes.

## The Apps - What Citizen Developers Actually Build

### App1 - Message Classifier (Python/Flask)

A citizen developer built this with Claude. The entire app is one file. Notice what’s not in the code - no Redis imports, no Vault SDK, no Kubernetes awareness:

import requests
import uuid
from datetime import datetime, timezone
from flask import Flask, request, jsonify

app = Flask(__name__)

DAPR_PORT = 3500
STATE_STORE = "statestore"
SECRET_STORE = "secretstore"
DAPR_URL = f"http://localhost:{DAPR_PORT}/v1.0"

@app.route("/classify", methods=["POST"])
def classify():
    data = request.get_json(force=True)
    message = data.get("message", "")

    # Simple keyword-based classifier (placeholder for Claude API)
    classification = "question"
    bug_keywords = ["broken", "error", "fail", "crash", "bug", "issue", "wrong"]
    feature_keywords = ["add", "want", "wish", "would be nice", "feature", "request"]
    if any(kw in message.lower() for kw in bug_keywords):
        classification = "bug"
    elif any(kw in message.lower() for kw in feature_keywords):
        classification = "feature_request"

    entry = {
        "id": uuid.uuid4().hex[:8],
        "message": message,
        "classification": classification,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

    # Save to state store (Redis behind the scenes - app doesn't know)
    requests.post(f"{DAPR_URL}/state/{STATE_STORE}", json=[
        {"key": entry["id"], "value": entry}
    ])

    return jsonify(entry)

@app.route("/history")
def history():
    # Read history from state store
    resp = requests.get(f"{DAPR_URL}/state/{STATE_STORE}/history")
    items = resp.json() if resp.status_code == 200 and resp.text else []
    return jsonify(items)

Standard Python - requests.post to save data, requests.get to read it. The developer doesn’t know Redis exists. They don’t know their data lives in a separate namespace behind network policies. They don’t know that every call between their app and Redis is encrypted with mTLS. And they don’t need to.

### App2 - Classification Dashboard (Node.js/TypeScript)

A second citizen developer built a dashboard that calls app1. Notice the service invocation - they call app1 by name, not by URL:

import express from "express";

const app = express();
const DAPR_PORT = process.env.DAPR_HTTP_PORT || "3500";
const DAPR_URL = `http://localhost:${DAPR_PORT}/v1.0`;
const APP1_ID = "app1.app1-ns";  // Dapr app ID (cross-namespace)

// Forward classification requests to app1 via Dapr
app.post("/api/classify", async (req, res) => {
  const response = await fetch(
    `${DAPR_URL}/invoke/${APP1_ID}/method/classify`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(req.body),
    }
  );
  const data = await response.json();
  res.json(data);
});

// Get history from app1 via Dapr
app.get("/api/history", async (req, res) => {
  const response = await fetch(
    `${DAPR_URL}/invoke/${APP1_ID}/method/history`
  );
  const data = await response.json();
  res.json(data);
});

No IP addresses, no TLS certificates, no DNS configuration, no retry logic. Dapr handles all of it through a single endpoint: localhost:3500/v1.0/invoke/app1.app1-ns/method/classify.

## What the Citizen Developer Sees vs What the Platform Provides

| Concern | Citizen Developer | Platform Team |
|---|---|---|
| Code | Writes Python/Node.js app with Claude | Maintains shared Helm chart |
| State storage | Calls `localhost:3500/v1.0/state/...` | Configures Redis, network policies, Dapr component |
| Secrets | Calls `localhost:3500/v1.0/secrets/...` | Manages Vault, ESO sync, token rotation |
| Service calls | Calls `localhost:3500/v1.0/invoke/...` | Configures Dapr mTLS, network policies, namespace isolation |
| Deployment | Provides `values.yaml` (image, port, building blocks) | Wraps in Flux HelmRelease, pushes to Harbor |
| Monitoring | Nothing | Prometheus scrapes Dapr metrics automatically |
| Security | Nothing | Non-root containers, read-only FS, deny-all network baseline |
| Node isolation | Nothing | Dedicated node pool with taints and tolerations |

## Node Pool Isolation

Each citizen developer app runs on a dedicated node pool. In the demo, Kind worker nodes simulate GKE/EKS node pools with labels and taints:

| Node | Label | Taint | Workloads |
|---|---|---|---|
| system worker | `workload=system` | None | Dapr, Vault, Harbor, ESO, Prometheus, Flux |
| app1 worker | `app=app1` | `app=app1:NoSchedule` | Only app1 pods |
| app2 worker | `app=app2` | `app=app2:NoSchedule` | Only app2 pods |

The shared Helm chart automatically adds tolerations and node affinity based on the nodepool value in the citizen developer’s config. A misconfigured app1 pod consuming 100% CPU cannot affect app2 because they’re on physically separate nodes.
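Inside the shared chart, that scheduling logic is a small template. A hedged sketch of how it might be implemented (the real chart's helpers and affinity rules may differ):

```yaml
# charts/app/templates/deployment.yaml fragment (assumed implementation)
{{- if .Values.nodepool }}
nodeSelector:
  app: {{ .Values.nodepool }}        # matches the node pool label, e.g. app=app1
tolerations:
  - key: app
    operator: Equal
    value: {{ .Values.nodepool }}    # tolerate the pool's app=<name>:NoSchedule taint
    effect: NoSchedule
{{- end }}
```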

Production equivalent: In GKE, these become separate node pools with different machine types, autoscaling policies, and preemptible/spot configurations. In EKS, they’re managed node groups with taints.

## Dapr: Pros and Cons

### Pros

| Advantage | Description |
|---|---|
| Infrastructure abstraction | Citizen developers call `localhost:3500` - they never see Redis, Vault, or Kubernetes internals |
| Sidecar model | Security and infrastructure logic is outside app code - platform team controls it |
| Pluggable backends | Swap Redis for DynamoDB, Vault for AWS Secrets Manager - app code doesn’t change |
| Built-in mTLS | Dapr Sentry issues certificates automatically - no cert-manager setup per app |
| Cross-namespace service invocation | Apps call each other by name (`app1.app1-ns`) with automatic mTLS |
| CNCF ecosystem | Integrates naturally with Prometheus, Flux, Helm, and other CNCF projects |
| Language-agnostic | Python, Node.js, Go, .NET, Java - all use the same HTTP API |

### Cons

| Limitation | Description |
|---|---|
| Resource overhead | Each app pod gets a Dapr sidecar (100-300m CPU, 64-128Mi memory) - adds up at scale |
| Sidecar startup latency | The sidecar must be ready before the app can use Dapr APIs - adds a few seconds to pod startup |
| Component CRD complexity | State store, secret store, pub/sub, bindings - each is a separate CRD the platform team must manage |
| Version coupling | Dapr control plane version must be compatible with sidecar versions across all apps |
| Debugging complexity | Issues can be in the app, the sidecar, the component config, or the backend - more layers to troubleshoot |
| Not a service mesh | Dapr doesn’t replace Cilium/Istio for network-level concerns like traffic splitting or rate limiting |

## When to Use This Pattern

### Use it when:

  • Non-technical users are building apps with LLMs and those apps need to run on your platform
  • You need infrastructure abstraction - apps should consume capabilities without managing backends
  • Multiple teams deploy independent apps on shared infrastructure with isolation requirements
  • Security is non-negotiable - you can’t trust citizen developer code to handle secrets, network policies, or resource limits correctly
  • You want a single chart pattern - one Helm chart that enforces all operational requirements

### Consider alternatives when:

  • Your developers are experienced - engineers who understand Kubernetes don’t need Dapr’s abstraction layer; direct Redis/Vault SDKs give more control
  • You’re running 1-2 apps - the overhead of Dapr, Harbor, Flux, ESO, and Vault for two apps is hard to justify
  • Latency is critical - the Dapr sidecar adds a hop; for sub-millisecond requirements, direct connections are faster
  • You already have a service mesh - Istio/Linkerd provides some overlapping features (mTLS, service discovery)

## Scaling Considerations

This demo runs two apps. Production won’t be that simple. Here’s what changes at scale:

| Concern | At 5 Apps | At 50+ Apps |
|---|---|---|
| Dapr sidecar overhead | Negligible (500m CPU total) | Significant - 5-15 CPU cores just for sidecars. Consider sidecar resource tuning per app tier. |
| Redis as state store | Single instance is fine | Becomes a bottleneck. Move to Redis Cluster or per-app state stores (PostgreSQL, DynamoDB). |
| Vault token management | Manual `vault-access` secrets work | ESO token rotation and Vault auth methods (Kubernetes auth) become essential. |
| Flux reconciliation | 7 HelmReleases, fast | 50+ HelmReleases - tune intervals, use sharding, watch source-controller memory. |
| Namespace explosion | Manageable | Need automation for namespace provisioning, quota enforcement, and cleanup of abandoned apps. |
| Dapr component sprawl | A few CRDs per namespace | Hundreds of Component CRDs. Consider shared components with scoping or a component template system. |
| Multi-cluster | Single cluster is fine | Dedicated clusters per business unit or region, each running its own Dapr control plane. |

The architecture in this post scales to roughly 20-30 apps on a single cluster before you need to rethink individual components. Beyond that, you’re looking at dedicated infrastructure per component (managed Redis, production Vault cluster, Harbor with replication) and potentially multi-cluster with Flux.

## Hands-On Demo Repository

The complete working demo is available at:

github.com/nicknikolakakis/srekubecraft-demo - see the dapr/ directory.

### Quick Start

git clone https://github.com/nicknikolakakis/srekubecraft-demo.git
cd srekubecraft-demo/dapr

# Full setup: bootstrap -> Flux infra -> Vault -> ESO -> Dapr -> Redis -> Harbor -> apps
task setup

The setup takes 10-12 minutes and creates:

  • 4-node Kind cluster with Cilium CNI (eBPF, no kube-proxy)
  • 7 Flux HelmReleases (metrics-server, Prometheus+Grafana, ESO, Dapr, Harbor, app1, app2)
  • Vault with seeded demo secrets
  • Redis for Dapr state store
  • Harbor OCI registry with shared Helm chart and container images
  • Two citizen developer apps with Dapr sidecars on dedicated node pools

### Verify

# All 7 HelmReleases should show READY: True
task flux:status

# Harbor health + artifacts in platform project
task harbor:status

# Integration tests - app1 classify + history, app2 cross-namespace calls
task test:app1
task test:app2

### Demo Stack Versions

| Component | Version | Managed by |
|---|---|---|
| Cilium | 1.19.3 | Helm (imperative bootstrap) |
| Flux CD | 2.8.5 | `flux install` (imperative) |
| Metrics Server | 3.13.0 | Flux HelmRelease |
| kube-prometheus-stack | 83.6.0 | Flux HelmRelease |
| External Secrets Operator | 2.3.0 | Flux HelmRelease |
| Dapr | 1.17.5 | Flux HelmRelease |
| Harbor | 1.18.3 (v2.14.3) | Flux HelmRelease |
| Vault | 1.21 | `kubectl` (plain manifests) |
| Redis | 7 (alpine) | `kubectl` (plain manifests) |

## Conclusion

The rise of LLM-powered app builders is creating a new category of workload for platform teams. These aren’t microservices from experienced engineers - they’re apps written by people who don’t know what a container is, let alone how to configure mTLS or write a NetworkPolicy. And the volume is only growing.

I think this shift is inevitable. The cost of producing a working application just dropped to near zero. What hasn’t changed is the cost of running it safely. That gap - between “I built an app” and “it’s running in production without leaking secrets or taking down the cluster” - is exactly where platform engineering earns its keep.

Dapr is the strongest abstraction I’ve found for this specific problem. Not because it’s the simplest tool (it isn’t), but because it draws the right boundary: developers interact with localhost:3500, and everything behind that API - Redis, Vault, mTLS, service discovery - is owned by the platform team. That boundary is clean, enforceable, and language-agnostic. Whether Dapr is the right long-term answer depends on how the CNCF ecosystem evolves, but the pattern - infrastructure as a localhost API, enforced by sidecars, delivered through GitOps - is here to stay.

Combined with the rest of the stack - Flux for GitOps delivery, Harbor for OCI supply chain, Cilium for eBPF networking, Prometheus for observability, Helm for packaging, and ESO for secrets lifecycle - you get a platform where:

  • Every app is deployed through a Flux HelmRelease (auditable, recoverable)
  • Every app gets a deny-all network baseline with explicit allow rules
  • Every app runs on a dedicated node pool (blast radius containment)
  • Every app’s secrets come from Vault through ESO (no hardcoded credentials)
  • Every app is monitored through Prometheus (no instrumentation needed)
  • Every app uses a shared Helm chart (consistent security posture)

The moment you accept citizen developers, you’re no longer just running infrastructure - you’re designing guardrails for code you didn’t write, don’t control, and still have to operate. This architecture is one way to do that safely.


If you found this useful, you might also enjoy my related posts:


**OAuth2-Proxy - Securing MCP Servers on Kubernetes Before Hackers Find Them First** - MCP servers like DBHub expose databases, filesystems, and code execution over HTTP with zero authentication. Learn how to deploy OAuth2-Proxy on Kubernetes to add SSO, group-based access control, and session management to any MCP server without changing a single line of code.
Nikos Nikolakakis - Principal SRE & Platform Engineer. Writing about Kubernetes, SRE practices, and cloud-native infrastructure.