## Place in the stack
Krane runs in each Kubernetes cluster. It is the only service in this stack with direct Kubernetes API credentials, which keeps cluster access isolated to Krane.

## Service boundaries
Krane only talks to three systems.

- Upstream: control plane streams desired state and receives status updates
- Downstream: Kubernetes API server for creating, updating, and watching resources
- Sidecar dependency: Vault for secrets decryption when enabled
## Core responsibilities
Krane is built around these core responsibilities.

- Reconcile user workloads as Kubernetes ReplicaSets
- Reconcile sentinel infrastructure as Deployments, Services, PDBs, and gossip resources
- Reconcile Cilium network policies from control plane definitions
- Report actual state for workloads and sentinels upstream
- Decrypt workload secrets using Vault when enabled
## Control plane interface
Krane connects upstream with a Connect RPC client that keeps long-running streams open. It injects `Authorization: Bearer <token>` and `X-Krane-Region` headers on every request, and supports h2c for non-TLS URLs.
## Reconciliation model

### Control loops
Each controller maintains its own version cursor and reconnect logic for its stream. Streams reconnect with jittered backoff between one and five seconds. A version cursor advances only after a state is applied successfully, which makes stream replay safe.

### Deployment controller
The deployment controller manages user workloads as Kubernetes ReplicaSets. It runs three loops.

- Desired state apply loop streams `WatchDeployments` and applies or deletes ReplicaSets
- Actual state report loop watches ReplicaSet events and reports status to the control plane
- Resync loop runs every minute and corrects drift by re-reading desired state
### Sentinel controller
The sentinel controller manages the shared routing layer that fronts workloads. It also runs three loops.

- Desired state apply loop streams `WatchSentinels` and applies or deletes resources
- Actual state report loop watches sentinel Deployments and reports health
- Resync loop runs every minute and corrects drift by re-reading desired state
### Cilium controller
The Cilium controller manages Cilium network policies. It runs two loops.

- Desired state apply loop streams `WatchCiliumNetworkPolicies` and applies or deletes policies
- Resync loop runs every minute and corrects drift by re-reading desired state
## Kubernetes resource model
Krane uses server-side apply for all Kubernetes resources and labels everything it manages. Labels include `app.kubernetes.io/managed-by=krane` and a component label for selection.
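As a sketch, the common label set on a managed resource could look like the following; the component label key and value are assumptions, since the source only says a component label exists.

```yaml
metadata:
  labels:
    app.kubernetes.io/managed-by: krane
    # Component key/value are illustrative, not confirmed by the source.
    app.kubernetes.io/component: deployment
```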
### Deployments
User workloads are represented as ReplicaSets with the following characteristics.

- Namespaces are created on demand
- Pods run with `RuntimeClassName: gvisor` for isolation
- Pods tolerate the `karpenter.sh/nodepool=untrusted` taint
- Topology spread keeps replicas balanced across zones
- Pod affinity prefers zones that already run sentinel pods for the environment
- Env vars include `UNKEY_WORKSPACE_ID`, `UNKEY_PROJECT_ID`, `UNKEY_ENVIRONMENT_ID`, and `UNKEY_DEPLOYMENT_ID`
- `UNKEY_ENCRYPTED_ENV` contains a base64-encoded secrets blob when present
- Healthchecks map to HTTP probes, and POST uses an exec probe with `wget`
- Optional preStop hook sends non-SIGTERM shutdown signals
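A condensed ReplicaSet sketch tying these characteristics together; the names, image, env values, and taint effect are illustrative assumptions, not Krane's actual output.

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example-workload            # illustrative name
  namespace: example-env            # namespaces are created on demand
  labels:
    app.kubernetes.io/managed-by: krane
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-workload
  template:
    metadata:
      labels:
        app: example-workload
    spec:
      runtimeClassName: gvisor      # sandboxed runtime for isolation
      tolerations:
        - key: karpenter.sh/nodepool
          operator: Equal
          value: untrusted
          effect: NoSchedule        # effect is an assumption
      containers:
        - name: workload
          image: example:latest     # illustrative image
          env:
            - name: UNKEY_WORKSPACE_ID
              value: ws_example     # illustrative value
            - name: UNKEY_DEPLOYMENT_ID
              value: dep_example    # illustrative value
```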
### Sentinels
Sentinels are infrastructure proxies deployed into the `sentinel` namespace. Each sentinel reconciliation applies the following resources.
- Deployment for the sentinel pods
- ClusterIP Service owned by the Deployment for stable addressing
- PodDisruptionBudget to keep at least one pod available
- Headless gossip service for peer discovery across the environment
- CiliumNetworkPolicy for gossip traffic between sentinel pods
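The PodDisruptionBudget in that set could look like the following sketch; the name and selector labels are assumptions for illustration.

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: sentinel                    # illustrative name
  namespace: sentinel
spec:
  minAvailable: 1                   # keep at least one pod available
  selector:
    matchLabels:
      app.kubernetes.io/managed-by: krane   # selector labels are an assumption
```

`minAvailable: 1` ensures voluntary disruptions (node drains, upgrades) never take the routing layer fully offline in a zone.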

