Container & Cloud-Native Security · Advanced

Detecting Privilege Escalation in Kubernetes Pods

Detect and prevent privilege escalation in Kubernetes pods by monitoring security contexts, capabilities, and syscall patterns with Falco and OPA policies.

4 min read · 6 code examples

Prerequisites

  • Kubernetes cluster v1.25+ (Pod Security Admission support)
  • kubectl with cluster-admin access
  • Falco or similar runtime security tool
  • OPA Gatekeeper or Kyverno for admission policies

Overview

Privilege escalation in Kubernetes occurs when a pod or container gains elevated permissions beyond its intended scope. This includes running as root, using privileged mode, mounting host filesystems, enabling dangerous Linux capabilities, or exploiting kernel vulnerabilities. Detection combines admission control (prevention), runtime monitoring (detection), and audit logging (investigation).

Privilege Escalation Vectors in Kubernetes

| Vector | Risk | Detection Method |
| --- | --- | --- |
| privileged: true | Full host access | Admission control + audit |
| hostPID: true | Access host processes | Admission control |
| hostNetwork: true | Access host network stack | Admission control |
| hostPath volumes | Read/write host filesystem | Admission control |
| SYS_ADMIN capability | Near-privileged access | Admission + runtime |
| allowPrivilegeEscalation: true | setuid/setgid exploitation | Admission control |
| runAsUser: 0 | Container root | Admission control |
| automountServiceAccountToken | Token theft for API access | Admission control |
| Writable /proc or /sys | Kernel parameter manipulation | Runtime monitoring |
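
As a test fixture for the admission controls that follow, here is a deliberately non-compliant pod manifest (hypothetical; apply only in a disposable test namespace) that trips several rows of the table:

```yaml
# privesc-test-pod.yaml — intentionally violates multiple controls.
# Use it to verify that your admission policies REJECT it; never run in production.
apiVersion: v1
kind: Pod
metadata:
  name: privesc-test
spec:
  hostPID: true                      # host PID namespace
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "3600"]
      securityContext:
        privileged: true             # full host access
        runAsUser: 0                 # container root
        allowPrivilegeEscalation: true
        capabilities:
          add: ["SYS_ADMIN"]         # near-privileged capability
      volumeMounts:
        - name: host-root
          mountPath: /host
  volumes:
    - name: host-root
      hostPath:
        path: /                      # host filesystem mount
```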

Detection with Admission Control

Pod Security Admission (Built-in)

# Enforce restricted policy on namespace
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted

OPA Gatekeeper Policies

# Block dangerous capabilities
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sdangerouspriv
spec:
  crd:
    spec:
      names:
        kind: K8sDangerousPriv
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdangerouspriv

        dangerous_caps := {"SYS_ADMIN", "SYS_PTRACE", "SYS_MODULE", "DAC_OVERRIDE", "NET_ADMIN", "NET_RAW"}

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          cap := container.securityContext.capabilities.add[_]
          dangerous_caps[cap]
          msg := sprintf("Container %v adds dangerous capability: %v", [container.name, cap])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          container.securityContext.privileged == true
          msg := sprintf("Container %v runs in privileged mode", [container.name])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          container.securityContext.allowPrivilegeEscalation == true
          msg := sprintf("Container %v allows privilege escalation", [container.name])
        }

        violation[{"msg": msg}] {
          input.review.object.spec.hostPID == true
          msg := "Pod uses host PID namespace"
        }

        violation[{"msg": msg}] {
          input.review.object.spec.hostNetwork == true
          msg := "Pod uses host network"
        }

Runtime Detection with Falco

# /etc/falco/rules.d/privesc-detection.yaml
- rule: Setuid Binary Execution in Container
  desc: Detect execution of setuid/setgid binaries in a container
  condition: >
    spawned_process and container and
    (proc.name in (su, sudo, newgrp, chsh, passwd) or
     proc.is_exe_upper_layer=true)
  output: >
    Setuid/setgid binary executed in container
    (user=%user.name container=%container.name image=%container.image.repository
     command=%proc.cmdline parent=%proc.pname)
  priority: WARNING
  tags: [container, privilege-escalation, T1548]

- rule: Capability Gained in Container
  desc: Detect when a process gains elevated capabilities
  condition: >
    evt.type = capset and container and
    evt.arg.cap_effective != ""
  output: >
    Process gained capabilities in container
    (container=%container.name image=%container.image.repository
     capabilities=%evt.arg.cap_effective command=%proc.cmdline)
  priority: WARNING
  tags: [container, privilege-escalation, T1548.001]

- rule: Container with Dangerous Capabilities Started
  desc: Detect container launched with dangerous capabilities
  condition: >
    container_started and container and
    (container.image.repository != "registry.k8s.io/pause") and
    (thread.cap_effective contains SYS_ADMIN or
     thread.cap_effective contains SYS_PTRACE or
     thread.cap_effective contains SYS_MODULE)
  output: >
    Container with dangerous capabilities
    (container=%container.name image=%container.image.repository
     caps=%thread.cap_effective)
  priority: CRITICAL
  tags: [container, privilege-escalation, T1068]

- rule: Write to /etc/passwd in Container
  desc: Detect writes to /etc/passwd inside container
  condition: >
    open_write and container and fd.name = /etc/passwd
  output: >
    Write to /etc/passwd in container
    (container=%container.name image=%container.image.repository
     command=%proc.cmdline user=%user.name)
  priority: CRITICAL
  tags: [container, privilege-escalation, T1136]

Kubernetes Audit Log Detection

# audit-policy.yaml - Capture privilege escalation events
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log pod creation with security context details
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods"]
    verbs: ["create", "update", "patch"]

  # Log privilege escalation attempts
  - level: RequestResponse
    resources:
      - group: "rbac.authorization.k8s.io"
        resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
    verbs: ["create", "update", "patch", "bind", "escalate"]

  # Log service account token requests
  - level: Metadata
    resources:
      - group: ""
        resources: ["serviceaccounts/token"]
    verbs: ["create"]

Query Audit Logs for Privilege Escalation

# Find pods created with a privileged security context.
# Audit events are written to the file set by --audit-log-path on the API
# server (one JSON event per line); adjust the path for your cluster.
jq 'select(.verb == "create" and .objectRef.resource == "pods") |
  select(.requestObject.spec.containers[].securityContext.privileged == true)' \
  /var/log/kubernetes/audit.log

# Find RBAC escalation attempts
jq 'select(.objectRef.resource == "clusterrolebindings" and .verb == "create")' \
  /var/log/kubernetes/audit.log
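
For offline triage of an exported audit log, the same filters can be expressed as a short script. A sketch assuming the standard JSON-lines audit log format (one JSON event per line); the function name is illustrative:

```python
import json

def privileged_pod_events(lines):
    """Yield audit events where a created/updated pod requests privileged mode."""
    for line in lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines (e.g. log rotation markers)
        if event.get("verb") not in ("create", "update", "patch"):
            continue
        if event.get("objectRef", {}).get("resource") != "pods":
            continue
        # requestObject is absent at Metadata audit level, hence the guards
        containers = (((event.get("requestObject") or {})
                       .get("spec") or {})
                      .get("containers") or [])
        if any((c.get("securityContext") or {}).get("privileged")
               for c in containers):
            yield event
```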

Investigation Playbook

# Check pod security context
kubectl get pod <pod-name> -n <ns> -o jsonpath='{.spec.containers[*].securityContext}'

# Check effective capabilities
kubectl exec <pod-name> -n <ns> -- cat /proc/1/status | grep -i cap

# List pods running as root
kubectl get pods --all-namespaces -o json | \
  jq '.items[] | select(.spec.containers[].securityContext.runAsUser == 0 or .spec.containers[].securityContext.privileged == true) | {name: .metadata.name, ns: .metadata.namespace}'

# Check for hostPath volumes
kubectl get pods --all-namespaces -o json | \
  jq '.items[] | select(.spec.volumes[]?.hostPath != null) | {name: .metadata.name, ns: .metadata.namespace, paths: [.spec.volumes[].hostPath.path]}'
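
The CapEff value printed by the /proc/1/status check above is a hex bitmask. A small helper can translate it into capability names — the bit numbers follow linux/capability.h, and only a security-relevant subset is mapped here:

```python
# Decode the hex CapEff bitmask from /proc/<pid>/status into capability names.
# Partial map; bit positions are from linux/capability.h.
CAP_NAMES = {
    0: "CHOWN", 1: "DAC_OVERRIDE", 12: "NET_ADMIN", 13: "NET_RAW",
    16: "SYS_MODULE", 19: "SYS_PTRACE", 21: "SYS_ADMIN",
}

def decode_capeff(hex_mask: str) -> set[str]:
    """Return the mapped capabilities set in a CapEff hex string."""
    mask = int(hex_mask, 16)
    return {name for bit, name in CAP_NAMES.items() if mask & (1 << bit)}
```

For example, a typical default (non-privileged) container mask such as `00000000a80425fb` decodes without SYS_ADMIN, while an all-ones mask indicates a privileged container.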

Best Practices

  1. Enable Pod Security Admission at restricted level for production namespaces
  2. Drop ALL capabilities and add back only what is needed
  3. Set allowPrivilegeEscalation: false on all containers
  4. Run as non-root (runAsNonRoot: true, runAsUser > 0)
  5. Disable automountServiceAccountToken unless API access is needed
  6. Monitor with Falco for runtime privilege escalation attempts
  7. Audit RBAC changes with Kubernetes audit logging
  8. Use seccomp profiles to restrict syscalls
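
Several of the items above combine into a pod template like the following — a minimal sketch that satisfies the restricted Pod Security Standard (image name and UID are placeholders):

```yaml
# Hardened pod fragment: non-root, no escalation, all capabilities dropped,
# default seccomp profile, no auto-mounted service account token.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  automountServiceAccountToken: false
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001               # placeholder non-root UID
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: myapp:1.0             # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```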

Verification Criteria

Confirm successful execution by validating:

  • [ ] All prerequisite tools and access requirements are satisfied
  • [ ] Each workflow step completed without errors
  • [ ] Output matches expected format and contains expected data
  • [ ] No security warnings or misconfigurations detected
  • [ ] Results are documented and evidence is preserved for audit

Compliance Framework Mapping

This skill supports compliance evidence collection across multiple frameworks:

  • SOC 2: CC6.1 (Logical Access), CC7.1 (Monitoring), CC8.1 (Change Management)
  • ISO 27001: A.14.2 (Secure Development), A.12.6 (Technical Vulnerability Mgmt)
  • NIST 800-53: CM-7 (Least Functionality), SI-2 (Flaw Remediation), SC-28 (Protection at Rest)
  • NIST CSF: PR.IP (Information Protection), PR.DS (Data Security)

Claw GRC Tip: When this skill is executed by a registered agent, compliance evidence is automatically captured and mapped to the relevant controls in your active frameworks.

Deploying This Skill with Claw GRC

Agent Execution

Register this skill with your Claw GRC agent for automated execution:

# Install via CLI
npx claw-grc skills add detecting-privilege-escalation-in-kubernetes-pods

# Or load dynamically via MCP
grc.load_skill("detecting-privilege-escalation-in-kubernetes-pods")

Audit Trail Integration

When executed through Claw GRC, every step of this skill generates tamper-evident audit records:

  • SHA-256 chain hashing ensures no step can be modified after execution
  • Evidence artifacts (configs, scan results, logs) are automatically attached to relevant controls
  • Trust score impact — successful execution increases your agent's trust score

Continuous Compliance

Schedule this skill for recurring execution to maintain continuous compliance posture. Claw GRC monitors for drift and alerts when re-execution is needed.

Tags

kubernetes · privilege-escalation · security-context · capabilities · detection · pod-security

Related Skills

All in Container & Cloud-Native Security:

  • Detecting Container Escape with Falco Rules (5 min · advanced)
  • Implementing Kubernetes Pod Security Standards (3 min · intermediate)
  • Detecting Container Escape Attempts (5 min · advanced)
  • Implementing Runtime Security with Tetragon (3 min · advanced)
  • Performing Kubernetes Penetration Testing (4 min · advanced)
  • Scanning Kubernetes Manifests with Kubesec (4 min · advanced)

