Docker Interview Questions 2026 — Top 40 with Expert Answers
Docker engineers at product companies command ₹15-35 LPA, and senior container/DevOps specialists at Flipkart, Razorpay, and Swiggy pull ₹40-60 LPA. The difference between getting screened out and landing that offer often comes down to how deeply you understand containers — not just docker run, but image layers, security scanning, multi-stage builds, and production networking. Container roles grew 280% in India since 2023, and demand still outpaces supply.
This guide covers 40 battle-tested Docker interview questions compiled from 150+ real interviews at Flipkart, Razorpay, PhonePe, Zerodha, CRED, and Amazon India. Every answer includes working commands and production-ready examples.
Related: Kubernetes Interview Questions 2026 | Microservices Interview Questions 2026 | Golang Interview Questions 2026
Beginner-Level Docker Questions (Q1–Q12)
Q1. What is Docker? How is it different from a virtual machine?
Docker vs. Virtual Machine:
| Feature | Docker Container | Virtual Machine |
|---|---|---|
| OS | Shares host kernel | Full guest OS per VM |
| Size | Megabytes (image) | Gigabytes |
| Startup time | Milliseconds | Minutes |
| Isolation | Process-level (namespaces + cgroups) | Hardware-level (hypervisor) |
| Overhead | Minimal | Significant |
| Security isolation | Lower (shared kernel) | Higher (separate kernel) |
| Portability | Excellent | Good |
| Use case | Microservices, CI/CD | Legacy apps, different OS |
Containers use Linux namespaces (isolate PID, network, filesystem, users) and cgroups (limit CPU, memory, I/O) to achieve isolation without a full hypervisor layer.
Asked at virtually every DevOps interview
Q2. Explain the Docker architecture — engine, daemon, client, registry.
Developer Machine
├── Docker CLI (client) ──[REST API]──→ Docker Daemon (dockerd)
│ ├── containerd (runtime)
│ │ └── runc (OCI runtime)
│ ├── Image cache
│ └── Network/Volume management
└── docker pull/push ──→ Container Registry (Docker Hub, ECR, GCR, private)
- Docker Client (docker CLI): Sends API calls to the daemon
- Docker Daemon (dockerd): Long-running background process that manages images, containers, networks, volumes
- containerd: High-level runtime that manages container lifecycle (start, stop, pull images)
- runc: Low-level OCI-compliant runtime that actually creates containers using kernel features
- Registry: Stores and distributes Docker images (Docker Hub, AWS ECR, GCR, Harbor)
The client and daemon can run on the same host or the client can connect to a remote daemon via TCP.
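A minimal sketch of working against a remote daemon — the hostname and user here are hypothetical. SSH transport avoids exposing the daemon's TCP port at all; if you do use TCP, require TLS.

```shell
# One-off: point the CLI at a remote daemon over SSH
DOCKER_HOST=ssh://deploy@build-server.internal docker ps

# Persistent: create a named context and switch to it
docker context create build-server --docker "host=ssh://deploy@build-server.internal"
docker context use build-server
docker ps                    # now talks to the remote dockerd
docker context use default   # switch back to the local daemon
```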
Q3. What is a Docker image vs. a Docker container?
| Concept | Definition | Analogy |
|---|---|---|
| Image | Read-only template with filesystem layers and metadata | Class definition |
| Container | Running instance of an image (adds writable layer on top) | Object/instance |
| Dockerfile | Instructions to build an image | Blueprint |
| Registry | Storage for images | Library/warehouse |
An image is immutable. Multiple containers can run from the same image. Each container gets its own writable layer (Copy-on-Write) — changes to files create new copies in the writable layer, the image layers below are never modified.
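A quick sketch to see the per-container writable layer in action (container names are arbitrary): two containers from the same image cannot see each other's writes, and the layers are discarded when the containers are removed.

```shell
# Start two containers from the same image
docker run -d --name c1 alpine:3.20 sleep 300
docker run -d --name c2 alpine:3.20 sleep 300

docker exec c1 sh -c 'echo hello > /tmp/file.txt'
docker exec c1 cat /tmp/file.txt    # hello — lives in c1's writable layer
docker exec c2 cat /tmp/file.txt    # error: No such file — c2 has its own layer

docker rm -f c1 c2                  # both writable layers are discarded
```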
# Image commands
docker images # List local images
docker pull nginx:1.25 # Pull from registry
docker build -t myapp:v1 . # Build from Dockerfile
docker rmi nginx:1.25 # Remove image
# Container commands
docker run -d -p 8080:80 nginx:1.25 # Create + start container
docker ps # List running containers
docker stop <container_id> # Graceful stop (SIGTERM)
docker kill <container_id> # Forceful kill (SIGKILL)
docker rm <container_id> # Remove stopped container
Q4. Write a Dockerfile for a Node.js application. Apply best practices.
# Use specific version tag, not 'latest'
FROM node:20-alpine
# Set working directory
WORKDIR /app
# Copy package files FIRST (for better layer caching)
# If package.json doesn't change, npm install layer is cached
COPY package.json package-lock.json ./
# Install production dependencies only
RUN npm ci --omit=dev
# Copy application code (changes more frequently)
COPY src/ ./src/
# Run as non-root user for security
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
# Document the port (informational, doesn't publish)
EXPOSE 3000
# Use exec form (not shell form) to receive SIGTERM directly
CMD ["node", "src/index.js"]
Best practices applied:
- Specific base image tag (not latest) — reproducible builds
- Alpine variant — smaller image (~5MB base vs ~900MB for full Debian)
- Dependencies copied before source — Docker layer cache reused when only code changes
- npm ci instead of npm install — deterministic, fails on lockfile mismatch
- --omit=dev — no devDependencies in production image
- Non-root user — container compromise doesn't give root on host
- Exec form CMD — process receives SIGTERM properly, no shell wrapper
Q5. What is a multi-stage Docker build? Show an example.
# Stage 1: Builder (has full Go toolchain ~800MB)
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Build statically linked binary (no runtime dependencies needed)
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o server ./cmd/server
# Stage 2: Final image (just the binary + minimal OS)
FROM gcr.io/distroless/static:nonroot
# Copy only the compiled binary from the builder stage
COPY --from=builder /app/server /server
USER nonroot:nonroot
ENTRYPOINT ["/server"]
Size comparison:
- Single-stage with Go toolchain: ~800 MB
- Multi-stage with Alpine: ~15 MB
- Multi-stage with distroless: ~6 MB
Distroless images (Google) contain only the application and its runtime dependencies — no shell, no package manager, minimal attack surface.
Asked at Flipkart, Swiggy, Razorpay, Zerodha platform interviews
Q6. What are Docker layers? How does the layer cache work?
Layer cache rules:
- If the instruction and its inputs are identical to a previous build → reuse cached layer
- If any instruction is invalidated → all subsequent layers are rebuilt
- COPY/ADD invalidation: if file content changes (by checksum), the layer is rebuilt
Optimal Dockerfile layer ordering:
# Rarely changes → stable base
FROM python:3.12-slim
# Changes occasionally → system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*
# Changes rarely → Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Changes often → application code
COPY app/ ./app/
CMD ["python", "-m", "uvicorn", "app.main:app"]
By placing requirements.txt copy before application code, Docker reuses the pip install layer across code-only changes — saving minutes in CI/CD pipelines.
Q7. How do Docker volumes differ from bind mounts and tmpfs?
| Type | Host Path | Performance | Use Case |
|---|---|---|---|
| Volume | /var/lib/docker/volumes/ | Best (Docker manages) | Production data persistence |
| Bind Mount | Any host path | Good | Development (live code sync) |
| tmpfs | Memory only | Fastest | Temp files, secrets in memory |
# Volume (recommended for production)
docker run -v my-volume:/app/data postgres:15
# Bind mount (development)
docker run -v /home/user/code:/app -p 3000:3000 myapp:dev
# tmpfs (in-memory, not persisted)
docker run --tmpfs /run/secrets:rw,noexec,nosuid,size=100m myapp
Volume advantages over bind mounts:
- Managed by Docker — consistent behavior across OS
- Easy backup/migration with docker volume ls, docker volume inspect
- Works with volume drivers (cloud storage plugins like AWS EFS)
- Not tied to host directory structure
- Better performance on Linux (no inotify overhead for bind mounts)
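The backup point can be sketched with a throwaway container — a common pattern, with the volume names purely illustrative:

```shell
# Back up a named volume: mount it read-only into a temporary container
# and tar its contents into the current directory
docker run --rm \
  -v my-volume:/data:ro \
  -v "$(pwd)":/backup \
  alpine:3.20 tar czf /backup/my-volume.tar.gz -C /data .

# Restore into a (new) volume
docker run --rm \
  -v my-volume-restored:/data \
  -v "$(pwd)":/backup \
  alpine:3.20 tar xzf /backup/my-volume.tar.gz -C /data
```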
Q8. How does Docker networking work? Explain network drivers.
| Driver | Description | Use Case |
|---|---|---|
| bridge | Default for containers on same host — virtual switch, NAT | Single-host container communication |
| host | Container shares host network namespace (no isolation) | High-performance, simple networking |
| none | No network connectivity | Maximum isolation, batch processing |
| overlay | Multi-host networking (Docker Swarm, distributed) | Multi-host clusters |
| macvlan | Container gets its own MAC address, appears as physical device | Legacy apps that need direct network access |
Bridge network (default):
# Create custom bridge network (preferred over default)
docker network create myapp-network --driver bridge
# Connect containers via custom network (DNS works by container name)
docker run -d --name postgres --network myapp-network postgres:15
docker run -d --name app --network myapp-network myapp:latest
# 'app' can reach 'postgres' by hostname: psql -h postgres
Custom bridge networks provide automatic DNS resolution by container name. The default bridge network (docker0) does NOT provide DNS — containers must use IP addresses.
Q9. What is Docker Compose? Write a Compose file for a web app with database.
# docker-compose.yml
version: '3.9'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgresql://user:password@postgres:5432/mydb
      - REDIS_URL=redis://redis:6379
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    restart: unless-stopped
    networks:
      - backend
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydb
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend
  redis:
    image: redis:7-alpine
    command: redis-server --requirepass redispassword
    volumes:
      - redis-data:/data
    networks:
      - backend
volumes:
  postgres-data:
  redis-data:
networks:
  backend:
    driver: bridge
docker compose up -d # Start in background
docker compose logs -f app # Follow app logs
docker compose down -v # Stop and remove volumes
docker compose up -d --scale app=3 # Scale app to 3 instances (the standalone 'scale' subcommand was removed in Compose v2)
Q10. What is the difference between COPY and ADD in Dockerfile?
| Feature | COPY | ADD |
|---|---|---|
| Basic file copy | Yes | Yes |
| Auto-extract tar archives | No | Yes |
| Download from URL | No | Yes |
| Predictability | High | Lower (magic behavior) |
| Best practice | Preferred | Use only for tar extraction |
# Good: use COPY for simple file operations
COPY requirements.txt .
COPY src/ ./src/
# Use ADD only when you need tar extraction
ADD app-data.tar.gz /app/data/
# Don't use ADD to download URLs — use RUN wget or COPY --from
# Bad: ADD https://example.com/file.tar.gz /app/
# Good:
RUN wget -q https://example.com/file.tar.gz && \
tar xzf file.tar.gz && \
rm file.tar.gz
ADD from URL doesn't cache properly and introduces non-determinism. Use COPY by default; only reach for ADD when you specifically need tar auto-extraction.
Q11. What is the difference between CMD and ENTRYPOINT?
| Feature | CMD | ENTRYPOINT |
|---|---|---|
| Purpose | Default arguments | Main executable |
| Overridable | Yes (docker run image mycommand) | Harder (docker run --entrypoint) |
| Interaction | CMD args are appended to ENTRYPOINT | ENTRYPOINT runs as main process |
| Forms | Shell form or exec form | Shell form or exec form |
# Pattern 1: ENTRYPOINT as executable, CMD as default args
ENTRYPOINT ["nginx"]
CMD ["-g", "daemon off;"]
# Result: nginx -g "daemon off;"
# Override: docker run myimage -c /etc/nginx/nginx.conf
# Pattern 2: Only CMD (common for simple apps)
CMD ["python", "app.py"]
# Override: docker run myimage python debug.py
# Shell form vs Exec form
CMD python app.py # Shell form: /bin/sh -c "python app.py"
CMD ["python", "app.py"] # Exec form: directly executes python
Always use exec form ["executable", "arg1"] — shell form wraps the command in /bin/sh -c, making the shell PID 1. SIGTERM then goes to the shell, not your application, so graceful shutdown doesn't work.
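You can observe the difference directly: wrapping a process in sh -c simulates shell form, and docker stop then waits out its 10-second grace period before sending SIGKILL. A sketch:

```shell
# Exec form: nginx is PID 1, receives SIGTERM, exits quickly
docker run -d --name direct nginx:1.25
time docker stop direct      # returns almost immediately

# Simulated shell form: sh is PID 1 and does not forward SIGTERM to nginx
docker run -d --name wrapped nginx:1.25 sh -c 'nginx -g "daemon off;"'
time docker stop wrapped     # ~10s — Docker times out and sends SIGKILL

docker rm direct wrapped
```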
Q12. How do you reduce Docker image size?
- Use Alpine or Distroless base images: python:3.12-slim (~130MB) vs python:3.12 (~1GB)
- Multi-stage builds: Discard build tools, include only runtime artifacts
- Minimize layers: Chain RUN commands with &&; always clean up in the same layer:
RUN apt-get update && \
apt-get install -y --no-install-recommends curl && \
rm -rf /var/lib/apt/lists/*
# Clean MUST be in same RUN — separate layer cleanup doesn't work
- .dockerignore: Exclude files from build context (node_modules, .git, tests, docs):
node_modules
.git
*.test.js
docs/
README.md
.env
- Avoid ADDing unnecessary files: Be specific with COPY
- Use --no-cache flags: pip install --no-cache-dir, npm ci
- Squash layers (advanced): docker build --squash (experimental) or FROM scratch + COPY --from=builder
Size comparison for a Python API:
- Full Debian + all tools: ~1.2 GB
- python:3.12-slim + app: ~200 MB
- Multi-stage + distroless: ~60 MB
Checkpoint: If you can confidently answer Q1-Q12, you've already beaten 70% of candidates in Docker screening rounds. Now let's separate you from the pack.
Essential Intermediate Docker Questions (Q13–Q28)
Q13. What is Docker BuildKit? What features does it add?
- Parallel builds: Independent build stages execute concurrently
- Cache mounts: Mount build-time caches (pip cache, npm cache, apt cache) that persist between builds:
# syntax=docker/dockerfile:1
FROM python:3.12-slim
# Cache pip downloads between builds (not stored in image layer)
RUN --mount=type=cache,target=/root/.cache/pip \
pip install -r requirements.txt
- Secret mounts: Pass secrets without baking them into layers:
# Secret is available only during this RUN; not in final image
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc \
npm install
docker build --secret id=npmrc,src=$HOME/.npmrc .
- SSH forwarding: Use host SSH keys without copying them into image
- Build metadata: Better provenance tracking, SBOM generation
Enable: DOCKER_BUILDKIT=1 docker build . or docker buildx build . (BuildKit has been the default builder since Docker Engine 23.0)
Q14. How do you pass environment variables to Docker containers securely?
- -e flag (avoid for secrets — visible in process list and docker inspect):
docker run -e DATABASE_URL=postgres://... myapp
- --env-file (better — not in process list, but file on disk):
docker run --env-file .env myapp
# .env file: DATABASE_URL=postgres://...
- Docker secrets (Docker Swarm mode — mounts as file in /run/secrets/):
echo "mysecretpassword" | docker secret create db_password -
docker service create --secret db_password myapp
# In container: cat /run/secrets/db_password
- Build-time secrets (BuildKit): For secrets needed only during build (npm tokens, pip auth)
- Runtime secret injection (production best practice):
- AWS Secrets Manager + ECS task definition (auto-injected from SSM/Secrets Manager)
- HashiCorp Vault Agent sidecar
- Kubernetes External Secrets Operator
Never: bake secrets into Docker images (even via build args — they're visible in image history with docker history --no-trunc)
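The build-arg leak is easy to demonstrate — this deliberately bad Dockerfile and token are illustrative only:

```shell
# Anti-example: a build-arg "secret" ends up in image metadata and layers
cat > Dockerfile.leaky <<'EOF'
FROM alpine:3.20
ARG NPM_TOKEN
RUN echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > /root/.npmrc
EOF
docker build -f Dockerfile.leaky --build-arg NPM_TOKEN=supersecret -t leaky .

# The token is recoverable from the recorded build instructions
docker history --no-trunc leaky | grep supersecret
```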
Q15. What is Docker Swarm? How does it compare to Kubernetes?
| Feature | Docker Swarm | Kubernetes |
|---|---|---|
| Setup complexity | Low (built into Docker) | High |
| Learning curve | Gentle | Steep |
| Scaling | Good | Excellent |
| Ecosystem | Limited | Massive (CNCF, Helm) |
| Service mesh | None native | Istio, Linkerd, Cilium |
| Auto-scaling | No HPA equivalent | HPA, VPA, KEDA |
| Rolling updates | Yes (simpler) | Yes (more control) |
| Network features | Overlay, ingress mesh | CNI plugins, NetworkPolicy |
| Production adoption | Declining | Industry standard |
When to choose Swarm (still valid in 2026):
- Small teams, simple applications
- Already using Docker Compose (Swarm mode is compatible)
- Docker expertise but no K8s knowledge on team
- Single-host or small multi-host deployments
For anything at scale or with a dedicated platform team, Kubernetes is the right choice.
Q16. How does Docker handle container networking with --network host vs bridge?
Bridge mode (default):
- Container gets its own network namespace and IP (e.g., 172.17.0.2)
- Port publishing: docker run -p 8080:80 → NAT rule maps host port 8080 to container port 80
- Overhead: NAT translation for each packet
Host mode:
- Container shares host's network namespace entirely
- Container's port IS the host's port — no NAT
- docker run --network host -p 80:80 is redundant — container already uses port 80 on host
- ~10% better network performance (no NAT overhead)
- Security risk: container can access any host network interface
# Host mode: container's nginx binds directly to host port 80
docker run -d --network host nginx
# Verify: no NAT
docker run --network host alpine ip addr # Shows host's interfaces
Use host mode for: High-performance networking (database proxies, load balancers), when NAT overhead matters at scale. Avoid in multi-container environments where port conflicts occur.
Q17. How do you scan Docker images for vulnerabilities?
Trivy (most popular, 2026):
# Install (Trivy ships via Aqua Security's own apt repository, not the default Debian/Ubuntu repos)
apt-get install trivy
# Scan image (checks OS packages + language libraries)
trivy image myapp:v1.2
# Scan with SBOM generation
trivy image --format cyclonedx --output sbom.json myapp:v1.2
# Scan Dockerfile itself for misconfigurations
trivy config Dockerfile
# In CI/CD — fail build if CRITICAL vulns found
trivy image --exit-code 1 --severity CRITICAL myapp:v1.2
Other tools:
# Grype (Anchore)
grype myapp:v1.2
# Docker Scout (built into Docker Desktop/Hub)
docker scout cves myapp:v1.2
docker scout recommendations myapp:v1.2
# Snyk
snyk container test myapp:v1.2
Best practices:
- Scan in CI before pushing to registry
- Scan images already in registry (scheduled scans — new CVEs discovered daily)
- Use distroless/scratch base images — fewer OS packages = fewer CVEs
- Update base images weekly (subscribe to security advisories)
- Enable Docker Hub automated security scanning
- Enforce minimum severity policy in CI (block on CRITICAL/HIGH)
Q18. What is a distroless image? When should you use it?
Available variants:
- gcr.io/distroless/static:nonroot — static binaries only (Go, Rust)
- gcr.io/distroless/base:nonroot — glibc, libssl (C/C++ apps)
- gcr.io/distroless/java17:nonroot — JRE only
- gcr.io/distroless/python3:nonroot — Python runtime
Security benefits:
- No shell → no shell injection, no interactive debugging by attackers
- Fewer packages → smaller CVE attack surface
- Runs as nonroot by default
Debugging challenge:
# No shell in distroless → can't docker exec -it container sh
# Use ephemeral debug containers in K8s:
kubectl debug -it <pod> --image=busybox --target=<container>
# Or use Docker override for local debugging
docker run -it --entrypoint /busybox/sh gcr.io/distroless/static:debug
Q19. Explain Docker resource limits. How do you prevent a container from consuming all host memory?
# Limit memory (hard limit — container is OOM killed if exceeded)
docker run -m 512m myapp
# Memory + swap
docker run -m 512m --memory-swap 1g myapp # 512MB RAM + 512MB swap
# CPU limit (0.5 = 50% of one CPU core)
docker run --cpus="0.5" myapp
# CPU shares (relative weight, not hard limit)
docker run --cpu-shares 512 myapp # Default is 1024
# In Docker Compose:
services:
  app:
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
What happens without limits: A runaway container (memory leak, infinite loop) can consume all host memory/CPU, taking down other containers and the host OS itself. Always set limits in production.
Relationship to Kubernetes: K8s resource requests and limits map directly to these cgroups settings at the container level. Kubernetes enforces them via containerd.
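The OOM behavior is easy to reproduce with a deliberately small limit — a sketch, with the allocation size chosen to exceed it:

```shell
# 64MB limit, no extra swap; try to allocate 256MB inside
docker run --name oomtest -m 64m --memory-swap 64m python:3.12-slim \
  python -c "x = bytearray(256 * 1024 * 1024)" || echo "exit code $?"

# Docker records the OOM kill on the container
docker inspect oomtest --format='{{.State.OOMKilled}}'   # true
docker rm oomtest
```

An OOM-killed container exits with code 137 (128 + 9, i.e. SIGKILL).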
Q20. How do you implement health checks in Docker?
# In Dockerfile
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
CMD curl -f http://localhost:8080/health || exit 1
Parameters:
- --interval: Time between health check runs (default 30s)
- --timeout: Time to wait for a response (default 30s)
- --start-period: Grace period after container starts before failures count (for slow startup)
- --retries: Consecutive failures before marking unhealthy (default 3)
# Check container health status
docker inspect --format='{{.State.Health.Status}}' <container>
docker inspect --format='{{json .State.Health}}' <container> | jq
Health states: starting → healthy → unhealthy
Docker doesn't restart unhealthy containers by default — it just marks them. Docker Swarm DOES restart unhealthy service containers. Kubernetes uses its own readiness/liveness probes independently of Docker HEALTHCHECK.
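A deploy script can use the health status as a gate before switching traffic — a sketch, with the container name illustrative:

```shell
# Wait up to ~60s for the container to report healthy
for i in $(seq 1 30); do
  status=$(docker inspect --format='{{.State.Health.Status}}' mycontainer)
  [ "$status" = "healthy" ] && break
  echo "waiting for health check... (current: $status)"
  sleep 2
done
[ "$status" = "healthy" ] || { echo "container never became healthy"; exit 1; }
```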
Q21. What is the .dockerignore file? What should always be in it?
# Version control
.git
.gitignore
# Dependencies (rebuilt inside container)
node_modules
vendor/
__pycache__
*.pyc
# Test and development files
tests/
*.test.js
*.spec.ts
coverage/
.pytest_cache/
# Environment and secrets
.env
.env.*
*.key
*.pem
*.p12
secrets/
credentials/
# Documentation
README.md
docs/
*.md
# CI/CD configs (not needed in runtime image)
.github/
Jenkinsfile
.travis.yml
# Build artifacts (in multi-stage builds, use --from)
dist/
build/
*.tar.gz
# Editor files
.vscode/
.idea/
*.swp
# OS files
.DS_Store
Thumbs.db
Without .dockerignore, Docker sends everything in the build directory to the daemon — including node_modules (often 500MB+), which is wasteful even though it's never used (overwritten by npm install inside the container).
Q22. How do you push a Docker image to AWS ECR?
# 1. Authenticate Docker to ECR
aws ecr get-login-password --region ap-south-1 | \
docker login --username AWS --password-stdin \
123456789.dkr.ecr.ap-south-1.amazonaws.com
# 2. Create ECR repository (one-time)
aws ecr create-repository \
--repository-name myapp \
--region ap-south-1 \
--image-scanning-configuration scanOnPush=true \
--encryption-configuration encryptionType=AES256
# 3. Tag the image with ECR URI
docker tag myapp:v1.2 123456789.dkr.ecr.ap-south-1.amazonaws.com/myapp:v1.2
docker tag myapp:v1.2 123456789.dkr.ecr.ap-south-1.amazonaws.com/myapp:latest
# 4. Push
docker push 123456789.dkr.ecr.ap-south-1.amazonaws.com/myapp:v1.2
docker push 123456789.dkr.ecr.ap-south-1.amazonaws.com/myapp:latest
# 5. Lifecycle policy — keep only last 10 images
aws ecr put-lifecycle-policy \
--repository-name myapp \
--lifecycle-policy-text '{"rules":[{"rulePriority":1,"selection":{"tagStatus":"any","countType":"imageCountMoreThan","countNumber":10},"action":{"type":"expire"}}]}'
In CI/CD (GitHub Actions), use OIDC to authenticate to AWS without storing access keys — aws-actions/configure-aws-credentials + OIDC provider.
Q23. How do you run Docker containers as non-root?
In Dockerfile:
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Create non-root user
RUN addgroup -S appgroup && \
adduser -S appuser -G appgroup && \
chown -R appuser:appgroup /app
USER appuser
CMD ["node", "server.js"]
At runtime:
# Override to run as user ID 1001
docker run --user 1001:1001 myapp
# Check what user the container runs as
docker exec <container> id
Why it matters: If a container runs as root and an attacker exploits the application, they get root inside the container. While Docker's namespaces provide some isolation, root in a container can still:
- Mount host filesystems (if volumes are bind-mounted)
- Escape via kernel vulnerabilities
- Write to any bind-mounted host directory
Running as non-root means even a compromised container has limited privileges.
In Kubernetes: Set securityContext.runAsNonRoot: true and securityContext.runAsUser: 1001.
Q24. What is Docker content trust (DCT)? How does image signing work?
# Enable DCT
export DOCKER_CONTENT_TRUST=1
# Push a signed image (prompts for passphrase)
docker push myregistry.com/myapp:v1.0
# Pull — verifies signature
docker pull myregistry.com/myapp:v1.0
Modern alternative — Sigstore/cosign:
# Sign image with cosign (keyless, uses OIDC identity)
cosign sign ghcr.io/myorg/myapp:v1.0
# Verify
cosign verify ghcr.io/myorg/myapp:v1.0 \
--certificate-identity="https://github.com/myorg/myapp/.github/workflows/build.yml@refs/heads/main" \
--certificate-oidc-issuer="https://token.actions.githubusercontent.com"
Cosign + Kubernetes admission webhook (Kyverno/OPA) = only signed images from trusted registries can run in the cluster.
Q25. How do you optimize Docker builds for CI/CD pipelines?
- BuildKit cache mounts (biggest win for package managers):
RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt
RUN --mount=type=cache,target=/root/.npm npm ci
- Layer ordering: Stable layers (deps) before volatile layers (code)
- Remote cache in CI (GitHub Actions):
- name: Build with cache
  uses: docker/build-push-action@v5
  with:
    cache-from: type=gha
    cache-to: type=gha,mode=max
    tags: myapp:${{ github.sha }}
- Registry cache (pull layers from registry instead of rebuilding):
docker buildx build \
--cache-from type=registry,ref=myregistry.com/myapp:cache \
--cache-to type=registry,ref=myregistry.com/myapp:cache,mode=max \
-t myapp:latest .
- Parallel stage execution: BuildKit automatically parallelizes independent stages
- Use docker buildx bake to build multiple images with shared cache in one command
- Minimize build context: Aggressive .dockerignore — sending a 100MB context to the daemon adds 30+ seconds
Q26. How do you handle Docker container logging in production?
Docker containers write to stdout/stderr by default. Docker captures these with logging drivers.
Logging drivers:
| Driver | Description | Use Case |
|---|---|---|
| json-file (default) | JSON files on disk | Development |
| none | No logging | High-performance batch |
| syslog | Send to syslog daemon | Traditional Linux logging |
| journald | systemd journal | Systemd hosts |
| awslogs | Send to CloudWatch Logs | AWS deployments |
| fluentd | Send to Fluentd | EFK stack |
| splunk | Send to Splunk | Enterprise logging |
# Configure awslogs driver
docker run \
--log-driver=awslogs \
--log-opt awslogs-region=ap-south-1 \
--log-opt awslogs-group=/myapp/production \
--log-opt awslogs-stream=api-server-1 \
myapp:latest
Production pattern: Run container with default json-file logging. Deploy Fluent Bit as a DaemonSet (K8s) or container (Swarm) to tail logs and ship to your central log system (OpenSearch, CloudWatch, Loki).
Log rotation is critical for json-file driver — set max-size and max-file:
/etc/docker/daemon.json (must be pure JSON — no comments allowed in the real file):
{
"log-driver": "json-file",
"log-opts": {
"max-size": "100m",
"max-file": "5"
}
}
Q27. What is Docker Buildx? How do you build multi-architecture images?
# Create a builder with multi-arch support
docker buildx create --name multiarch-builder --use
docker buildx inspect --bootstrap
# Build for linux/amd64 and linux/arm64 (Apple Silicon + AWS Graviton)
docker buildx build \
--platform linux/amd64,linux/arm64,linux/arm/v7 \
--tag myapp:v1.0 \
--push \
.
The result is a multi-arch manifest in the registry. When users pull myapp:v1.0, Docker automatically pulls the image matching their architecture.
Why it matters in 2026:
- AWS Graviton (arm64) instances offer 20-40% better price/performance
- Apple Silicon Macs (arm64) are common developer machines
- IoT/edge devices often run arm64 or armv7
- A single image tag works everywhere
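You can verify what actually landed in the registry — the tag here is illustrative:

```shell
# Show the manifest list and the per-platform images behind one tag
docker buildx imagetools inspect myapp:v1.0
# Output lists one manifest per platform, e.g.
#   Platform: linux/amd64
#   Platform: linux/arm64
#   Platform: linux/arm/v7
```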
Q28. How do you implement a Docker-based CI/CD pipeline with GitHub Actions?
# .github/workflows/build-push.yml
name: Build and Push Docker Image
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      id-token: write # For OIDC AWS auth
      contents: read
      packages: write
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Configure AWS credentials (OIDC — no long-term keys)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789:role/github-actions-ecr
          aws-region: ap-south-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: ${{ github.ref == 'refs/heads/main' }}
          tags: |
            ${{ steps.login-ecr.outputs.registry }}/myapp:${{ github.sha }}
            ${{ steps.login-ecr.outputs.registry }}/myapp:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
          platforms: linux/amd64,linux/arm64
      # Scan after the image exists — scanning before the build step would fail
      - name: Run Trivy security scan
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: '${{ steps.login-ecr.outputs.registry }}/myapp:${{ github.sha }}'
          exit-code: '1'
          severity: 'CRITICAL,HIGH'
If you've nailed Q1-Q28, you're already in the top 20% for Docker knowledge. The advanced section below is what gets you the ₹30+ LPA offers — these are the exact questions that separate senior engineers from mid-level.
Advanced Docker Questions — The Insider Round (Q29–Q40)
Q29. What are Docker namespaces and cgroups? How do they provide isolation?
Namespaces (isolate visibility):
| Namespace | What it isolates |
|---|---|
| pid | Process IDs (container sees only its own processes) |
| net | Network interfaces, IP addresses, routes |
| mnt | Filesystem mount points |
| uts | Hostname and domain name |
| ipc | System V IPC, POSIX message queues |
| user | User and group IDs (user namespaces — UID 0 in container ≠ UID 0 on host) |
| cgroup | cgroup root directory (added in Linux 4.6) |
cgroups v2 (limit resource usage):
- CPU: CFS scheduler — throttle CPU time
- Memory: Hard limit + OOM killer
- I/O: Block I/O throttling
- PIDs: Maximum number of processes
# See cgroup for a running container
docker inspect <container> --format='{{.HostConfig.CgroupParent}}'
# cgroups v2 (modern hosts with the systemd cgroup driver):
cat /sys/fs/cgroup/system.slice/docker-<container_id>.scope/memory.max
# cgroups v1 (legacy hosts):
cat /sys/fs/cgroup/memory/docker/<container_id>/memory.limit_in_bytes
Limitation: All containers share the host kernel. A kernel vulnerability (like Dirty COW, Dirty Pipe) can potentially allow container escape — use gVisor or Kata Containers for hardware-level isolation when security is paramount.
Q30. What is Docker's overlay filesystem? How does copy-on-write work?
Container Layer (writable) — your writes go here
──────────────────────────────────
Image Layer N (read-only) — COPY src/
Image Layer N-1 (read-only) — npm ci
Image Layer N-2 (read-only) — COPY package.json
Image Layer 1 (read-only) — FROM node:20-alpine
Copy-on-Write (CoW):
- Container reads a file → reads directly from the read-only image layer (fast)
- Container modifies a file → file is copied to the writable container layer, modification happens there
- Image layer is never touched
- The original file still exists in the image layer (read by other containers)
Implications:
- Multiple containers sharing the same image layers use minimal additional disk space
- Writing large files in a running container causes CoW overhead
docker commitcreates a new image with the writable layer merged in
# See layer storage
docker inspect <image> --format='{{json .RootFS.Layers}}'
ls /var/lib/docker/overlay2/
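docker diff makes the writable layer visible — it lists what a container has added (A), changed (C), or deleted (D) relative to its image. A sketch:

```shell
docker run -d --name cow-demo alpine:3.20 sleep 300
docker exec cow-demo sh -c 'echo marker > /etc/marker && rm /etc/motd'

docker diff cow-demo
# C /etc
# A /etc/marker
# D /etc/motd

docker rm -f cow-demo
```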
Q31. How do you debug a Docker container that crashes immediately on startup?
# 1. Run with override — replace entrypoint with shell
docker run -it --entrypoint /bin/sh myapp:latest
# For distroless (no shell):
docker run -it --entrypoint /busybox/sh gcr.io/distroless/base:debug
# 2. Check exit code and logs from last run
docker ps -a # Find the stopped container
docker logs <container_id>
# 3. Start container and keep it running despite crash
docker run -d --entrypoint sleep myapp infinity
docker exec -it <container_id> sh
# 4. Inspect environment variables (might reveal missing config)
docker inspect <container_id> --format='{{json .Config.Env}}' | jq
# 5. Check image history for clues
docker history myapp:latest --no-trunc
# 6. Grant ptrace capability so you can strace processes inside the container
docker run --cap-add=SYS_PTRACE myapp
# 7. For K8s — use ephemeral debug container
kubectl debug -it <pod> --image=busybox:1.36 --target=app-container
Common crash causes:
- Missing environment variable (check CMD vs what's actually needed)
- Permission denied on files (check chown in Dockerfile)
- Port already in use on host
- Entrypoint script has Windows line endings (CRLF) — fix with sed -i 's/\r$//' entrypoint.sh
- Missing dependency not in image
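The CRLF failure mode is easy to reproduce and fix without Docker at all — a sketch, where the path /tmp/entrypoint.sh is just for the demo:

```shell
# Simulate the classic failure: an entrypoint script saved with Windows (CRLF) endings
printf '#!/bin/sh\r\necho ok\r\n' > /tmp/entrypoint.sh

# Detect: count lines containing a carriage return (non-zero means CRLF present)
CR=$(printf '\r')
grep -c "$CR" /tmp/entrypoint.sh          # prints 2

# Fix in place, then verify the file is clean
sed -i "s/$CR\$//" /tmp/entrypoint.sh
grep -c "$CR" /tmp/entrypoint.sh || echo "clean"   # prints 0, then "clean"
```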
Q32. What is container runtime security? Explain gVisor and Kata Containers.
gVisor (Google):
- Intercepts all system calls from the container using a user-space kernel called Sentry
- Sentry implements a subset of Linux syscalls in Go
- Even if container code is malicious, it talks to Sentry (not the host kernel)
- Performance overhead: ~10-20%
- Used by: Google Cloud Run, Anthos
Kata Containers:
- Runs containers inside lightweight VMs (QEMU, Firecracker)
- Full hardware-level isolation (separate kernel per container)
- Compatible with OCI runtime interface (containerd can use it)
- Performance overhead: ~5-15% (better than full VMs)
- Used by: AWS Firecracker (Lambda), OpenStack
In Kubernetes:
# Use RuntimeClass to select runtime
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
name: kata
handler: kata-qemu
---
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  runtimeClassName: kata  # This pod runs in Kata Containers
  containers:
    - name: app
      image: myapp:latest
Use gVisor/Kata for: untrusted code execution, multi-tenant CI/CD (running user-submitted code), financial applications with strict isolation requirements.
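For context, the RuntimeClass handler has to match a runtime entry on the containerd side. A sketch of that entry follows; the config path and exact `runtime_type` string depend on how Kata was installed, so treat these as illustrative:

```shell
# Register a "kata-qemu" runtime with containerd (normally appended to
# /etc/containerd/config.toml; written to /tmp here for illustration)
cat > /tmp/containerd-kata.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu]
  runtime_type = "io.containerd.kata-qemu.v2"
EOF
grep -q 'kata-qemu' /tmp/containerd-kata.toml && echo "runtime entry written"
# Then: sudo systemctl restart containerd
```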
Q33. How do you implement Docker in a zero-downtime production deployment?
Option 1 — Docker Compose with rolling update (single host):
# Update service with no downtime
docker compose up -d --no-deps --scale app=2 app
# Wait for new containers to be healthy
docker compose up -d --no-deps app
# Scale back down
docker compose up -d --scale app=1 app
Option 2 — Docker Swarm rolling update:
docker service update \
--image myapp:v2.0 \
--update-parallelism 1 \
--update-delay 30s \
--update-failure-action rollback \
--health-cmd "curl -f http://localhost:8080/health || exit 1" \
myapp-service
Option 3 — Nginx + Docker (blue-green):
# Start green containers
docker run -d --name app-green --network app-net myapp:v2.0
# Health check green
until curl -f http://app-green:8080/health; do sleep 2; done
# Switch nginx upstream to green
docker exec nginx nginx -s reload # After updating upstream config
# Remove blue
docker stop app-blue && docker rm app-blue
The readiness check (health endpoint returning 200) is the critical gate before switching traffic.
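One way to wire the nginx switch: have nginx include an upstream file that the deploy script rewrites before the reload. The file name and backend names here are illustrative:

```shell
# Rewrite the included upstream file to point at the green stack
cat > upstream.conf <<'EOF'
upstream app_backend {
    server app-green:8080;   # was app-blue:8080 before the switch
}
EOF
grep -q app-green upstream.conf && echo "upstream now points at green"
# In production: place this in the nginx container's conf.d, then nginx -s reload
```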
Q34. What is Docker's daemon.json? What important configurations go there?
{
"log-driver": "json-file",
"log-opts": {
"max-size": "100m",
"max-file": "5"
},
"storage-driver": "overlay2",
"insecure-registries": [],
"registry-mirrors": ["https://mirror.example.com"],
"default-address-pools": [
{"base": "172.30.0.0/16", "size": 24}
],
"live-restore": true,
"userland-proxy": false,
"no-new-privileges": true,
"seccomp-profile": "/etc/docker/seccomp.json",
"userns-remap": "default",
"metrics-addr": "0.0.0.0:9323",
"experimental": false,
"max-concurrent-downloads": 10
}
Critical settings:
- `live-restore: true` — containers keep running during Docker daemon restart/upgrade
- `no-new-privileges: true` — prevents privilege escalation (suid binaries)
- `userns-remap` — enables user namespace remapping (root in container = unprivileged on host)
- `default-address-pools` — avoid IP conflicts with your network (the default 172.17.0.0/16 often conflicts in corporate networks)
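A malformed daemon.json can leave dockerd unable to start, so it's worth syntax-checking the file before restarting the daemon. A minimal sketch (path and settings illustrative):

```shell
# Write a candidate config, then validate it is well-formed JSON before applying
cat > /tmp/daemon.json <<'EOF'
{ "live-restore": true, "log-driver": "json-file" }
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid JSON"
# Then: sudo cp /tmp/daemon.json /etc/docker/daemon.json && sudo systemctl restart docker
# (with live-restore enabled, running containers survive the restart)
```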
Q35. Explain Docker's seccomp, AppArmor, and capabilities security layers.
Linux Capabilities: Docker drops most Linux capabilities by default. Notable drops:
- `CAP_NET_ADMIN` — no network interface management
- `CAP_SYS_ADMIN` — no mount, ptrace, sysctl changes (a very powerful cap)
- `CAP_SYS_PTRACE` — no process tracing
# Only add capabilities you specifically need
docker run --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE --cap-add=CHOWN \
  --cap-add=SETUID --cap-add=SETGID nginx
# NET_BIND_SERVICE lets nginx bind port 80; the official image also needs
# CHOWN/SETUID/SETGID so the root master process can drop to its worker user
Seccomp (Secure Computing Mode):
Default Docker seccomp profile blocks ~44 syscalls (e.g., reboot, mount, init_module). You can provide a custom profile:
docker run --security-opt seccomp=/path/to/profile.json myapp
# Or disable for debugging (DON'T do in production)
docker run --security-opt seccomp=unconfined myapp
AppArmor:
Mandatory access control (MAC) policy — restricts what files and operations a process can access. Docker's default AppArmor profile (docker-default) is applied to all containers on AppArmor-enabled systems.
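For the `--security-opt seccomp=` flag above, a custom profile is just JSON. A skeleton showing the shape only; a real profile must allow far more syscalls than this:

```shell
# Minimal seccomp profile skeleton: deny everything by default (SCMP_ACT_ERRNO),
# then explicitly allow a short list of syscalls
cat > /tmp/seccomp-min.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    { "names": ["read", "write", "close", "exit", "exit_group"],
      "action": "SCMP_ACT_ALLOW" }
  ]
}
EOF
python3 -m json.tool /tmp/seccomp-min.json > /dev/null && echo "profile parses"
```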
Q36. How do you build a minimal container for a Go application from scratch?
# syntax=docker/dockerfile:1
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Static binary — no external dependencies
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
go build -ldflags="-w -s" -o /bin/server ./cmd/server
# -w: omit DWARF debug info
# -s: omit symbol table
# Results in ~40% smaller binary
# Empty filesystem — not even an OS (Dockerfile comments must start the line)
FROM scratch
# Required: CA certificates for HTTPS calls
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Copy the binary
COPY --from=builder /bin/server /bin/server
# nobody user — scratch has no /etc/passwd, so use the numeric UID:GID
USER 65534:65534
ENTRYPOINT ["/bin/server"]
Result: ~6-10MB image (just binary + CA certs) vs ~800MB single-stage.
Limitation: No debugging tools whatsoever. Use K8s ephemeral containers or a separate debug build for troubleshooting.
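One detail the `COPY . .` step above depends on: a `.dockerignore` file keeps `.git`, docs, and build artifacts out of the build context, which speeds up builds and avoids cache busts. The entries here are illustrative:

```shell
# Keep the build context minimal for COPY . .
cat > .dockerignore <<'EOF'
.git
*.md
bin/
**/*_test.go
EOF
wc -l < .dockerignore   # prints: 4
```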
Q37. What is Docker manifest and how do multi-arch images work internally?
# Inspect a multi-arch manifest
docker manifest inspect node:20-alpine
# Output (truncated):
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
"manifests": [
{
"digest": "sha256:abc...",
"platform": {"architecture": "amd64", "os": "linux"}
},
{
"digest": "sha256:def...",
"platform": {"architecture": "arm64", "os": "linux"}
}
]
}
When docker pull node:20-alpine runs on an ARM Mac, Docker reads the manifest list, selects the arm64 manifest, and pulls those layers.
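That selection logic can be reproduced with plain shell against a sample manifest list (the digests here are made-up placeholders):

```shell
# Sample manifest list, one manifest entry per line for easy grepping
cat > /tmp/manifest.json <<'EOF'
{"manifests":[
{"digest":"sha256:abc","platform":{"architecture":"amd64","os":"linux"}},
{"digest":"sha256:def","platform":{"architecture":"arm64","os":"linux"}}
]}
EOF
# Pick the digest whose platform matches the host arch (arm64, as on an ARM Mac)
grep '"arm64"' /tmp/manifest.json | sed 's/.*"digest":"\([^"]*\)".*/\1/'
# prints: sha256:def
```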
Q38. How do you set up a private Docker registry with Harbor?
Harbor extends a bare registry with:
- Image vulnerability scanning (Trivy integration)
- Image signing (cosign/Notary)
- Robot accounts for CI/CD
- Projects with RBAC
- Content trust policies
- Replication rules (sync images across registries)
- Proxy cache (cache Docker Hub/ECR pulls to reduce egress costs)
# Quick Harbor install with Docker Compose
wget https://github.com/goharbor/harbor/releases/download/v2.10.0/harbor-online-installer-v2.10.0.tgz
tar xzf harbor-online-installer-v2.10.0.tgz
cd harbor
cp harbor.yml.tmpl harbor.yml
# Edit harbor.yml: set hostname, TLS cert paths
./install.sh --with-trivy
# Use Harbor as registry
docker login harbor.example.com
docker tag myapp:v1 harbor.example.com/myproject/myapp:v1
docker push harbor.example.com/myproject/myapp:v1
In Kubernetes, configure an imagePullSecret with Harbor credentials:
kubectl create secret docker-registry harbor-secret \
  --docker-server=harbor.example.com \
  --docker-username='robot$ci' \
  --docker-password=<robot-token>
# Quote the robot account name: an unquoted $ci would be expanded by the shell
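The secret is then referenced from the pod spec via `imagePullSecrets`. A minimal sketch (pod name illustrative):

```shell
# Pod spec that pulls from Harbor using the secret created above
cat > /tmp/pod-with-secret.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  imagePullSecrets:
    - name: harbor-secret
  containers:
    - name: myapp
      image: harbor.example.com/myproject/myapp:v1
EOF
grep -q harbor-secret /tmp/pod-with-secret.yaml && echo "secret referenced"
# Then: kubectl apply -f /tmp/pod-with-secret.yaml
```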
Q39. What is Docker's impact on application startup time and how do you optimize it?
Container startup cost breaks down into:
- Image pull (if not cached): varies by image size and registry latency
- Image layer extraction: proportional to number of layers
- Network setup: CNI plugin creates network namespace
- Volume mounts: bind mounts are fast; volume plugins can be slow
- Application startup: JVM, Python import time, Node.js module loading
Optimization strategies:
| Layer | Optimization | Impact |
|---|---|---|
| Image pull | Pre-pull images on nodes (DaemonSet ImagePuller) | Eliminate pull time |
| Image size | Multi-stage, Alpine base | Smaller = faster pull + extract |
| Layer count | Combine RUN commands | Fewer overlay layers |
| App startup | JVM: Class Data Sharing, AOT; Node: --max-semi-space-size | 30-60% faster init |
| Kubernetes | Startup probes (don't count as liveness failures during init) | No premature restarts |
| Kubernetes | Provisioned concurrency (Lambda) / warm pod pools | Near-zero perceived latency |
For Java specifically: GraalVM Native Image compiles to native binary (starts in <100ms vs 5-10s JVM startup), dramatically improving container startup at cost of some dynamic features.
Q40. Design a container security strategy for a production fintech application.
Layer 1 — Build Security:
- Base images: Distroless or Alpine minimal, pinned by digest (not tag)
- Vulnerability scanning: Trivy in CI, block on CRITICAL/HIGH CVEs
- Image signing: cosign with keyless signing (GitHub OIDC)
- SBOM generation: CycloneDX format per build
- Secrets: Never bake into images; use BuildKit secret mounts
- Non-root user in all Dockerfiles
Layer 2 — Registry Security:
- Private registry (Harbor or ECR)
- Scan on push + periodic scheduled scans
- Admission policy: only signed images from approved registries
- Lifecycle policies: auto-delete untagged/old images
Layer 3 — Runtime Security:
- Kubernetes Pod Security Standards: `Restricted` profile
- No privileged containers, no hostPath, no hostNetwork
- Read-only root filesystem: `readOnlyRootFilesystem: true`
- Drop all capabilities: `capabilities.drop: [ALL]`
- seccomp: `RuntimeDefault` or custom profile
- Falco: runtime anomaly detection (shell spawned, unexpected outbound connections)
- NetworkPolicies: Deny all by default, explicit allow lists
Layer 4 — Compliance:
- CIS Docker Benchmark: audit with `docker-bench-security`
- PCI DSS: container isolation, encrypted secrets, audit logging
- Regular penetration testing of containerized services
FAQ Section — Your Burning Docker Questions, Answered
Q: Is Docker required to use Kubernetes? No. Kubernetes uses the Container Runtime Interface (CRI) to support any OCI-compliant runtime. EKS, GKE, and AKS use containerd directly. You can still build images with Docker locally and push to a registry — the build tool is independent of the cluster runtime.
Q: What is the difference between Docker Desktop and Docker Engine? Docker Desktop is a GUI application for Mac and Windows that bundles Docker Engine, Docker CLI, Docker Compose, and a Linux VM (required since macOS/Windows don't have a native Linux kernel). Docker Engine is the CLI-only daemon that runs natively on Linux.
Q: Are Podman and Docker interchangeable?
Mostly yes. Podman is a daemonless, rootless Docker alternative that's fully OCI-compatible. alias docker=podman works for most use cases. Podman doesn't require a root daemon, making it more secure. It's the default on RHEL/Fedora and gaining ground in enterprise environments.
Q: What is the difference between docker stop and docker kill?
docker stop sends SIGTERM (graceful shutdown), waits for the container to exit (default 10 seconds), then sends SIGKILL. docker kill immediately sends SIGKILL (or a signal you specify with -s). Always use docker stop in production to allow graceful shutdown.
Q: How large should Docker images be? As small as practical. Under 100MB is good; under 30MB for Go/Rust apps is achievable. Large images slow CI/CD (pull times), increase attack surface, and cost more in registry storage. Use multi-stage builds and Alpine/distroless base images.
Q: Can you run Docker inside Docker (DinD)?
Yes, using --privileged mode and the official docker:dind image. Common in CI/CD (Jenkins Docker agents, GitLab runners). However, it requires privileged mode which is a security concern. Modern alternatives: Kaniko (builds images without Docker daemon, no privileged), Buildah, or BuildKit in rootless mode.
Q: What salary can I expect for Docker/container expertise in India? Docker alone isn't a differentiator — it's expected knowledge. Docker + Kubernetes + CI/CD at DevOps Engineer level: ₹15–35 LPA. Strong containers + K8s + cloud at SRE level: ₹25–60 LPA.
Q: What is OCI (Open Container Initiative)? OCI is a Linux Foundation project that standardizes container image format and runtime specification. All major container tools (Docker, containerd, Podman, CRI-O) implement OCI specs — meaning images built with Docker can run on any OCI runtime.
Master Docker and you've cleared the first gate. Now level up: Pair this with our Kubernetes guide for the full container orchestration picture, and our Microservices guide for the architecture these containers run.
Related Articles:
- Kubernetes Interview Questions 2026 — the natural next step after Docker
- Microservices Interview Questions 2026 — the architecture Docker enables
- Golang Interview Questions 2026 — Go + Docker = the cloud-native power combo
- DevOps Interview Questions 2026
- Cybersecurity Interview Questions 2026 — container security is a hot topic