Project 34: “The Cloud Native Deployer” — DevOps
| Attribute | Value |
|---|---|
| File | KIRO_CLI_LEARNING_PROJECTS.md |
| Main Programming Language | Docker / Kubernetes YAML |
| Coolness Level | Level 2: Practical |
| Difficulty | Level 3: Advanced |
| Knowledge Area | DevOps |
What you’ll build: Dockerize Project 33 and generate Kubernetes manifests.
Why it teaches Ops: Kiro removes the boilerplate friction of hand-writing Dockerfiles and manifests, so you focus on the operational decisions.
Success criteria:
- Containers build and Kubernetes manifests apply cleanly.
Real World Outcome
You’ll have a Kiro workflow that containerizes an existing full-stack application and generates production-ready Kubernetes manifests with:
- Multi-stage Dockerfiles (optimized layer caching, minimal image size)
- Kubernetes Deployments (frontend, backend, database)
- Service definitions (ClusterIP, LoadBalancer, NodePort)
- ConfigMaps and Secrets (environment variables, database credentials)
- Persistent Volumes (PostgreSQL data persistence)
- Ingress rules (routing, TLS certificates)
- Health checks (liveness and readiness probes)
- Resource limits (CPU, memory quotas)
Example Session:
$ cd task-manager-app # From Project 33
$ kiro "Containerize this full-stack app and generate Kubernetes manifests for production deployment"
🐳 Cloud Native Deployer - Containerization Phase
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📦 Analyzing Project Structure...
├─ Frontend: React app (Node.js build)
├─ Backend: Express API (Node.js runtime)
└─ Database: PostgreSQL (external managed service)
🔨 Generating Dockerfiles...
[1/3] Frontend Dockerfile (Multi-stage build)
✅ Created frontend/Dockerfile
```dockerfile
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci              # full install: the build step needs devDependencies
COPY . .
RUN npm run build

# Stage 2: Production
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```
📊 Image size optimization:
├─ Before: 1.2 GB (node_modules included)
└─ After: 45 MB (only static files + nginx)
[2/3] Backend Dockerfile
✅ Created backend/Dockerfile
```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
RUN npx prisma generate
EXPOSE 3001
CMD ["npm", "start"]
```
[3/3] Database (Using managed Postgres)
✅ Skipping Dockerfile (use managed service: AWS RDS, GCP Cloud SQL, or Azure PostgreSQL)
✅ Created k8s/postgres-statefulset.yaml (for local/dev clusters only)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
☸️ Generating Kubernetes Manifests...
[1/8] Namespace
✅ Created k8s/namespace.yaml
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: task-manager
```
[2/8] ConfigMaps
✅ Created k8s/configmap-frontend.yaml
✅ Created k8s/configmap-backend.yaml
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend-config
  namespace: task-manager
data:
  NODE_ENV: "production"
  PORT: "3001"
  DATABASE_HOST: "postgres-service"
  DATABASE_PORT: "5432"
```
[3/8] Secrets (Base64 encoded)
✅ Created k8s/secret-db.yaml
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: task-manager
type: Opaque
data:
  DATABASE_URL: <base64-encoded-connection-string>
  JWT_SECRET: <base64-encoded-secret>
```
⚠️ WARNING: Never commit real secrets to git!
Use sealed-secrets or external secret management (Vault, AWS Secrets Manager)
[4/8] Persistent Volumes (PostgreSQL)
✅ Created k8s/pvc-postgres.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: task-manager
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
[5/8] Deployments
✅ Created k8s/deployment-frontend.yaml
✅ Created k8s/deployment-backend.yaml
✅ Created k8s/statefulset-postgres.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: task-manager
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: task-manager-backend:latest
          ports:
            - containerPort: 3001
          envFrom:
            - configMapRef:
                name: backend-config
            - secretRef:
                name: db-credentials
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3001
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3001
            initialDelaySeconds: 10
            periodSeconds: 5
```
[6/8] Services
✅ Created k8s/service-frontend.yaml (type: LoadBalancer)
✅ Created k8s/service-backend.yaml (type: ClusterIP)
✅ Created k8s/service-postgres.yaml (type: ClusterIP)
[7/8] Ingress (NGINX with TLS)
✅ Created k8s/ingress.yaml
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: task-manager-ingress
  namespace: task-manager
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    # note: no blanket rewrite-target here; rewriting /api/* to /
    # would strip the prefix the backend routes expect
spec:
  tls:
    - hosts:
        - taskmanager.example.com
      secretName: taskmanager-tls
  rules:
    - host: taskmanager.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 3001
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```
[8/8] Helm Chart (Optional)
✅ Created helm/task-manager/Chart.yaml
✅ Created helm/task-manager/values.yaml
✅ Created helm/task-manager/templates/
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Containerization Complete!
📂 Generated Files:
├── frontend/Dockerfile
├── backend/Dockerfile
├── docker-compose.yml (for local development)
├── k8s/
│   ├── namespace.yaml
│   ├── configmap-frontend.yaml
│   ├── configmap-backend.yaml
│   ├── secret-db.yaml
│   ├── pvc-postgres.yaml
│   ├── deployment-frontend.yaml
│   ├── deployment-backend.yaml
│   ├── statefulset-postgres.yaml
│   ├── service-frontend.yaml
│   ├── service-backend.yaml
│   ├── service-postgres.yaml
│   └── ingress.yaml
└── helm/task-manager/ (chart)
🚀 Deployment Instructions:
```shell
# Build and push images
docker build -t your-registry/task-manager-frontend:v1.0 ./frontend
docker build -t your-registry/task-manager-backend:v1.0 ./backend
docker push your-registry/task-manager-frontend:v1.0
docker push your-registry/task-manager-backend:v1.0

# Apply Kubernetes manifests
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/configmap-*.yaml
kubectl apply -f k8s/secret-db.yaml
kubectl apply -f k8s/pvc-postgres.yaml
kubectl apply -f k8s/statefulset-postgres.yaml
kubectl apply -f k8s/service-*.yaml
kubectl apply -f k8s/deployment-*.yaml
kubectl apply -f k8s/ingress.yaml

# Or use Helm
helm install task-manager ./helm/task-manager

# Check deployment status
kubectl get pods -n task-manager
kubectl get services -n task-manager
kubectl logs -n task-manager deployment/backend
```
What You See:
- Optimized multi-stage Dockerfiles (45 MB frontend vs 1.2 GB before)
- Complete Kubernetes manifests with health checks and resource limits
- Ingress configuration with TLS termination
- Persistent volumes for database data
- Helm chart for easy deployment and upgrades
The Core Question You’re Answering
“How can AI bridge the gap between ‘works on my machine’ and production-ready cloud-native deployments?”
This project teaches Infrastructure as Code (IaC): converting an application into declarative Kubernetes manifests. Success means understanding the 12-factor app principles, security boundaries (secrets management), and operational concerns (health checks, resource limits).
Concepts You Must Understand First
Stop and research these before coding:
- Container Fundamentals
- What is a container image vs a container instance?
- How do layers work in Docker (layer caching, .dockerignore)?
- What is the difference between `RUN`, `CMD`, and `ENTRYPOINT`?
- Book Reference: “Docker Deep Dive” by Nigel Poulton - Ch. 3-5
- Kubernetes Architecture
- What are Pods, Deployments, Services, and Ingress?
- How does service discovery work (DNS, ClusterIP)?
- What is the difference between Deployment and StatefulSet?
- Book Reference: “Kubernetes in Action” by Marko Lukša - Ch. 1-5
- Configuration Management
- When should you use ConfigMaps vs Secrets vs environment variables?
- How do you handle database connection strings securely?
- What is the principle of least privilege for secrets?
- Book Reference: “Kubernetes Patterns” by Bilgin Ibryam - Ch. 4 (Configuration)
- Health Checks and Observability
- What is the difference between liveness and readiness probes?
- Why would a pod be “Running” but not “Ready”?
- How do you prevent cascading failures (circuit breakers, retries)?
- Book Reference: “Site Reliability Engineering” by Google - Ch. 21 (Monitoring)
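The service-discovery questions above have a concrete anchor in this project: a minimal ClusterIP Service for the backend might look like the sketch below (names and ports taken from the manifests in this project; pods resolve it via DNS as `backend.task-manager.svc.cluster.local`).

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: task-manager
spec:
  type: ClusterIP        # internal-only; the Ingress handles external traffic
  selector:
    app: backend         # matches the Deployment's pod labels
  ports:
    - port: 3001
      targetPort: 3001
```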
Questions to Guide Your Design
Before implementing, think through these:
- Docker Image Optimization
- How will you minimize image size (multi-stage builds, Alpine Linux)?
- How will you optimize layer caching (COPY package.json before COPY .)?
- How will you handle secrets during build (BuildKit secrets, .dockerignore)?
- How will you tag images (semantic versioning, git SHA)?
- Kubernetes Resource Sizing
- How will you determine CPU/memory requests and limits?
- What happens if a pod exceeds its memory limit (OOMKilled)?
- How will you handle auto-scaling (HorizontalPodAutoscaler)?
- How will you prevent resource starvation (PodDisruptionBudgets)?
- Database Strategy
- Will you run Postgres in Kubernetes (StatefulSet) or use managed service (RDS)?
- How will you handle database migrations (init containers, Jobs)?
- How will you back up database data (PersistentVolume snapshots)?
- How will you handle connection pooling (PgBouncer sidecar)?
- Networking and Security
- How will you expose the app (LoadBalancer, Ingress, NodePort)?
- How will you handle TLS certificates (cert-manager, manual)?
- How will you restrict network traffic (NetworkPolicies)?
- How will you inject secrets (mounted volumes, environment variables)?
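As one hedged answer to the auto-scaling question above, a HorizontalPodAutoscaler targeting this project's backend Deployment could look like this sketch (assumes metrics-server is installed in the cluster; thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
  namespace: task-manager
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 3          # matches the Deployment's baseline replicas
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% of CPU requests
```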
Thinking Exercise
Exercise: Design a Multi-Region Kubernetes Deployment
Given: “Deploy the task manager app to 3 regions (us-east, eu-west, ap-south) with geo-routing”
Architecture Decisions:
1. Image Registry Strategy
├─ Option A: Single registry with global replication (GCR, ECR with cross-region)
├─ Option B: Regional registries with image sync
└─ Trade-off: Latency vs consistency
2. Database Strategy
├─ Option A: Single primary DB in us-east, read replicas in other regions
├─ Option B: Multi-region DB with CockroachDB or Spanner
└─ Trade-off: Complexity vs latency
3. Traffic Routing
├─ Option A: Global load balancer (AWS Global Accelerator, GCP Cloud Load Balancing)
├─ Option B: DNS-based geo-routing (Route 53, Cloud DNS)
└─ Trade-off: Cost vs failover speed
Questions to answer:
- How do you ensure all regions run the same version (blue-green deployment)?
- How do you handle database writes (single-region write master vs multi-master)?
- How do you route users to the nearest region (latency-based DNS)?
- How do you handle region failures (automatic failover, manual intervention)?
- What monitoring would you add (Prometheus, Grafana, distributed tracing)?
The Interview Questions They’ll Ask
1. “Explain the difference between liveness and readiness probes. Give an example where a pod is alive but not ready.”
2. “How would you debug a pod that is in CrashLoopBackOff? Walk me through your debugging process.”
3. “What are the security risks of mounting Secrets as environment variables vs as files?”
4. “Explain how Kubernetes service discovery works. How does a frontend pod find the backend service?”
5. “How would you perform a zero-downtime deployment with rolling updates in Kubernetes?”
6. “What is a sidecar container pattern? Give three examples of when you’d use it.”
Hints in Layers
Hint 1: Multi-Stage Docker Builds
```dockerfile
# Build stage (large, includes dev dependencies)
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci              # includes devDependencies, needed for the build
COPY . .
RUN npm run build

# Production stage (small, only runtime dependencies)
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev   # runtime deps only; don't copy the builder's node_modules
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/server.js"]
```
Hint 2: Kubernetes Resource Manifest Pattern
Every Kubernetes resource needs:
- `apiVersion` (which API group: v1, apps/v1, networking.k8s.io/v1)
- `kind` (resource type: Pod, Deployment, Service, Ingress)
- `metadata` (name, namespace, labels)
- `spec` (desired state)
Hint 3: Health Check Design
```yaml
livenessProbe:            # "Is the app alive?" (restart container on failure)
  httpGet:
    path: /health
    port: 3001
  initialDelaySeconds: 30 # wait 30s before the first check
  periodSeconds: 10       # check every 10s
readinessProbe:           # "Is the app ready for traffic?" (remove from Service on failure)
  httpGet:
    path: /ready
    port: 3001
  initialDelaySeconds: 10
  periodSeconds: 5
```
Hint 4: Secrets Management Best Practices
Never commit secrets to git:

```yaml
# k8s/secret-db.yaml.template
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  DATABASE_URL: <REPLACE_WITH_BASE64_ENCODED_VALUE>
  JWT_SECRET: <REPLACE_WITH_BASE64_ENCODED_VALUE>
```

Generate secrets at deployment time:

```shell
kubectl create secret generic db-credentials \
  --from-literal=DATABASE_URL="postgres://..." \
  --from-literal=JWT_SECRET="random-generated-secret"
```
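Remember that base64 is an encoding, not encryption; anyone who can read the manifest can recover the value (example value is obviously made up):

```shell
# base64 round-trip: trivially reversible
echo -n 'supersecret' | base64
# c3VwZXJzZWNyZXQ=
echo -n 'c3VwZXJzZWNyZXQ=' | base64 -d
# supersecret
```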
Hint 5: Apply Order Matters
Apply resources in dependency order:
- Namespace (everything else goes in here)
- ConfigMaps and Secrets (env vars for pods)
- PersistentVolumeClaims (storage for StatefulSets)
- StatefulSets/Deployments (the apps)
- Services (networking between apps)
- Ingress (external access)
Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| Docker fundamentals | “Docker Deep Dive” by Nigel Poulton | Ch. 3-5 (Images, Containers) |
| Kubernetes architecture | “Kubernetes in Action” by Marko Lukša | Ch. 1-5 (Pods, Deployments, Services) |
| K8s configuration patterns | “Kubernetes Patterns” by Bilgin Ibryam | Ch. 4 (Configuration), Ch. 5 (Health Probes) |
| Cloud-native apps | “Cloud Native DevOps with Kubernetes” by John Arundel | Ch. 3-6 |
| Site reliability | “Site Reliability Engineering” by Google | Ch. 21-23 (Monitoring, Alerts) |
| 12-factor apps | “The Twelve-Factor App” by Adam Wiggins | All (online resource) |
Common Pitfalls & Debugging
Problem 1: “Pod is in CrashLoopBackOff state”
- Why: Application exits immediately (missing env vars, connection refused to DB)
- Fix: Check logs: `kubectl logs -n task-manager deployment/backend`
- Quick test: `kubectl describe pod -n task-manager <pod-name>` shows exit code and reason
Problem 2: “Frontend can’t connect to backend API (CORS errors)”
- Why: Backend Service is ClusterIP (internal only), not reachable from browser
- Fix: Frontend should call the backend via the Ingress path `/api`, not directly
- Quick test: `curl http://<ingress-ip>/api/health` from outside the cluster
Problem 3: “Database data is lost when pod restarts”
- Why: No PersistentVolumeClaim, data stored in ephemeral pod storage
- Fix: Create a PVC and mount it at `/var/lib/postgresql/data` in the Postgres pod
- Quick test: `kubectl get pvc -n task-manager` shows Bound status
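For reference, the fix looks roughly like this inside the Postgres pod spec (sketch; container name and image tag are assumptions):

```yaml
spec:
  containers:
    - name: postgres
      image: postgres:16-alpine
      volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data   # survives pod restarts
  volumes:
    - name: postgres-data
      persistentVolumeClaim:
        claimName: postgres-pvc                 # the PVC from pvc-postgres.yaml
```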
Problem 4: “Secrets are visible in plaintext when running `kubectl get secret -o yaml`”
- Why: Secrets are base64-encoded, not encrypted (base64 is reversible!)
- Fix: Use Sealed Secrets, external secret stores (Vault), or RBAC to restrict access
- Security: By default, Secrets sit base64-encoded (unencrypted) in etcd; enable encryption at rest and restrict etcd access
Problem 5: “Liveness probe keeps killing healthy pods”
- Why: `initialDelaySeconds` is too short; the app needs more time to start
- Fix: Increase `initialDelaySeconds` to account for startup time (database migrations, cache warming)
- Quick test: `kubectl describe pod <name>` shows probe failure events
Problem 6: “Docker build is slow (reinstalls node_modules every time)”
- Why: `COPY . .` invalidates the cache before `npm install`
- Fix: `COPY package*.json ./` first, run `npm ci`, then copy the rest of the app
- Pattern:

```dockerfile
COPY package*.json ./
RUN npm ci
COPY . .   # this comes AFTER npm ci
```
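The caching fix works best together with a `.dockerignore` that keeps heavyweight paths out of the build context (illustrative entries; adjust to your project):

```
node_modules
dist
.git
.env
```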
Definition of Done
- Dockerfiles build successfully (`docker build -t test .`)
- Multi-stage builds reduce image size by >50% (use `docker images` to compare)
- All Kubernetes manifests apply without errors (`kubectl apply -f k8s/`)
- Pods start and reach “Running” and “Ready” states
- Liveness and readiness probes pass (check with `kubectl describe pod`)
- Frontend is accessible via Ingress (curl from outside the cluster)
- Backend API responds to health checks (`/health`, `/ready`)
- Database data persists across pod restarts (PersistentVolumeClaim)
- Resource limits prevent pods from consuming excessive CPU/memory
- Secrets are not committed to git (use `.gitignore`, sealed-secrets)
- README includes deployment instructions and troubleshooting guide
- Helm chart (optional) successfully installs and upgrades the app