Learn Cloud Native: From Docker to Kubernetes
Goal: To deeply understand the principles and practices of building and managing cloud-native applications—from containerizing your first service to orchestrating a full microservices ecosystem on Kubernetes with CI/CD and observability.
Why Go Cloud Native?
Cloud native isn’t just about running applications in the cloud; it’s a completely different approach to building and managing software. It enables applications that are scalable, resilient, and agile, allowing organizations to ship features faster and respond to change with confidence. Most modern, large-scale applications are built this way.
After completing these projects, you will:
- Package any application as a lightweight, portable container.
- Decompose a monolithic application into microservices.
- Use Kubernetes to declaratively manage, scale, and heal your applications.
- Automate your build, test, and deployment process with a CI/CD pipeline.
- Implement logging, monitoring, and alerting for a distributed system.
Core Concept Analysis
The Cloud Native Pillars (CNCF)
```
┌─────────────────────────────────────────────────────────────────────────┐
│                            YOUR APPLICATION                             │
└─────────────────────────────────────────────────────────────────────────┘
                                     │
                                     ▼ Decomposition
┌─────────────────────────────────────────────────────────────────────────┐
│                       MICROSERVICES ARCHITECTURE                        │
│ (Small, independent services communicating over APIs, e.g., REST, gRPC) │
│             [Auth Svc] [User Svc] [Order Svc] [Payment Svc]             │
└─────────────────────────────────────────────────────────────────────────┘
                                     │
                                     ▼ Packaging
┌─────────────────────────────────────────────────────────────────────────┐
│                        CONTAINERIZATION (Docker)                        │
│ (Each service and its dependencies are packaged into an isolated image) │
│                 [Image A] [Image B] [Image C] [Image D]                 │
└─────────────────────────────────────────────────────────────────────────┘
                                     │
                                     ▼ Management
┌─────────────────────────────────────────────────────────────────────────┐
│                  CONTAINER ORCHESTRATION (Kubernetes)                   │
│ (Automates deployment, scaling, healing, and networking of containers)  │
└─────────────────────────────────────────────────────────────────────────┘
                                     │
          ┌──────────────────────────┼───────────────────────────┐
          ▼                          ▼                           ▼
┌────────────────────┐    ┌────────────────────┐     ┌───────────────────────┐
│   CI/CD & DEVOPS   │    │   OBSERVABILITY    │     │     SERVICE MESH      │
│  (GitHub Actions)  │    │ (Prometheus, Loki) │     │   (Istio, Linkerd)    │
│ • Automated Builds │    │ • Metrics          │     │ • Secure Comms (mTLS) │
│ • Automated Deploy │    │ • Logs             │     │ • Traffic Control     │
│ • GitOps           │    │ • Traces           │     │ • Resilience          │
└────────────────────┘    └────────────────────┘     └───────────────────────┘
```
Key Concepts Explained
- Containers (Docker): A standardized unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
- Microservices: An architectural style that structures an application as a collection of loosely coupled services. Each service is self-contained, owned by a small team, and can be deployed independently.
- Container Orchestration (Kubernetes): The automation of the operational effort required to run containerized workloads. It handles scheduling, scaling, self-healing, service discovery, and load balancing.
- CI/CD (Continuous Integration/Continuous Deployment): The automation of the software development and release process. CI automatically builds and tests code changes, while CD automatically deploys them to production.
- Observability: The ability to measure the internal states of a system by examining its outputs. In cloud native, this is achieved through three pillars:
- Metrics: Time-series numerical data (e.g., CPU usage, request latency).
- Logs: Timestamped records of discrete events (e.g., an application error).
- Traces: A representation of the flow of a request through a distributed system.
Project List
The following projects provide a step-by-step journey from a single container to a fully operational cloud-native system.
Project 1: Containerize a Simple Web App
- File: LEARN_CLOUD_NATIVE.md
- Main Programming Language: Go
- Alternative Programming Languages: Python, Node.js
- Coolness Level: Level 2: Practical but Forgettable
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 1: Beginner
- Knowledge Area: Containerization / Docker
- Software or Tool: Docker
- Main Book: “Docker Deep Dive” by Nigel Poulton
What you’ll build: A Dockerfile for a simple “Hello, World” web server. You will build a Docker image and run it as a container on your local machine.
Why it teaches cloud native: This is the fundamental building block. Cloud native starts with packaging your application into a container. This project teaches you the essential skill of creating a portable, reproducible artifact for your code.
Core challenges you’ll face:
- Writing a Dockerfile → maps to understanding Dockerfile instructions (FROM, WORKDIR, COPY, RUN, CMD)
- Building an image → maps to the `docker build` command and image tagging
- Running a container → maps to the `docker run` command, port mapping, and viewing logs
- Optimizing image size → maps to using a smaller base image (e.g., `alpine`)
Key Concepts:
- Dockerfile Syntax: Official Dockerfile reference
- Container vs. Image: “Docker Deep Dive” Ch. 2 - Poulton
- Port Mapping: Docker documentation on networking
Difficulty: Beginner Time estimate: A few hours Prerequisites: Basic programming knowledge in any language.
Real world outcome: You’ll have a running web server that you can access from your host machine’s browser, but it will be running inside an isolated container.
```shell
# Build the image
$ docker build -t hello-world:v1 .
Sending build context to Docker daemon...

# Run the container
$ docker run -d -p 8080:80 --name my-web-app hello-world:v1
abcdef123456...

# Access the app in your browser at http://localhost:8080

# Check the logs
$ docker logs my-web-app
Listening on port 80...
Request received for /
```
Implementation Hints:
- Create a simple web server in your language of choice (e.g., Go’s `net/http` package). It should listen on a port (e.g., 80) and respond with “Hello, Cloud Native!”.
- Create a file named `Dockerfile` in the same directory.
- Start the `Dockerfile` with a `FROM` instruction for your language’s official image (e.g., `FROM golang:1.19-alpine`).
- Use `WORKDIR` to set the working directory inside the container.
- Use `COPY` to copy your source code into the container.
- Use `RUN` to compile your code (if necessary).
- Use `CMD` to specify the command to run when the container starts.
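Putting these hints together, a minimal sketch of such a Dockerfile for the Go version might look like this (the image tags and file names are assumptions; the multi-stage build is one common way to keep the final image small):

```dockerfile
# Build stage: compile the Go binary (assumes main.go in the project root)
FROM golang:1.21-alpine AS build
WORKDIR /app
COPY . .
RUN go build -o server .

# Runtime stage: ship only the compiled binary on a tiny base image
FROM alpine:3.19
COPY --from=build /app/server /server
EXPOSE 80
CMD ["/server"]
```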
Learning milestones:
- The Docker image builds successfully → You understand Dockerfile syntax.
- The container runs without errors → Your application is correctly packaged.
- You can access the web server from your browser → You understand port mapping.
- The image size is reasonably small → You are thinking about optimization.
Project 2: A Multi-Container App with Docker Compose
- File: LEARN_CLOUD_NATIVE.md
- Main Programming Language: Python (Flask)
- Alternative Programming Languages: Node.js (Express), Go
- Coolness Level: Level 2: Practical but Forgettable
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 1: Beginner
- Knowledge Area: Microservices / Container Orchestration
- Software or Tool: Docker Compose, Redis
- Main Book: “Docker in Action, 2nd Edition” by Jeff Nickoloff
What you’ll build: A simple web application with two services: a Python Flask web frontend and a Redis backend for a “page visits” counter. You’ll define and run this multi-container application using a single docker-compose.yml file.
Why it teaches cloud native: This is your first step into microservices. You learn that applications are composed of multiple, independent services that communicate over a network. Docker Compose introduces the concept of declarative configuration for a multi-service environment.
Core challenges you’ll face:
- Writing a `docker-compose.yml` file → maps to defining services, images, ports, and volumes
- Service-to-service communication → maps to understanding Docker’s internal networking (services connect via their names)
- Using a managed service → maps to integrating a standard database/cache (Redis) without building it yourself
- Data persistence → maps to using Docker volumes to persist Redis data across container restarts
Key Concepts:
- Docker Compose YAML format: Official Compose file reference
- Service Discovery in Docker: “Docker in Action” Ch. 3 - Nickoloff
- Docker Networking: Docker documentation
Difficulty: Beginner Time estimate: A weekend Prerequisites: Project 1.
Real world outcome:
A web app that shows “Hello! This page has been visited X times.” When you refresh the page, the counter increases. When you stop and restart the application with `docker-compose down` and `docker-compose up`, the counter persists.
```yaml
# docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    ports:
      - "8000:5000"
  redis:
    image: "redis:alpine"
    volumes:
      - redis-data:/data
volumes:
  redis-data:
```
Implementation Hints:
- Create a simple Flask app that connects to a Redis host. The hostname for Redis in your code should be `redis`, the service name from `docker-compose.yml`.
- On each request, the app should increment a counter in Redis (`INCR page_visits`) and display the current value.
- Create a `Dockerfile` for the Flask app.
- Create a `docker-compose.yml` file. Define two services: `web` (built from your Dockerfile) and `redis` (using the official Redis image).
- Link the services via Docker’s default network by simply having them in the same compose file.
- Define a named volume for the Redis data to ensure it persists.
Learning milestones:
- `docker-compose up` starts both containers successfully → Your compose file is valid.
- The web app can connect to the Redis container → You understand service networking.
- The counter increments on each page refresh → Your application logic is working.
- The counter’s value is not reset after `docker-compose down`/`up` → You have successfully implemented data persistence.
Project 3: Deploy the App to Kubernetes
- File: LEARN_CLOUD_NATIVE.md
- Main Programming Language: YAML
- Alternative Programming Languages: N/A
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 2: Intermediate
- Knowledge Area: Container Orchestration / Kubernetes
- Software or Tool: Kubernetes (Minikube or Kind), kubectl
- Main Book: “Kubernetes in Action, 2nd Edition” by Marko Lukša
What you’ll build: Kubernetes manifest files (deployment.yml, service.yml) to deploy the two-container application from Project 2 onto a local Kubernetes cluster.
Why it teaches cloud native: This is the leap to true orchestration. You stop telling containers how to run and start telling Kubernetes the desired state. This project teaches you the core Kubernetes concepts of Pods, Deployments (for resiliency), and Services (for networking).
Core challenges you’ll face:
- Setting up a local Kubernetes cluster → maps to installing and using tools like Minikube or Kind
- Writing a Deployment manifest → maps to defining replicas, container images, and ports for a Pod
- Writing a Service manifest → maps to exposing your application to the network within and outside the cluster
- Using `kubectl` → maps to the essential CLI for interacting with a Kubernetes cluster
Key Concepts:
- Kubernetes Architecture: “Kubernetes in Action” Ch. 2 - Lukša
- Pods, Deployments, and Services: Kubernetes official documentation
- Declarative Configuration: The core principle of “telling Kubernetes what you want”
Difficulty: Intermediate Time estimate: 1-2 weeks Prerequisites: Project 2, comfortable with command line.
Real world outcome: Your application will be running inside your local Kubernetes cluster. You’ll be able to access the web app, and if you manually delete the application’s “Pod,” you will see Kubernetes automatically create a new one to replace it (self-healing).
```shell
# Apply the manifests
$ kubectl apply -f redis-deployment.yml -f redis-service.yml
$ kubectl apply -f web-deployment.yml -f web-service.yml

# Check the status
$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
redis-d9d885c4f-abcde   1/1     Running   0          30s
web-6c9b5f9c9c-xyz12    1/1     Running   0          30s

# Get the URL to access the service (for Minikube)
$ minikube service web-service
|-----------|-------------|-------------|---------------------------|
| NAMESPACE | NAME        | TARGET PORT | URL                       |
|-----------|-------------|-------------|---------------------------|
| default   | web-service | 5000        | http://192.168.49.2:31000 |
|-----------|-------------|-------------|---------------------------|
```
Implementation Hints:
- Install Minikube or Kind.
- Push your web app’s Docker image to a registry that your cluster can access (Docker Hub is easiest to start).
- Create `redis-deployment.yml`: Define a `Deployment` with one replica, using the `redis:alpine` image.
- Create `redis-service.yml`: Define a `Service` of type `ClusterIP` for Redis, exposing its port. The service name will be `redis`, which your web app will use to connect.
- Create `web-deployment.yml`: Define a `Deployment` for your web app, pointing to the image on Docker Hub.
- Create `web-service.yml`: Define a `Service` of type `NodePort` or `LoadBalancer` to expose your web app outside the cluster.
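As a sketch, the web Deployment and Service from these hints might look like the following (the image name, labels, and replica count are placeholders to adapt):

```yaml
# web-deployment.yml — minimal sketch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: yourusername/my-app:v1   # image pushed to Docker Hub
          ports:
            - containerPort: 5000
---
# web-service.yml — exposes the Pods above outside the cluster
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web          # must match the Pod labels in the Deployment
  ports:
    - port: 5000
      targetPort: 5000
```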
Learning milestones:
- All Pods are in the `Running` state → Your manifests are correct and images are accessible.
- The web app can connect to the Redis service → You understand Kubernetes service discovery.
- You can access the web app from your browser via the service URL → You understand Kubernetes external networking.
- When you run `kubectl delete pod <web-pod-name>`, a new one is created automatically → You have witnessed self-healing in action.
Project 4: Add Health Checks and Resource Management
- File: LEARN_CLOUD_NATIVE.md
- Main Programming Language: YAML, Go
- Alternative Programming Languages: Python, Node.js
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 3. The “Service & Support” Model
- Difficulty: Level 2: Intermediate
- Knowledge Area: Application Reliability / Kubernetes
- Software or Tool: Kubernetes, kubectl
- Main Book: “Production-Ready Microservices” by Susan J. Fowler
What you’ll build: You will add /healthz (liveness) and /readyz (readiness) endpoints to your web application. Then, you will update your Kubernetes Deployment manifest to use these endpoints for liveness and readiness probes, and also set resource requests and limits.
Why it teaches cloud native: This is a critical step towards building resilient, production-grade systems. Health checks allow Kubernetes to automatically restart failing application instances (liveness) and prevent traffic from being sent to instances that are not yet ready to serve (readiness). Resource management ensures predictable performance and cluster stability.
Core challenges you’ll face:
- Implementing health check endpoints → maps to distinguishing between “alive” and “ready to serve traffic”
- Configuring probes in Kubernetes → maps to understanding `livenessProbe`, `readinessProbe`, `initialDelaySeconds`, and `periodSeconds`
- Setting resource requests and limits → maps to understanding CPU and memory units (millicores, MiB/GiB)
- Observing Kubernetes’s behavior → maps to watching Kubernetes restart a container that fails its liveness probe
Key Concepts:
- Liveness vs. Readiness Probes: Kubernetes Documentation on Probes
- Resource Requests and Limits: “Kubernetes in Action” Ch. 17 - Lukša
- Quality of Service (QoS) Classes: Kubernetes documentation on QoS
Difficulty: Intermediate Time estimate: A weekend Prerequisites: Project 3.
Real world outcome:
You can simulate a failure in your application (e.g., by making the /healthz endpoint return a 500 error). After a few moments, you will see Kubernetes kill the unhealthy Pod and start a new one to replace it, without any manual intervention.
```yaml
# In your web-deployment.yml
...
spec:
  containers:
    - name: web
      image: my-app:v2
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
      ports:
        - containerPort: 5000
      livenessProbe:
        httpGet:
          path: /healthz
          port: 5000
        initialDelaySeconds: 3
        periodSeconds: 3
      readinessProbe:
        httpGet:
          path: /readyz
          port: 5000
        initialDelaySeconds: 5
        periodSeconds: 5
```
Implementation Hints:
- In your web app, add two new HTTP endpoints:
  - `/healthz`: Should always return a 200 OK status as long as the process is running.
  - `/readyz`: Should check its dependencies (e.g., try to PING the Redis server). It should only return 200 OK if it’s ready to handle user traffic.
- Update your `Dockerfile` and push a new image version (e.g., `my-app:v2`).
- Modify your `web-deployment.yml`. In the container spec, add `livenessProbe` and `readinessProbe` sections, configuring them to hit your new endpoints.
- Add a `resources` section with `requests` and `limits` for CPU and memory.
- Apply the updated manifest and use `kubectl describe pod <pod-name>` to verify the probes and resource settings are active.
Learning milestones:
- The probes are configured and visible in `kubectl describe` → Your manifests are correct.
- A Pod with a failing readiness probe does not receive traffic → You understand how Kubernetes manages service readiness.
- A Pod with a failing liveness probe is automatically restarted → You understand automated self-healing.
- The Pod’s QoS Class is `Burstable` or `Guaranteed` → You understand how resource requests/limits affect scheduling priority.
Project 5: Implement a CI/CD Pipeline
- File: LEARN_CLOUD_NATIVE.md
- Main Programming Language: YAML (GitHub Actions)
- Alternative Programming Languages: N/A
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 3. The “Service & Support” Model
- Difficulty: Level 3: Advanced
- Knowledge Area: DevOps / Automation
- Software or Tool: GitHub Actions, Docker Hub/GHCR
- Main Book: “Continuous Delivery” by Jez Humble and David Farley
What you’ll build: A GitHub Actions workflow that automatically triggers on a git push. The workflow will build your web application’s Docker image, push it to a container registry, and (for now) log a message indicating it’s ready for deployment.
Why it teaches cloud native: This automates your release process, which is a core tenet of cloud native. By creating a CI/CD pipeline, you make your deployments fast, repeatable, and less error-prone. This project is your first step towards “GitOps” - where Git is the single source of truth for your application’s state.
Core challenges you’ll face:
- Creating a GitHub Actions workflow file → maps to understanding YAML syntax for jobs, steps, and actions
- Storing credentials securely → maps to using GitHub Actions secrets to store your container registry password
- Building and pushing a Docker image in a pipeline → maps to using pre-built Actions from the marketplace (e.g., `docker/build-push-action`)
- Tagging images effectively → maps to using the Git SHA or version tags for unique image identifiers
Key Concepts:
- CI/CD Principles: “Accelerate” - Nicole Forsgren, Jez Humble, Gene Kim
- GitHub Actions Workflow Syntax: GitHub Actions documentation
- Container Registries: Docker Hub, GitHub Container Registry (GHCR)
Difficulty: Advanced Time estimate: 1-2 weeks Prerequisites: Project 3, a GitHub account.
Real world outcome: When you push a code change to your repository, you can go to the “Actions” tab on GitHub and watch your pipeline run. You will see it build your Docker image and push a newly tagged version to Docker Hub or GHCR automatically.
```yaml
# .github/workflows/ci-cd.yml
name: CI/CD Pipeline

on:
  push:
    branches: [ "main" ]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: yourusername/my-app:${{ github.sha }}
```
Implementation Hints:
- Create a `.github/workflows` directory in your project.
- Inside, create a YAML file (e.g., `ci.yml`).
- Define the `on: push` trigger.
- Define a job (e.g., `build-and-push`). Specify `runs-on: ubuntu-latest`.
- Add steps:
  - Use the official `actions/checkout@v3` to get your code.
  - Use `docker/login-action@v2` to log in to your container registry. You’ll need to store your credentials as encrypted secrets in your GitHub repository settings.
  - Use `docker/build-push-action@v4` to build and push the image. Configure it to tag the image with a unique identifier, like the Git commit SHA (`${{ github.sha }}`).
Learning milestones:
- The pipeline triggers automatically on `git push` → Your workflow file is correctly configured.
- The pipeline successfully builds the Docker image → The runner environment can access your Dockerfile.
- The image is pushed to your container registry with a unique tag → Your authentication and tagging strategy is working.
- (Advanced) The pipeline automatically updates a Kubernetes manifest in another repo (GitOps) → You have a complete GitOps flow.
Project 6: Monitoring with Prometheus and Grafana
- File: LEARN_CLOUD_NATIVE.md
- Main Programming Language: Go, YAML
- Alternative Programming Languages: Python, Node.js
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 3. The “Service & Support” Model
- Difficulty: Level 3: Advanced
- Knowledge Area: Observability / Monitoring
- Software or Tool: Prometheus, Grafana, Kubernetes
- Main Book: “Prometheus: Up & Running” by Brian Brazil
What you’ll build: You will instrument your web application to expose custom metrics (e.g., a counter for total requests) in the Prometheus format. Then, you will deploy Prometheus to your Kubernetes cluster to scrape these metrics and Grafana to build a dashboard visualizing them over time.
Why it teaches cloud native: You can’t manage what you can’t see. This project introduces the “metrics” pillar of observability. You’ll learn how modern cloud-native systems are monitored, moving from “is it up?” (health checks) to “how is it performing?” (metrics).
Core challenges you’ll face:
- Instrumenting an application → maps to adding a Prometheus client library and exposing a `/metrics` endpoint
- Deploying Prometheus to Kubernetes → maps to using a Helm chart or community operators for easy installation
- Configuring Prometheus to scrape targets → maps to understanding Prometheus’s service discovery and scrape configs
- Building a Grafana dashboard → maps to writing PromQL queries and creating panels to visualize data
Key Concepts:
- Metrics-based Monitoring: The “Four Golden Signals” (Latency, Traffic, Errors, Saturation) from the Google SRE book.
- Prometheus Data Model: “Prometheus: Up & Running” Ch. 2 - Brazil
- PromQL (Prometheus Query Language): Prometheus documentation
Difficulty: Advanced Time estimate: 2-3 weeks Prerequisites: Project 4.
Real world outcome: A Grafana dashboard accessible from your browser showing a real-time graph of your web application’s request count. As you send traffic to your app, you will see the graph tick upwards.
```promql
# Example PromQL query in Grafana
rate(http_requests_total{job="my-app"}[5m])
```
This query would show the per-second average rate of HTTP requests over the last 5 minutes.
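In the same spirit, two more PromQL sketches cover the error-rate and latency golden signals (the histogram metric name is an assumption; use whatever your client library actually exports):

```promql
# Error rate: fraction of requests returning 5xx over the last 5 minutes
sum(rate(http_requests_total{job="my-app", status=~"5.."}[5m]))
  / sum(rate(http_requests_total{job="my-app"}[5m]))

# 95th-percentile latency, computed from a histogram metric
histogram_quantile(0.95,
  sum(rate(http_request_duration_seconds_bucket{job="my-app"}[5m])) by (le))
```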
Implementation Hints:
- Add a Prometheus client library to your web app.
- Create a new custom metric, like a Counter named `http_requests_total`.
- In your main request handler, increment the counter on every call.
- The client library will automatically expose a `/metrics` endpoint.
- Deploy Prometheus to your cluster. The easiest way is with the `kube-prometheus-stack` Helm chart, which bundles Prometheus, Grafana, and Alertmanager.
- Configure a `ServiceMonitor` Kubernetes object to tell Prometheus to find and scrape the `/metrics` endpoint of your application’s Service.
- Access the Grafana UI (e.g., via `kubectl port-forward`). Log in and create a new dashboard.
- Add a panel and use a PromQL query to graph your custom metric.
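For the Python variant of the app, instrumentation with the `prometheus_client` package might look like this sketch (the metric and label names are illustrative):

```python
# Sketch of Prometheus instrumentation for the Flask app.
from flask import Flask
from prometheus_client import CONTENT_TYPE_LATEST, Counter, generate_latest

app = Flask(__name__)
# Counter sample names end in _total by Prometheus convention.
REQUESTS = Counter("http_requests_total", "Total HTTP requests served", ["path"])

@app.route("/")
def index():
    # Increment the counter on every call to the main handler.
    REQUESTS.labels(path="/").inc()
    return "Hello, Cloud Native!"

@app.route("/metrics")
def metrics():
    # Serve all registered metrics in the Prometheus text exposition format;
    # this is the endpoint a ServiceMonitor points Prometheus at.
    return generate_latest(), 200, {"Content-Type": CONTENT_TYPE_LATEST}
```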
Learning milestones:
- The `/metrics` endpoint on your app shows your custom metric → Your application is correctly instrumented.
- Prometheus’s UI shows your app as a “Target” with a state of “UP” → Prometheus is successfully scraping your app.
- You can query your custom metric in the Prometheus UI → Your metric data is being stored.
- You have a Grafana dashboard that visualizes the metric in real time → You have built a complete monitoring pipeline.
Summary
| Project | Main Language | Difficulty |
|---|---|---|
| Containerize a Simple Web App | Go / Python | Beginner |
| Multi-Container App with Docker Compose | Python | Beginner |
| Deploy the App to Kubernetes | YAML | Intermediate |
| Add Health Checks and Resource Management | YAML / Go | Intermediate |
| Implement a CI/CD Pipeline | YAML | Advanced |
| Monitoring with Prometheus and Grafana | Go / YAML | Advanced |