LEARN HASHICORP TOOLS
Learn the HashiCorp Stack: From Zero to Production-Grade Infrastructure
Goal: Deeply understand the core HashiCorp tools (Vagrant, Packer, Terraform, Consul, Nomad, and Vault) by building, provisioning, deploying, connecting, and securing a complete application stack from the ground up.
Why Learn the HashiCorp Stack?
In the world of DevOps and cloud infrastructure, HashiCorp provides a suite of open-source tools that have become industry standards. They are designed to work together, though each can also be used independently, to manage the entire lifecycle of an application. Understanding this stack means you can automate infrastructure, manage secrets, schedule applications, and enable secure service-to-service communication, regardless of the underlying cloud provider.
After completing these projects, you will:
- Master the principles of Infrastructure as Code (IaC).
- Build and provision immutable infrastructure from scratch.
- Deploy and scale applications using a modern workload orchestrator.
- Implement robust service discovery and a service mesh.
- Securely manage and dynamically distribute application secrets.
- Understand how each tool fits into a modern cloud-native architecture.
Core Concept Analysis
The HashiCorp Workflow
The tools form a logical progression, from local development to production deployment and security.
┌─────────────────────────────────────────────────────────────────────────┐
│ DEVELOPMENT & BUILD │
│ │
│ Vagrant (Local Dev Environments) -> Packer (Builds Machine Images) │
│ │
└─────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────┐
│ INFRASTRUCTURE PROVISIONING │
│ │
│ Terraform (Deploys Infrastructure) │
│ │
└─────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────┐
│ APPLICATION LIFECYCLE │
│ │
│ Nomad (Runs & Schedules Apps) <-> Consul (Connects) <-> Vault (Secures) │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Key Concepts by Tool
- Vagrant: Manages local development environments.
- Vagrantfile: A Ruby-based configuration file defining your environment.
- Boxes: Pre-packaged virtual machine images.
- Providers: The underlying virtualization (VirtualBox, VMware).
- Provisioners: Tools to install software (Shell, Ansible).
- Packer: Builds automated machine images.
- Builders: Create images for specific platforms (AWS AMI, Docker Image).
- Provisioners: Install and configure software inside the image.
- Post-Processors: Run tasks after the image is built (e.g., upload to a registry).
- Terraform: Provisions infrastructure as code.
- HCL (HashiCorp Configuration Language): Declarative syntax for defining resources.
- Providers: Plugins that interface with cloud APIs (AWS, Azure, GCP).
- State File: A JSON file that tracks the state of managed infrastructure.
- Execution Plan: A preview of what Terraform will create, change, or destroy.
- Modules: Reusable units of Terraform configuration.
- Consul: Service networking and discovery.
- Service Registry: A central catalog of services and their locations.
- Health Checking: Monitors the health of services.
- KV Store: A simple key-value store for dynamic configuration.
- Service Mesh: Provides secure, encrypted communication between services using mutual TLS.
- Nomad: Workload orchestration.
- Job, Group, Task: The hierarchy for defining a workload.
- Drivers: The technology used to run tasks (Docker, QEMU, exec).
- Scheduling: The algorithm used to place tasks on client nodes.
- Federation: Connecting multiple Nomad clusters across regions.
- Vault: Secrets management.
- Secrets Engines: Components that store, generate, or encrypt data (e.g., KV, Database).
- Auth Methods: Ways to authenticate to Vault (e.g., Tokens, AppRole).
- Policies: Rules that define who can access which secrets.
- Dynamic Secrets: Time-limited, on-demand credentials.
Project List
The following 11 projects will guide you through each core tool, culminating in a final project that integrates the entire stack.
Project 1: Build a Multi-Machine Dev Lab
- File: LEARN_HASHICORP_TOOLS.md
- Main Programming Language: Vagrantfile (Ruby DSL)
- Alternative Programming Languages: Shell Scripting (for provisioning)
- Coolness Level: Level 2: Practical but Forgettable
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 1: Beginner
- Knowledge Area: Local Virtualization / Development Environments
- Software or Tool: Vagrant, VirtualBox
- Main Book: Vagrant’s official documentation is the best resource.
What you’ll build: A single Vagrantfile that, with one command (vagrant up), launches and networks three distinct virtual machines: a load balancer and two web servers, each web server provisioned automatically with a basic web server like Nginx.
Why it teaches Hashicorp basics: This is the “Hello, World!” of infrastructure. It teaches you how to define machines, networking, and basic software installation in a repeatable way, which is the foundational concept for all other Hashicorp tools.
Core challenges you’ll face:
- Defining multiple machines → maps to understanding multi-machine environments in a Vagrantfile
- Configuring private networking → maps to allowing VMs to communicate with each other but not the outside world
- Writing a simple provisioning script → maps to installing and configuring software like Nginx or Apache automatically
- Forwarding ports → maps to accessing a web server inside a VM from your host machine’s browser
Key Concepts:
- Vagrantfile Syntax: Vagrant Docs - Vagrantfile
- Private Networking: Vagrant Docs - Private Networks
- Provisioning: Vagrant Docs - Basic Usage of Provisioners
Difficulty: Beginner Time estimate: A few hours Prerequisites: Basic command-line knowledge, VirtualBox installed.
Real world outcome:
After running vagrant up, you can access http://localhost:8080 in your browser and see a “Hello from Web-1” or “Hello from Web-2” message, with the requests being distributed by the load balancer VM. Running vagrant destroy tears down the entire setup.
Implementation Hints:
- Start with a single VM definition to get the syntax right.
- Use a `for` loop within the `Vagrantfile` to define the two web servers to avoid repetition (see the sketch after this list).
- For the load balancer, use a shell provisioner to install Nginx and configure it with an `upstream` block pointing to the private IP addresses of your web server VMs.
- The web server provisioner should install Nginx/Apache and write a unique `index.html` file (e.g., using the machine’s hostname).
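The hints above translate into a fairly small file. A minimal sketch of the multi-machine layout, assuming the `ubuntu/jammy64` box, illustrative private IPs, and two hypothetical provisioning scripts:

```ruby
# Vagrantfile - one load balancer plus two web servers on a private network
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"  # assumed box; any recent Linux box works

  # Define web-1 and web-2 in a loop to avoid repetition
  (1..2).each do |i|
    config.vm.define "web-#{i}" do |web|
      web.vm.hostname = "web-#{i}"
      web.vm.network "private_network", ip: "192.168.56.1#{i}"
      web.vm.provision "shell", path: "provision-web.sh"  # hypothetical script: installs Nginx, writes index.html
    end
  end

  config.vm.define "lb" do |lb|
    lb.vm.hostname = "lb"
    lb.vm.network "private_network", ip: "192.168.56.10"
    lb.vm.network "forwarded_port", guest: 80, host: 8080  # http://localhost:8080 on the host
    lb.vm.provision "shell", path: "provision-lb.sh"  # hypothetical script: installs Nginx with an upstream block
  end
end
```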
Learning milestones:
- Launch a single VM → You understand the basic `vagrant up` workflow.
- Launch multiple, networked VMs → You understand how to define a small system.
- Provision software automatically → You grasp the core of automated setup.
- The whole environment is disposable and repeatable → You’ve internalized the key value of Vagrant.
Project 2: Create a Custom “Golden Image”
- File: LEARN_HASHICORP_TOOLS.md
- Main Programming Language: HCL2 (legacy JSON templates also supported)
- Alternative Programming Languages: Shell Scripting (for provisioning)
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 2: Intermediate
- Knowledge Area: Immutable Infrastructure / Image Building
- Software or Tool: Packer, AWS/GCP/Azure account (or VirtualBox)
- Main Book: “The Packer Book” by James Turnbull is a good start, though the official docs are more current.
What you’ll build: A Packer template that automatically builds a custom Amazon Machine Image (AMI) (or a local VirtualBox image) pre-loaded with a web server (Nginx) and your application code.
Why it teaches immutable infrastructure: Instead of configuring servers after they launch, you’ll bake the configuration into the image itself. This is a core tenet of modern infrastructure: images are artifacts, built from source, and promoted through environments.
Core challenges you’ll face:
- Choosing and configuring a builder → maps to targeting a specific cloud (AWS) or local provider (VirtualBox)
- Using provisioners to run scripts → maps to installing packages, moving files, and setting permissions within the image
- Handling authentication with cloud providers → maps to securely giving Packer the credentials it needs to create resources
- Creating a parameterized build → maps to using variables for things like the source AMI ID, so your template is reusable
Key Concepts:
- Packer Templates (HCL2): Packer Docs - HCL2 Templates
- Builders: Packer Docs - Builders (e.g., `amazon-ebs`)
- Provisioners: Packer Docs - Provisioners (e.g., `shell`, `file`)
Difficulty: Intermediate Time estimate: Weekend Prerequisites: A cloud account (or VirtualBox), basic shell scripting.
Real world outcome:
Running packer build . will trigger a temporary instance/VM to be created, configured, and then saved as a new, custom machine image (e.g., an AMI in your AWS account). You will see the ID of the new image as the final output.
Implementation Hints:
- Start with the `virtualbox-iso` builder if you don’t want to use a cloud provider yet. It’s completely local.
- Your `shell` provisioner should contain the commands to update the OS, install Nginx, and create a simple `index.html`.
- Use the `file` provisioner to copy a local application file (e.g., `app.py`) into the image.
- Always use a Packer variable for the source image ID (e.g., the latest Ubuntu LTS). This makes your template easier to maintain (see the sketch after this list).
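Pulling the hints together, a sketch of an HCL2 template for the AWS route; the region, instance type, and plugin version are assumptions, and the source AMI is passed in as a variable:

```hcl
# build.pkr.hcl - builds a "golden" web server AMI
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.0"
    }
  }
}

variable "source_ami" {
  type = string # e.g., the latest Ubuntu LTS AMI ID
}

locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "web" {
  region        = "us-east-1" # assumed region
  source_ami    = var.source_ami
  instance_type = "t3.micro"
  ssh_username  = "ubuntu"
  ami_name      = "web-golden-${local.timestamp}"
}

build {
  sources = ["source.amazon-ebs.web"]

  # Bake the web server into the image
  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
    ]
  }

  # Copy a local application file into the image
  provisioner "file" {
    source      = "app.py"
    destination = "/tmp/app.py"
  }
}
```

Run it with `packer init .` followed by `packer build -var source_ami=<AMI_ID> .`.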
Learning milestones:
- Build a basic, unmodified image → You understand the builder workflow.
- Successfully run a provisioner → You can customize the image with software.
- Parameterize your template → Your builds are now configurable and reusable.
- Produce a cloud-ready “golden image” → You’ve created a core artifact for an immutable infrastructure pipeline.
Project 3: Deploy a Basic Web Server with Terraform
- File: LEARN_HASHICORP_TOOLS.md
- Main Programming Language: HCL (HashiCorp Configuration Language)
- Alternative Programming Languages: N/A
- Coolness Level: Level 2: Practical but Forgettable
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 1: Beginner
- Knowledge Area: Infrastructure as Code (IaC) / Cloud Provisioning
- Software or Tool: Terraform, AWS/GCP/Azure
- Main Book: “Terraform: Up & Running, 3rd Edition” by Yevgeniy Brikman
What you’ll build: A Terraform configuration that deploys a single virtual server on a cloud provider, configures its firewall, and uses a startup script to install and run a basic web server.
Why it teaches IaC: This is the foundational “Hello, World!” of Terraform. You’ll learn to declare the desired state of your infrastructure in code, and Terraform will figure out how to make it happen. You’ll experience the core plan/apply workflow.
Core challenges you’ll face:
- Configuring the provider → maps to telling Terraform how to authenticate to your cloud
- Defining a resource → maps to the basic building block of Terraform, like an `aws_instance`
- Using resource attributes → maps to referencing outputs from one resource as inputs to another (e.g., a security group ID for a VM)
- Managing state → maps to understanding what the `terraform.tfstate` file is and why it’s critical
Key Concepts:
- Terraform Core Workflow: Terraform Docs - Core Workflow
- HCL Syntax: Terraform Docs - Configuration Syntax
- Resources: Terraform Docs - Resources
- Providers: Terraform Docs - Providers (e.g., `aws`)
Difficulty: Beginner Time estimate: A few hours Prerequisites: A cloud provider account.
Real world outcome:
After running terraform apply, you’ll get a public IP address as an output. Visiting this IP in a browser will show the “Hello, World” page served by your newly provisioned server. Running terraform destroy will terminate the server and clean up all resources.
Implementation Hints:
- Define your provider (e.g., `aws`) and the desired region.
- Create an `aws_security_group` resource that allows inbound traffic on port 80 (HTTP).
- Create an `aws_instance` resource. Use a `user_data` script to install a web server on boot.
- Use an `output` block to print the instance’s public IP address after it’s created.
- Always run `terraform plan` before `terraform apply` to see what will change. A sketch tying these together follows this list.
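A sketch of the whole configuration in one `main.tf`; the AMI ID is a placeholder, and the open egress rule is needed so the boot script can reach package mirrors:

```hcl
# main.tf - single web server, firewall, and output
provider "aws" {
  region = "us-east-1" # assumed region
}

resource "aws_security_group" "web" {
  name = "web-http"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "web" {
  ami                    = "ami-xxxxxxxx" # placeholder: look up a current Ubuntu LTS AMI
  instance_type          = "t3.micro"
  vpc_security_group_ids = [aws_security_group.web.id]

  # Install and start a web server on first boot
  user_data = <<-EOT
    #!/bin/bash
    apt-get update && apt-get install -y nginx
    echo "Hello, World" > /var/www/html/index.html
  EOT
}

output "public_ip" {
  value = aws_instance.web.public_ip
}
```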
Learning milestones:
- Successfully authenticate to your cloud → Provider configuration is correct.
- `terraform plan` shows the correct planned changes → Your resource definitions are valid.
- `terraform apply` creates the resources successfully → You’ve provisioned live infrastructure.
- `terraform destroy` cleans everything up → You’ve completed the full IaC lifecycle.
Project 4: Build Production-Grade Web Infrastructure
- File: LEARN_HASHICORP_TOOLS.md
- Main Programming Language: HCL
- Alternative Programming Languages: N/A
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 3. The “Service & Support” Model
- Difficulty: Level 3: Advanced
- Knowledge Area: Cloud Architecture / IaC
- Software or Tool: Terraform, AWS/GCP/Azure
- Main Book: “Terraform: Up & Running, 3rd Edition” by Yevgeniy Brikman
What you’ll build: A robust, multi-tier infrastructure setup using Terraform modules. This includes a custom network (VPC), public and private subnets, a load balancer, an auto-scaling group for web servers, and a managed database in a private subnet.
Why it teaches advanced IaC: You move from single resources to composing a complete, secure, and scalable system. It forces you to think about networking, security boundaries, and high availability. Using modules teaches you how to create reusable, abstracted components.
Core challenges you’ll face:
- Designing a VPC from scratch → maps to understanding CIDR blocks, subnets, route tables, and NAT gateways
- Creating a load balancer and auto-scaling group → maps to implementing high availability and scalability
- Controlling traffic with security groups → maps to securing communication between tiers (e.g., LB to web, web to DB)
- Refactoring code into reusable modules → maps to creating clean, maintainable, and shareable infrastructure code
Key Concepts:
- Terraform Modules: Terraform Docs - Modules
- VPC and Networking: “AWS Networking Fundamentals” (or cloud equivalent)
- Auto Scaling: AWS Docs - Auto Scaling Groups
- Remote State: Terraform Docs - Remote State (using S3, etc.)
Difficulty: Advanced Time estimate: 1-2 weeks Prerequisites: Project 3, basic understanding of cloud networking concepts.
Real world outcome:
After terraform apply, you will have a load balancer URL. Accessing it serves a page from one of your auto-scaled web servers, which can successfully connect to a private database. You can terminate an instance and watch the auto-scaling group automatically replace it.
Implementation Hints:
- Structure your project with a root `main.tf` that calls modules (e.g., `./modules/vpc`, `./modules/app`); see the sketch after this list.
- The `vpc` module should create the VPC, subnets, and gateways, and output the subnet IDs.
- The `app` module should take subnet IDs as input and create the LB, ASG, and security groups.
- Use a remote backend like S3 to store your state file, which is crucial for team collaboration and production environments.
- Use a data source to look up the latest AMI ID for your instances.
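A sketch of how the root module might wire this together; the bucket name, module interfaces, and variable names are assumptions about how you factor your own modules:

```hcl
# main.tf (root) - composes the vpc and app modules and stores state remotely
terraform {
  backend "s3" {
    bucket = "my-tf-state" # hypothetical bucket; must exist before `terraform init`
    key    = "prod/terraform.tfstate"
    region = "us-east-1"
  }
}

module "vpc" {
  source     = "./modules/vpc"
  cidr_block = "10.0.0.0/16"
}

module "app" {
  source             = "./modules/app"
  public_subnet_ids  = module.vpc.public_subnet_ids  # LB lives here
  private_subnet_ids = module.vpc.private_subnet_ids # web servers and DB live here
}
```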
Learning milestones:
- A working, multi-subnet VPC is created → You understand cloud networking fundamentals.
- The application is served through a load balancer → You’ve implemented a highly-available entry point.
- Web servers can connect to the private database → You’ve mastered security groups and network ACLs.
- Your code is organized into reusable modules → You can now write production-quality, maintainable IaC.
Project 5: Implement Service Discovery with Consul
- File: LEARN_HASHICORP_TOOLS.md
- Main Programming Language: Go (for sample apps), HCL (for config)
- Alternative Programming Languages: Python, Node.js
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 4. The “Open Core” Infrastructure
- Difficulty: Level 2: Intermediate
- Knowledge Area: Service Networking / Distributed Systems
- Software or Tool: Consul, Vagrant (or Docker)
- Main Book: “Consul: Up and Running” by Luke Kysow
What you’ll build: A two-service application. One is a backend “API” service that registers itself with Consul. The other is a frontend “Web” service that dynamically discovers the API’s location using Consul’s DNS interface and makes a call to it.
Why it teaches service discovery: It solves the problem of “how does service A find service B” without hardcoding IP addresses. This is the foundation of microservices architecture. You’ll see how services can come and go, and the system adapts automatically.
Core challenges you’ll face:
- Setting up a Consul cluster → maps to understanding the server vs. client agent model
- Registering a service with a sidecar → maps to using a service definition file to tell Consul about your application
- Configuring a health check → maps to enabling Consul to monitor your service and remove it from rotation if it fails
- Querying Consul’s DNS interface → maps to using `api.service.consul` as a hostname to get the IP of a healthy service instance
Key Concepts:
- Consul Architecture: Consul Docs - Architecture
- Service Registration: Consul Docs - Service Registration
- Health Checking: Consul Docs - Health Checks
- DNS Interface: Consul Docs - DNS Interface
Difficulty: Intermediate Time estimate: Weekend Prerequisites: Project 1, basic programming knowledge.
Real world outcome:
You’ll have two services running. When you access the “Web” service, it will display data it fetched from the “API” service. You can stop the API service, and the Web service will show an error. When you restart the API service (potentially on a new IP), the Web service will find it again automatically.
Implementation Hints:
- Use the Vagrant setup from Project 1 to create your nodes. Install Consul on each.
- Run one node as a Consul server (`-server -bootstrap-expect=1`) and the others as clients (`-join <server_ip>`).
- For the API service, create a JSON file in `/etc/consul.d/` defining the service name, port, and a health check (e.g., a simple HTTP check); see the sketch after this list.
- In your Web service code, make an HTTP request to `http://api.service.consul:PORT`.
- You’ll need to configure the VMs’ DNS resolvers to point to the local Consul agent (at `127.0.0.1:8600`).
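A minimal service definition for the API service might look like the following sketch (Consul accepts both HCL and JSON for these files; the port and health endpoint are assumptions):

```hcl
# /etc/consul.d/api.hcl - registers the "api" service with an HTTP health check
service {
  name = "api"
  port = 9090 # assumed application port

  check {
    id       = "api-http"
    http     = "http://localhost:9090/health" # assumed health endpoint
    interval = "10s"
    timeout  = "2s"
  }
}
```

After reloading the agent (`consul reload`), the service should appear in the catalog and resolve via `api.service.consul`.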
Learning milestones:
- Your Consul cluster is up and healthy → You understand the basic cluster setup.
- The API service appears in the Consul UI → You’ve mastered service registration.
- A failing API service is marked as “critical” → Your health checks are working.
- The Web service can find and call the API service via DNS → You’ve successfully implemented service discovery.
Project 6: Dynamic App Configuration with Consul KV
- File: LEARN_HASHICORP_TOOLS.md
- Main Programming Language: Go, Python, or Node.js
- Alternative Programming Languages: HCL (for consul-template)
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 2. The “Micro-SaaS / Pro Tool”
- Difficulty: Level 2: Intermediate
- Knowledge Area: Service Networking / Dynamic Configuration
- Software or Tool: Consul, consul-template
- Main Book: “Consul: Up and Running” by Luke Kysow
What you’ll build: An application that displays a “Feature Flag” status and a “Message of the Day”. These values are not hardcoded; they are read from Consul’s Key-Value (KV) store. You will use the consul-template tool to dynamically update the application’s configuration file and restart the service when a value changes in Consul.
Why it teaches dynamic configuration: It breaks the need to redeploy or restart an application just to change a simple setting. You’ll learn how to manage configuration centrally and push updates to a fleet of servers in real-time.
Core challenges you’ll face:
- Writing and reading from the Consul KV store → maps to using the Consul UI or CLI to manage configuration data
- Creating a `consul-template` template → maps to writing a file that mixes static configuration with dynamic values from Consul
- Running the `consul-template` daemon → maps to configuring it to watch for changes and execute a command (like restarting a service)
- Designing an application that can be reconfigured → maps to building apps that can handle configuration changes gracefully
Key Concepts:
- Consul KV Store: Consul Docs - Key/Value Store
- `consul-template`: consul-template GitHub Repository
- Dynamic Configuration: The “Twelve-Factor App” - III. Config
Difficulty: Intermediate Time estimate: Weekend Prerequisites: Project 5.
Real world outcome:
Your application will be running. You can go to the Consul UI or use the CLI to change the value of a key (e.g., features/new-ui from false to true). Within seconds, you’ll see consul-template log that it detected a change, rewrote the config file, and reloaded your application, which now displays the new feature status.
Implementation Hints:
- First, write a simple app that reads its config from a file (e.g., a JSON or .env file).
- Use the Consul CLI to put some initial values in the KV store: `consul kv put features/new-ui false`.
- Create a template file (`config.json.ctmpl`) for your app’s config. Use `{{key "features/new-ui"}}` to embed the KV value.
- Run the `consul-template` daemon with the template file as input, the target config file as output, and a command to reload your app service (e.g., `systemctl reload my-app`); see the sketch after this list.
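Those last two hints can live in a single `consul-template` configuration file, itself written in HCL. A sketch, assuming illustrative paths and a service named `my-app`:

```hcl
# consul-template.hcl - watches the KV store and rewrites the app config on change
consul {
  address = "127.0.0.1:8500"
}

template {
  source      = "/etc/my-app/config.json.ctmpl" # template with {{key ...}} placeholders
  destination = "/etc/my-app/config.json"       # the file your app actually reads
  command     = "systemctl reload my-app"       # assumed service name
}
```

Start the daemon with `consul-template -config consul-template.hcl` and watch it re-render the file whenever a key changes.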
Learning milestones:
- Values are successfully written to and read from the KV store → You understand the basics of the KV API.
- `consul-template` generates the correct config file on its first run → Your template syntax is correct.
- Changing a value in Consul automatically updates the file → The watch mechanism is working.
- Changing a value in Consul results in the application reloading and showing the new state → You have a complete, dynamic configuration pipeline.
Project 7: Deploy a Dockerized Application with Nomad
- File: LEARN_HASHICORP_TOOLS.md
- Main Programming Language: HCL (Nomad Jobspec)
- Alternative Programming Languages: Dockerfile
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 4. The “Open Core” Infrastructure
- Difficulty: Level 2: Intermediate
- Knowledge Area: Workload Orchestration / Scheduling
- Software or Tool: Nomad, Docker, Vagrant
- Main Book: “Mastering HashiCorp Nomad” by Bryan Krausen.
What you’ll build: A Nomad job file that deploys and manages a multi-instance, containerized web application (e.g., the official Nginx demo image). You will then scale the application up and down with a single command.
Why it teaches orchestration: This project is your introduction to modern workload scheduling. Instead of telling a machine how to run your app, you tell Nomad what to run, and it finds a machine with the capacity and runs it for you. It’s the core concept behind Kubernetes, but in a much simpler package.
Core challenges you’ll face:
- Setting up a Nomad cluster → maps to understanding the server vs. client agent model
- Writing a Nomad jobspec file → maps to learning the `job`, `group`, and `task` stanza hierarchy
- Using the Docker driver → maps to telling Nomad how to pull and run a Docker container
- Configuring networking and ports → maps to exposing your container’s port on the host machine so you can access it
Key Concepts:
- Nomad Architecture: Nomad Docs - Architecture
- Job Specification: Nomad Docs - Job Specification
- Docker Driver: Nomad Docs - Docker Driver
- `nomad run` workflow: Nomad Docs - `run` Command
Difficulty: Intermediate Time estimate: Weekend Prerequisites: Project 1, basic Docker knowledge.
Real world outcome:
After nomad run my-job.nomad, you’ll see the job running in the Nomad UI. You can visit the IP address of your client node on the dynamically assigned port and see your application. Running nomad job scale my-job my-group 3 will cause Nomad to start two more instances of your application across the cluster.
Implementation Hints:
- Use your Vagrant cluster from Project 1. Install Nomad and Docker on all nodes.
- Run one node as a Nomad server (`-server -bootstrap-expect=1`) and the others as clients (`-join <server_ip>`).
- Your jobspec should define one `job`, one `group`, and one `task`.
- In the `task`, specify `driver = "docker"`.
- In the `config` block for the driver, specify the image, e.g., `image = "nginx:latest"`.
- Use a `network` block with a `port` definition to tell Nomad you need to expose a port. Use dynamic ports to start. A full jobspec sketch follows this list.
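Putting the hints together, a minimal jobspec sketch; the job and group names, image, and resource figures are illustrative:

```hcl
# my-job.nomad - one containerized Nginx instance with a dynamic host port
job "my-job" {
  datacenters = ["dc1"]

  group "my-group" {
    count = 1

    network {
      port "http" {
        to = 80 # map the dynamically assigned host port to port 80 in the container
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "nginx:latest"
        ports = ["http"]
      }

      resources {
        cpu    = 100 # MHz
        memory = 128 # MB
      }
    }
  }
}
```

`nomad job run my-job.nomad` deploys it; the group name here matches the scale command shown above.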
Learning milestones:
- Your Nomad cluster is up and has detected the client nodes → The cluster is healthy.
- `nomad job run` successfully starts the Docker container → Your jobspec and Docker driver are configured correctly.
- You can access the service via the host’s IP and dynamic port → Your networking is working.
- You can scale the job up and down → You understand how Nomad manages application instances.
Project 8: Service Discovery for Nomad Jobs with Consul
- File: LEARN_HASHICORP_TOOLS.md
- Main Programming Language: HCL (Nomad Jobspec)
- Alternative Programming Languages: Go or Python for sample apps
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 4. The “Open Core” Infrastructure
- Difficulty: Level 3: Advanced
- Knowledge Area: Orchestration / Service Networking
- Software or Tool: Nomad, Consul, Docker
- Main Book: “Mastering HashiCorp Nomad” by Bryan Krausen.
What you’ll build: A two-tier application deployed by Nomad. An “api” service will be deployed, and Nomad will automatically register it with Consul. A “web” service will then be deployed, which uses a template stanza to get the “api” service’s address from Consul and configure itself at runtime.
Why it teaches tool integration: This project reveals the magic of the HashiCorp ecosystem. You’ll see how Nomad and Consul work together seamlessly. Nomad manages the application lifecycle, and Consul handles the networking, all with minimal configuration.
Core challenges you’ll face:
- Integrating the Nomad and Consul agents → maps to configuring the agents to be aware of each other
- Using the `service` stanza in a Nomad job → maps to telling Nomad to automatically register this task in Consul
- Using the `template` stanza in a Nomad job → maps to dynamically creating a configuration file for your task using data from Consul
- Understanding the application lifecycle → maps to seeing how the `web` task waits for the `api` task to be available before it starts
Key Concepts:
- Consul Integration: Nomad Docs - Consul Integration
- `service` Stanza: Nomad Docs - `service` Stanza
- `template` Stanza: Nomad Docs - `template` Stanza
- Service Mesh: While not building a full mesh, this is the first step towards Consul Connect.
Difficulty: Advanced Time estimate: 1-2 weeks Prerequisites: Projects 5 and 7.
Real world outcome:
You will run a single nomad run multi-tier.nomad file. Nomad will deploy both services. The api service will appear in Consul automatically. The web service will start, render a configuration file with the correct IP and port for the api service, and then launch. Accessing the web service will show data it successfully fetched from the api service, all without any hardcoded IPs.
Implementation Hints:
- Ensure your Nomad and Consul agents are running on all nodes and can communicate.
- In your “api” job file, add a `service` block. Give it a `name = "api"`. Nomad will handle the rest.
- In your “web” job file, add a `template` block (see the sketch after this list).
  - The `data` should be a template that renders the address of the API service: `{{ range service "api" }}{{ .Address }}:{{ .Port }}{{ end }}`.
  - The `destination` should be a path inside the container, like `local/config.txt`.
  - The `change_mode` should be set to `restart` so the web task restarts if the api location changes.
- Your `web` task’s entrypoint script should `cat local/config.txt` to see the rendered address before launching the main process.
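A sketch of the two stanzas at the heart of this project, embedded in a single jobspec; the images and port are hypothetical:

```hcl
# multi-tier.nomad - the "api" group registers itself; the "web" group discovers it
job "multi-tier" {
  datacenters = ["dc1"]

  group "api" {
    network {
      port "http" { to = 8080 } # assumed API port
    }

    task "api" {
      driver = "docker"
      config {
        image = "my-api:latest" # hypothetical image
        ports = ["http"]
      }

      # Nomad registers this task in Consul under the name "api"
      service {
        name = "api"
        port = "http"
      }
    }
  }

  group "web" {
    task "web" {
      driver = "docker"
      config {
        image = "my-web:latest" # hypothetical image
      }

      # Rendered from Consul's catalog; the task restarts if the api moves
      template {
        data        = <<-EOT
          {{ range service "api" }}{{ .Address }}:{{ .Port }}{{ end }}
        EOT
        destination = "local/config.txt"
        change_mode = "restart"
      }
    }
  }
}
```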
Learning milestones:
- The `api` task appears in Consul after being deployed by Nomad → The `service` stanza is working.
- The `web` task’s logs show the correctly rendered config file → The `template` stanza is working.
- The `web` service successfully communicates with the `api` service → The end-to-end discovery is functional.
- You can scale the `api` service up, and the `web` services will start to balance load across them → You understand dynamic load balancing with Consul.
Project 9: Manage Static Secrets with Vault
- File: LEARN_HASHICORP_TOOLS.md
- Main Programming Language: Go or Python
- Alternative Programming Languages: Shell (using `curl` and `jq`)
- Business Potential: 3. The “Service & Support” Model
- Difficulty: Level 2: Intermediate
- Knowledge Area: Security / Secrets Management
- Software or Tool: Vault
- Main Book: “Getting Started with HashiCorp Vault” by Anubhav Mishra
What you’ll build: A simple application that needs a secret API key to function. You will set up a Vault server, store the API key in it, create a policy, and configure an authentication role. The application will then authenticate to Vault, fetch the secret, and use it.
Why it teaches secrets management: This project moves secrets out of config files, environment variables, and code—all insecure places—and into a centralized, secure, and auditable system. It’s the first step to professional-grade application security.
Core challenges you’ll face:
- Initializing and unsealing Vault → maps to understanding Vault’s core security mechanism where the master key is split
- Enabling and using a secrets engine → maps to storing a simple key-value secret
- Writing a policy (ACL) → maps to defining which paths an authenticated entity is allowed to read
- Authenticating programmatically → maps to using an authentication method like AppRole to let a machine log in to Vault
Key Concepts:
- Vault Architecture: Vault Docs - Architecture
- Initializing and Unsealing: Vault Docs - Unsealing
- Policies: Vault Docs - Policies
- AppRole Auth Method: Vault Docs - AppRole Auth
Difficulty: Intermediate Time estimate: Weekend Prerequisites: Basic programming, understanding of REST APIs.
Real world outcome:
Your application, when it starts, will have no knowledge of the API key. It will use a pre-configured RoleID and SecretID to log in to Vault, receive a temporary token, use that token to read the API key from secret/data/my-app/config, and then proceed to use it. The secret lives only in memory, for the duration of the app’s execution.
Implementation Hints:
- Start Vault in dev mode for simplicity: `vault server -dev`. This starts it unsealed with a root token.
- Use the Vault CLI to enable the KV v2 secrets engine: `vault secrets enable -path=secret kv-v2`.
- Write a key: `vault kv put secret/my-app/config api_key="SUPER_SECRET_VALUE"`.
- Create a policy file (`my-app-policy.hcl`) that allows `read` access to `secret/data/my-app/config`; see the sketch after this list.
- Enable the AppRole auth method, create a role tied to your policy, and get the `RoleID` and `SecretID`.
- Your application will need to make two API calls: one to the AppRole login endpoint with the Role/Secret IDs, and a second to the KV read endpoint using the token from the first call.
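The policy from the hints is only a few lines. A sketch of `my-app-policy.hcl`; note the `data/` segment, which the KV v2 API requires even though the CLI path omits it:

```hcl
# my-app-policy.hcl - read-only access to one KV v2 secret
path "secret/data/my-app/config" {
  capabilities = ["read"]
}
```

Load it with `vault policy write my-app my-app-policy.hcl`, then bind it to an AppRole role, e.g. `vault write auth/approle/role/my-app token_policies=my-app`.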
Learning milestones:
- Vault is initialized and you can read/write secrets with the root token → You understand basic Vault interaction.
- You create a policy that successfully restricts access → You grasp the core of Vault’s authorization model.
- You can programmatically log in with AppRole → You understand machine-to-machine authentication.
- Your application successfully fetches the secret and uses it → You’ve completed a secure secrets workflow.
Project 10: Generate Dynamic Database Credentials with Vault
- File: LEARN_HASHICORP_TOOLS.md
- Main Programming Language: Go or Python
- Alternative Programming Languages: Java, Node.js
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 4. The “Open Core” Infrastructure
- Difficulty: Level 3: Advanced
- Knowledge Area: Security / Dynamic Secrets
- Software or Tool: Vault, PostgreSQL/MySQL
- Main Book: “Running HashiCorp Vault in Production” by Bryan Krausen.
What you’ll build: You’ll extend the application from Project 9. Instead of a static API key, the application needs database credentials. You will configure Vault’s Database Secrets Engine to connect to a database and dynamically generate a unique, time-limited username/password for each instance of your application.
Why it teaches dynamic secrets: This is the “magic” of Vault. It eliminates the problem of long-lived, shared database credentials. Every access is unique, temporary, and audited. A breach of one application instance doesn’t expose a credential that can be used elsewhere.
Core challenges you’ll face:
- Enabling and configuring the database secrets engine → maps to telling Vault how to connect to your database and what SQL to use to create users
- Creating a role → maps to defining the permissions (e.g., read-only) for the dynamic users Vault will create
- Updating application logic → maps to changing your app to fetch `username` and `password` from Vault instead of a config file
Key Concepts:
- Database Secrets Engine: Vault Docs - Database Secrets Engine
- Dynamic Secrets: Vault Docs - Dynamic Secrets
- Leases and Renewals: Vault Docs - Leases
Difficulty: Advanced Time estimate: 1-2 weeks Prerequisites: Project 9, a running database instance (e.g., in Docker).
Real world outcome:
When your application starts, it authenticates to Vault and requests credentials from the database engine path. Vault connects to the database, creates a new user `v-appid-...` with a random password and a 1-hour lifetime, and returns these credentials to the app. The app connects to the database. If you inspect the database, you’ll see the temporary user. After an hour, Vault will automatically revoke the lease and delete the user.
Implementation Hints:
- You’ll need a database (PostgreSQL is well-supported) that Vault can connect to.
- Enable the database engine in Vault: `vault secrets enable database`.
- Configure the connection: `vault write database/config/my-postgres plugin_name=postgresql-database-plugin connection_url=... allowed_roles="my-role"`.
- Configure the role: `vault write database/roles/my-role db_name=my-postgres creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" default_ttl="1h" max_ttl="24h"`.
- Update your app’s policy to allow `read` access to `database/creds/my-role` (see the sketch after this list).
- Your app now reads from that path and gets back a username and password.
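The policy change from the hints is a single new stanza; a sketch, reusing the role name above. Every read of this path mints a brand-new credential pair with its own lease:

```hcl
# Added to the app's Vault policy from Project 9
path "database/creds/my-role" {
  capabilities = ["read"]
}
```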
Learning milestones:
- Vault successfully connects to your database → The connection config is correct.
- Requesting credentials from Vault creates a user in the database → The role and creation statements are working.
- Your application can use the dynamic credentials to query the database → The end-to-end workflow is complete.
- The temporary database user is automatically deleted after the TTL expires → You’ve witnessed the full lifecycle of a dynamic secret.
Project 11: The Full HashiStack: A Truly Cloud-Native Deployment
- File: LEARN_HASHICORP_TOOLS.md
- Main Programming Language: HCL (Terraform, Nomad), Go/Python
- Alternative Programming Languages: Dockerfile, Shell
- Coolness Level: Level 5: Pure Magic (Super Cool)
- Business Potential: 5. The “Industry Disruptor”
- Difficulty: Level 5: Master
- Knowledge Area: Cloud-Native Architecture / DevOps
- Software or Tool: Terraform, Packer, Consul, Nomad, Vault
- Main Book: “Infrastructure as Code, 2nd Edition” by Kief Morris (for concepts).
What you’ll build: A complete, automated pipeline that builds, provisions, deploys, and secures a multi-tier application.
- Packer will build a “golden AMI” with Nomad, Consul, and Vault clients installed.
- Terraform will provision a VPC, a server cluster for the HashiCorp tools, and a client cluster using the golden AMI.
- Nomad will deploy the containerized web and API services.
- Consul will provide service discovery between the web and API tiers.
- Vault will provide the API service with dynamic database credentials.
Why it teaches the whole stack: This is the capstone project. It forces you to integrate every tool and concept you’ve learned into a single, cohesive, production-grade system. You will move from understanding individual tools to architecting a complete, modern application platform.
Core challenges you’ll face:
- System bootstrapping → maps to how the first servers come online and form a cluster
- Agent configuration → maps to getting all the different client agents (Nomad, Consul) to talk to the right servers
- Secure introduction → maps to how a new Nomad job securely gets a Vault token to fetch its secrets
- End-to-end automation → maps to tying all the scripts and commands together so the entire deployment can be triggered with one action
Key Concepts:
- Cloud-Native Architecture: Principles of building for the cloud.
- System Bootstrapping: Using Terraform user-data and remote-exec to form clusters.
- Vault Agent: Vault Docs - Vault Agent
- Nomad’s Vault Integration: Nomad Docs - Vault Integration
Difficulty: Master Time estimate: 1 month+ Prerequisites: All previous projects.
Real world outcome:
You will have a Git repository. A terraform apply in this repository will build and deploy an entire, working, secure, and scalable application on a cloud provider from scratch. You can change a line of code, commit, and have the pipeline automatically update the running service. You will have built a personal Platform-as-a-Service (PaaS).
Implementation Hints:
- Your Terraform code will be complex. Use modules extensively.
- The server cluster can run the server agents for all three tools (Consul, Nomad, Vault).
- Use Terraform’s `user_data` to pass configuration details to the client nodes, like the IP addresses of the server cluster.
- Look into Vault Agent as a way to automatically inject secrets into the Nomad task’s environment.
- Nomad’s native Vault integration can be configured to provide a token to jobs that have the appropriate `vault` stanza; see the sketch after this list.
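A sketch of how the last hint looks inside a Nomad job, using the policy and database role names from Projects 9 and 10 (both are assumptions here, as is the image):

```hcl
# api.nomad - a task that receives a Vault token and renders dynamic DB credentials
job "api" {
  datacenters = ["dc1"]

  group "api" {
    task "api" {
      driver = "docker"
      config {
        image = "my-api:latest" # hypothetical image
      }

      # Nomad obtains a Vault token scoped to these policies for the task
      vault {
        policies = ["my-app"]
      }

      # Render dynamic DB credentials into the task's environment
      template {
        data        = <<-EOT
          {{ with secret "database/creds/my-role" }}
          DB_USER={{ .Data.username }}
          DB_PASS={{ .Data.password }}
          {{ end }}
        EOT
        destination = "secrets/db.env"
        env         = true
      }
    }
  }
}
```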
Learning milestones:
- Terraform successfully provisions all infrastructure and clusters → Your cloud architecture is solid.
- Nomad jobs are deployed and automatically register with Consul → Your orchestration and service discovery are integrated.
- The deployed application can fetch its secrets from Vault → Your secure introduction process is working.
- The entire system is reproducible and can be torn down and brought back up flawlessly → You have achieved true Infrastructure as Code mastery.
Summary
| Project | Main Tool | Main Language | Difficulty |
|---|---|---|---|
| 1. Multi-Machine Dev Lab | Vagrant | Ruby DSL | Beginner |
| 2. Custom “Golden Image” | Packer | HCL/JSON | Intermediate |
| 3. Basic Web Server | Terraform | HCL | Beginner |
| 4. Production-Grade Infra | Terraform | HCL | Advanced |
| 5. Service Discovery | Consul | Go/Python | Intermediate |
| 6. Dynamic App Config | Consul | Go/Python | Intermediate |
| 7. Dockerized App | Nomad | HCL | Intermediate |
| 8. SD for Nomad Jobs | Nomad/Consul | HCL | Advanced |
| 9. Static Secrets | Vault | Go/Python | Intermediate |
| 10. Dynamic DB Credentials | Vault | Go/Python | Advanced |
| 11. The Full HashiStack | All | HCL | Master |