Project 9: Backup and PITR Drill
Document and validate a point-in-time recovery workflow.
Quick Reference
| Attribute | Value |
|---|---|
| Difficulty | Level 4 |
| Time Estimate | 20-30 hours |
| Main Programming Language | SQL |
| Alternative Programming Languages | Shell for automation |
| Coolness Level | See REFERENCE.md |
| Business Potential | See REFERENCE.md |
| Prerequisites | PostgreSQL installed, psql available |
| Key Topics | Backup, WAL |
1. Learning Objectives
- Explain the core ideas behind backups, WAL archiving, and point-in-time recovery, and why they matter.
- Demonstrate the workflow with a repeatable PostgreSQL session transcript.
- Identify and explain key metrics or evidence from the run.
- Document findings in a short operational report.
2. All Theory Needed (Per-Concept Breakdown)
Backup, Recovery, and Replication
Fundamentals: PostgreSQL durability depends on the write-ahead log (WAL) and on a deliberate backup strategy. Base backups capture a consistent snapshot of the cluster. WAL archiving enables point-in-time recovery (PITR). Replication can be physical (byte-level) or logical (row-level change streams); logical replication uses a publish/subscribe model and relies on replication identity. Understanding these mechanisms is essential for data safety and high availability: without a tested backup and replication plan, you do not have a reliable system. Reliability is an engineering discipline, not a feature toggle. It requires routine drills, documented procedures, and clear recovery objectives, and it is a core part of operational trust.
Deep Dive: Backup and recovery in PostgreSQL revolve around WAL. The server writes all changes to WAL before data files, ensuring crash safety. For disaster recovery, you need base backups and WAL archives. A base backup is a full copy of the database cluster. WAL archives capture changes made since that backup. By restoring the base backup and replaying WAL to a target timestamp, you can recover to a precise point in time. This is critical protection against accidental deletes or corruption.
Replication builds on WAL. Physical replication streams WAL records to a standby, which replays them to keep a binary copy. This is ideal for high availability and read replicas. Logical replication instead extracts row-level changes and applies them on subscribers. It is more flexible: you can replicate selected tables, run different major versions, or filter data. The tradeoff is complexity and limitations: DDL changes are not automatically replicated and must be synchronized manually. Logical replication uses a publish/subscribe model where a publication defines which tables are replicated and a subscription pulls changes from the publisher.
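As a concrete sketch of the publish/subscribe flow, the commands below create a publication and a subscription. The database names (appdb, reportdb), the orders table, the repl_user role, and the connection string are illustrative, and the publisher must run with wal_level = logical.

```bash
# Publisher side (requires wal_level = logical); names are illustrative.
psql -d appdb -c "CREATE PUBLICATION drill_pub FOR TABLE orders;"

# Subscriber side: the table must already exist with a matching schema,
# because DDL is not replicated.
psql -d reportdb -c "CREATE SUBSCRIPTION drill_sub
  CONNECTION 'host=publisher.example dbname=appdb user=repl_user'
  PUBLICATION drill_pub;"
```

When the subscription is created, PostgreSQL performs an initial table copy before it starts streaming ongoing changes.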
Replication identities matter because logical replication needs a stable way to identify rows. Primary keys are the usual identity. Without them, updates and deletes cannot be replicated cleanly. This is why logical replication reinforces good schema design.
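A quick way to inspect and adjust replication identity, again using the hypothetical orders table in the appdb database:

```bash
# Inspect replication identity ('d' = default/primary key, 'n' = nothing,
# 'f' = full row, 'i' = index); table name is illustrative.
psql -d appdb -c "SELECT relname, relreplident FROM pg_class WHERE relname = 'orders';"

# Fallback when no primary key exists: log entire old rows (more WAL,
# but updates and deletes can then be replicated).
psql -d appdb -c "ALTER TABLE orders REPLICA IDENTITY FULL;"
```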
Recovery is not just about restoring data. It is also about ensuring consistent configuration, roles, and extensions. A production-grade backup strategy includes configuration backups, version compatibility checks, and regular recovery drills. The PITR process should be tested, not just assumed to work.
Understanding backup and replication is essential for scaling and reliability. Many outages are not due to data loss but to slow or incorrect recovery. A skilled PostgreSQL engineer can restore a system quickly because they understand WAL, recovery timelines, and replication lag.
Replication introduces operational decisions. Physical replication provides a byte-for-byte copy and is ideal for failover, but it ties you to the same major version and architecture. Logical replication offers flexibility for upgrades and selective data sharing, but it requires careful DDL coordination and monitoring. For example, if you rename a column on the publisher, the subscriber may fail until the change is applied there too. These are not edge cases; they are common operational workflows.
Backup strategies also vary by workload. For small databases, nightly base backups may be sufficient. For larger systems, continuous archiving and frequent incremental backups become necessary. You must also consider retention and encryption, because backups often contain the most sensitive data. Recovery time objectives (RTO) and recovery point objectives (RPO) should guide how you design your backup cadence and storage. PostgreSQL gives you the primitives; you must design the policy.
Replication lag is not just a number; it is a risk signal. A replica that is minutes behind can lead to stale reads or slow failovers. Monitoring lag and understanding its causes (network throughput, I/O contention, or long-running transactions on the replica) helps you decide whether to fail over or to wait. This is why replication monitoring is part of operational readiness, not just a dashboard metric.
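A lag snapshot can be taken on the primary with a query like the one below; appdb is illustrative, and the lag columns are those of pg_stat_replication in PostgreSQL 10 and later.

```bash
# Byte lag and time-based lag per standby, as reported on the primary.
psql -d appdb -c "SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes,
       write_lag, flush_lag, replay_lag
FROM pg_stat_replication;"
```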
Definitions and key terms
- Base backup: a full snapshot of the database cluster, taken while the server is running.
- WAL archiving: copying completed WAL segments to durable storage for later replay.
- PITR: point-in-time recovery; restoring a base backup and replaying archived WAL up to a chosen target time.
- Logical replication: row-level change replication via publications and subscriptions.
Mental model diagram
Base backup + WAL archive -> Restore -> Replay -> Target time
Publisher -> Publication -> Subscriber (logical replication)
How it works (step-by-step, include invariants and failure modes)
- Take base backup (invariant: consistent snapshot).
- Archive WAL continuously (failure: gaps break recovery).
- Restore and replay WAL (failure: wrong timeline or missing WAL).
- Configure replication (failure: missing replication identity).
Minimal concrete example (pseudocode, config, protocol transcript, or CLI output)
Backup workflow (pseudo):
- base backup at T0
- WAL archive from T0 onward
- restore base + replay WAL to T1
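For orientation, here is a hedged sketch of the same three steps with real tooling, assuming PostgreSQL 12 or newer. The archive path, backup path, target time, and PGDATA location (pointing at the restored data directory) are all illustrative.

```bash
# Sketch of the workflow above with real tooling; paths and times are illustrative.
# 1. Enable continuous archiving in postgresql.conf, then restart:
#      wal_level = replica
#      archive_mode = on
#      archive_command = 'cp %p /var/lib/pgarchive/%f'
# 2. Take a base backup at T0 (-X stream includes the WAL needed for consistency):
pg_basebackup -D /var/backups/base_T0 -X stream -P

# 3. To recover to T1 (PostgreSQL 12+): stop the server, make the base backup the
#    new data directory, and add to its postgresql.conf:
#      restore_command = 'cp /var/lib/pgarchive/%f %p'
#      recovery_target_time = '2024-06-01 12:00:00'
touch "$PGDATA/recovery.signal"   # then start the server; it replays WAL to the target and pauses by default
```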
Common misconceptions
- “Logical replication copies schemas” -> DDL is not replicated automatically.
- “Backups are enough without testing” -> Untested backups are guesses.
Check-your-understanding questions
- Why is WAL archiving required for PITR?
- Why does logical replication need replication identity?
Check-your-understanding answers
- WAL provides the changes between the base backup and target time.
- It needs a stable key to identify rows for updates and deletes.
Real-world applications
- High availability failover using physical replication.
- Zero-downtime upgrades via logical replication.
Where you will apply it
- See Section 3.7 Real World Outcome and Section 5.10 Implementation Phases in this file.
- Also used in: P10-logical-replication-pipeline
References
- PostgreSQL WAL reliability (postgresql.org/docs/current/wal.html)
- PostgreSQL PITR (postgresql.org/docs/current/continuous-archiving.html)
- PostgreSQL Logical Replication (postgresql.org/docs/current/logical-replication.html)
Key insight: Reliability depends on WAL; replication and recovery are structured workflows, not ad-hoc fixes.
Summary: PostgreSQL backup and replication are WAL-driven and must be tested to be trusted.
Homework/exercises to practice the concept
- Sketch a PITR timeline for a daily backup with hourly WAL archiving.
- Explain the difference between physical and logical replication.
Solutions to the homework/exercises
- Restore base backup, then replay WAL to the desired hour.
- Physical replicates bytes; logical replicates row-level changes.
PostgreSQL Architecture and Storage Engine
Fundamentals: PostgreSQL is a client-server database with a process-based architecture, shared memory, and durable storage built around a write-ahead log (WAL). The server accepts connections, parses SQL, plans queries, and executes them using operators that scan or index data. Rows live in heap files and are referenced by tuple identifiers (TIDs). Indexes are separate structures that point to heap tuples. Durability comes from WAL: changes are written to the log before data files are updated. This means the system can recover after a crash by replaying the WAL. Understanding the storage engine is essential because it explains why PostgreSQL behaves the way it does under concurrency, how VACUUM reclaims space, and why indexes are not free.
Deep Dive: PostgreSQL uses a multi-process model: each client connection is handled by its own backend process. These processes coordinate through shared memory, where PostgreSQL keeps shared buffers, lock tables, and process state. When a query arrives, the parser transforms SQL into a parse tree, the planner generates possible execution strategies, and the executor runs a selected plan. The plan uses operators such as sequential scans, index scans, joins, and aggregates. These operators pull tuples from storage and apply filters or join conditions. Each tuple is stored in heap pages on disk and cached in shared buffers.
The storage engine uses the write-ahead log to ensure durability. WAL is a sequential log of changes. The rule is simple: the log must reach durable storage before the corresponding data page is flushed. This guarantees that even if the system crashes, recovery can replay the log and bring the data files into a consistent state. Checkpoints bound recovery time by forcing dirty buffers to disk and recording a checkpoint record in the WAL. This is why checkpoint tuning impacts performance: frequent checkpoints increase I/O; infrequent checkpoints lengthen recovery windows.
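To see how checkpoints are paced on your own instance, a few psql probes are enough (the appdb database name is illustrative):

```bash
# How often checkpoints are triggered by time or by WAL volume.
psql -d appdb -c "SHOW checkpoint_timeout;"
psql -d appdb -c "SHOW max_wal_size;"
psql -d appdb -c "SHOW checkpoint_completion_target;"
# Checkpoint counters live in pg_stat_bgwriter on older releases and in
# pg_stat_checkpointer on newer ones; check the docs for your version.
```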
PostgreSQL storage is not strictly append-only, but MVCC creates multiple versions of tuples. Updates and deletes do not overwrite rows in place; they create new row versions and mark old ones as obsolete. Those obsolete versions accumulate as table bloat that must be reclaimed. VACUUM is the mechanism that cleans up dead tuples and updates visibility maps. The storage engine therefore both accumulates row versions and reclaims them, and understanding this interplay is critical for performance and disk usage.
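You can watch this accumulation directly; the query below is a sketch against the standard statistics views (appdb is illustrative):

```bash
# Dead-tuple accumulation and the last (auto)vacuum per table.
psql -d appdb -c "SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;"
```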
Indexes in PostgreSQL are separate structures. A B-tree index stores keys and pointers to heap tuples. When the executor uses an index, it fetches matching TIDs and then visits the heap to check visibility and fetch full rows. This is why index-only scans are special: they can skip heap fetches when the visibility map proves that all tuples on a page are visible. The storage engine also supports other index types: GIN for values that contain many elements (arrays, JSONB, full-text documents), GiST for geometric or otherwise extensible indexing, and BRIN for very large, naturally ordered tables. Each index type represents a tradeoff between maintenance cost and query performance.
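A small sketch of how an index-only scan becomes possible, assuming the hypothetical orders table with a created_at column:

```bash
# Index, refresh the visibility map, then check the plan.
psql -d appdb <<'SQL'
CREATE INDEX IF NOT EXISTS orders_created_idx ON orders (created_at);
VACUUM orders;  -- updates the visibility map
EXPLAIN SELECT created_at FROM orders WHERE created_at >= '2024-01-01';
-- The plan may show an Index Only Scan once enough pages are all-visible.
SQL
```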
Understanding this architecture explains why you may see a query that is fast in isolation but slow under concurrency, or why a table that is frequently updated grows without bound until vacuumed. The storage engine is designed for correctness and flexibility, not just raw speed. Its behaviors are predictable if you understand the lifecycle of tuples, the WAL rules, and the relationship between heap storage and indexes.
PostgreSQL also relies on background processes to keep the system healthy. The background writer smooths I/O by flushing dirty buffers, the checkpointer coordinates checkpoints, and the autovacuum workers clean up dead tuples. These background processes compete with foreground query work for I/O and CPU. This is why tuning is a balancing act: aggressive checkpoints or vacuuming can reduce bloat and recovery time, but can also increase I/O contention. The internal architecture is therefore a systems tradeoff between durability, latency, and throughput.
Definitions and key terms
- Heap: the main table storage file containing row versions.
- WAL: write-ahead log used for durability and recovery.
- Checkpoint: a point where dirty buffers are flushed and a recovery marker is written.
- Shared buffers: in-memory cache for data pages.
Mental model diagram
Client -> Backend process -> Planner -> Executor -> Heap + Index
Executor -> WAL record (flushed first)
Executor -> Data page (flushed after the WAL record is durable)
How it works (step-by-step, include invariants and failure modes)
- Parse SQL and generate a plan (invariant: plan respects schema and permissions).
- Executor reads tuples from heap or index (failure: missing statistics leads to bad plan).
- Update creates a new tuple version (invariant: old version remains for MVCC).
- WAL records are flushed before data pages (failure: bypassing WAL, e.g. unlogged tables or fsync=off, forfeits crash safety).
- VACUUM removes dead tuples (failure: disabled vacuum -> bloat).
Minimal concrete example (pseudocode, config, protocol transcript, or CLI output)
SQL-like pseudocode:
UPDATE <table>
SET <column> = <new_value>
WHERE <predicate>;
Outcome: new tuple version + WAL record + old tuple marked dead.
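To see the new version appear, you can inspect the ctid and xmin system columns; the drill_rows table and its note column here are hypothetical:

```bash
# ctid (physical location) and xmin (creating transaction) reveal the new version.
psql -d appdb <<'SQL'
SELECT ctid, xmin, id FROM drill_rows WHERE id = 1;
UPDATE drill_rows SET note = 'changed' WHERE id = 1;
SELECT ctid, xmin, id FROM drill_rows WHERE id = 1;  -- different ctid and xmin: a new tuple version
SQL
```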
Common misconceptions
- “PostgreSQL overwrites rows in place” -> Updates create new versions; old ones stay until vacuumed.
- “WAL is only for replication” -> WAL exists primarily for crash recovery.
Check-your-understanding questions
- Why does PostgreSQL need VACUUM even if you delete rows?
- What invariant makes WAL-based recovery possible?
Check-your-understanding answers
- Deletes create dead tuples; VACUUM reclaims space and updates visibility maps.
- WAL must be durable before data pages are written, ensuring replay can restore consistency.
Real-world applications
- Diagnosing table bloat and tuning autovacuum.
- Explaining why write-heavy workloads need WAL tuning.
Where you will apply it
- See Section 3.7 Real World Outcome and Section 5.10 Implementation Phases in this file.
- Also used in: P01-environment-psql-mastery, P06-bloat-vacuum-lab
References
- PostgreSQL WAL configuration and reliability documentation (postgresql.org/docs/current/runtime-config-wal.html)
- PostgreSQL reliability and WAL overview (postgresql.org/docs/current/wal.html)
Key insight: Storage and WAL rules explain most performance and durability behaviors in PostgreSQL.
Summary: PostgreSQL stores row versions in heap files and uses WAL for durability; VACUUM is essential for reclaiming space.
Homework/exercises to practice the concept
- Draw the path from SQL statement to WAL record and heap page.
- Explain why an update can increase table size even if the row count stays constant.
Solutions to the homework/exercises
- SQL -> parse -> plan -> executor -> WAL -> data page.
- Updates create new row versions while old ones remain until vacuumed.
3. Project Specification
3.1 What You Will Build
Document and validate a point-in-time recovery workflow. You will produce a repeatable transcript and a short evidence report for the workflow.
3.2 Functional Requirements
- Provide a deterministic scenario with a known outcome.
- Capture at least two key pieces of evidence (metrics, plans, or outputs).
- Produce a short report that another engineer could verify.
3.3 Non-Functional Requirements
- Performance: Queries should complete within a few seconds on a small dataset.
- Reliability: The workflow should be repeatable across multiple runs.
- Usability: Outputs should be readable and structured.
3.4 Example Usage / Output
$ pg_basebackup <pseudo-args>
Base backup completed
$ restore workflow (pseudo)
Recovery complete at <timestamp>
3.5 Data Formats / Schemas / Protocols
Use a simple evidence log format:
SESSION_START
command: <psql or SQL-like command>
result: <summary line>
KEY_EVIDENCE: name=value, name=value
SESSION_END
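A minimal shell helper that appends entries in exactly this format is sketched below; the log path, the appdb database, and the drill_rows query are illustrative.

```bash
# Append one evidence entry per step in the format above.
log=transcripts/session.txt

run_step () {
  local cmd="$1" evidence="$2"
  {
    echo "SESSION_START"
    echo "command: $cmd"
    echo "result: $(psql -d appdb -At -c "$cmd" | head -n 1)"
    echo "KEY_EVIDENCE: $evidence"
    echo "SESSION_END"
  } >> "$log"
}

run_step "SELECT count(*) FROM drill_rows;" "expected_rows=100"
```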
3.6 Edge Cases
- Missing permissions for a required command.
- Stale statistics leading to unexpected results.
- Misconfiguration that hides expected data.
3.7 Real World Outcome
This section is the gold standard for verification.
3.7.1 How to Run (Copy/Paste)
- Use a local PostgreSQL instance or container.
- Load a minimal dataset with predictable values.
- Execute the command sequence exactly as described.
3.7.2 Golden Path Demo (Deterministic)
Restore reaches a known timestamp and verifies expected data. Exit code: 0
3.7.3 Exact Terminal Transcript (CLI)
$ pg_basebackup <pseudo-args>
Base backup completed
$ restore workflow (pseudo)
Recovery complete at <timestamp>
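After replay reaches the target, a verification sketch like the following can back the claim with evidence; appdb and drill_rows are illustrative, and a server restored this way typically pauses at the recovery target by default.

```bash
# Run against the restored instance once replay has reached the target.
psql -d appdb -c "SELECT pg_is_in_recovery(), pg_last_wal_replay_lsn();"
psql -d appdb -c "SELECT count(*) FROM drill_rows;"   # expect the pre-incident row count
```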
3.7.4 Failure Demo (Deterministic)
Restore fails due to missing WAL segments. Exit code: 1
4. Solution Architecture
4.1 High-Level Design
+------------------+     +---------------------+     +-----------------+
| Dataset + Schema | --> | PostgreSQL Workflow | --> | Evidence Report |
+------------------+     +---------------------+     +-----------------+
4.2 Key Components
| Component | Responsibility | Key Decisions |
|---|---|---|
| Dataset | Provide deterministic inputs | Keep small and fixed |
| Workflow | Execute queries or commands | Use explicit steps |
| Evidence Report | Capture outcomes | Use consistent format |
4.3 Data Structures (No Full Code)
EvidenceRecord:
- step
- command
- output_summary
- key_metrics
Report:
- summary
- evidence
- conclusions
4.4 Algorithm Overview
Key Algorithm: Evidence Capture Loop
- Prepare dataset and baseline metrics.
- Execute the workflow step by step.
- Capture outputs and compare with expected outcomes.
- Write a short report with conclusions.
Complexity Analysis:
- Time: O(Q) where Q is number of queries.
- Space: O(R) for report size.
5. Implementation Guide
5.1 Development Environment Setup
psql --version
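If you do not have a local instance yet, a disposable container is a quick way to get one; this sketch assumes Docker, and the image tag, container name, and password are illustrative.

```bash
# Disposable instance for the drill; tag, name, and password are illustrative.
docker run --name pitr-drill -e POSTGRES_PASSWORD=drill -p 5432:5432 -d postgres:16
psql "host=localhost port=5432 user=postgres password=drill" -c "SELECT version();"
```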
5.2 Project Structure
project-root/
|-- notes/
| `-- report.md
|-- transcripts/
| `-- session.txt
`-- README.md
5.3 The Core Question You’re Answering
“Can I restore my data to an exact point in time?”
Write the evidence you need before you start.
5.4 Concepts You Must Understand First
Stop and research these before debugging:
- Core PostgreSQL mechanics
- What does PostgreSQL guarantee at this layer?
- Book Reference: “PostgreSQL: The Definitive Guide” - Ch. 3
- Planning and observation
- Which signals confirm success?
- Book Reference: “SQL Performance Explained” - Ch. 1
- Data integrity rules
- Which invariants must always hold?
- Book Reference: “Database System Concepts” - Ch. 2
5.5 Questions to Guide Your Design
Before implementing, think through these:
- Evidence capture
- Which outputs prove correctness?
- What is the smallest set of metrics that matters?
- Repeatability
- How will you ensure the same results each run?
- Which inputs must be fixed?
5.6 Thinking Exercise
Sketch a timeline of actions and identify the key evidence at each step.
5.7 The Interview Questions They’ll Ask
- “How do you verify correctness in PostgreSQL workflows?”
- “What evidence shows a query plan change?”
- “How do you ensure isolation and consistency?”
- “Why do you need VACUUM?”
- “How do you validate a backup?”
5.8 Hints in Layers
Hint 1: Start with a small dataset. Keep the dataset minimal and predictable.
Hint 2: Record every step. Use a transcript so you can compare runs.
Hint 3: Compare against expected outputs. Use a golden path to validate correctness.
Hint 4: Use system catalogs for verification. Confirm results with pg_stat and catalog views.
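Two catalog probes that support these hints are sketched below; appdb and drill_rows are illustrative names.

```bash
# Is WAL archiving actually succeeding, and when were statistics last refreshed?
psql -d appdb -c "SELECT archived_count, last_archived_wal, failed_count FROM pg_stat_archiver;"
psql -d appdb -c "SELECT last_vacuum, last_analyze FROM pg_stat_user_tables WHERE relname = 'drill_rows';"
```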
5.9 Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| PostgreSQL fundamentals | “PostgreSQL: The Definitive Guide” | Ch. 3 |
| SQL performance | “SQL Performance Explained” | Ch. 1-4 |
| Data modeling | “Database System Concepts” | Ch. 2 |
5.10 Implementation Phases
Phase 1: Foundation (2-4 hours)
Goals:
- Set up the environment.
- Define the minimal dataset.
Tasks:
- Install and connect to PostgreSQL.
- Load a deterministic dataset.
Checkpoint: You can run a basic query and see expected output.
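For the deterministic dataset mentioned in the tasks above, a minimal version might look like the sketch below; the drill_rows table name and the row count of 100 are illustrative, but keep whatever you choose fixed across runs.

```bash
# Fixed, repeatable dataset; the SELECT at the end doubles as the checkpoint query.
psql -d appdb <<'SQL'
CREATE TABLE IF NOT EXISTS drill_rows (
    id   integer PRIMARY KEY,
    note text NOT NULL
);
INSERT INTO drill_rows (id, note)
SELECT g, 'row-' || g FROM generate_series(1, 100) AS g
ON CONFLICT (id) DO NOTHING;
SELECT count(*) FROM drill_rows;  -- expect 100 on every run
SQL
```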
Phase 2: Core Workflow (4-8 hours)
Goals:
- Execute the core steps.
- Capture evidence.
Tasks:
- Run the main workflow commands.
- Capture outputs and metrics.
Checkpoint: Evidence matches the golden path.
Phase 3: Polish and Edge Cases (2-6 hours)
Goals:
- Validate edge cases.
- Document failures and fixes.
Tasks:
- Trigger a failure scenario.
- Document how you detected and fixed it.
Checkpoint: Failure case and fix are documented.
5.11 Key Implementation Decisions
| Decision | Options | Recommendation | Rationale |
|---|---|---|---|
| Dataset size | Small or large | Small | Easier to verify outputs |
| Evidence format | Transcript or report | Both | Transcript for detail, report for summary |
6. Testing Strategy
6.1 Test Categories
| Category | Purpose | Examples |
|---|---|---|
| Unit Tests | Verify single steps | One query or command |
| Integration Tests | End-to-end workflow | Full golden path |
| Edge Case Tests | Validate failures | Missing permissions |
6.2 Critical Test Cases
- Primary path: expected output is produced.
- Failure path: error is detected and documented.
- Repeatability: results match across two runs.
6.3 Test Data
Input: deterministic dataset with fixed values
Expected: same outputs each run
7. Common Pitfalls & Debugging
7.1 Frequent Mistakes
| Pitfall | Symptom | Solution |
|---|---|---|
| Wrong database | Output references unexpected objects | Verify connection and search_path |
| Missing privileges | Permission denied errors | Grant least-privilege access for the task |
| Stale stats | Planner chooses unexpected plan | Run ANALYZE and re-check |
7.2 Debugging Strategies
- Capture before changing: record outputs first.
- Verify connection context: ensure correct database and schema.
7.3 Performance Traps
If dataset size grows, some commands may appear slow. Keep tests small and repeatable.
8. Extensions & Challenges
8.1 Beginner Extensions
- Add one extra table or metric and document it.
- Create a short checklist for future runs.
8.2 Intermediate Extensions
- Add a second scenario and compare results.
- Automate part of the workflow with a script.
8.3 Advanced Extensions
- Integrate the workflow into a CI or scheduled job.
- Create a dashboard or long-term trend report.
9. Real-World Connections
9.1 Industry Applications
- Operational runbooks for production PostgreSQL systems.
- Data platform reliability and incident response.
9.2 Related Open Source Projects
- PostgreSQL core project - open source database engine.
- pg_stat_statements - query statistics extension.
9.3 Interview Relevance
- Transaction semantics and MVCC.
- Index selection and query planning.
10. Resources
10.1 Essential Reading
- PostgreSQL documentation (current) - reference for commands and behavior.
- SQL Performance Explained - reasoning about indexes and plans.
- Designing Data-Intensive Applications - reliability patterns.
10.2 Video Resources
- PostgreSQL conference talks on operations and performance
- Database reliability talks from systems conferences
10.3 Tools & Documentation
- psql - interactive CLI for PostgreSQL.
- pg_stat_* views - built-in monitoring tables.
- pg_dump and pg_basebackup - backup tooling.
10.4 Related Projects in This Series
- Previous Project: P08-full-text-search.md
- Next Project: P10-logical-replication-pipeline.md
11. Self-Assessment Checklist
11.1 Understanding
- I can explain the key concept without notes.
- I can justify the evidence I collected.
- I understand the tradeoffs in this workflow.
11.2 Implementation
- All functional requirements are met.
- The golden path transcript matches expected output.
- Failure modes are documented with evidence.
11.3 Growth
- I documented lessons learned.
- I can explain this project in a job interview.
12. Submission / Completion Criteria
Minimum Viable Completion:
- A working transcript that matches the golden path.
- A short report with evidence and conclusions.
- A failure case documented with explanation.
Full Completion:
- All minimum criteria plus:
- Comparison across at least two runs.
- Notes on tradeoffs and possible improvements.
Excellence (Going Above & Beyond):
- Automated capture of transcripts.
- A reusable checklist for future runs.