Project 4: LiveView Real-Time Operations Dashboard
A server-rendered, real-time operations dashboard that pushes diff updates to connected browsers.
Quick Reference
| Attribute | Value |
|---|---|
| Difficulty | Level 2 |
| Time Estimate | 12-20 hours |
| Main Programming Language | Elixir |
| Alternative Programming Languages | None |
| Coolness Level | Level 3 |
| Business Potential | Level 2 |
| Prerequisites | LiveView + Backpressure, Process Model |
| Key Topics | LiveView, realtime UI, metrics |
1. Learning Objectives
By completing this project, you will:
- Design a BEAM process model that isolates failures.
- Apply OTP supervision strategies to real services.
- Validate correctness with deterministic test scenarios.
2. All Theory Needed (Per-Concept Breakdown)
LiveView + Backpressure
Fundamentals LiveView renders HTML on the server and sends diff updates over a persistent connection. This reduces client complexity and ensures consistency. GenStage provides backpressure so consumers can control how much work they accept, preventing producers from overwhelming the system. Together, these tools allow you to build real-time user experiences that remain stable under bursty workloads.
Deep Dive into the concept LiveView shifts the UI state to the server. The client receives an initial HTML render and then a stream of diffs for changes. This keeps the UI in sync with server state and avoids duplication of business logic in the browser. It is ideal for dashboards and collaborative tools where consistency is essential.
GenStage formalizes backpressure in producer-consumer systems. Consumers request a specific number of events, and producers only send what was requested. This prevents unbounded queues and keeps latency predictable. Backpressure is the foundation of resilient pipelines.
When combined, LiveView can consume data from a GenStage pipeline. Events are aggregated, filtered, and transformed before reaching the UI. This ensures that the dashboard remains responsive even when event volume spikes.
The critical design decision is where to aggregate and how to batch updates. Too many updates will overwhelm the UI; too much aggregation will reduce fidelity. You must choose a balance based on the use case.
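To make the batching decision concrete, here is a minimal sketch of a consumer stage that caps demand and collapses each batch into one summary before broadcasting it toward the UI. The module names (Dashboard.EventProducer, Dashboard.Aggregator, Dashboard.PubSub) and the `:kind` field on events are illustrative assumptions, not a required API.

```elixir
defmodule Dashboard.Aggregator do
  use GenStage

  def start_link(opts), do: GenStage.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    # Ask for at most 50 events at a time; the producer may never send more
    # than the outstanding demand, which is what keeps the queue bounded.
    {:consumer, :no_state, subscribe_to: [{Dashboard.EventProducer, max_demand: 50}]}
  end

  @impl true
  def handle_events(events, _from, state) do
    # Collapse the whole batch into one summary so the LiveView gets a single
    # message per batch instead of one message per raw event.
    summary = Enum.frequencies_by(events, & &1.kind)
    Phoenix.PubSub.broadcast(Dashboard.PubSub, "metrics", {:metrics, summary})
    {:noreply, [], state}
  end
end
```

Tuning max_demand and the summary function is exactly the fidelity-versus-update-rate trade-off described above.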
How this fits in the project This combination is the core of the dashboard: every metric you render travels through a backpressured pipeline before LiveView turns it into a diff, and the same pattern appears repeatedly in BEAM systems.
Definitions & key terms
- LiveView: a server-side process that holds UI state and pushes HTML diffs over a persistent connection.
- Diff: the minimal change set sent to the client after a render.
- Producer / consumer: GenStage roles; consumers ask for events, producers emit at most what was asked for.
- Demand / backpressure: the consumer-driven cap on in-flight events that keeps queues bounded.
- Aggregation / batching: collapsing many raw events into fewer UI updates.
Mental model diagram
[Event producer] -> [GenStage consumer/aggregator] -> [PubSub] -> [LiveView process] -> [Browser diff]
How it works (step-by-step, with invariants and failure modes)
- Identify inputs and their constraints.
- Apply the core rules of the concept.
- Validate outputs and error states.
- Invariant: the system preserves isolation and deterministic behavior.
- Failure modes: overload, incorrect state transitions, missing supervision.
Minimal concrete example
Small example flow using messages and state updates.
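As a hedged sketch of that flow (assuming a recent LiveView, 0.18+), the module below subscribes to the illustrative "metrics" topic from the aggregator sketch above, folds each summary into its assigns, and lets LiveView send only the resulting HTML diff. The module and topic names are assumptions.

```elixir
defmodule DashboardWeb.MetricsLive do
  use Phoenix.LiveView

  @impl true
  def mount(_params, _session, socket) do
    # Subscribe only once the persistent connection is up.
    if connected?(socket), do: Phoenix.PubSub.subscribe(Dashboard.PubSub, "metrics")
    {:ok, assign(socket, counts: %{})}
  end

  @impl true
  def handle_info({:metrics, summary}, socket) do
    # Merge the batch summary into running totals; only the changed parts
    # of the rendered HTML travel to the client as a diff.
    counts = Map.merge(socket.assigns.counts, summary, fn _kind, a, b -> a + b end)
    {:noreply, assign(socket, counts: counts)}
  end

  @impl true
  def render(assigns) do
    ~H"""
    <ul>
      <%= for {kind, count} <- Enum.sort(@counts) do %>
        <li><%= kind %>: <%= count %></li>
      <% end %>
    </ul>
    """
  end
end
```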
Common misconceptions
- Confusing representation with runtime behavior.
- Assuming failures are rare instead of expected.
Check-your-understanding questions
- Explain the concept in your own words.
- Predict the outcome of a simple failure scenario.
- Why is this concept crucial for reliability?
Check-your-understanding answers
- LiveView keeps UI state on the server and ships HTML diffs; GenStage lets consumers decide how much work they accept.
- If the event source bursts, outstanding demand caps what is in flight, so the dashboard lags gracefully instead of growing an unbounded queue; if a LiveView process crashes, the client reconnects and remounts.
- Without backpressure and server-held state, overload and reconnects become ad hoc special cases instead of designed behavior.
Real-world applications
- High-concurrency services
- Fault-tolerant backends
- Real-time pipelines
Where you’ll apply it
- In this project’s core runtime loop and error handling.
References
- Official OTP and BEAM documentation for this concept.
Key insights This concept is the lever that makes BEAM systems resilient and scalable.
Summary Mastering this concept makes the project predictable and robust.
Homework/Exercises to practice the concept
- Draw a failure path and its recovery.
- Design a message flow for a small subsystem.
Solutions to the homework/exercises
- The failure should trigger a supervisor restart.
- The flow should isolate state and avoid shared mutation.
Process Model
Fundamentals BEAM processes are lightweight, isolated units with their own heap and mailbox. They are not OS threads; they are runtime-managed actors designed to be created in large numbers. Messages are copied between processes, which removes shared-memory races and enforces isolation. This means you can design systems by composing many small, single-purpose processes without worrying about locks. Failures are contained because no process can corrupt another’s memory; failure propagation only happens when you explicitly link or monitor processes. This model is the foundation for BEAM concurrency and fault tolerance and is essential for understanding how OTP systems behave under load.
Deep Dive into the concept The BEAM process model replaces shared-memory concurrency with message passing. Each process owns its data and only mutates its own state. When a message is sent, it is copied into the receiver’s mailbox. The receiver processes messages sequentially, updating its state in a controlled loop. This eliminates data races and deadlocks because there is no shared memory to corrupt.
The mailbox is FIFO but supports selective receive. This means a process can scan its mailbox to find a message that matches a specific pattern, which is powerful but can be expensive if the mailbox is large. System designers learn to keep mailboxes small, use tagged messages, or split responsibilities across processes to avoid mailbox buildup.
The cost of message copying is the trade-off for safety. You can mitigate this by keeping messages small or using reference-counted binaries for large payloads, but the default assumption is: data is immutable and copied. This shapes how you design interfaces: you send IDs or small payloads, not giant blobs.
Links and monitors define failure relationships. Links are bidirectional and propagate exits, while monitors are one-way and send a DOWN message. Supervisors use these mechanisms to observe child failures and restart them. This makes failure handling declarative: you define a supervision tree, and the runtime enforces the policy. Your job becomes designing the tree, not writing defensive code everywhere.
The mental model is: a BEAM system is a network of actors. Each actor is simple, deterministic, and isolated. The system is resilient because failures are expected and handled at the boundaries. This is why BEAM excels at concurrency-heavy workloads: instead of fighting complexity with locks, you structure complexity into supervised actors.
In practice, this means designing message protocols carefully. Every message should have a clear shape and meaning. You should also define error paths as messages (or exits) instead of trying to catch every error locally. The goal is to make failure handling part of the system architecture, not an afterthought.
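A compact sketch of these ideas in plain Elixir, outside OTP: an isolated process loops over its own state, messages are tagged tuples, and a monitor turns a crash into data the caller can handle.

```elixir
defmodule Counter do
  def start, do: spawn(fn -> loop(0) end)

  defp loop(count) do
    receive do
      {:increment, by} ->
        loop(count + by)

      {:get, caller} ->
        send(caller, {:count, count})
        loop(count)

      :crash ->
        exit(:boom)
    end
  end
end

pid = Counter.start()
ref = Process.monitor(pid)          # one-way: we get a message, not an exit
send(pid, {:increment, 3})
send(pid, {:get, self()})

receive do
  {:count, n} -> IO.puts("count: #{n}")
end

send(pid, :crash)

receive do
  {:DOWN, ^ref, :process, ^pid, reason} -> IO.puts("worker died: #{inspect(reason)}")
end
```

Note that the count never leaves the process except as a copied message; nothing else can mutate it.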
How this fits in the project Every piece of the dashboard is a process: the pipeline stages, one LiveView per connected user, and the supervisors that restart them, and the same model appears repeatedly in BEAM systems.
Definitions & key terms
- Process: a lightweight, isolated actor with its own heap and mailbox.
- Mailbox: the per-process FIFO queue of incoming messages; selective receive can match specific patterns.
- Link: a bidirectional failure relationship that propagates exits.
- Monitor: a one-way watch that delivers a :DOWN message when the observed process dies.
- Supervisor: a process that restarts children according to a declared strategy.
Mental model diagram
[Incoming message] -> [Mailbox] -> [Receive loop] -> [New state + outgoing messages]
How it works (step-by-step, with invariants and failure modes)
- Identify inputs and their constraints.
- Apply the core rules of the concept.
- Validate outputs and error states.
- Invariant: the system preserves isolation and deterministic behavior.
- Failure modes: overload, incorrect state transitions, missing supervision.
Minimal concrete example
Small example flow using messages and state updates.
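One possible way to express the same counter flow with OTP conventions, as a sketch: a GenServer owns the state, the client API only sends tagged messages, and the callbacks run one message at a time inside the owning process.

```elixir
defmodule Counter do
  use GenServer

  # Client API: runs in the caller's process and only sends messages.
  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, 0, opts)
  def increment(pid, by), do: GenServer.cast(pid, {:increment, by})
  def value(pid), do: GenServer.call(pid, :get)

  # Server callbacks: run inside the Counter process, one message at a time.
  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_cast({:increment, by}, count), do: {:noreply, count + by}

  @impl true
  def handle_call(:get, _from, count), do: {:reply, count, count}
end

# Usage:
# {:ok, pid} = Counter.start_link()
# Counter.increment(pid, 3)
# Counter.value(pid)  # => 3
```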
Common misconceptions
- Confusing representation with runtime behavior.
- Assuming failures are rare instead of expected.
Check-your-understanding questions
- Explain the concept in your own words.
- Predict the outcome of a simple failure scenario.
- Why is this concept crucial for reliability?
Check-your-understanding answers
- It defines how the system behaves under concurrency.
- The supervisor should recover failed components.
- Without it, failure handling becomes ad hoc and unreliable.
Real-world applications
- High-concurrency services
- Fault-tolerant backends
- Real-time pipelines
Where you’ll apply it
- In this project’s core runtime loop and error handling.
References
- Official OTP and BEAM documentation for this concept.
Key insights This concept is the lever that makes BEAM systems resilient and scalable.
Summary Mastering this concept makes the project predictable and robust.
Homework/Exercises to practice the concept
- Draw a failure path and its recovery.
- Design a message flow for a small subsystem.
Solutions to the homework/exercises
- The failure should trigger a supervisor restart.
- The flow should isolate state and avoid shared mutation.
3. Project Specification
3.1 What You Will Build
Build a real-time operations dashboard: a backpressured pipeline that ingests metric events and a LiveView that renders the aggregated results. The service should have clear inputs, outputs, and failure behavior, be observable and testable, and demonstrate the core BEAM concepts for this project.
3.2 Functional Requirements
- Validated Input: Reject malformed or out-of-range values.
- Deterministic Output: Same input always yields the same output.
- Fault Behavior: Defined recovery path when a worker crashes.
3.3 Non-Functional Requirements
- Performance: Must handle expected input rate without unbounded queues.
- Reliability: Must recover from injected failures.
- Usability: Outputs are explicit and reproducible.
3.4 Example Usage / Output
$ run-project --demo
[expected output]
3.5 Data Formats / Schemas / Protocols
- Inputs: CLI arguments or small config file
- Outputs: structured logs + CLI status
3.6 Edge Cases
- Empty input
- Over-limit bursts
- Process crashes mid-operation
3.7 Real World Outcome
3.7.1 How to Run (Copy/Paste)
- Build: mix compile (or rebar3 compile)
- Run: ./bin/project --demo
3.7.2 Golden Path Demo (Deterministic)
A known input produces a known, testable output.
3.7.3 If CLI: exact terminal transcript
$ ./project --demo
[result line 1]
[result line 2]
4. Solution Architecture
4.1 High-Level Design
[Client] -> [Router] -> [Worker] -> [State]
4.2 Key Components
| Component | Responsibility | Key Decisions |
|---|---|---|
| Router | Dispatch requests | Choose routing strategy |
| Worker | Handle operations | Isolation and failure handling |
| Storage | Maintain state | ETS vs process state |
4.3 Data Structures (No Full Code)
- Process state maps
- ETS tables for shared data
- Message structs with tagged fields
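Illustrative shapes for these structures; every field name and table name here is an assumption for the sketch, not a fixed schema.

```elixir
# A tagged event struct keeps message shapes explicit.
defmodule Dashboard.Event do
  @enforce_keys [:kind, :value, :at]
  defstruct [:kind, :value, :at]
end

# Per-worker state held as a plain map inside the owning process.
state = %{window: [], totals: %{}, last_flush: System.monotonic_time(:millisecond)}

# Shared, read-heavy data (e.g. latest aggregates) can live in a named ETS table.
:ets.new(:dashboard_metrics, [:named_table, :set, :public, read_concurrency: true])
:ets.insert(:dashboard_metrics, {:requests_per_sec, 120})
:ets.lookup(:dashboard_metrics, :requests_per_sec)  # => [requests_per_sec: 120]
```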
4.4 Algorithm Overview
Key Algorithm: Core Service Loop
- Parse input into a message.
- Dispatch to target process.
- Update state and emit output.
Complexity Analysis:
- Time: O(n) in number of messages
- Space: O(k) in number of active keys/processes
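One way to sketch this loop, assuming Registry-based routing to per-key worker processes; the registry name, the comma-separated input format, and the message shape are all assumptions.

```elixir
defmodule Dashboard.Router do
  # 1. Parse the raw line into a tagged message, rejecting malformed input.
  # 2. Look up the worker responsible for the key.
  # 3. Dispatch; the worker updates its own state and emits output.
  def handle_line(line) do
    with {:ok, %{key: key} = msg} <- parse(line),
         [{pid, _value}] <- Registry.lookup(Dashboard.Registry, key) do
      GenServer.cast(pid, {:event, msg})
    else
      _ -> {:error, :invalid_or_unroutable}
    end
  end

  defp parse(line) do
    with [key, raw] <- String.split(line, ","),
         {value, ""} <- Integer.parse(raw) do
      {:ok, %{key: key, value: value}}
    else
      _ -> {:error, :malformed}
    end
  end
end
```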
5. Implementation Guide
5.1 Development Environment Setup
# Use standard OTP tooling (mix or rebar3)
5.2 Project Structure
project-root/
├── lib/
│ ├── router.ex
│ ├── worker.ex
│ └── storage.ex
├── test/
│ └── project_test.exs
└── README.md
5.3 The Core Question You’re Answering
“How do I make this service recover automatically without global failure?”
5.4 Concepts You Must Understand First
- Review the concepts above and ensure you can explain them clearly.
5.5 Questions to Guide Your Design
- How will you partition work across processes?
- Where is state stored and how is it protected?
- What happens when a worker crashes?
5.6 Thinking Exercise
Sketch the message flow and failure paths before coding.
5.7 The Interview Questions They’ll Ask
- “Why does this design scale better than shared-memory locks?”
- “How do you detect and recover from failures?”
- “How do you prevent mailbox buildup?”
5.8 Hints in Layers
Hint 1: Start with one worker process and a single message type.
Hint 2: Add supervision and confirm restart behavior.
Hint 3: Add routing and concurrency only after correctness.
Hint 4: Validate output with scripted test vectors.
5.9 Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| Core BEAM | “Programming Erlang” | Processes chapter |
| OTP | “Designing for Scalability with Erlang/OTP” | Supervision chapter |
5.10 Implementation Phases
Phase 1: Foundation (2-4 hours)
- Build a single worker process
- Define message shapes
Phase 2: Core Functionality (4-8 hours)
- Add routing and state storage
- Validate correctness on test cases
Phase 3: Polish & Edge Cases (2-4 hours)
- Add failure injection tests
- Document recovery behavior
5.11 Key Implementation Decisions
| Decision | Options | Recommendation | Rationale |
|---|---|---|---|
| State location | process vs ETS | process | simpler, safe |
| Supervision | one_for_one vs one_for_all | one_for_one | isolate failures |
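A sketch of the recommended combination (process-held state, one_for_one supervision); the child modules are placeholders standing in for your own worker and pipeline processes.

```elixir
defmodule Dashboard.Supervisor do
  use Supervisor

  def start_link(opts), do: Supervisor.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    children = [
      # Registry used to route events to per-key workers.
      {Registry, keys: :unique, name: Dashboard.Registry},
      # Placeholder children; each must export child_spec/1
      # (use GenServer / use GenStage provides one).
      Dashboard.Aggregator,
      {Dashboard.Worker, []}
    ]

    # one_for_one: a crashing child is restarted alone, siblings keep their state.
    Supervisor.init(children, strategy: :one_for_one)
  end
end
```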
6. Testing Strategy
6.1 Test Categories
| Category | Purpose | Examples |
|---|---|---|
| Unit Tests | Validate core logic | message parsing |
| Integration Tests | End-to-end flow | demo scenario |
| Failure Tests | Crash recovery | kill worker |
6.2 Critical Test Cases
- Normal path: request -> response
- Crash path: worker exit -> restart
- Overload: burst input -> bounded queue
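One way to script the crash path deterministically with ExUnit; the registered worker name and the short restart wait are assumptions about your supervision setup.

```elixir
defmodule CrashRecoveryTest do
  use ExUnit.Case

  test "a crashed worker is restarted by its supervisor" do
    pid = Process.whereis(Dashboard.Worker)
    assert is_pid(pid)

    ref = Process.monitor(pid)
    Process.exit(pid, :kill)
    assert_receive {:DOWN, ^ref, :process, ^pid, :killed}

    # Give the supervisor a moment to restart the child, then confirm a
    # fresh process is registered under the same name.
    Process.sleep(50)
    new_pid = Process.whereis(Dashboard.Worker)
    assert is_pid(new_pid) and new_pid != pid
  end
end
```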
6.3 Test Data
inputs: demo messages
expected: stable outputs
7. Common Pitfalls & Debugging
7.1 Frequent Mistakes
| Pitfall | Symptom | Solution |
|---|---|---|
| Missing supervisor | app dies on crash | add supervisor |
| Oversized mailbox | latency spikes | split processes |
| Unbounded state | memory growth | add eviction |
7.2 Debugging Strategies
- Use observer (:observer.start/0) to inspect process counts and queues
- Add structured logs on message receive
7.3 Performance Traps
- Avoid heavy work in a single process
8. Extensions & Challenges
8.1 Beginner Extensions
- Add structured logging
- Add basic metrics
8.2 Intermediate Extensions
- Add distribution across two nodes
- Add persistence layer
8.3 Advanced Extensions
- Add rolling upgrades
- Add multi-region replication
9. Real-World Connections
9.1 Industry Applications
- Chat, presence, and realtime monitoring systems
9.2 Related Open Source Projects
- Phoenix Channels
- GenStage-based pipelines
9.3 Interview Relevance
- OTP supervision and process model questions
10. Resources
10.1 Essential Reading
- “Programming Erlang” by Joe Armstrong
- “Designing for Scalability with Erlang/OTP” by Cesarini/Thompson
10.2 Video Resources
- Conference talks on OTP supervision and BEAM concurrency