Project 2: System Monitor Dashboard

Build a production-minded implementation of “System Monitor Dashboard” focused on polling architecture, threshold logic, and stable rendering.

Quick Reference

Attribute | Value
Difficulty | Level 3
Time Estimate | 10-14h
Main Programming Language | TypeScript
Alternative Programming Languages | JavaScript, C#, Python
Coolness Level | Level 3 (Genuinely clever)
Business Potential | Level 3 (Service & Support)
Prerequisites | Event-driven architecture, JSON schemas, plugin lifecycle basics
Key Topics | polling architecture, threshold logic, stable rendering

1. Learning Objectives

By completing this project, you will:

  1. Design a deterministic state model for the action flow.
  2. Implement validated configuration and persistence boundaries.
  3. Build clear user feedback for success, pending, and failure states.
  4. Define a repeatable debugging and release-readiness workflow.

2. All Theory Needed (Per-Concept Breakdown)

2.1 Lifecycle and Event Semantics

  • Fundamentals: The Stream Deck runtime treats each action instance as a context-scoped interaction loop. You must reason in terms of appearance, interaction, state transition, and disappearance. This prevents hidden cross-instance coupling and timer/subscription leaks.
  • Deep Dive into the concept: For System Monitor Dashboard, lifecycle correctness means your logic is resistant to rapid profile switches, repeated key events, and delayed external responses. Model interactions with explicit transitions and ensure every asynchronous path is tied to context identity and cleanup behavior. Event bursts should never mutate stale contexts. Teardown paths must be first-class, not afterthoughts. Invariants should include: one state object per context, one active subscription set per context, and one deterministic renderer output for each canonical state snapshot.
  • How this fits into the project: The project cannot pass definition-of-done without lifecycle-safe behavior under stress.
  • Definitions & key terms:
    • Context: unique action instance identity.
    • Transition: controlled move between valid states.
    • Invariant: condition that must always remain true.
  • Mental model diagram:
Event -> Validate -> Reduce -> Persist -> Render
  ^                                         |
  |-------------- Cleanup <-----------------|
  • How it works:
    1. Receive event with context.
    2. Validate payload and current state compatibility.
    3. Reduce into next state.
    4. Persist relevant settings.
    5. Render minimal-diff feedback.
    6. Cleanup resources on disappearance.
  • Minimal concrete example:
TYPESCRIPT (sketch)
const next = reduce(state, event);
if (next !== state) {
  persist(next);
  render(next);
}
  • Common misconceptions:
    • One action UUID implies one runtime object. (False)
    • Cleanup can be delayed until plugin shutdown. (Unsafe)
  • Check-your-understanding questions:
    1. Why is context required in every log line?
    2. Which bug appears when stale async callbacks update current state?
    3. Why must teardown be explicit?
  • Check-your-understanding answers:
    1. It isolates concurrent action instances.
    2. Out-of-order overwrites and ghost updates.
    3. Prevent leaks and undefined behavior after disappearance.
  • Real-world applications: Any multi-instance hardware control workflow.
    • Where you’ll apply it: This project’s action handlers and all subsequent projects.
  • References: Stream Deck SDK getting-started and lifecycle docs.
  • Key insights: Lifecycle correctness scales better than patch-based bug fixing.
  • Summary: State machines beat callback improvisation.
  • Homework/Exercises to practice the concept:
    1. Draw transition table for three failure scenarios.
    2. Simulate rapid appear/disappear events.
  • Solutions to the homework/exercises:
    1. Ensure each failure has explicit terminal or recoverable path.
    2. Verify no orphaned intervals or subscriptions remain.
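The lifecycle invariants above (one state object per context, explicit teardown, no stale-context mutation) can be sketched in TypeScript as a per-context state map. The handler names and `ContextState` shape here are illustrative assumptions, not the Stream Deck SDK API.

```typescript
// One state object per context; teardown removes every resource for that context.
type ContextState = {
  status: "idle" | "active";
  timer?: ReturnType<typeof setInterval>;
};

const contexts = new Map<string, ContextState>();

function onWillAppear(context: string): void {
  // Invariant: at most one state object per context, even on repeated appears.
  if (!contexts.has(context)) {
    contexts.set(context, { status: "idle" });
  }
}

function onWillDisappear(context: string): void {
  // Teardown is first-class: clear timers before dropping state.
  const state = contexts.get(context);
  if (state?.timer !== undefined) clearInterval(state.timer);
  contexts.delete(context);
}

function onKeyDown(context: string): void {
  const state = contexts.get(context);
  if (!state) return; // Stale event for a vanished context: ignore, never mutate.
  state.status = state.status === "idle" ? "active" : "idle";
}
```

Because every handler looks its state up by context, two instances of the same action UUID can never contaminate each other, and a late event after disappearance is a harmless no-op.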

2.2 Configuration, Validation, and Safe Persistence

  • Fundamentals: Persisted plugin state is long-lived product behavior. Validation must happen at every boundary: PI input, backend receive, and migration load.
  • Deep Dive into the concept: System Monitor Dashboard depends on reliable configuration because user trust is built on predictable behavior across restarts and upgrades. Separate per-action settings from global settings. Keep secret-like material isolated from regular settings. Introduce schema versions and migration rules. Reject invalid partial updates rather than tolerating silent corruption. For concurrency, include revision or ordering metadata to avoid stale writes overriding newer intent.
  • How this fits into the project: Ensures changes in settings do not destabilize runtime behavior.
  • Definitions & key terms:
    • Schema version
    • Migration
    • Revision-safe write
  • Mental model diagram:
PI Input -> Merge -> Validate -> Commit -> Observe
  • How it works:
    1. Merge incoming delta into canonical state.
    2. Validate against schema.
    3. Apply migration if needed.
    4. Commit and emit controlled update events.
  • Minimal concrete example:
TYPESCRIPT (sketch)
const merged = merge(prev, delta);
if (!schemaValid(merged)) throw new Error("invalid settings");
save(merged);
  • Common misconceptions:
    • Compile-time types replace runtime validation. (False)
    • Global settings can safely hold everything. (Not maintainable)
  • Check-your-understanding questions:
    1. Why is partial update merge risky without validation?
    2. Why version settings schema?
    3. How do you prevent stale writes?
  • Check-your-understanding answers:
    1. Required fields can disappear and create invalid states.
    2. Enables safe upgrades and compatibility.
    3. Use revision/order checks before commit.
  • Real-world applications: Marketplace plugins that survive frequent updates.
  • Where you’ll apply it: PI configuration and runtime persistence in this project.
  • References: Stream Deck settings guide and schema-validation patterns.
  • Key insights: State integrity is a feature users feel immediately.
  • Summary: Validate early, validate often, migrate deliberately.
  • Homework/Exercises to practice the concept:
    1. Define v1 and v2 setting shapes.
    2. Write migration acceptance criteria.
  • Solutions to the homework/exercises:
    1. Include defaults, required fields, and optional extensions.
    2. Preserve semantics and fail safely on invalid legacy data.
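One way to express the v1-to-v2 migration exercise above in TypeScript. The field names (`intervalMs`, `thresholdPct`) and default values are illustrative assumptions; the pattern that matters is version detection, semantic-preserving upgrade, and safe normalization of invalid legacy data.

```typescript
// v1 stored a bare interval; v2 renames it and adds a required threshold.
type SettingsV2 = { schemaVersion: 2; intervalMs: number; thresholdPct: number };

const DEFAULTS: SettingsV2 = { schemaVersion: 2, intervalMs: 1000, thresholdPct: 80 };

function migrate(raw: unknown): SettingsV2 {
  // Fail safe on garbage: normalize to defaults rather than propagate corruption.
  if (typeof raw !== "object" || raw === null) return { ...DEFAULTS };
  const s = raw as Record<string, unknown>;

  if (s.schemaVersion === 2 && typeof s.intervalMs === "number" && typeof s.thresholdPct === "number") {
    return { schemaVersion: 2, intervalMs: s.intervalMs, thresholdPct: s.thresholdPct };
  }
  if (s.schemaVersion === 1 && typeof s.interval === "number") {
    // Preserve semantics: v1 "interval" carries over; new field gets a default.
    return { schemaVersion: 2, intervalMs: s.interval, thresholdPct: DEFAULTS.thresholdPct };
  }
  return { ...DEFAULTS }; // unknown shape: explicit reset, never silent partial state
}
```

Running `migrate` once at load time gives the rest of the plugin a single canonical settings shape to reason about.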

2.3 Feedback UX and Operational Readiness

  • Fundamentals: Hardware feedback must be glanceable and truthful. Release quality must be automated through validation and packaging gates.
  • Deep Dive into the concept: For System Monitor Dashboard, build a visual grammar that clearly distinguishes neutral, active, pending, warning, and error states. Avoid excessive updates that reduce clarity. Keep PI controls concise and action-oriented. Operationally, pair implementation with logs that include context and transition details. Use CI gates (validate, package checks, smoke tests) before publishing. Document rollback and troubleshooting paths.
  • How this fits into the project: Converts a working prototype into a production-ready plugin action.
  • Definitions & key terms:
    • Glanceability
    • Render diffing
    • Release gate
  • Mental model diagram:
State -> Visual Grammar -> User Trust
Code -> Validate -> Pack -> Release Confidence
  • How it works:
    1. Define visual state map.
    2. Render only meaningful changes.
    3. Emit structured logs for each transition.
    4. Pass automated release gates.
  • Minimal concrete example:
TYPESCRIPT (sketch)
if (severityChanged || valueDelta >= threshold) {
  render();
}
  • Common misconceptions:
    • More visual movement means better UX. (Often false)
    • Manual pre-release checks are enough. (High regression risk)
  • Check-your-understanding questions:
    1. Why can high-frequency redraw harm usability?
    2. What is the role of smoke tests after packaging?
    3. Why include context IDs in logs?
  • Check-your-understanding answers:
    1. Flicker/noise hides important signal.
    2. Packaging can introduce environment-specific failures.
    3. They make multi-instance debugging tractable.
  • Real-world applications: Production plugin support and incident triage.
    • Where you’ll apply it: This project’s final polish and release checklist.
  • References: PI UI guidelines and Stream Deck CLI docs.
  • Key insights: Trustworthy feedback and reliable delivery are one product loop.
  • Summary: UX and ops are one continuous quality pipeline.
  • Homework/Exercises to practice the concept:
    1. Define one visual style guide for this project.
    2. Write a three-stage release gate checklist.
  • Solutions to the homework/exercises:
    1. Include state colors, text limits, and icon semantics.
    2. Add validate, pack, smoke-install steps with pass/fail criteria.
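Tying the section together, here is a minimal TypeScript sketch of the diff-based render guard and the structured transition log described above. The `Snapshot` shape, the `RENDER_DELTA` value, and the log field names are illustrative assumptions.

```typescript
type Snapshot = { severity: "ok" | "warn" | "error"; value: number };

// Redraw only when the displayed value moves at least this much (glanceability).
const RENDER_DELTA = 5;

// Returns true when a redraw is worth the visual churn.
function shouldRender(prev: Snapshot, next: Snapshot): boolean {
  return prev.severity !== next.severity || Math.abs(next.value - prev.value) >= RENDER_DELTA;
}

// One parseable record per transition: context plus prev/next details.
function transitionLog(ctx: string, prev: Snapshot, next: Snapshot): string {
  return JSON.stringify({
    ctx,
    event: "transition",
    prev: prev.severity,
    next: next.severity,
    value: next.value,
  });
}
```

Keeping the guard and the log next to each other means every rendered change also leaves a trace, which is what makes multi-instance incident triage tractable later.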

3. Project Specification

3.1 What You Will Build

A complete implementation of System Monitor Dashboard with clearly modeled states, validated settings, deterministic rendering, and testable operational behavior.

Included:

  • one or more actions with coherent interaction model,
  • Property Inspector settings flow,
  • runtime persistence and recovery behavior,
  • diagnostic logging and reproducibility workflow.

Excluded:

  • proprietary platform internals,
  • unrelated native OS automation outside project scope.

3.2 Functional Requirements

  1. Action behavior: Action responds correctly to expected inputs.
  2. State consistency: No cross-context state contamination.
  3. Persistence correctness: Settings survive restart and updates.
  4. Error handling: Clear fallback for invalid config or external failures.

3.3 Non-Functional Requirements

  • Performance: Feedback updates remain stable and bounded.
  • Reliability: No orphaned timers/subscriptions after teardown.
  • Usability: PI controls and key feedback are self-explanatory.

3.4 Example Usage / Output

User places action on key -> configures in PI -> triggers interaction ->
state changes -> key feedback updates -> settings persist -> logs show trace.

3.5 Data Formats / Schemas / Protocols

  • Settings object includes schemaVersion, per-action config, and optional global linkage.
  • Event handling uses typed payload boundaries and explicit operation names.
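The settings object described above might look like the following TypeScript interface with a boundary guard; every field name beyond `schemaVersion` is an assumption for illustration, not a required schema.

```typescript
interface ActionSettings {
  schemaVersion: number;                          // enables load-time migration
  pollIntervalMs: number;                         // per-action polling cadence
  thresholds: { warnPct: number; errorPct: number };
  globalProfileId?: string;                       // optional global-settings linkage
}

// Boundary check: reject partial updates that would drop required fields.
function isValidSettings(s: Partial<ActionSettings>): s is ActionSettings {
  return (
    typeof s.schemaVersion === "number" &&
    typeof s.pollIntervalMs === "number" &&
    s.pollIntervalMs > 0 &&
    typeof s.thresholds?.warnPct === "number" &&
    typeof s.thresholds?.errorPct === "number"
  );
}
```

Running this guard at both the PI boundary and the backend receive boundary is what keeps an incomplete payload (edge case above) from ever becoming persisted state.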

3.6 Edge Cases

  • Action appears/disappears rapidly.
  • Settings payload arrives incomplete.
  • External dependency is unavailable.
  • Legacy settings require migration.

3.7 Real World Outcome

3.7.1 How to Run (Copy/Paste)

npm install
npm run build
streamdeck validate ./com.example.system-monitor-dashboard.sdPlugin
npm run watch

3.7.2 Golden Path Demo (Deterministic)

  1. Add action to Stream Deck profile.
  2. Configure minimal valid settings in PI.
  3. Trigger one expected interaction.
  4. Observe predictable state and feedback transitions.

3.7.3 If CLI: exact terminal transcript

$ npm run watch
[plugin] actionAppear ctx=demo-01
[plugin] event=primary_interaction ctx=demo-01
[plugin] state transition prev=idle next=active
[plugin] render ctx=demo-01 status=active

3.7.4 If GUI / Desktop

  • PI shows current settings, validation hints, and status summary.
  • Key feedback updates without requiring manual refresh.
  • Error states include one clear remediation step.

4. Solution Architecture

4.1 High-Level Design

Hardware Event -> Event Adapter -> Domain Reducer -> Persistence -> Renderer
                              \-> Telemetry

4.2 Key Components

Component | Responsibility | Key Decisions
Event Adapter | Normalize controller events | Context-first routing
Domain Reducer | Compute next state | Explicit transition table
Persistence Layer | Validate and save settings | Versioned schema + migrations
Renderer | Generate titles/images/feedback | Diff-based updates
Telemetry | Debug and support traces | Correlation IDs + event tags

4.3 Data Structures (No Full Code)

StateShape:
- contextId
- mode/status
- config
- revision
- diagnostics

4.4 Algorithm Overview

Key Algorithm: Event-to-Render Loop

  1. Parse and validate event.
  2. Resolve effective configuration.
  3. Reduce state via transition rules.
  4. Persist if state or config changed.
  5. Render if visual output changed.

Complexity Analysis:

  • Time: O(1) per event for local transitions, O(n) when iterating rules/assets.
  • Space: O(k) for active contexts.
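The reduce step of the loop above can be sketched as a table-driven reducer in TypeScript, which is also where the O(1)-per-event claim comes from. The state and event names are illustrative.

```typescript
type State = "idle" | "active" | "error";
type Event = "trigger" | "fail" | "reset";

// Explicit transition table: every valid move is visible in one place,
// and invalid moves fall through to the current state.
const TRANSITIONS: Record<State, Partial<Record<Event, State>>> = {
  idle:   { trigger: "active", fail: "error" },
  active: { trigger: "idle",   fail: "error" },
  error:  { reset: "idle" },
};

function reduce(state: State, event: Event): State {
  return TRANSITIONS[state][event] ?? state; // O(1) lookup per event
}
```

Keeping the table as data rather than branching logic makes the transition set easy to assert against in tests and to print in docs, which Hint 1 in the implementation guide relies on.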

5. Implementation Guide

5.1 Development Environment Setup

npm install
npm run build
streamdeck validate ./com.example.system-monitor-dashboard.sdPlugin

5.2 Project Structure

project-root/
├── src/
│   ├── actions/
│   ├── domain/
│   ├── adapters/
│   └── telemetry/
├── ui/
│   └── property-inspector/
├── tests/
│   ├── unit/
│   └── fixtures/
└── manifest.json

5.3 The Core Question You’re Answering

How do I make System Monitor Dashboard reliable, explainable, and shippable as a Stream Deck plugin feature?

5.4 Concepts You Must Understand First

  1. Lifecycle invariants
    • What must remain true across appear/interaction/disappear?
  2. Configuration contract
    • Which fields are required and versioned?
  3. Feedback semantics
    • How does each state map to user-visible output?

5.5 Questions to Guide Your Design

  1. Which transitions are valid, invalid, or recoverable?
  2. What is the minimum data needed to render a trustworthy key state?
  3. Which errors should be retried versus surfaced immediately?

5.6 Thinking Exercise

Before implementation, map one full success path and two failure paths, then define exact rendered output for each step.

5.7 The Interview Questions They’ll Ask

  1. What invariants did your state model enforce?
  2. How did you keep settings compatible across versions?
  3. How did you avoid race conditions in async updates?
  4. How did you validate release readiness?
  5. What would you refactor next and why?

5.8 Hints in Layers

Hint 1: Start with explicit states

  • Keep transition table visible in docs/tests.

Hint 2: Validate at boundaries

  • Validate both PI input and backend-received payload.

Hint 3: Emit structured logs

  • Log context, event, transition, and render outcome.

Hint 4: Gate releases

  • No package publish without validate + smoke checks.

5.9 Books That Will Help

Topic | Book | Chapter
Architecture boundaries | Clean Architecture | Ch. 20-22
Implementation quality | Code Complete | Ch. 18, 24
Pragmatic engineering | The Pragmatic Programmer | Ch. 2, 8
Safe model evolution | Refactoring | Ch. 11

5.10 Common Pitfalls and Debugging

Problem 1: State behaves differently after restart

  • Why: Missing migration/default application.
  • Fix: Add explicit load-time normalization.
  • Quick test: Restart with legacy settings fixture and compare output.

Problem 2: Action updates wrong key

  • Why: Context identity not propagated.
  • Fix: Ensure all handlers/renderers require context parameter.
  • Quick test: Place two instances and trigger one repeatedly.

5.11 Definition of Done

  • Core functionality works on reference scenarios.
  • Edge cases are tested and documented.
  • Settings are versioned and migration-safe.
  • Feedback states are deterministic and glanceable.
  • Build/validate/package workflow is reproducible.

6. Stretch Extensions

  1. Add plugin-level diagnostics export for support cases.
  2. Add performance counters for render/event throughput.
  3. Add localization-ready PI message keys.
  4. Add chaos test scenarios for integration failures.