Project 27: Plugin With Companion Desktop App

A Stream Deck plugin paired with a local companion desktop daemon that unlocks advanced automation, richer data pipelines, and durable defensibility.

Quick Reference

  • Difficulty: Level 5
  • Time Estimate: 28-48h
  • Main Programming Language: TypeScript
  • Alternative Programming Languages: Rust, Go, C#
  • Coolness Level: Level 5 (Defensive moat architecture)
  • Business Potential: Level 5 (High defensibility)
  • Prerequisites: Process boundary contracts, Local IPC security model, Companion lifecycle orchestration
  • Key Topics: Companion app architecture, local daemon orchestration, advanced automation

1. Learning Objectives

By completing this project, you will:

  1. Build a production-quality implementation of Plugin With Companion Desktop App.
  2. Apply concept boundaries around Process boundary contracts, Local IPC security model, and Companion lifecycle orchestration.
  3. Validate behavior with explicit outcomes and failure-mode tests.
  4. Produce evidence artifacts suitable for review, support, and iteration.

2. All Theory Needed (Per-Concept Breakdown)

2.1 Process boundary contracts

  • Fundamentals: This concept defines the first architectural boundary for this project. You should know the invariant conditions that must remain true during both normal operation and failure handling. In Stream Deck plugin work, the most useful mindset is to treat interaction paths as explicit contracts, not ad-hoc callbacks, so behavior remains deterministic under context churn and profile switching.
  • Deep Dive into the concept: For this project, Process boundary contracts is where correctness begins. Model state transitions explicitly, define allowed events, and reject illegal transitions early. Tie every side effect to context identity and traceability fields so debugging can reconstruct the full sequence. Design your test plan around race-prone paths first. Add failure classes and recovery transitions before polishing UX. This creates robust behavior under load and avoids hidden coupling across action instances.
  • How this fits into the project: This concept is the primary driver of runtime correctness in this project.
  • Definitions & key terms: invariant, transition contract, failure class, recovery path.
  • Mental model diagram:
Intent -> Validate -> Reduce -> Persist -> Render
  ^                                       |
  +--------------- Recover/Retry <--------+
  • How it works: model inputs, validate boundaries, reduce deterministic state, emit minimal side effects, then observe and recover.
  • Minimal concrete example:
PSEUDOCODE
if !isValid(event, state):
  return rejectWithHint()
next = reduce(state, event)
apply(next)
  • Common misconceptions: fast prototypes do not remove the need for explicit invariants.
  • Check-your-understanding questions: Which invalid transition causes highest user impact? Why?
  • Check-your-understanding answers: Any transition that mutates irreversible state without confirmation.
  • Real-world applications: production plugins that must survive long sessions and rapid profile switches.
  • Where you will apply it: project runtime handlers and teardown logic.
  • References: Stream Deck SDK docs + main sprint Theory Primer concepts 1/2/6.
  • Key insights: deterministic state design scales better than callback patching.
  • Summary: make invalid states unrepresentable and observable.
  • Homework/Exercises to practice the concept: draw one transition table and one failure table.
  • Solutions to the homework/exercises: each transition/failure should map to explicit UI feedback and test case.
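The transition-table idea above can be sketched as a deterministic reducer. The states, events, and helper names below are illustrative placeholders, not Stream Deck SDK types:

```typescript
// Hypothetical action lifecycle states and events for illustration.
type State = "idle" | "pending" | "done" | "failed";
type Event = "start" | "succeed" | "fail" | "reset";

// Transition table: only the pairs listed here are legal transitions.
const transitions: Record<State, Partial<Record<Event, State>>> = {
  idle: { start: "pending" },
  pending: { succeed: "done", fail: "failed" },
  done: { reset: "idle" },
  failed: { reset: "idle" },
};

// Deterministic reducer: returns the next state or an explicit rejection,
// so an illegal transition can never silently mutate state.
function reduce(
  state: State,
  event: Event
): { ok: true; next: State } | { ok: false; hint: string } {
  const next = transitions[state][event];
  if (next === undefined) {
    return { ok: false, hint: `illegal transition: ${event} in ${state}` };
  }
  return { ok: true, next };
}
```

The rejection branch is where the exercise's failure table plugs in: each rejected transition maps to explicit UI feedback and a test case.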

2.2 Local IPC security model

  • Fundamentals: Local IPC security model handles data integrity and long-lived behavior. Treat user configuration, entitlement, and environment state as a schema-governed domain.
  • Deep Dive into the concept: Build validation at every boundary: PI input, backend receive, persistence write, and migration load. Use explicit versioning and conflict policy so stale updates cannot silently win. If sensitive fields exist, isolate them through secret-safe adapters and redact all diagnostics. This prevents corruption, race bugs, and support incidents that usually appear only after release.
  • How this fits into the project: It ensures reliable persistence and predictable restart/recovery behavior.
  • Definitions & key terms: schema, migration, revision, redaction.
  • Mental model diagram:
Input Delta -> Merge -> Validate -> Version -> Commit -> Observe
  • How it works: merge safely, validate strictly, commit atomically, expose clear error feedback.
  • Minimal concrete example:
PSEUDOCODE
merged = merge(prev, delta)
assert schemaValid(merged)
save(merged, revision+1)
  • Common misconceptions: compile-time types are not runtime safety.
  • Check-your-understanding questions: Why must backend revalidate PI values?
  • Check-your-understanding answers: PI can be stale/malformed; backend is source of truth.
  • Real-world applications: paid plugins, sync features, and multi-account integrations.
  • Where you will apply it: persistence, entitlement checks, and API credential handling.
  • References: Stream Deck settings/secrets docs + RFC security guidance where applicable.
  • Key insights: data integrity is a user-visible feature.
  • Summary: strict boundaries prevent expensive post-release bugs.
  • Homework/Exercises to practice the concept: define v1/v2 schema and migration tests.
  • Solutions to the homework/exercises: include defaults, backward compatibility, and rollback path.
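The merge/validate/version/commit pipeline above might look like this in TypeScript. The `Settings` shape, `schemaValid` rules, and the in-memory `Stored` record are assumptions for illustration; a real plugin would persist through the SDK's settings APIs:

```typescript
// Hypothetical v1 settings schema for illustration.
interface Settings {
  version: 1;
  pollSeconds: number;
  endpoint: string;
}

// Runtime validation at the boundary: compile-time types are not runtime safety.
function schemaValid(s: unknown): s is Settings {
  const c = s as Settings;
  return (
    typeof c === "object" &&
    c !== null &&
    c.version === 1 &&
    Number.isFinite(c.pollSeconds) &&
    c.pollSeconds >= 1 &&
    typeof c.endpoint === "string" &&
    c.endpoint.length > 0
  );
}

interface Stored {
  revision: number;
  value: Settings;
}

// Commit a delta only if the writer saw the current revision, so a stale
// update cannot silently win; validate strictly before the atomic swap.
function commit(store: Stored, delta: Partial<Settings>, baseRevision: number): Stored {
  if (baseRevision !== store.revision) {
    throw new Error(`stale write: base ${baseRevision}, current ${store.revision}`);
  }
  const merged = { ...store.value, ...delta };
  if (!schemaValid(merged)) {
    throw new Error("schema validation failed; commit rejected");
  }
  return { revision: store.revision + 1, value: merged };
}
```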

2.3 Companion lifecycle orchestration

  • Fundamentals: Companion lifecycle orchestration translates implementation quality into user trust, adoption, and maintainability.
  • Deep Dive into the concept: Build release and support workflows in parallel with features. Define observability schema, packaging checks, and non-functional budgets (latency, memory, retry behavior). Add diagnostics UX so users can self-report actionable data. If this project targets commercial outcomes, connect operational quality to listing confidence and retention. For hardware-diverse use cases, ensure adaptive behavior is explicitly tested across capability subsets.
  • How this fits into the project: It provides the delivery and sustainment layer beyond core functionality.
  • Definitions & key terms: SLA mindset, supportability, release gate, degraded mode.
  • Mental model diagram:
Feature Build -> Validation Gate -> Pack/Release -> Observe -> Support -> Improve
  • How it works: define quality gates, ship artifacts, monitor signals, feed incidents back into design.
  • Minimal concrete example:
PSEUDOCHECKLIST
validate pass
smoke install pass
diagnostics export pass
rollback artifact present
  • Common misconceptions: once it works locally, release risk is low.
  • Check-your-understanding questions: Which quality gate catches packaging regressions earliest?
  • Check-your-understanding answers: deterministic CLI validate/pack + smoke install checks.
  • Real-world applications: marketplace submission, enterprise team deployment, paid support.
  • Where you will apply it: release checklist, diagnostics, and post-launch iteration.
  • References: Stream Deck CLI docs, marketplace docs, and reliability references.
  • Key insights: sustainable plugins are operated products, not one-off scripts.
  • Summary: build supportability and release discipline into the first milestone.
  • Homework/Exercises to practice the concept: create one pre-release gate matrix and one incident response runbook.
  • Solutions to the homework/exercises: each gate/runbook step must include pass/fail evidence.
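The gate matrix from the exercise can be expressed as executable checks. The gate names and trivial `run` bodies below are placeholders; real gates would wrap the Stream Deck CLI validate/pack steps and a smoke install:

```typescript
// A release gate: a named check that produces a pass/fail result.
interface Gate {
  name: string;
  run: () => boolean;
}

// Run every gate and collect evidence on both sides, so each gate
// maps to explicit pass/fail output rather than an implicit "it worked".
function runGates(gates: Gate[]): { passed: string[]; failed: string[] } {
  const passed: string[] = [];
  const failed: string[] = [];
  for (const g of gates) {
    (g.run() ? passed : failed).push(g.name);
  }
  return { passed, failed };
}

// Placeholder gates; in practice each would shell out to real checks.
const gates: Gate[] = [
  { name: "validate", run: () => true },           // e.g. manifest/schema check
  { name: "smoke-install", run: () => true },      // e.g. install + launch probe
  { name: "diagnostics-export", run: () => true }, // e.g. redacted log bundle
];
```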

3. Project Specification

3.1 What You Will Build

A Stream Deck plugin paired with a local companion desktop daemon that unlocks advanced automation, richer data pipelines, and durable defensibility.

3.2 Functional Requirements

  1. Implement all user-facing behaviors listed in the source sprint project.
  2. Preserve deterministic state behavior under context churn and restart.
  3. Enforce boundary validation for configuration and external events.
  4. Expose clear feedback for success, pending, and failure modes.
  5. Provide release/support artifacts aligned with project scope.

3.3 Non-Functional Requirements

  • Performance: Remain responsive under expected event rates for this project.
  • Reliability: No orphaned timers/subscriptions after teardown paths.
  • Usability: Users can understand current state from key/PI feedback quickly.
  • Supportability: Logs and diagnostics must be actionable and redacted.
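The supportability requirement (actionable, redacted logs) can be sketched as a small filter applied before any record leaves the process. The secret key names here are examples, not a complete list:

```typescript
// Keys treated as secrets; illustrative, extend per project.
const SECRET_KEYS = new Set(["token", "apiKey", "password"]);

// Replace secret-valued fields before a diagnostic record is logged or exported.
function redact(record: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(record)) {
    out[key] = SECRET_KEYS.has(key) ? "[REDACTED]" : value;
  }
  return out;
}
```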

3.4 Example Usage / Output

“How do I extend plugin capability beyond SDK boundaries while keeping local architecture secure, observable, and user-trustworthy?”

3.5 Real World Outcome

Your plugin communicates with a signed local companion process through a clearly defined IPC contract. Advanced workflows (e.g., long-running automation, local system integrations, complex caching) execute safely in the daemon while the plugin remains responsive. If the daemon is unavailable, the plugin enters degraded mode with recovery guidance instead of failing silently.


4. Solution Architecture

4.1 High-Level Design

Stream Deck Events -> Runtime Reducer -> Capability/Policy Layer -> Side Effects
        ^                                                          |
        +---------------------- Diagnostics/Observability <--------+

4.2 Key Components

  • Action Runtime Layer: Handles event routing, context scoping, and state reduction.
  • Policy Layer: Applies validation, feature gates, retries, throttles, and safety rules.
  • Feedback Layer: Produces deterministic key/dial/PI feedback from canonical state.
  • Persistence/Integration Layer: Manages settings, secrets, sync, and external API boundaries.

4.3 Design Questions (From Sprint)

  1. Architecture boundaries
    • Which logic must remain plugin-side for low-latency interactions?
    • Which tasks move to daemon for durability/performance?
  2. Failure handling
    • What happens when the daemon is down, stale, or version-mismatched?
    • How do you recover with minimal user intervention?

5. Thinking Exercise (Before Building)

Threat + Failure Model

Draw one diagram with security threats (spoofed IPC, stale binaries, privilege abuse) and operational failures (daemon crash, startup race, upgrade mismatch). Map one mitigation per risk.
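One way to start the risk-to-mitigation mapping is as a typed registry, so the type system enforces one mitigation per risk. The entries below paraphrase the prompt and are starting points, not a complete threat model:

```typescript
// Risks named in the exercise prompt; the Record type forces full coverage.
type Risk =
  | "spoofed-ipc"
  | "stale-binary"
  | "privilege-abuse"
  | "daemon-crash"
  | "startup-race"
  | "upgrade-mismatch";

// Illustrative mitigations; refine each during the exercise.
const mitigations: Record<Risk, string> = {
  "spoofed-ipc": "authenticate peers (e.g., per-session token) before accepting messages",
  "stale-binary": "verify companion signature and version at handshake",
  "privilege-abuse": "run the daemon with least privilege and an allowlisted capability set",
  "daemon-crash": "heartbeat timeout, then degraded mode, then supervised restart",
  "startup-race": "retry the handshake with backoff instead of assuming readiness",
  "upgrade-mismatch": "negotiate contract version and refuse incompatible peers",
};
```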


6. Implementation Hints in Layers

Hint 1: Starting Point

  • Define protobuf/JSON contract versions before daemon implementation.

Hint 2: Next Level

  • Add heartbeat and capability-negotiation handshake on startup.

Hint 3: Technical Details

PSEUDOFLOW
plugin boot -> daemon handshake(version, capabilities) -> secure channel established -> request/response + heartbeat -> degrade/recover on failures
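The handshake step in the flow above might be modeled with message shapes like these. The types and fields are hypothetical, not an SDK or wire format:

```typescript
// Hello message each side sends at startup.
interface Hello {
  contractVersion: number;
  capabilities: string[];
}

interface HandshakeResult {
  ok: boolean;
  agreedVersion?: number;
  sharedCapabilities?: string[];
  reason?: string;
}

// Negotiate version and capabilities before any request/response traffic.
// A mismatch is an explicit refusal (degrade/recover), never a silent guess.
function handshake(plugin: Hello, daemon: Hello): HandshakeResult {
  if (plugin.contractVersion !== daemon.contractVersion) {
    return {
      ok: false,
      reason: `version mismatch: ${plugin.contractVersion} vs ${daemon.contractVersion}`,
    };
  }
  const shared = plugin.capabilities.filter((c) => daemon.capabilities.includes(c));
  return { ok: true, agreedVersion: plugin.contractVersion, sharedCapabilities: shared };
}
```

Running with only the shared capability subset is what makes degraded mode a designed state rather than an accident.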

Hint 4: Tools/Debugging

  • Add separate log streams for plugin and daemon linked by shared correlation ID.

7. Verification and Testing Plan

  1. Unit-level: transition validity, schema validation, and policy decisions.
  2. Integration-level: PI/backend flow, persistence/restart, and dependency adapters.
  3. Failure-level: network/auth/retry/teardown behavior under injected faults.
  4. Release-level: validate/pack/smoke workflow and artifact integrity checks.

8. Interview Questions

  1. “Why split into plugin + companion instead of keeping everything in plugin runtime?”
  2. “How did you secure local IPC channels?”
  3. “How do you version API contracts between plugin and daemon?”
  4. “What degraded-mode behavior did you implement when daemon fails?”
  5. “How does this architecture create product defensibility?”

9. Common Pitfalls and Debugging

Problem 1: “Plugin waits forever when daemon crashes”

  • Why: Missing timeout and fallback state in IPC request path.
  • Fix: Add strict timeout, circuit-breaker, and degraded-state renderer.
  • Quick test: Kill daemon during active workflow and verify recovery UX appears within timeout budget.
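The timeout part of the fix can be sketched as a wrapper that resolves to a degraded-mode value instead of hanging. `withTimeout` is illustrative; a full fix would also add the circuit breaker and degraded-state renderer:

```typescript
// Race an IPC request against a timeout; on timeout, resolve to a fallback
// value that the feedback layer renders as degraded mode.
function withTimeout<T>(request: Promise<T>, ms: number, fallback: T): Promise<T> {
  let timer!: ReturnType<typeof setTimeout>;
  const timeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms);
  });
  // Clear the timer on either outcome so no orphaned handle survives teardown.
  return Promise.race([request, timeout]).finally(() => clearTimeout(timer));
}
```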

10. Definition of Done

  • Local companion daemon runs with explicit plugin-daemon contract.
  • Advanced automation workflows execute through daemon reliably.
  • Security model covers local message authentication and least privilege.
  • Degraded mode is clear and recoverable when the daemon is unavailable.
  • Contract versioning supports safe independent updates.
  • Architecture rationale demonstrates defensibility moat.

11. Additional Notes

  • Why this project matters: Companion architecture creates capability depth competitors cannot copy with simple macro packs.
  • Source sprint project file: P27-plugin-companion-desktop-app.md
  • Traceability: Generated from ### Project 27 in the sprint guide.