Project 21: Agent Product Validation and ROI Studio

Validate demand, scope, and economics before writing production agent code.


Quick Reference

Attribute     | Value
------------- | -----
Difficulty    | Level 2: Intermediate
Time Estimate | 8-14 hours
Language      | TypeScript (alt: Python, Go)
Prerequisites | Product discovery basics, spreadsheet modeling
Key Topics    | JTBD, automation boundaries, ROI modeling, market sizing

Learning Objectives

  1. Convert customer interviews into explicit JTBD statements.
  2. Distinguish automation from augmentation and set risk-aware boundaries between them.
  3. Build ROI and market sizing models that survive conservative assumptions.
  4. Define a narrow MVP and explicit non-goals.

The Core Question You’re Answering

“Should this agent be built now, and what is the smallest scope with defensible ROI?”


Concepts You Must Understand First

Concept               | Why It Matters                            | Where to Learn
--------------------- | ----------------------------------------- | --------------
JTBD mapping          | Finds real progress users pay for         | The Mom Test Ch. 3-5
Painkiller vs vitamin | Prioritizes urgent demand                 | Obviously Awesome Ch. 2
ROI modeling          | Prevents false-positive product bets      | Lean Analytics Ch. 11
Competitor mapping    | Avoids entering crowded low-margin niches | Market analysis methods

Theoretical Foundation

Interview Evidence -> Job Statement -> Pain/Frequency Score -> Automation Fit -> ROI -> MVP Decision

A viable agent idea has: clear budget owner, frequent pain, measurable baseline, and bounded risk.
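The four viability criteria above can be expressed as a simple gate. This is a minimal sketch: the field names and the numeric threshold (five pain events per week) are illustrative assumptions, not part of the project spec.

```typescript
// Sketch of the viability gate: budget owner, frequent pain,
// measurable baseline, bounded risk. Thresholds are assumptions.
interface OpportunitySignal {
  budgetOwner: string | null;             // who pays; null = unknown
  painEventsPerWeek: number;              // how often the pain occurs
  baselineMinutesPerEvent: number | null; // measurable baseline, or null if none
  maxBlastRadius: "low" | "medium" | "high"; // worst plausible failure impact
}

function isViable(s: OpportunitySignal): boolean {
  return (
    s.budgetOwner !== null &&
    s.painEventsPerWeek >= 5 &&        // assumed "frequent" cutoff
    s.baselineMinutesPerEvent !== null &&
    s.maxBlastRadius !== "high"        // unbounded risk fails the gate
  );
}
```

An idea that fails any one criterion is parked, not argued over, which keeps the later scoring honest.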


Project Specification

What You’ll Build

A validation toolkit that ingests interview notes and process metrics, then produces:

  • Opportunity scorecards
  • Automation/augmentation boundary maps
  • ROI scenarios (best/base/worst)
  • Competitive positioning summary

Functional Requirements

  1. Structured JTBD capture
  2. Opportunity scoring rubric
  3. ROI calculator with sensitivity analysis
  4. Market sizing worksheet (SAM/SOM)
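Requirement 3 (the ROI calculator with sensitivity) can be sketched as a pure function plus scenario scaling. The input fields and the ±50% scaling factors below are assumptions for illustration, not prescribed parameters.

```typescript
// ROI as a pure function of a few labor-cost assumptions.
interface RoiInputs {
  hoursSavedPerMonth: number; // estimated from interview baselines
  hourlyCost: number;         // loaded labor cost
  monthlyAgentCost: number;   // inference + maintenance + oversight
}

function roiPct(i: RoiInputs): number {
  const benefit = i.hoursSavedPerMonth * i.hourlyCost;
  return ((benefit - i.monthlyAgentCost) / i.monthlyAgentCost) * 100;
}

// Sensitivity: scale only the benefit assumption; costs rarely shrink.
function scenarios(base: RoiInputs) {
  return {
    worst: roiPct({ ...base, hoursSavedPerMonth: base.hoursSavedPerMonth * 0.5 }),
    base: roiPct(base),
    best: roiPct({ ...base, hoursSavedPerMonth: base.hoursSavedPerMonth * 1.5 }),
  };
}
```

Keeping ROI a pure function of named inputs is what makes the "reproducible assumptions" requirement cheap to meet: rerunning with the same inputs always yields the same scenario table.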

Non-Functional Requirements

  • Reproducible assumptions
  • Explicit uncertainty labels
  • Shareable decision artifact
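"Explicit uncertainty labels" can be enforced structurally by making every assumption carry its confidence and source. The three-level confidence taxonomy here is an illustrative assumption, not a required design.

```typescript
// Every number in the model is an Assumption, never a bare literal.
interface Assumption {
  name: string;
  value: number;
  unit: string;
  confidence: "measured" | "estimated" | "guess"; // illustrative levels
  source: string; // interview ID, metric export, or analyst note
}

// Surface everything not backed by a measurement, for review
// before the worst-case ROI run.
function unverified(assumptions: Assumption[]): Assumption[] {
  return assumptions.filter((a) => a.confidence !== "measured");
}
```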

Real World Outcome

$ node p21_validate_roi.js --segment "it_helpdesk"
[jtbd] 4 priority jobs identified
[roi] base_case=+132% worst_case=+22%
[scope] mvp_focus="tier-1 account unlock + password reset"
[decision] proceed=true
[artifact] product_validation_packet.md

Architecture Overview

Data Intake -> Scoring Engine -> ROI Model -> Decision Report Generator

Implementation Guide

Phase 1: Discovery Inputs

  • Define interview schema and baseline metric inputs.
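One possible shape for the interview schema and baseline metrics, sketched in TypeScript. All field names here are assumptions; adapt them to whatever your interviews actually capture.

```typescript
// Phase 1 intake types. Field names are illustrative, not a fixed schema.
interface InterviewNote {
  intervieweeRole: string;
  jobStatement: string;         // "When X, I want Y, so I can Z"
  painFrequencyPerWeek: number;
  currentWorkaround: string;
  quotedBudgetOwner?: string;   // omitted when the interviewee didn't know
}

interface BaselineMetric {
  name: string;   // e.g. "avg ticket handle time"
  value: number;
  unit: string;
  source: string; // provenance, for reproducible assumptions
}

// A note only feeds scoring if the job recurs and names an outcome.
function isActionable(n: InterviewNote): boolean {
  return n.painFrequencyPerWeek > 0 && n.jobStatement.includes("so I can");
}
```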

Phase 2: Scoring and Economics

  • Implement score weighting and ROI sensitivity toggles.
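The score weighting step can be as small as a weight table and a dot product. The dimensions and weights below are placeholder assumptions to be tuned against real interview evidence.

```typescript
// Opportunity score = weighted sum of 0-10 dimension scores.
// Weights are illustrative and should sum to 1.
const WEIGHTS = { pain: 0.4, frequency: 0.3, automationFit: 0.2, budgetClarity: 0.1 } as const;

type Dimension = keyof typeof WEIGHTS;
type Scores = Record<Dimension, number>;

function opportunityScore(s: Scores): number {
  return (Object.keys(WEIGHTS) as Dimension[]).reduce(
    (sum, k) => sum + WEIGHTS[k] * s[k],
    0
  );
}
```

Numeric scoring like this is what makes the re-score consistency tests in the Testing Strategy possible; narrative rankings cannot be replayed.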

Phase 3: Decision Output

  • Generate decision memo with go/no-go and MVP boundary.
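The memo generator can start as a plain string template over a decision record. This is a minimal sketch; the fields and layout are assumptions matching the sample CLI output, not a required format.

```typescript
// Phase 3: render a go/no-go decision as a shareable artifact.
interface Decision {
  proceed: boolean;
  mvpFocus: string;        // the narrow wedge, e.g. one ticket category
  nonGoals: string[];      // explicit exclusions, per the anti-scope-drift pitfall
  worstCaseRoiPct: number; // the number stakeholders will challenge first
}

function renderMemo(d: Decision): string {
  return [
    "Go/No-Go Memo",
    `Decision: ${d.proceed ? "PROCEED" : "HOLD"}`,
    `MVP focus: ${d.mvpFocus}`,
    `Worst-case ROI: ${d.worstCaseRoiPct}%`,
    `Non-goals: ${d.nonGoals.join("; ")}`,
  ].join("\n");
}
```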

Testing Strategy

  • Re-score consistency tests
  • Assumption stress tests
  • Contradictory interview signal tests
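The re-score consistency test can be automated with a small helper that runs the scorer twice on identical inputs. A hypothetical sketch, assuming scorers take a numeric input array:

```typescript
// Re-score consistency: identical inputs must yield identical scores.
// The fresh copy on the second call also catches scorers that mutate input.
function assertDeterministic(score: (s: number[]) => number, input: number[]): void {
  const a = score(input);
  const b = score([...input]);
  if (a !== b) {
    throw new Error(`non-deterministic or mutating scorer: ${a} vs ${b}`);
  }
}
```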

Common Pitfalls & Debugging

Pitfall           | Symptom                     | Fix
----------------- | --------------------------- | ---
Confirmation bias | Every idea appears positive | Force baseline and worst-case ROI
Scope drift       | MVP turns into platform     | Add strict non-goals and reject list
Weak evidence     | Anecdotes dominate          | Require repeated job patterns

Interview Questions They’ll Ask

  1. How do you validate agent demand before prototyping?
  2. What distinguishes augmentation from automation?
  3. How do you estimate ROI with sparse data?
  4. Why do narrow vertical wedges win early?

Hints in Layers

  • Hint 1: Start with one user segment only.
  • Hint 2: Use numeric scoring, not narrative ranking.
  • Hint 3: Build worst-case ROI first.
  • Hint 4: Document why each non-goal is excluded.

Submission / Completion Criteria

Minimum Completion

  • One validated JTBD wedge with base/worst ROI

Full Completion

  • Competitor map + automation boundary + MVP scope

Excellence

  • Evidence-backed go/no-go memo accepted by stakeholders