Project 24: Trust-Centered Assistant UX Studio (Human Experience)
Build a user interface focused on trust: transparent state, failure explanations, confidence indicators, autonomy controls, rollback, and decision audit trails.
Quick Reference
| Attribute | Value |
|---|---|
| Difficulty | Level 4: Expert |
| Time Estimate | 20-35 hours |
| Main Programming Language | TypeScript |
| Alternative Programming Languages | Python, Swift |
| Coolness Level | Level 4: Hardcore Tech Flex |
| Business Potential | 4. The “Open Core” Infrastructure |
| Prerequisites | product design fundamentals, state management, observability basics |
| Key Topics | conversation state UX, trust signals, autonomy controls, rollback flows |
1. Learning Objectives
- Design clear conversational state representations.
- Explain assistant failures in actionable language.
- Display confidence and provenance without overload.
- Give users autonomy-level controls and safe defaults.
- Implement rollback and decision audit trail UX.
2. Theoretical Foundation
2.1 Trust as a Product Constraint
Users trust assistants when systems are predictable, transparent, and reversible. Hidden logic and silent failures erode trust quickly. Strong UX for AI assistants must expose what the system intends to do, why it intends to do it, and how users can intervene.
2.2 Explainability vs Cognitive Load
Too little explanation creates fear. Too much detail creates fatigue. Good trust UX uses layered disclosure: concise action cards first, deeper rationale and evidence on demand.
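The layered-disclosure idea above can be sketched in TypeScript. This is an illustrative model, not a prescribed API: the `LayeredExplanation` shape and the three-tier scheme are assumptions for the sketch.

```typescript
// Layered disclosure sketch (names are illustrative assumptions):
// tier 1 is the concise action card, tiers 2 and 3 are revealed on demand.
interface LayeredExplanation {
  summary: string;      // tier 1: one-line action card text
  rationale: string;    // tier 2: why the assistant proposes this
  evidence: string[];   // tier 3: sources or signals behind the rationale
}

type DisclosureLevel = 1 | 2 | 3;

function render(exp: LayeredExplanation, level: DisclosureLevel): string {
  const tiers = [
    exp.summary,
    `${exp.summary}\nWhy: ${exp.rationale}`,
    `${exp.summary}\nWhy: ${exp.rationale}\nEvidence: ${exp.evidence.join("; ")}`,
  ];
  return tiers[level - 1];
}
```

The point of the structure is that the default render (tier 1) never forces the full rationale on the user; deeper tiers exist but cost a click.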
3. Project Specification
3.1 What You Will Build
A trust console with:
- state timeline
- action proposal cards
- confidence/provenance badges
- autonomy mode controls
- rollback center
- audit trail explorer
3.2 Functional Requirements
- Show current conversation goals and active tasks.
- Present action proposals before execution in review-required mode.
- Attach confidence and source quality indicators.
- Support undo for reversible actions.
- Record user approvals/rejections in audit history.
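The requirements above imply a small proposal lifecycle. A minimal sketch, assuming hypothetical type and field names (the source does not prescribe a schema):

```typescript
// Proposal lifecycle sketch: propose -> review -> approved/rejected,
// with every user decision appended to an audit log.
type ProposalStatus = "proposed" | "approved" | "rejected" | "executed";

interface ActionProposal {
  id: string;
  description: string;
  confidence: number;   // 0..1, surfaced as a badge in the UI
  reversible: boolean;
  status: ProposalStatus;
}

interface AuditEvent {
  proposalId: string;
  decision: "approved" | "rejected";
  at: Date;
}

const auditLog: AuditEvent[] = [];

function review(p: ActionProposal, approve: boolean): ActionProposal {
  auditLog.push({
    proposalId: p.id,
    decision: approve ? "approved" : "rejected",
    at: new Date(),
  });
  return { ...p, status: approve ? "approved" : "rejected" };
}
```

Keeping the audit append inside `review` (rather than as a separate call) is one way to satisfy the "record user approvals/rejections" requirement by construction.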
3.3 Non-Functional Requirements
- Clarity: understandable by non-technical users.
- Safety: high-risk actions require review by default.
- Accessibility: clear status labels and state changes.
3.4 Real World Outcome
```
$ uxctl demo --scenario "schedule-and-email"
[UI] autonomy_mode=assistive
[Propose] move meeting + draft email
[Trust] confidence=0.74 sources=3
[Review] calendar move approved, email send rejected
[Rollback] undo successful within 120s window
[Audit] timeline updated with user decision log
```
4. Solution Architecture
4.1 High-Level Design
```
Assistant Runtime -> Trust API -> UI State Store -> User Controls
                         \-> Audit Timeline Service
```
4.2 Key Components
| Component | Responsibility | Key Decisions |
|---|---|---|
| Action cards | present proposed actions | concise rationale + risk labels |
| Autonomy controls | mode switching | per-scope permissions |
| Rollback center | undo actions | compensation plan support |
| Audit explorer | timeline visibility | filterable event trail |
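The "per-scope permissions" decision for the autonomy controls can be sketched as a scope-to-mode map. The scope names and the three-mode split below are assumptions for illustration, not part of the specification:

```typescript
// Per-scope autonomy sketch: each action scope maps to a mode, and the
// mode decides whether the action needs user review before execution.
type AutonomyMode = "manual" | "assistive" | "autonomous";

const scopePermissions: Record<string, AutonomyMode> = {
  "calendar.move": "autonomous",    // low risk: execute directly
  "email.send": "assistive",        // medium risk: propose, then review
  "payments.transfer": "manual",    // high risk: user performs the action
};

function requiresReview(scope: string): boolean {
  // Safe default: an unknown scope always requires review.
  const mode = scopePermissions[scope] ?? "manual";
  return mode !== "autonomous";
}
```

The fallback to `"manual"` for unknown scopes is the code-level expression of the "safe defaults" requirement.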
5. Implementation Guide
5.1 The Core Question You’re Answering
“How do I design assistant UX that keeps users informed, in control, and confident when autonomy increases?”
5.2 Concepts You Must Understand First
- Human-in-the-loop interaction patterns
- Explainability design heuristics
- Reversible action modeling
- Trust signal calibration
5.3 Questions to Guide Your Design
- Which actions should default to approval-required?
- What confidence format avoids false precision?
- How should error explanations differ by failure type?
5.4 Thinking Exercise
Draft two flows for the same high-risk tool action: full autonomy vs approval-required. Compare risk and usability outcomes.
5.5 The Interview Questions They’ll Ask
- How do you communicate uncertainty effectively?
- What makes rollback design trustworthy?
- How do you avoid overwhelming users with explanations?
- Which trust indicators matter most in practice?
- How does auditability improve adoption?
5.6 Hints in Layers
Hint 1: begin with action cards and one confidence signal.
Hint 2: add layered detail instead of dense default explanations.
Hint 3: implement undo for reversible actions first.
Hint 4: connect every UI action to a trace/audit event.
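Hints 1 and 2 can be combined into a first sketch: an action card with one bucketed confidence signal, which also answers the earlier question about avoiding false precision. The band thresholds here are arbitrary assumptions to calibrate for your own product:

```typescript
// Action card sketch: a single bucketed confidence signal avoids the
// false precision of showing raw scores like "0.7432" to users.
type ConfidenceBand = "low" | "medium" | "high";

function toBand(confidence: number): ConfidenceBand {
  // Thresholds are illustrative; tune them against real outcomes.
  if (confidence < 0.4) return "low";
  if (confidence < 0.75) return "medium";
  return "high";
}

interface ActionCard {
  title: string;
  rationale: string;          // one concise sentence (Hint 2)
  confidence: ConfidenceBand; // one signal, not a dashboard (Hint 1)
}

function makeCard(title: string, rationale: string, raw: number): ActionCard {
  return { title, rationale, confidence: toBand(raw) };
}
```
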
5.7 Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| Interface patterns | “Designing Interfaces” | pattern chapters |
| User-centered iteration | “The Pragmatic Programmer” | feedback and iteration |
| Reliable architecture | “Clean Architecture” | use-case boundaries |
5.8 Common Pitfalls and Debugging
Problem 1: users cannot explain assistant actions
- Why: rationale hidden in logs only.
- Fix: concise rationale on action cards.
- Quick test: user can answer “why” in one click.
Problem 2: undo fails for external effects
- Why: no compensation flow design.
- Fix: explicit irreversible-action warnings and compensating actions.
- Quick test: rollback drills for each mutating action type.
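The compensation fix above can be modeled explicitly. A sketch with hypothetical names: each mutating action declares up front whether it has an inverse, a compensating action, or is irreversible with a warning.

```typescript
// Rollback sketch: reversible actions carry an inverse, external effects
// carry a compensating action, and irreversible ones carry a warning.
interface UndoPlan {
  kind: "inverse" | "compensate" | "irreversible";
  run?: () => void;     // the inverse or compensating action
  warning?: string;     // shown before executing an irreversible action
}

function rollback(plan: UndoPlan): string {
  switch (plan.kind) {
    case "inverse":
      plan.run?.();
      return "undone";
    case "compensate":
      plan.run?.();
      return "compensated";
    case "irreversible":
      return `cannot undo: ${plan.warning ?? "no compensation available"}`;
  }
}
```

Forcing every action to pick a `kind` at design time is what makes the "rollback drills for each mutating action type" quick test possible: an action with no plan fails to compile into the flow.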
5.9 Definition of Done
- Trust console shows state, rationale, confidence, and provenance
- Autonomy levels are user-controllable with safe defaults
- Rollback works for reversible operations
- Decision audit trails are human-readable and complete