Project 9: APL Multimodal Companion with Voice Fallback
Use the Alexa Presentation Language (APL) to add visual confirmation and verification detail without breaking voice-only parity.
Quick Reference
| Attribute | Value |
|---|---|
| Difficulty | Level 2 (Intermediate) |
| Time Estimate | 1 week |
| Main Programming Language | TypeScript |
| Alternative Programming Languages | Python, Java |
| Key Topics | Viewport adaptation, voice-first design, fallback parity |
1. Learning Objectives
- Design APL documents that confirm key decisions visually.
- Keep spoken responses concise and action-oriented.
- Guarantee voice-only fallback can complete all target tasks.
2. All Theory Needed (Per-Concept Breakdown)
Concept A: Progressive Disclosure in Multimodal UX
- Speech delivers the summary; the screen provides verification detail.
- Avoid duplicating long paragraphs across modalities.
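The split above can be sketched as one decision payload feeding both modalities: a short spoken summary for confirmation by ear, and a fuller visual datasource for on-screen verification. The `TransferDecision` shape and field names below are illustrative assumptions, not part of any SDK.

```typescript
// Hypothetical transaction payload; field names are illustrative.
interface TransferDecision {
  amount: number;
  currency: string;
  recipient: string;
  scheduledDate: string;
  referenceId: string;
}

// Speech carries only the fields a user needs to confirm by ear.
function spokenSummary(d: TransferDecision): string {
  return `Sending ${d.amount} ${d.currency} to ${d.recipient} on ${d.scheduledDate}. Shall I confirm?`;
}

// The screen gets every field, including ones too tedious to read aloud
// (e.g. the reference ID), so the two modalities complement rather than
// duplicate each other.
function visualDatasource(d: TransferDecision) {
  return {
    transferData: {
      headline: `${d.amount} ${d.currency} to ${d.recipient}`,
      rows: [
        { label: "Recipient", value: d.recipient },
        { label: "Amount", value: `${d.amount} ${d.currency}` },
        { label: "Date", value: d.scheduledDate },
        { label: "Reference", value: d.referenceId },
      ],
    },
  };
}
```

Note that the reference ID appears only in the visual datasource: it is verification detail, not something worth reading aloud.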
Concept B: Capability-Aware Rendering
- Detect device capabilities before sending any APL directive.
- Define equivalent outputs for devices without a screen.
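A minimal capability check, assuming the shape of the Alexa request envelope: screen-capable devices advertise `Alexa.Presentation.APL` under `supportedInterfaces`, and the skill should branch before attaching a `RenderDocument` directive. The `RequestEnvelopeLike` type is a simplified sketch; check the ASK SDK types for the full definition.

```typescript
// Simplified slice of an Alexa request envelope (sketch; see ASK SDK types).
interface RequestEnvelopeLike {
  context?: {
    System?: {
      device?: {
        supportedInterfaces?: Record<string, unknown>;
      };
    };
  };
}

// True only when the device advertises APL support.
function supportsApl(envelope: RequestEnvelopeLike): boolean {
  const interfaces =
    envelope.context?.System?.device?.supportedInterfaces ?? {};
  return "Alexa.Presentation.APL" in interfaces;
}

// Branch before adding any APL directive; voice-only devices reach the
// same outcome through speech alone.
function buildResponse(envelope: RequestEnvelopeLike, speech: string) {
  return supportsApl(envelope)
    ? { speech, directiveType: "Alexa.Presentation.APL.RenderDocument" }
    : { speech };
}
```

Keeping the spoken `speech` identical on both branches is what makes the fallback-parity tests in Section 4 tractable.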
3. Architecture and Build Plan
- Build one transaction confirmation screen.
- Add viewport variants for small/medium/large displays.
- Implement fallback equivalence tests.
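One way to organize the viewport variants is a classifier that buckets the reported viewport width into the three display sizes and maps each to a template. The width thresholds and template file names below are placeholders; align them with the APL viewport profiles your skill actually targets (e.g. via the `alexa-viewport-profiles` package).

```typescript
type ViewportClass = "small" | "medium" | "large";

// Thresholds are illustrative assumptions, not official profile breakpoints.
function classifyViewport(widthDp: number): ViewportClass {
  if (widthDp < 600) return "small";
  if (widthDp < 1280) return "medium";
  return "large";
}

// Map each class to a template variant; file names are placeholders.
const templateByViewport: Record<ViewportClass, string> = {
  small: "confirmCompact.json",
  medium: "confirmStandard.json",
  large: "confirmExpanded.json",
};

function templateFor(widthDp: number): string {
  return templateByViewport[classifyViewport(widthDp)];
}
```

Centralizing the mapping in one function keeps the viewport-variant tests in Section 4 to a table of width/template pairs.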
4. Validation and Testing
- APL documents render correctly across all target viewport profiles.
- The voice-only path reaches the same task completion as the multimodal path.
- The spoken summary stays under the target duration.
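The duration check can be automated with a rough word-rate estimate. The rate below (about 2.5 words per second) and the 8-second target are assumptions for illustration; measure your actual TTS voice to calibrate both.

```typescript
// Assumed average TTS rate; calibrate against your actual voice model.
const WORDS_PER_SECOND = 2.5;

function estimateSpokenSeconds(text: string): number {
  const words = text.trim().split(/\s+/).filter(Boolean).length;
  return words / WORDS_PER_SECOND;
}

// Test gate: flag any summary that exceeds the target duration.
function withinTarget(text: string, maxSeconds = 8): boolean {
  return estimateSpokenSeconds(text) <= maxSeconds;
}
```

A word-count estimate is crude (it ignores SSML pauses and long numerals) but is cheap enough to run on every build, catching regressions before a manual listen-through.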
5. Troubleshooting
- Symptom: users still ask follow-up verification questions after the confirmation screen.
- Fix: surface the key fields in the first visual row and in the spoken summary.
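The fix above can be applied mechanically if fields carry a priority flag: promote flagged fields ahead of the rest so they land in the first visual row and lead the spoken recap. The `key` annotation is an illustrative convention, not an APL feature.

```typescript
interface Field {
  label: string;
  value: string;
  key: boolean; // illustrative flag: true for fields users re-ask about
}

// Move key fields ahead of the rest, preserving relative order,
// so they occupy the first visual row.
function promoteKeyFields(fields: Field[]): Field[] {
  return [...fields.filter(f => f.key), ...fields.filter(f => !f.key)];
}

// Spoken recap reads only the key fields, keeping duration short.
function spokenRecap(fields: Field[]): string {
  return fields
    .filter(f => f.key)
    .map(f => `${f.label}: ${f.value}`)
    .join(", ");
}
```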
6. Deliverables
- APL template set.
- Voice fallback script.
- Multimodal parity test report.
7. Stretch Goals
- Add personalization tokens for visual preference profiles.