Project 11: UI Flows and AI Asset Pipeline
Menu and HUD flows wired to a repeatable AI-assisted asset pipeline using Nano Banana prompts and QA gates.
Quick Reference
| Attribute | Value |
|---|---|
| Difficulty | Level 2 |
| Time Estimate | 1 week |
| Main Programming Language | C# (.NET 8) + MonoGame |
| Alternative Programming Languages | F#, C++ (raylib), Godot C# |
| Coolness Level | Level 3 |
| Business Potential | Level 2 |
| Prerequisites | Deterministic loop basics, debugging discipline, content pipeline fundamentals |
| Key Topics | UI information hierarchy, Prompt versioning, Human QA and provenance logs |
1. Learning Objectives
- Translate one concrete production question into a testable implementation plan.
- Implement and validate the feature in a MonoGame runtime context.
- Instrument success and failure paths with actionable diagnostics.
- Produce a repeatable demo artifact for portfolio or interview use.
2. All Theory Needed (Per-Concept Breakdown)
UI information hierarchy
Fundamentals: UI information hierarchy is central to this project because it defines the non-negotiable behavioral contract for the feature: which elements the player must read first, which states each screen can occupy, and what the UI promises under normal and failure conditions. You should be able to describe valid inputs, legal state transitions, and expected outputs for each screen.
Deep Dive: Treat UI information hierarchy as a boundary-setting mechanism. Start by defining the smallest deterministic scenario that proves the feature works, then stress that scenario under altered timing, altered content inputs, and altered user actions. If behavior changes unexpectedly, document the hidden coupling and sequence assumptions you found. Keep transitions explicit and observable via logs or debug panels, and connect each transition to an event record so regression analysis is possible after refactors.
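The transition discipline described above can be sketched as an explicit route graph in which every transition attempt, legal or not, is logged as an event. The screen names mirror the menu flow in this project; the class and member names are illustrative assumptions, not part of the spec.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: an explicit, observable UI route graph.
// Illegal transitions are rejected and recorded, never silently applied.
enum Screen { MainMenu, Settings, Gameplay, Pause }

sealed class UiFlow
{
    // Legal transitions only; anything outside this graph is a contract violation.
    static readonly Dictionary<Screen, Screen[]> Routes = new()
    {
        [Screen.MainMenu] = new[] { Screen.Settings, Screen.Gameplay },
        [Screen.Settings] = new[] { Screen.MainMenu },
        [Screen.Gameplay] = new[] { Screen.Pause },
        [Screen.Pause]    = new[] { Screen.Gameplay, Screen.MainMenu },
    };

    public Screen Current { get; private set; } = Screen.MainMenu;
    public List<string> Log { get; } = new();

    public bool TryGo(Screen target)
    {
        bool legal = Array.IndexOf(Routes[Current], target) >= 0;
        // Every attempt becomes an event record for later regression analysis.
        Log.Add($"{Current}->{target} {(legal ? "OK" : "REJECTED")}");
        if (legal) Current = target;
        return legal;
    }
}
```

Because the graph is data rather than scattered `if` statements, the "smallest deterministic scenario" is just a fixed sequence of `TryGo` calls whose log you can diff after each refactor.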
Prompt versioning
Fundamentals: Prompt versioning ensures the project scales from one-off local prototype behavior to repeatable system behavior: any approved asset can be traced back to the exact prompt template that produced it.
Deep Dive: Use prompt versioning to reason about data-flow ownership and mutation timing. Document where writes occur, when validation runs, and how rollback behaves if a write fails.
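One minimal sketch of that write/validate/rollback discipline is an append-only prompt history: validation runs before the write, and a failed write leaves the current head untouched, so rollback costs nothing. All names here are hypothetical.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: append-only prompt versioning.
// Existing versions are never mutated; publishing creates a new version.
sealed record PromptVersion(int Version, string Template);

sealed class PromptLibrary
{
    readonly List<PromptVersion> _history = new();

    // The current head; null until the first successful publish.
    public PromptVersion? Head => _history.Count > 0 ? _history[^1] : null;

    public bool TryPublish(string template)
    {
        // Validation runs before the write; a rejected write changes nothing.
        if (string.IsNullOrWhiteSpace(template)) return false;
        int next = (Head?.Version ?? 0) + 1;
        _history.Add(new PromptVersion(next, template));
        return true;
    }
}
```

The design choice worth noting: because history is append-only, "rollback" is not an operation you implement, it is the absence of a successful write.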
Human QA and provenance logs
Fundamentals: Human QA and provenance logs connect this project to shipping reality by forcing you to think about operational constraints early.
Deep Dive: Define one production-like failure mode related to human QA and provenance logs and build a mitigation checklist for it. The solution is complete when you can demonstrate both a golden path and a controlled failure path.
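One way to encode both paths is a QA gate that refuses any asset with incomplete provenance: the golden path is a fully approved record, the controlled failure path is missing metadata that is reported rather than silently shipped. The record fields mirror the provenance panel described later in the spec; everything else is an assumption.

```csharp
// Hypothetical sketch: a provenance record plus a QA gate.
sealed record ProvenanceRecord(
    string PromptTemplate, string ModelId, string Reviewer, bool Approved);

static class QaGate
{
    // An asset is promotable only when provenance is complete and a human
    // reviewer has approved it. Missing metadata is the injected failure
    // mode: it must surface a reason, never pass silently.
    public static bool IsShippable(ProvenanceRecord? p, out string reason)
    {
        if (p is null)                          { reason = "missing provenance metadata"; return false; }
        if (string.IsNullOrEmpty(p.Reviewer))   { reason = "no reviewer recorded";        return false; }
        if (!p.Approved)                        { reason = "not approved";                return false; }
        reason = "ok";
        return true;
    }
}
```

The `reason` out-parameter is what feeds the asset QA panel: a rejected variant always carries a human-readable cause.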
3. Project Specification
3.1 What You Will Build
A complete UI flow implementation tied to an AI-assisted asset pipeline with prompt versioning, provenance tracking, and human QA gates.
Visible game deliverable:
- Main menu + HUD rendered from AI-generated approved atlas
- Provenance panel shows prompt template, model id, reviewer, approval status
- Asset QA panel highlights rejected variants and reasons
3.2 Functional Requirements
- Render main menu, settings, pause, and HUD screens from atlas assets.
- Display provenance metadata for each active asset group.
- Track approved/rejected asset variants with QA reasons.
- Validate style consistency checklist before promoting assets to release set.
3.3 Non-Functional Requirements
- Performance: Must remain inside project-appropriate frame budget.
- Reliability: Must recover from at least one injected failure mode.
- Usability: Outcome must be observable by a reviewer in under two minutes.
3.4 Example Usage / Output
[UI] flow Main->Settings->Gameplay->Pause PASS
[ASSET] approved=12 rejected=4
[PROVENANCE] template=v12 model=gemini-2.5-flash-image-preview
3.5 Data Formats / Schemas / Protocols
- Event record: {timestamp, module, action, result}
- Feature state snapshot: {version, state, counters, flags}
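These two schemas map naturally onto C# records serialized with System.Text.Json. The field names follow the spec; the exact JSON layout shown by the serializer is one possible encoding, not a mandated wire format.

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Sketch of the two schemas above as immutable records.
// Record value equality makes round-trip tests trivial.
sealed record EventRecord(DateTime Timestamp, string Module, string Action, string Result);

sealed record FeatureSnapshot(
    int Version, string State,
    Dictionary<string, int> Counters, Dictionary<string, bool> Flags);
```

A quick round trip, e.g. `JsonSerializer.Serialize(new EventRecord(DateTime.UtcNow, "ui", "navigate", "PASS"))`, is a useful smoke test that the schema stays stable across refactors.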
3.6 Edge Cases
- Missing provenance metadata for loaded asset.
- Mixed-style assets in one UI screen.
- Atlas rebuild with stale references.
3.7 Real World Outcome
This is a game-facing outcome you can see and play immediately.
What you will see in the game window:
- Main menu + HUD rendered from AI-generated approved atlas
- Provenance panel shows prompt template, model id, reviewer, approval status
- Asset QA panel highlights rejected variants and reasons

How you interact:
- F6 toggles provenance overlay
- N cycles style variants
- Enter navigates full menu flow
3.7.1 How to Run (Copy/Paste)
$ dotnet restore
$ dotnet build
$ dotnet run --project src/Game -- --scene ui-ai-pipeline
3.7.2 Golden Path Demo (Deterministic)
- Start the scene and confirm all HUD panels load.
- Perform the three core interactions listed above.
- Verify the success signal appears without warnings.
3.7.3 If CLI: exact transcript
$ dotnet run --project src/Game -- --scene ui-ai-pipeline
[UI] flow Main->Settings->Gameplay->Pause PASS
[ASSET] approved=12 rejected=4
[PROVENANCE] template=v12 model=gemini-2.5-flash-image-preview
3.7.4 If GUI / Desktop
+------------------------------------------------------+
| ui-ai-pipeline [F1 HUD] |
|------------------------------------------------------|
| PLAYFIELD: gameplay objects and interactions |
| HUD: key metrics + status badges |
| STATUS: success/failure cues and prompts |
+------------------------------------------------------+
4. Solution Architecture
4.1 High-Level Design
Prompt Library -> Variant Generation -> Human QA -> Atlas Build -> UI Scene Loader
4.2 Key Components
| Component | Responsibility | Key Decisions |
|---|---|---|
| UIFlowController | Manages menu and HUD transitions | Explicit route graph for predictable navigation |
| AssetProvenanceStore | Stores prompt/model/reviewer metadata | Treat assets like versioned artifacts |
| StyleQAValidator | Applies visual consistency checks | Reject assets that fail readability/style rules |
4.3 Algorithm Overview
- Validate preconditions.
- Apply deterministic transition.
- Emit feedback and telemetry.
- Persist if required.
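The four steps above can be sketched as a pure transition function over a toy state; the counter feature here is a stand-in assumption, not the project's actual state.

```csharp
using System;

// Hypothetical sketch of the validate -> transition -> emit -> persist loop.
sealed record State(int Version, int Counter);

static class Core
{
    public static (State Next, string Log) Step(State s, int delta)
    {
        // 1. Validate preconditions.
        if (s.Counter + delta < 0)
            return (s, $"[core] reject delta={delta} result=fail");
        // 2. Apply the deterministic transition (pure: no I/O, no clock reads).
        var next = new State(s.Version + 1, s.Counter + delta);
        // 3. Emit feedback and telemetry as a structured event line.
        string log = $"[core] apply delta={delta} result=ok v={next.Version}";
        // 4. Persistence, if required, happens outside this pure core.
        return (next, log);
    }
}
```

Keeping steps 1-3 pure means the golden-path scenario can be replayed byte-for-byte; only step 4 touches the platform boundary.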
5. Implementation Guide
5.1 The Core Question You’re Answering
“How do you use AI-generated assets quickly while still enforcing art direction and legal hygiene?”
5.2 Concepts You Must Understand First
- UI information hierarchy
- Prompt versioning
- Human QA and provenance logs
5.3 Questions to Guide Your Design
- Which metadata fields are mandatory before an asset is shippable?
- How will you detect style drift across multiple menus?
- What rollback path exists if a generated atlas fails QA late?
5.4 Thinking Exercise
Trace one full success path and one failure path on paper before implementation.
5.5 The Interview Questions They’ll Ask
- Why did you pick this architecture boundary?
- Which failure mode did you prioritize first and why?
- How does your instrumentation accelerate debugging?
- How would you scale this feature to a larger game?
5.6 Hints in Layers
- Hint 1: Stabilize one invariant before feature expansion.
- Hint 2: Add diagnostics before optimization.
- Hint 3: Keep platform calls at system boundaries.
- Hint 4: Re-run deterministic scenario after each refactor.
5.7 Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| Core concept | “The Non-Designer’s Design Book” by Robin Williams | Relevant concept chapter |
| Reliability | “Release It!” | Failure handling chapters |
| Architecture | “Clean Architecture” | Boundary and dependency chapters |
6. Testing Strategy
- Golden path completes and emits success signal.
- Injected failure path recovers without crash.
- Re-run scenario after restart and confirm consistency.
7. Common Pitfalls & Debugging
- Hidden initialization order coupling
- Time-coupled behavior tied to render rate
- Missing fallback behavior on platform call failure
8. Extensions & Challenges
- Beginner: add one extra diagnostics panel metric.
- Intermediate: add replay capture for event flow.
- Advanced: add automated stress test harness.
9. Real-World Connections
This project mirrors shipping feature-module work in real indie and mid-size game teams.
10. Resources
- Steamworks official docs
- MonoGame docs
- Gemini image generation docs (for asset-related projects)
11. Self-Assessment Checklist
- I can explain the feature invariant and prove it in a demo.
- I can trigger and handle one deterministic failure scenario.
- I can describe tradeoffs and future scaling choices.
12. Submission / Completion Criteria
Minimum Viable Completion:
- Feature works in deterministic golden path.
- One controlled failure path is handled gracefully.
- Core diagnostics are visible and documented.
Full Completion:
- All minimum criteria plus edge-case coverage and regression checks.
Excellence:
- Includes polished instrumentation and clear productionization notes.