Project 2: Configuration Rules Engine

Build a deterministic rule runtime that evaluates dependencies, exclusions, recommendations, and auto-inference with explainable outputs.

Quick Reference

| Attribute | Value |
|-----------|-------|
| Difficulty | Level 3 (Advanced) |
| Time Estimate | 2-3 weeks |
| Main Programming Language | Python (Alternatives: TypeScript, Java, Rust) |
| Coolness Level | Level 4 |
| Business Potential | 4. Open Core Infrastructure |
| Prerequisites | Project 1 complete, indexing basics, test discipline |
| Key Topics | Rule precedence, evaluation order, diagnostics, simulation |

1. Learning Objectives

  1. Represent rules in a business-readable but runtime-safe format.
  2. Execute rule sets deterministically with stable ordering.
  3. Return explainable result packets for sellers and auditors.
  4. Create a simulation suite to validate rule changes safely.

2. All Theory Needed (Per-Concept Breakdown)

Concept A: Rule Precedence and Determinism

Fundamentals Rule systems fail when precedence is implicit. CPQ requires an explicit evaluation policy because multiple rules can trigger on the same selection.

Deep Dive into the concept Use typed rules and stable priorities. Define tie-break behavior. Keep deterministic ordering independent of storage ordering. Emit rule evidence for every applied or rejected decision.

How this fits into other projects Used in this project, P03-pricing-engine-core.md, and P09-pricing-rules-dsl.md.

Definitions & key terms

  • Rule family
  • Priority
  • Evidence packet

Mental model diagram

Selection -> Candidate Rules -> Ordered Evaluator -> Decision Packet

How it works

  1. Collect candidate rules from indexes.
  2. Sort by explicit priority policy.
  3. Evaluate and merge outcomes.
  4. Persist evidence.
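
The four steps above can be sketched as a single evaluation function. The `Rule` shape, the `(priority, id)` tie-break, and the triggering logic are illustrative assumptions, not a fixed API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    id: str
    type: str       # "requires" | "excludes" | "recommends" | "auto_add"
    priority: int
    target: str     # option the action refers to
    subject: str    # option that triggers the rule

def evaluate(selection: set[str], candidates: list[Rule]) -> dict:
    # Step 2: sort by explicit priority policy; the rule id breaks ties,
    # so the order never depends on storage order.
    ordered = sorted(candidates, key=lambda r: (r.priority, r.id))
    violations, evidence = [], []
    for rule in ordered:  # Step 3: evaluate and merge outcomes
        if rule.subject not in selection:
            evidence.append({"ruleId": rule.id, "triggered": False})
            continue
        triggered = (
            (rule.type == "requires" and rule.target not in selection)
            or (rule.type == "excludes" and rule.target in selection)
        )
        evidence.append({"ruleId": rule.id, "triggered": triggered})
        if triggered:
            violations.append(rule.id)
    # Step 4: the evidence list is returned so the caller can persist it,
    # including rows for rules that did not fire.
    return {"valid": not violations, "violations": violations, "evidence": evidence}
```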

Minimal concrete example

R100 requires API_ACCESS for SSO
R200 excludes STARTER_SUPPORT for ENTERPRISE
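
Encoded against the Rule shape from section 3.5, these two rules might look like the following (condition and action layouts are illustrative):

```json
[
  {"id": "R100", "type": "requires", "priority": 100,
   "condition": {"optionSelected": "SSO"},
   "action": {"require": "API_ACCESS"},
   "effectiveRange": {"from": "2024-01-01", "to": null}},
  {"id": "R200", "type": "excludes", "priority": 200,
   "condition": {"optionSelected": "ENTERPRISE"},
   "action": {"exclude": "STARTER_SUPPORT"},
   "effectiveRange": {"from": "2024-01-01", "to": null}}
]
```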

Common misconceptions

  • “Database order is good enough.” It is not stable enough.

Check-your-understanding questions

  1. Why is stable sort critical?
  2. Why store rejected-rule evidence?

Check-your-understanding answers

  1. To keep repeated runs identical.
  2. To explain why options were not applied.

Real-world applications Rule authoring in CPQ, underwriting, entitlement systems.

Where you’ll apply it This project and P08-product-rules-dsl.md.

References

  • “Language Implementation Patterns” by Terence Parr
  • json-rules-engine docs

Key insights Determinism is the feature users trust most.

Summary Explicit precedence and evidence turn rule engines from black boxes into reliable systems.

Homework/Exercises to practice the concept Create five overlapping rules and prove outputs remain identical across 100 runs.

Solutions to the homework/exercises Implement stable sort keys and snapshot expected outputs.

3. Project Specification

3.1 What You Will Build

A service that loads active rules, evaluates candidate sets, and returns structured decision packets.

3.2 Functional Requirements

  1. Support requires, excludes, recommends, and auto-add rules.
  2. Respect explicit rule precedence.
  3. Return violations, warnings, and recommendations.
  4. Include trace metadata for each evaluation.

3.3 Non-Functional Requirements

  • Performance: p95 under 200ms at 500 rules.
  • Reliability: deterministic run-to-run behavior.
  • Usability: plain-language messages.

3.4 Example Usage / Output

$ cpq rules validate --selection BASE_ENTERPRISE,SSO
valid=false
violation=R100 SSO requires API_ACCESS

3.5 Data Formats / Schemas / Protocols

  • Rule: {id,type,priority,condition,action,effectiveRange}
  • Result: {valid,violations,warnings,recommendations,evidence}
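
A decision packet for the failing validation in section 3.4 might serialize as follows; the shape follows the Result record above, and the field values are illustrative:

```json
{
  "valid": false,
  "violations": [
    {"ruleId": "R100", "message": "SSO requires API_ACCESS"}
  ],
  "warnings": [],
  "recommendations": [],
  "evidence": [
    {"ruleId": "R100", "triggered": true},
    {"ruleId": "R200", "triggered": false}
  ]
}
```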

3.6 Edge Cases

  • Contradictory rules.
  • Missing referenced options.
  • Effective date overlaps.

3.7 Real World Outcome

3.7.1 How to Run (Copy/Paste)

$ cpq rules import fixtures/rules_v1.json
$ cpq rules validate --selection BASE_ENTERPRISE,SSO,STARTER_SUPPORT

3.7.2 Golden Path Demo (Deterministic)

Same input must produce identical ordered violations every run.

3.7.3 If CLI: Exact terminal transcript

$ cpq rules validate --selection BASE_ENTERPRISE,SSO,STARTER_SUPPORT
[info] loaded_rules=428
[info] matched_rules=12
[error] R100: SSO requires API_ACCESS
[error] R200: STARTER_SUPPORT excluded for ENTERPRISE
[result] valid=false

4. Solution Architecture

4.1 High-Level Design

Rule Store -> Rule Index -> Deterministic Evaluator -> Diagnostics API

4.2 Key Components

| Component | Responsibility | Key Decisions |
|-----------|----------------|---------------|
| Rule Loader | Pulls active rules | Effective-date filters |
| Rule Index | Narrows candidate set | Attribute-based index |
| Evaluator | Applies precedence | Stable sort and merge |

4.3 Data Structures (No Full Code)

Rule { id, type, priority, predicate, action }
Evidence { ruleId, triggered, message, timestamp }
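
One possible Python rendering of the two records above. Field names follow the sketch in this section; the types and the predicate signature are assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass(frozen=True)
class Rule:
    id: str
    type: str                              # requires | excludes | recommends | auto_add
    priority: int
    predicate: Callable[[set[str]], bool]  # does the rule fire for this selection?
    action: str                            # option id the action targets

@dataclass(frozen=True)
class Evidence:
    ruleId: str
    triggered: bool
    message: str
    timestamp: str

# Evidence rows are append-only: one row per evaluated rule, triggered or not.
ev = Evidence("R100", True, "SSO requires API_ACCESS",
              datetime(2024, 1, 1, tzinfo=timezone.utc).isoformat())
```

Freezing both dataclasses keeps evaluation side-effect free: a rule or evidence row can be shared between runs without risk of mutation.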

4.4 Algorithm Overview

  1. Match candidate rules.
  2. Sort deterministically.
  3. Evaluate and accumulate decisions.

Complexity: O(m log m) for m matched rules.
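
A minimal demonstration of step 2: an explicit `(priority, id)` sort key is a total order, so the evaluation order cannot depend on how rules arrive from storage. The rule tuples are illustrative:

```python
import random

rules = [("R300", 2), ("R100", 1), ("R250", 2), ("R200", 1)]  # (id, priority)

def ordered(rs):
    # Sort by priority, then id. Python's sort is stable, but the explicit
    # tie-break makes determinism a guarantee rather than an accident.
    return sorted(rs, key=lambda r: (r[1], r[0]))

baseline = ordered(rules)
for _ in range(100):
    shuffled = rules[:]
    random.shuffle(shuffled)
    assert ordered(shuffled) == baseline  # storage order never matters
```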

5. Implementation Guide

5.1 Development Environment Setup

$ cpq seed --scenario rules-engine
$ cpq test --suite rules-determinism

5.2 Project Structure

rules-engine/
  src/
    parser/
    evaluator/
    diagnostics/
  tests/
  fixtures/

5.3 The Core Question You’re Answering

“Can we keep business rule logic flexible without sacrificing deterministic behavior?”

5.4 Concepts You Must Understand First

  • Deterministic sorting
  • Conflict resolution
  • Rule simulation

5.5 Questions to Guide Your Design

  • How will you handle contradictory rule actions?
  • What metadata will approvers need for trust?

5.6 Thinking Exercise

Draw the evaluation sequence for 10 conflicting rules.

5.7 The Interview Questions They’ll Ask

  1. How do you design an explainable rule engine?
  2. How do you manage precedence changes safely?
  3. How do you performance-test rule evaluation?

5.8 Hints in Layers

  • Hint 1: keep rule types explicit.
  • Hint 2: index by changed attributes.
  • Hint 3: always return ordered evidence.

5.9 Books That Will Help

| Topic | Book | Chapter |
|-------|------|---------|
| Parser and rule representation | “Language Implementation Patterns” | Ch. 2-5 |
| Reliability habits | “The Pragmatic Programmer” | Ch. 8 |

5.10 Implementation Phases

  • Phase 1: Rule data model and parser.
  • Phase 2: Evaluator and precedence policy.
  • Phase 3: Diagnostics and simulation suite.

5.11 Key Implementation Decisions

| Decision | Options | Recommendation | Rationale |
|----------|---------|----------------|-----------|
| Rule storage | JSON, DB | DB + export | Governance and audit |
| Execution | full-scan, indexed | indexed | Lower latency |

6. Testing Strategy

6.1 Test Categories

| Category | Purpose | Examples |
|----------|---------|----------|
| Unit | Predicate correctness | requires/excludes |
| Integration | API + store behavior | effective dates |
| Regression | Rule pack updates | golden snapshots |

6.2 Critical Test Cases

  1. Contradictory rules with clear precedence.
  2. Missing reference option.
  3. Determinism over repeated runs.
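
Test case 3 can be written as a repeated-run check: serialize the result with stable key ordering and require byte-identical output every time. The `evaluate` stub here is a stand-in for the real engine, not its API:

```python
import json

def evaluate(selection):
    # Stand-in engine: each (rule id, trigger option) pair fires when its
    # trigger is selected; sorting the violations keeps output ordered.
    pairs = (("R100", "SSO"), ("R200", "STARTER_SUPPORT"))
    violations = sorted(r for r, trigger in pairs if trigger in selection)
    return {"valid": not violations, "violations": violations}

def test_determinism():
    baseline = json.dumps(evaluate({"SSO"}), sort_keys=True)
    for _ in range(100):
        # Any nondeterminism (set iteration, unstable sort, clock use)
        # shows up as a serialization diff.
        assert json.dumps(evaluate({"SSO"}), sort_keys=True) == baseline
```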

6.3 Test Data

rules_v1.json, selection_matrix.json, deterministic timestamps.

7. Common Pitfalls & Debugging

7.1 Frequent Mistakes

| Pitfall | Symptom | Solution |
|---------|---------|----------|
| Implicit ordering | Inconsistent output | Explicit sort policy |
| Overbroad candidate set | Slow evaluations | Attribute index |

7.2 Debugging Strategies

  • Replay same payload with fixed seed.
  • Diff rule evidence lists between runs.
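
The evidence-diff strategy can be sketched as follows, assuming evidence rows are ordered `(ruleId, triggered)` pairs; the helper name is hypothetical:

```python
from itertools import zip_longest

def first_divergence(run_a, run_b):
    # Walk both ordered evidence lists in lockstep and report the first
    # position where the two runs disagree (or None if identical).
    for i, (a, b) in enumerate(zip_longest(run_a, run_b)):
        if a != b:
            return i, a, b
    return None

run1 = [("R100", True), ("R200", False)]
run2 = [("R100", True), ("R200", True)]
# first_divergence(run1, run2) points at index 1: R200 flipped between runs.
```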

7.3 Performance Traps

Avoid evaluating all active rules for every minor selection change.

8. Extensions & Challenges

8.1 Beginner Extensions

  • Add severity levels.
  • Add rule tags.

8.2 Intermediate Extensions

  • What-if simulation API.
  • Rule dependency graph visualization.

8.3 Advanced Extensions

  • Incremental evaluation engine.
  • Policy A/B rollout.

9. Real-World Connections

9.1 Industry Applications

  • CPQ policy engines.
  • Compliance gating systems.

9.2 Interview Relevance

Demonstrates policy runtime design and deterministic behavior under complexity.

10. Resources

10.1 Essential Reading

  • “Language Implementation Patterns” by Terence Parr.
  • Workflow patterns references.

10.2 Video Resources

  • Rules engine architecture talks.

10.3 Tools & Documentation

  • Rule simulation harness docs.

11. Self-Assessment Checklist

  • I can explain precedence policy and why it is deterministic.
  • I can reproduce any evaluation outcome from evidence logs.
  • I can run regression simulations before publishing new rules.

12. Submission / Completion Criteria

Minimum Viable Completion:

  • Rule runtime supports core families.
  • Decision packet includes ordered evidence.

Full Completion:

  • Regression suite and performance tests pass.
  • Rule publication pipeline includes simulation gating.

Excellence (Going Above & Beyond):

  • Interactive explainability UI for sellers and analysts.