Project 4: Negotiation & Conflict Lab

Build a negotiation lab where agents propose, critique, and reconcile plans with evidence-based arbitration.

Quick Reference

| Attribute | Value |
| --- | --- |
| Difficulty | Level 4 |
| Time Estimate | 16-24 hours |
| Language | Python (Alternatives: TypeScript, Go) |
| Prerequisites | Role design, message schemas |
| Key Topics | Negotiation protocols, arbitration, conflict resolution |

1. Learning Objectives

By completing this project, you will:

  1. Design a negotiation protocol with proposal and critique cycles.
  2. Implement arbitration rules grounded in evidence.
  3. Detect deadlocks and enforce timeouts.
  4. Produce a decision log that explains resolutions.

2. Theoretical Foundation

2.1 Core Concepts

  • Negotiation Protocols: Structured cycles of propose → critique → revise.
  • Arbitration: Choosing a winning plan based on evidence.
  • Deadlock Handling: Preventing endless debate.

2.2 Why This Matters

Agents often disagree, especially when tasks are ambiguous. Without explicit arbitration, systems degrade into conflicting outputs.

2.3 Historical Context / Background

Negotiation and auction mechanisms are standard in distributed AI. LLM agents now make these patterns practical for real workflows.

2.4 Common Misconceptions

  • “Majority vote always works.” It can amplify shared bias.
  • “Confidence is enough.” Evidence-based arbitration is more reliable.

3. Project Specification

3.1 What You Will Build

A lab where multiple agents propose plans for a task, critique each other, and a mediator selects the final plan based on evidence.

3.2 Functional Requirements

  1. Proposal Format: Each plan includes steps, risks, evidence.
  2. Critique Phase: Agents identify weaknesses and missing evidence.
  3. Arbitration Rules: Mediator selects based on evidence quality.
  4. Timeouts: Negotiation ends after a fixed number of cycles.

3.3 Non-Functional Requirements

  • Transparency: Decisions must be explainable.
  • Stability: Avoid endless loops.
  • Auditability: Log all negotiation rounds.

3.4 Example Usage / Output

$ run-negotiation --task "Design a research workflow"

[Agent A] proposal submitted
[Agent B] critique submitted
[Mediator] plan A selected (evidence score: 0.82)

3.5 Real World Outcome

You can show a negotiation transcript and a final plan with a decision rationale and evidence links.


4. Solution Architecture

4.1 High-Level Design

Task -> Proposal Round -> Critique Round -> Mediation -> Final Plan

4.2 Key Components

| Component | Responsibility | Key Decisions |
| --- | --- | --- |
| Proposal Engine | Collect plans | Format enforcement |
| Critique Engine | Evaluate plans | Evidence checks |
| Mediator | Select winner | Arbitration rules |
| Negotiation Log | Store transcript | Auditability |

4.3 Data Structures

Pseudo-structures:

STRUCT Proposal:
  plan_steps
  risks
  evidence_links
  confidence

STRUCT Decision:
  chosen_plan_id
  rationale
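The pseudo-structures above might translate into Python dataclasses like this (a sketch; the `plan_id` field is an addition so a `Decision` can refer back to a specific proposal):

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    plan_id: str               # added so decisions can reference a plan
    plan_steps: list[str]
    risks: list[str]
    evidence_links: list[str]
    confidence: float          # self-reported; arbitration should weigh evidence, not this

@dataclass
class Decision:
    chosen_plan_id: str
    rationale: str
```

Keeping proposals as plain dataclasses makes them easy to serialize into the negotiation log for auditability.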

4.4 Algorithm Overview

Negotiation Loop

  1. Collect proposals.
  2. Collect critiques.
  3. Score evidence quality.
  4. Mediator selects plan.

Complexity Analysis:

  • Time: O(P × C) for P proposals each receiving C critiques
  • Space: O(P + C) to store proposals, critiques, and logs
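The four steps of the negotiation loop can be sketched as follows. This is a minimal illustration, not a prescribed implementation: `score_evidence` is a stand-in for whatever evidence-quality metric you choose (here, evidence count minus one point per unresolved critique), and the dict shapes are assumptions.

```python
def score_evidence(proposal, critiques):
    """Baseline metric: evidence count minus unresolved critiques against the plan."""
    unresolved = sum(1 for c in critiques if c["plan_id"] == proposal["plan_id"])
    return len(proposal["evidence_links"]) - unresolved

def mediate(proposals, critiques):
    """Step 4: the mediator selects the plan with the highest evidence score."""
    best = max(proposals, key=lambda p: score_evidence(p, critiques))
    return {"chosen_plan_id": best["plan_id"],
            "rationale": f"highest evidence score: {score_evidence(best, critiques)}"}

# Steps 1-2: collected proposals and critiques (toy data)
proposals = [
    {"plan_id": "A", "evidence_links": ["paper-1", "benchmark-2"]},
    {"plan_id": "B", "evidence_links": []},
]
critiques = [{"plan_id": "B", "issue": "no supporting evidence"}]

print(mediate(proposals, critiques)["chosen_plan_id"])  # A
```

Note that the rationale string is returned alongside the choice, which is what makes the decision explainable in the log.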

5. Implementation Guide

5.1 Development Environment Setup

Use a simple storage layer to persist negotiation rounds.

5.2 Project Structure

project-root/
├── proposals/
├── critiques/
├── mediation/
├── logs/
└── reports/

5.3 The Core Question You’re Answering

“How do you resolve conflicts when agents disagree with high confidence?”

5.4 Concepts You Must Understand First

  1. Negotiation cycles
    • How do agents refine plans?
    • Book Reference: “Fundamentals of Software Architecture” - Ch. 8
  2. Arbitration criteria
    • Why should evidence outweigh confidence?
    • Book Reference: “Release It!” - Ch. 4

5.5 Questions to Guide Your Design

  1. Scoring
    • How do you score evidence quality?
  2. Timeouts
    • When do you stop negotiating?

5.6 Thinking Exercise

Write two conflicting plans and decide which evidence should win.

5.7 The Interview Questions They’ll Ask

  1. “How do you design a negotiation protocol?”
  2. “What is the role of a mediator?”
  3. “Why are timeouts necessary?”
  4. “How do you score evidence?”
  5. “How do you prevent deadlock?”

5.8 Hints in Layers

Hint 1: Start with simple scoring. Use evidence count as a baseline.

Hint 2: Add critique scoring. Penalize plans with unresolved risks.

Hint 3: Add timeouts. Limit negotiation to two rounds.

Hint 4: Log rationale. Always record why a plan was chosen.
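Layered together, the hints might yield a mediator sketch like the one below. The round limit, penalty weight, and dict shapes are arbitrary choices for illustration; `revise` stands in for whatever mechanism lets agents update their plans between rounds.

```python
MAX_ROUNDS = 2  # Hint 3: hard limit on negotiation rounds

def score(proposal):
    base = len(proposal["evidence_links"])           # Hint 1: evidence count baseline
    return base - len(proposal["unresolved_risks"])  # Hint 2: penalize unresolved risks

def negotiate(proposals, revise):
    log = []
    leader = None
    for round_no in range(1, MAX_ROUNDS + 1):
        scores = {p["plan_id"]: score(p) for p in proposals}
        leader = max(scores, key=scores.get)
        # Hint 4: record scores and the current leader every round
        log.append({"round": round_no, "scores": scores, "leader": leader})
        if round_no < MAX_ROUNDS:
            proposals = revise(proposals)  # agents revise plans between rounds
    return leader, log
```

Returning the log with the winner keeps every round auditable, satisfying the transparency requirement.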


5.9 Books That Will Help

| Topic | Book | Chapter |
| --- | --- | --- |
| Architecture trade-offs | “Fundamentals of Software Architecture” | Ch. 8 |
| Reliability | “Release It!” | Ch. 4 |

5.10 Implementation Phases

Phase 1: Foundation (4-6 hours)

Goals:

  • Define proposal format
  • Collect proposals

Tasks:

  1. Create proposal schema
  2. Store proposals

Checkpoint: Proposals logged and valid.

Phase 2: Core Functionality (6-8 hours)

Goals:

  • Implement critique round
  • Add mediator scoring

Tasks:

  1. Collect critiques
  2. Score evidence

Checkpoint: Mediator selects a plan.

Phase 3: Polish & Edge Cases (4-6 hours)

Goals:

  • Add timeouts
  • Add rationale logs

Tasks:

  1. Enforce max rounds
  2. Generate decision reports

Checkpoint: Negotiation ends reliably.

5.11 Key Implementation Decisions

| Decision | Options | Recommendation | Rationale |
| --- | --- | --- | --- |
| Arbitration | Majority vote vs. evidence | Evidence-based | Accuracy |
| Rounds | Unlimited vs. fixed | Fixed | Avoid deadlock |

6. Testing Strategy

6.1 Test Categories

| Category | Purpose | Examples |
| --- | --- | --- |
| Unit Tests | Proposal validation | Missing evidence rejected |
| Integration Tests | Negotiation rounds | Two rounds complete |
| Edge Case Tests | Deadlock | Timeout triggers |

6.2 Critical Test Cases

  1. Conflicting plans should trigger mediation.
  2. Missing evidence should lower scores.
  3. Deadlock should resolve with timeout.

6.3 Test Data

Proposal A: Evidence x2
Proposal B: Evidence x0
Expected: A chosen
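Assuming count-based evidence scoring, the test data above can be checked directly; `choose_plan` here is a placeholder for your mediator:

```python
def choose_plan(proposals):
    """Placeholder mediator: pick the plan with the most evidence links."""
    return max(proposals, key=lambda p: len(p["evidence_links"]))["plan_id"]

def test_evidence_wins():
    proposals = [
        {"plan_id": "A", "evidence_links": ["src-1", "src-2"]},  # Evidence x2
        {"plan_id": "B", "evidence_links": []},                  # Evidence x0
    ]
    assert choose_plan(proposals) == "A"

test_evidence_wins()
```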

7. Common Pitfalls & Debugging

7.1 Frequent Mistakes

| Pitfall | Symptom | Solution |
| --- | --- | --- |
| No timeouts | Negotiation loops | Enforce max rounds |
| Weak arbitration | Random results | Evidence-based scoring |
| Missing logs | No traceability | Persist decisions |

7.2 Debugging Strategies

  • Compare evidence quality across proposals.
  • Review mediation logs for rationale.

7.3 Performance Traps

  • Too many negotiation rounds increase cost.

8. Extensions & Challenges

8.1 Beginner Extensions

  • Add a simple confidence score.
  • Add a reviewer role.

8.2 Intermediate Extensions

  • Weighted voting by expertise.
  • Risk scoring system.

8.3 Advanced Extensions

  • Auction-based task allocation.
  • Multi-criteria decision analysis.

9. Real-World Connections

9.1 Industry Applications

  • Design reviews with multiple stakeholders.
  • Compliance arbitration systems.

9.2 Tools & Frameworks

  • AutoGen (multi-agent negotiation examples)
  • LangGraph (coordination workflows)

9.3 Interview Relevance

  • Arbitration protocols and coordination design are common system design topics.

10. Resources

10.1 Essential Reading

  • “Fundamentals of Software Architecture” - trade-offs
  • “Release It!” - reliability and safety

10.2 Tools & Documentation

  • FIPA ACL Specification: http://www.fipa.org/specs/fipa00061/
  • Previous Project: Message Bus + Shared Memory (P03)
  • Next Project: Knowledge Ledger (P05)

11. Self-Assessment Checklist

11.1 Understanding

  • I can explain negotiation and arbitration protocols

11.2 Implementation

  • Negotiation completes within timeouts

11.3 Growth

  • I can analyze trade-offs between arbitration strategies

12. Submission / Completion Criteria

Minimum Viable Completion:

  • Proposal, critique, and mediation pipeline works

Full Completion:

  • Evidence-based scoring and logs added

Excellence (Going Above & Beyond):

  • Auction-based or multi-criteria arbitration implemented

This guide was generated from LEARN_COMPLEX_MULTI_AGENT_SYSTEMS_DEEP_DIVE.md. For the complete learning path, see the README.