Project 14: IPC Message Hub with Multiple Mechanisms

Build a message broker that supports multiple IPC mechanisms—pipes, FIFOs, UNIX domain sockets, POSIX message queues, and shared memory with semaphores—allowing different processes to communicate through the same hub.

Quick Reference

Attribute      │ Value
───────────────┼──────────────────────────────────────────────────────────────────────
Difficulty     │ Level 4 - Expert
Time Estimate  │ 2 Weeks
Language       │ C (primary), Rust (alternative)
Prerequisites  │ File I/O, process management, socket basics
Key Topics     │ IPC mechanisms, shared memory, semaphores, message queues, UNIX sockets

1. Learning Objectives

After completing this project, you will:

  • Master all major UNIX IPC mechanisms: Understand when to use pipes, FIFOs, sockets, message queues, or shared memory
  • Implement shared memory with proper synchronization: Use semaphores to prevent race conditions and corruption
  • Build a publish/subscribe message system: Route messages from publishers to multiple subscribers
  • Design a unified abstraction over different transports: Same API, different underlying mechanisms
  • Handle resource cleanup properly: Clean up shared memory segments, semaphores, and sockets on exit
  • Benchmark IPC performance: Measure and compare throughput and latency of each mechanism
  • Understand file descriptor passing: Send file descriptors between processes using SCM_RIGHTS

2. Theoretical Foundation

2.1 Core Concepts

Interprocess Communication (IPC) is how separate processes share data and coordinate their work. UNIX provides multiple IPC mechanisms, each with different characteristics for speed, complexity, and use cases.

IPC Mechanisms from Simplest to Most Complex

┌─────────────────────────────────────────────────────────────────────────┐
│                          SAME MACHINE                                   │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────────────┐ │
│  │     Pipes       │  │   Named Pipes   │  │    Shared Memory        │ │
│  │  (anonymous)    │  │    (FIFOs)      │  │    + Semaphores         │ │
│  │                 │  │                 │  │                         │ │
│  │ Parent ─────>   │  │ /tmp/myfifo     │  │ ┌─────────────────────┐ │ │
│  │         Child   │  │                 │  │ │ Memory region       │ │ │
│  │                 │  │ Any two procs   │  │ │ visible to both     │ │ │
│  │ Unidirectional  │  │ Unidirectional  │  │ │ processes           │ │ │
│  │ Related procs   │  │ Unrelated OK    │  │ └─────────────────────┘ │ │
│  │ Byte stream     │  │ Byte stream     │  │ Fastest, but complex    │ │
│  └─────────────────┘  └─────────────────┘  └─────────────────────────┘ │
│                                                                         │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │                      UNIX Domain Sockets                         │   │
│  │   /var/run/app.sock   -  Like network sockets, but local         │   │
│  │   Bidirectional, can pass file descriptors (!), fast             │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │                    POSIX Message Queues                          │   │
│  │   /myqueue   -  Message-oriented, with priorities               │   │
│  │   Kernel-managed, persists until explicitly removed             │   │
│  └─────────────────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────────────┐
│                        ACROSS MACHINES                                  │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │                      TCP/UDP Sockets                             │   │
│  │   IP:Port addressing  -  The foundation of the internet          │   │
│  │   Reliable (TCP) or fast (UDP), works across networks            │   │
│  └─────────────────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────────────┘

IPC Mechanism Comparison:

Mechanism      │ Relationship │ Direction │ Message │ Persist │ Speed
───────────────┼──────────────┼───────────┼─────────┼─────────┼────────
Pipe           │ Parent-child │ One-way   │ Stream  │ No      │ Fast
FIFO           │ Any          │ One-way   │ Stream  │ File    │ Fast
UNIX Socket    │ Any          │ Two-way   │ Both    │ File    │ Fast
Msg Queue      │ Any          │ Two-way   │ Message │ Kernel  │ Medium
Shared Mem     │ Any          │ Two-way   │ Custom  │ Name    │ Fastest

Key Questions:
- Why is shared memory fastest?
  → No kernel involvement for data transfer (only synchronization)
- Why do pipes only work parent-child?
  → Anonymous pipes have no name; their descriptors are normally shared only by inheritance across fork()
- What makes UNIX sockets preferred for daemons?
  → Bidirectional, can pass file descriptors, familiar socket API

Shared Memory Architecture:

Shared Memory with Semaphore Synchronization

Process A                    Shared Region                    Process B
┌─────────────┐             ┌─────────────┐              ┌─────────────┐
│             │             │             │              │             │
│  Writer     │  sem_wait   │ ┌─────────┐ │  sem_wait    │  Reader     │
│             │────────────>│ │ Mutex   │ │<─────────────│             │
│  Write data │             │ └─────────┘ │              │  Read data  │
│  to shm     │────────────>│ ┌─────────┐ │<─────────────│  from shm   │
│             │             │ │  Data   │ │              │             │
│  sem_post   │             │ │ Buffer  │ │              │  sem_post   │
│             │<────────────│ │         │ │──────────────│             │
│             │             │ └─────────┘ │              │             │
│             │             │ ┌─────────┐ │              │             │
│             │             │ │ items   │ │              │             │
│             │             │ │ (count) │ │              │             │
│             │             │ └─────────┘ │              │             │
└─────────────┘             └─────────────┘              └─────────────┘

shm_open() ─> Creates named memory region
mmap()     ─> Maps region into process address space
sem_open() ─> Creates named semaphore for synchronization
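
A minimal sketch of that sequence, assuming the placeholder names /ipchub.demo and /ipchub.demo.mtx and omitting error handling:

#include <fcntl.h>
#include <sys/mman.h>
#include <semaphore.h>
#include <unistd.h>

#define SHM_NAME "/ipchub.demo"      /* placeholder object name */
#define SEM_NAME "/ipchub.demo.mtx"  /* placeholder semaphore name */
#define SHM_SIZE 4096

/* Both processes can run the same setup; O_CREAT makes it idempotent. */
void *setup_shared_region(sem_t **mutex_out)
{
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0666);   /* named memory object */
    ftruncate(fd, SHM_SIZE);                                /* size it (harmless if already sized) */
    void *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);                 /* map into this address space */
    close(fd);                                              /* the mapping stays valid */

    *mutex_out = sem_open(SEM_NAME, O_CREAT, 0666, 1);      /* named semaphore, initial value 1 */
    return region;
}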

UNIX Domain Socket Communication:

UNIX Domain Socket Architecture

┌────────────────────────────────────────────────────────────────┐
│                           SERVER                               │
│                                                                │
│   socket(AF_UNIX, SOCK_STREAM, 0)                             │
│        │                                                       │
│        v                                                       │
│   bind("/tmp/app.sock", ...)                                  │
│        │                                                       │
│        v                                                       │
│   listen(fd, backlog)                                         │
│        │                                                       │
│        v                                                       │
│   accept() ───────────────────────────┐                       │
│        │                               │                       │
│        v                               v                       │
│   ┌─────────┐                    ┌─────────┐                  │
│   │ Client1 │                    │ Client2 │                  │
│   │ conn_fd │                    │ conn_fd │                  │
│   └─────────┘                    └─────────┘                  │
│                                                                │
└────────────────────────────────────────────────────────────────┘
                    /tmp/app.sock
                         │
┌────────────────────────┴───────────────────────────────────────┐
│                         CLIENTS                                │
│                                                                │
│   socket(AF_UNIX, SOCK_STREAM, 0)                             │
│        │                                                       │
│        v                                                       │
│   connect("/tmp/app.sock", ...)                               │
│        │                                                       │
│        v                                                       │
│   send()/recv()                                               │
│                                                                │
└────────────────────────────────────────────────────────────────┘

2.2 Why This Matters

Real-World Importance:

  • Every daemon uses IPC: systemd, Docker, databases all use UNIX sockets
  • High-performance systems need shared memory: Trading systems, databases, gaming servers
  • Microservices on same host: IPC is faster than localhost TCP
  • Process isolation with communication: Security through separation

Industry Usage:

System     │ IPC Mechanism                │ Why
───────────┼──────────────────────────────┼────────────────────────────────────────
D-Bus      │ UNIX sockets                 │ Desktop inter-app messaging
PostgreSQL │ UNIX sockets + shared memory │ Client connections + shared buffers
Redis      │ UNIX sockets                 │ Local connections, 2x faster than TCP
Docker     │ UNIX sockets                 │ Docker CLI to daemon communication
ZeroMQ     │ Multiple                     │ Abstracted messaging library
nginx      │ Shared memory                │ Worker process coordination
Chrome     │ Pipes + shared memory        │ Renderer/browser process communication

Career Relevance:

Understanding all IPC mechanisms demonstrates:

  • Deep operating systems knowledge
  • Systems programming expertise
  • Performance optimization skills
  • Debugging complex multi-process systems

2.3 Historical Context

Evolution of UNIX IPC:

UNIX IPC Timeline

1973  Pipes (Version 3 UNIX)
        │  - Proposed by Doug McIlroy, implemented by Ken Thompson
        │  - "Everything is a file" philosophy
        v
1982  FIFOs / Named Pipes (System III)
        │  - Allow unrelated processes to communicate
        v
1983  System V IPC (System V Release 1)
        │  - Message queues (msgget, msgsnd, msgrcv)
        │  - Semaphores (semget, semop)
        │  - Shared memory (shmget, shmat)
        │  - Complex, ID-based, non-file interface
        v
1983  UNIX Domain Sockets (4.2BSD)
        │  - Socket API for local communication
        │  - File-based addressing
        │  - Can pass file descriptors!
        v
1993  POSIX Real-time Extensions (IEEE 1003.1b)
        │  - POSIX message queues (mq_*)
        │  - POSIX semaphores (sem_*)
        │  - POSIX shared memory (shm_*)
        │  - Cleaner API than System V
        v
2007  Linux eventfd, signalfd, timerfd
        │  - Everything is a file descriptor
        │  - Unifies with epoll/select
        v
2010+ Modern alternatives
          - Binder (Android)
          - Mach ports (macOS/iOS)
          - io_uring (Linux)

POSIX vs System V IPC:

Aspect      │ System V             │ POSIX
────────────┼──────────────────────┼──────────────────────
Naming      │ Integer keys         │ String names
API style   │ get/ctl/op functions │ open/close/unlink
Cleanup     │ Manual, persists     │ Named, easier cleanup
Integration │ Separate namespace   │ File-like semantics
Portability │ All UNIX             │ Most modern UNIX
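
To make the naming difference concrete, here is a small sketch (error handling omitted; the object names and sizes are arbitrary examples) that creates and releases a segment with each API:

#include <sys/ipc.h>
#include <sys/shm.h>     /* System V */
#include <sys/mman.h>    /* POSIX */
#include <fcntl.h>
#include <unistd.h>

void compare_shm_apis(void)
{
    /* System V: integer key, get/at/ctl-style calls */
    key_t key = ftok("/tmp", 'A');                        /* derive an integer key from a path */
    int shmid = shmget(key, 4096, IPC_CREAT | 0666);
    void *sv = shmat(shmid, NULL, 0);                     /* attach */
    shmdt(sv);                                            /* detach; segment persists until removed */
    shmctl(shmid, IPC_RMID, NULL);                        /* explicit removal */

    /* POSIX: string name, file-like open/map/unlink calls */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, 4096);
    void *px = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    munmap(px, 4096);
    close(fd);
    shm_unlink("/demo_shm");                              /* removes the name, like unlink() */
}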

2.4 Common Misconceptions

Misconception 1: “Shared memory is simple—just use pointers”

  • Reality: Without synchronization, you’ll have race conditions, torn reads, and corruption. Proper shared memory requires semaphores or mutexes.

Misconception 2: “Pipes are slow because they copy data”

  • Reality: Modern kernels optimize pipes. For small messages, pipes can be faster than shared memory due to cache effects and simplicity.

Misconception 3: “UNIX sockets are just like TCP sockets”

  • Reality: UNIX sockets can pass file descriptors between processes (SCM_RIGHTS), have different connection semantics, and are significantly faster.

Misconception 4: “Use shared memory for everything—it’s fastest”

  • Reality: Shared memory complexity often isn’t worth it. UNIX sockets are simpler and fast enough for most applications. Benchmark first!

Misconception 5: “Message queues provide ordering guarantees”

  • Reality: POSIX message queues are priority-ordered, not FIFO. Same-priority messages are FIFO, but priorities change order.
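
A small demonstration of this behavior (the queue name /prio_demo and sizes are arbitrary; link with -lrt on older glibc): messages sent low, high, medium come back high, medium, low.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t q = mq_open("/prio_demo", O_CREAT | O_RDWR, 0666, &attr);

    mq_send(q, "low",    4, 1);    /* priority 1 */
    mq_send(q, "high",   5, 9);    /* priority 9 */
    mq_send(q, "medium", 7, 5);    /* priority 5 */

    char buf[64];
    unsigned int prio;
    for (int i = 0; i < 3; i++) {
        mq_receive(q, buf, sizeof(buf), &prio);
        printf("%s (priority %u)\n", buf, prio);   /* prints high, medium, low */
    }

    mq_close(q);
    mq_unlink("/prio_demo");
    return 0;
}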

3. Project Specification

3.1 What You Will Build

A complete IPC message hub (myipchub) that:

  1. Supports multiple IPC backends: pipes, FIFOs, UNIX sockets, POSIX message queues, shared memory
  2. Implements publish/subscribe messaging: Publishers send to channels, subscribers receive
  3. Provides a unified client API: Same commands work with any backend
  4. Includes a benchmarking tool: Compare performance of all mechanisms
  5. Handles resource cleanup: Properly removes shared resources on shutdown

System Architecture

┌───────────────────────────────────────────────────────────────────────┐
│                           IPC Hub (myipchub)                           │
│                                                                        │
│  ┌────────────────────────────────────────────────────────────────┐   │
│  │                     Backend Selector                            │   │
│  │                                                                 │   │
│  │  ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐  │   │
│  │  │  Pipe   │ │  FIFO   │ │  Socket │ │ MQueue  │ │  SHM    │  │   │
│  │  │ Backend │ │ Backend │ │ Backend │ │ Backend │ │ Backend │  │   │
│  │  └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘  │   │
│  │       │           │           │           │           │        │   │
│  │       └───────────┴───────────┴───────────┴───────────┘        │   │
│  │                               │                                 │   │
│  │                               v                                 │   │
│  │                      ┌───────────────┐                          │   │
│  │                      │ Unified API   │                          │   │
│  │                      │ connect()     │                          │   │
│  │                      │ send()        │                          │   │
│  │                      │ recv()        │                          │   │
│  │                      │ close()       │                          │   │
│  │                      └───────────────┘                          │   │
│  └────────────────────────────────────────────────────────────────┘   │
│                                                                        │
│  ┌────────────────────────────────────────────────────────────────┐   │
│  │                    Channel Manager                               │   │
│  │                                                                 │   │
│  │  Channel: "weather"    Channel: "stocks"    Channel: "events"  │   │
│  │  ┌─────────────┐       ┌─────────────┐      ┌─────────────┐    │   │
│  │  │ Subscribers │       │ Subscribers │      │ Subscribers │    │   │
│  │  │ ├─ Client1  │       │ ├─ Client3  │      │ ├─ Client5  │    │   │
│  │  │ ├─ Client2  │       │ └─ Client4  │      │ └─ Client6  │    │   │
│  │  │ └─ Client7  │       │             │      │             │    │   │
│  │  └─────────────┘       └─────────────┘      └─────────────┘    │   │
│  └────────────────────────────────────────────────────────────────┘   │
│                                                                        │
└───────────────────────────────────────────────────────────────────────┘
              ▲                    ▲                    ▲
              │                    │                    │
    ┌─────────┴────────┐ ┌────────┴────────┐ ┌────────┴────────┐
    │   Publisher      │ │   Subscriber    │ │   Subscriber    │
    │   (myipcpub)     │ │   (myipcsub)    │ │   (myipcsub)    │
    │                  │ │                 │ │                 │
    │ --backend socket │ │ --backend shm   │ │ --backend mqueue│
    │ --channel weather│ │ --channel weather│ │ --channel weather│
    └──────────────────┘ └─────────────────┘ └─────────────────┘

3.2 Functional Requirements

  1. IPC Hub Daemon (myipchub)
    • Accept connections from publishers and subscribers
    • Route messages from publishers to channel subscribers
    • Support multiple backends simultaneously
    • Clean shutdown on SIGINT/SIGTERM
  2. Publisher Client (myipcpub)
    • Connect to hub with selected backend
    • Send messages to named channel
    • Report delivery count (how many subscribers received)
  3. Subscriber Client (myipcsub)
    • Connect to hub with selected backend
    • Subscribe to named channel
    • Receive and display messages
  4. Backend Implementations
    • Pipe: For parent-child communication within hub
    • FIFO: Named pipe backend for unrelated processes
    • UNIX Socket: Primary backend for general use
    • POSIX Message Queue: Message-oriented backend
    • Shared Memory: High-performance backend with ring buffer
  5. Benchmark Tool (myipcbench)
    • Measure throughput (messages/second, bytes/second)
    • Measure latency (average, p50, p99)
    • Test different message sizes
    • Compare all backends

3.3 Non-Functional Requirements

  1. Performance
    • Shared memory: >1GB/sec throughput
    • UNIX sockets: >500MB/sec throughput
    • Message queues: >100MB/sec throughput
  2. Reliability
    • Handle subscriber disconnect gracefully
    • No message loss (within backend capabilities)
    • Proper error reporting
  3. Resource Management
    • Clean up all IPC resources on exit
    • Handle SIGINT/SIGTERM for cleanup
    • No leaked file descriptors, memory, or IPC objects
  4. Portability
    • Compile on Linux and macOS
    • Use POSIX APIs where available

3.4 Example Usage / Output

# 1. Start the hub
$ ./myipchub
IPC Hub started
Backends available: pipe, fifo, socket, mqueue, shm
Listening for connections...

# 2. Publisher using UNIX socket
$ ./myipcpub --backend socket --channel weather
Connected to hub via UNIX socket
Publishing to 'weather' channel...
> {"temp": 72, "humidity": 45}
Published (12 subscribers)
> {"temp": 73, "humidity": 44}
Published (12 subscribers)

# 3. Subscriber using shared memory (fastest)
$ ./myipcsub --backend shm --channel weather
Connected to hub via shared memory
Subscribed to 'weather'
[weather] {"temp": 72, "humidity": 45}
[weather] {"temp": 73, "humidity": 44}

# 4. Subscriber using POSIX message queue
$ ./myipcsub --backend mqueue --channel weather
Connected to hub via message queue
[weather] {"temp": 72, "humidity": 45}

# 5. Performance benchmark
$ ./myipcbench
IPC Mechanism Benchmark (1M messages, 1KB each)

Mechanism          Throughput      Latency (avg)
-----------------------------------------------
Shared Memory      2.1 GB/sec      0.5 us
UNIX Socket        850 MB/sec      1.2 us
POSIX MQueue       420 MB/sec      2.4 us
Named Pipe (FIFO)  350 MB/sec      2.9 us
Anonymous Pipe     380 MB/sec      2.6 us

Recommendation:
  High throughput: Use shared memory
  Simplicity: Use UNIX sockets
  Message boundaries: Use POSIX message queues

3.5 Real World Outcome

What you will see:

  1. Central hub: Single process managing multiple IPC channels
  2. Publisher/subscriber: Processes pub/sub to named channels
  3. Multiple backends: Same API, different transport
  4. Performance comparison: Benchmark each mechanism

Success Indicators:

  • Messages flow from publishers to all subscribers
  • Different backends work with same client code
  • Shared memory achieves >1GB/sec throughput
  • Clean shutdown removes all IPC resources
  • No memory leaks (verified with Valgrind)

4. Solution Architecture

4.1 High-Level Design

Detailed Architecture

┌───────────────────────────────────────────────────────────────────────┐
│                             HUB PROCESS                                │
│                                                                        │
│  ┌────────────────────────────────────────────────────────────────┐   │
│  │                       Event Loop (select/poll)                  │   │
│  │                                                                 │   │
│  │   ┌───────────────────────────────────────────────────────┐    │   │
│  │   │              Connection Acceptor                       │    │   │
│  │   │                                                        │    │   │
│  │   │   UNIX Socket Listener   FIFO Listeners               │    │   │
│  │   │   /tmp/ipchub.sock       /tmp/ipchub.fifo.*           │    │   │
│  │   └───────────────────────────────────────────────────────┘    │   │
│  │                              │                                  │   │
│  │                              v                                  │   │
│  │   ┌───────────────────────────────────────────────────────┐    │   │
│  │   │              Connection Registry                       │    │   │
│  │   │                                                        │    │   │
│  │   │   ┌──────────┐  ┌──────────┐  ┌──────────┐           │    │   │
│  │   │   │ Client 1 │  │ Client 2 │  │ Client 3 │  ...      │    │   │
│  │   │   │ socket   │  │ shm      │  │ mqueue   │           │    │   │
│  │   │   │ pub      │  │ sub      │  │ sub      │           │    │   │
│  │   │   │ weather  │  │ weather  │  │ stocks   │           │    │   │
│  │   │   └──────────┘  └──────────┘  └──────────┘           │    │   │
│  │   └───────────────────────────────────────────────────────┘    │   │
│  │                              │                                  │   │
│  │                              v                                  │   │
│  │   ┌───────────────────────────────────────────────────────┐    │   │
│  │   │              Message Router                            │    │   │
│  │   │                                                        │    │   │
│  │   │   Publisher msg ──> Find channel ──> Fan-out to subs  │    │   │
│  │   └───────────────────────────────────────────────────────┘    │   │
│  │                                                                 │   │
│  └────────────────────────────────────────────────────────────────┘   │
│                                                                        │
│  ┌────────────────────────────────────────────────────────────────┐   │
│  │                    Shared Memory Manager                        │   │
│  │                                                                 │   │
│  │   /dev/shm/ipchub.weather   ┌─────────────────────────────┐   │   │
│  │                             │ Ring Buffer                  │   │   │
│  │   sem: /ipchub.weather.mtx  │ ┌─────────────────────────┐ │   │   │
│  │   sem: /ipchub.weather.cnt  │ │ head │ tail │ data...   │ │   │   │
│  │                             │ └─────────────────────────┘ │   │   │
│  │                             └─────────────────────────────┘   │   │
│  └────────────────────────────────────────────────────────────────┘   │
│                                                                        │
└───────────────────────────────────────────────────────────────────────┘

4.2 Key Components

  1. Backend Interface (ipc_backend_t)
    • Abstract interface all backends implement
    • connect(), send(), recv(), close() operations
    • Backend-specific initialization
  2. Channel Manager
    • Track channels and their subscribers
    • Fan-out messages to all channel subscribers
    • Handle subscription/unsubscription
  3. Connection Registry
    • Track all connected clients
    • Store client metadata (backend, role, channels)
    • Clean up on disconnect
  4. Shared Memory Manager
    • Create and manage shared memory regions
    • Implement ring buffer for messages
    • Handle semaphore synchronization
  5. Message Router
    • Receive messages from publishers
    • Look up channel subscribers
    • Deliver to each subscriber via their backend

4.3 Data Structures

/* Backend abstraction */
typedef struct {
    const char *name;
    int (*init)(void);
    int (*connect)(const char *channel);
    int (*send)(int fd, const void *buf, size_t len);
    int (*recv)(int fd, void *buf, size_t len);
    void (*close)(int fd);
    void (*cleanup)(void);
} ipc_backend_t;

/* Message format */
typedef struct {
    uint32_t magic;           /* Message identifier */
    uint32_t type;            /* MSG_PUBLISH, MSG_SUBSCRIBE, etc. */
    uint32_t channel_len;     /* Channel name length */
    uint32_t payload_len;     /* Payload length */
    /* Followed by: channel name + payload */
} ipc_message_header_t;

typedef enum {
    MSG_SUBSCRIBE = 1,
    MSG_UNSUBSCRIBE,
    MSG_PUBLISH,
    MSG_DATA,
    MSG_ACK,
    MSG_ERROR
} message_type_t;

/* Client connection */
typedef struct {
    int fd;                   /* Connection file descriptor */
    ipc_backend_t *backend;   /* Which backend */
    int is_publisher;         /* Publisher or subscriber */
    char channels[16][64];    /* Subscribed channels */
    int channel_count;
    void *backend_data;       /* Backend-specific data */
} client_t;

/* Channel */
typedef struct {
    char name[64];
    client_t *subscribers[256];
    int subscriber_count;
} channel_t;

/* Shared memory ring buffer */
typedef struct {
    sem_t mutex;              /* Protect buffer access */
    sem_t items;              /* Count of items in buffer */
    sem_t spaces;             /* Count of empty slots */
    uint32_t head;            /* Write position */
    uint32_t tail;            /* Read position */
    uint32_t size;            /* Buffer size */
    char data[];              /* Flexible array for data */
} shm_ring_t;

/* Benchmark results */
typedef struct {
    const char *backend_name;
    size_t messages_sent;
    size_t bytes_sent;
    double elapsed_seconds;
    double throughput_mbps;
    double avg_latency_us;
    double p99_latency_us;
} benchmark_result_t;
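
As an illustration of how these types fit together, here is a hedged sketch of a publisher serializing a message onto a stream backend; the write_all helper and the magic value are illustrative, not part of the specification:

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define IPC_MAGIC 0x49504331u   /* example value ("IPC1"); choose your own */

/* Write exactly len bytes, retrying on short writes and EINTR. */
static int write_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = write(fd, p, len);
        if (n < 0) { if (errno == EINTR) continue; return -1; }
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Frame: fixed header, then channel name, then payload. */
int send_publish(int fd, const char *channel, const void *payload, uint32_t payload_len)
{
    ipc_message_header_t hdr = {
        .magic       = IPC_MAGIC,
        .type        = MSG_PUBLISH,
        .channel_len = (uint32_t)strlen(channel),
        .payload_len = payload_len,
    };
    if (write_all(fd, &hdr, sizeof(hdr)) < 0)        return -1;
    if (write_all(fd, channel, hdr.channel_len) < 0) return -1;
    if (write_all(fd, payload, payload_len) < 0)     return -1;
    return 0;
}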

4.4 Algorithm Overview

Message Flow:

Publisher                    Hub                         Subscriber
    │                         │                              │
    │  ─── CONNECT ────────>  │                              │
    │  <── ACK ────────────   │                              │
    │                         │                              │
    │                         │  <─── SUBSCRIBE(weather) ─── │
    │                         │  ─── ACK ──────────────────> │
    │                         │                              │
    │  ─── PUBLISH ────────>  │                              │
    │     channel: weather    │                              │
    │     data: {...}         │                              │
    │                         │  ─── DATA ─────────────────> │
    │                         │     channel: weather         │
    │                         │     data: {...}              │
    │  <── ACK(1 sub) ─────   │                              │
    │                         │                              │

Shared Memory Ring Buffer Algorithm:

Producer (write):
    1. sem_wait(spaces)      // Wait for empty slot
    2. sem_wait(mutex)       // Lock buffer
    3. Copy data to buffer[head]
    4. head = (head + 1) % size
    5. sem_post(mutex)       // Unlock buffer
    6. sem_post(items)       // Signal item available

Consumer (read):
    1. sem_wait(items)       // Wait for item
    2. sem_wait(mutex)       // Lock buffer
    3. Copy data from buffer[tail]
    4. tail = (tail + 1) % size
    5. sem_post(mutex)       // Unlock buffer
    6. sem_post(spaces)      // Signal slot available

Invariants:
- items + spaces = size
- head points to next write position
- tail points to next read position
- Buffer is full when head == tail && items == size
- Buffer is empty when head == tail && items == 0
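
A minimal C sketch of those two routines, assuming fixed-size slots, the shm_ring_t from Section 4.3 with size holding the slot count, and <semaphore.h>/<string.h> included (variable-size messages would need a per-slot length field, omitted here):

#define SLOT_SIZE 256   /* illustrative fixed slot size */

/* Producer: blocks while the ring is full. */
void ring_write_slot(shm_ring_t *r, const void *msg)
{
    sem_wait(&r->spaces);                            /* wait for an empty slot */
    sem_wait(&r->mutex);                             /* protect head/tail */
    memcpy(r->data + (size_t)r->head * SLOT_SIZE, msg, SLOT_SIZE);
    r->head = (r->head + 1) % r->size;
    sem_post(&r->mutex);
    sem_post(&r->items);                             /* one more item available */
}

/* Consumer: blocks while the ring is empty. */
void ring_read_slot(shm_ring_t *r, void *out)
{
    sem_wait(&r->items);                             /* wait for an item */
    sem_wait(&r->mutex);
    memcpy(out, r->data + (size_t)r->tail * SLOT_SIZE, SLOT_SIZE);
    r->tail = (r->tail + 1) % r->size;
    sem_post(&r->mutex);
    sem_post(&r->spaces);                            /* one more empty slot */
}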

5. Implementation Guide

5.1 Development Environment Setup

# Install required packages (Ubuntu/Debian)
sudo apt-get update
sudo apt-get install -y build-essential gdb valgrind strace

# For POSIX message queues, may need rt library
# (usually included by default)

# Create project directory
mkdir -p ~/projects/myipchub
cd ~/projects/myipchub

# Create initial files
touch myipchub.c myipcpub.c myipcsub.c myipcbench.c
touch backend_pipe.c backend_fifo.c backend_socket.c
touch backend_mqueue.c backend_shm.c
touch ipc_common.h Makefile

# Check system limits for IPC
cat /proc/sys/fs/mqueue/msg_max      # Max messages per queue
cat /proc/sys/kernel/shmmax          # Max shared memory segment size

Makefile:

CC = gcc
CFLAGS = -Wall -Wextra -g -O2 -std=c11
LDFLAGS = -lpthread -lrt

TARGETS = myipchub myipcpub myipcsub myipcbench

all: $(TARGETS)

myipchub: myipchub.c backend_*.c ipc_common.h
	$(CC) $(CFLAGS) -o $@ myipchub.c backend_*.c $(LDFLAGS)

myipcpub: myipcpub.c ipc_common.h
	$(CC) $(CFLAGS) -o $@ $< $(LDFLAGS)

myipcsub: myipcsub.c ipc_common.h
	$(CC) $(CFLAGS) -o $@ $< $(LDFLAGS)

myipcbench: myipcbench.c backend_*.c ipc_common.h
	$(CC) $(CFLAGS) -o $@ myipcbench.c backend_*.c $(LDFLAGS)

clean:
	rm -f $(TARGETS)
	# Clean up any leftover IPC resources
	rm -f /tmp/ipchub.sock /tmp/ipchub.fifo.*
	rm -f /dev/shm/ipchub.*

.PHONY: all clean

5.2 Project Structure

myipchub/
├── Makefile
├── ipc_common.h           # Shared definitions
├── myipchub.c             # Hub daemon
├── myipcpub.c             # Publisher client
├── myipcsub.c             # Subscriber client
├── myipcbench.c           # Benchmark tool
├── backend_pipe.c         # Pipe backend
├── backend_fifo.c         # FIFO backend
├── backend_socket.c       # UNIX socket backend
├── backend_mqueue.c       # Message queue backend
├── backend_shm.c          # Shared memory backend
└── tests/
    ├── test_backends.c
    ├── test_pubsub.sh
    └── test_cleanup.sh

5.3 The Core Question You’re Answering

“What are all the ways processes can communicate on a UNIX system, and when should you use each?”

This is a comprehensive question about IPC. Each mechanism has different performance characteristics, complexity, and use cases. Understanding all of them makes you a complete UNIX programmer.

5.4 Concepts You Must Understand First

Stop and research these before coding:

  1. Pipes and FIFOs
    • Anonymous pipes: parent-child only (file descriptors inherited)
    • Named pipes (FIFOs): unrelated processes can connect
    • Unidirectional, byte-stream semantics
    • Book Reference: “APUE” by Stevens Ch. 15.2-15.5
  2. UNIX Domain Sockets
    • AF_UNIX address family
    • SOCK_STREAM (connection-oriented) vs SOCK_DGRAM (datagram)
    • Can pass file descriptors between processes (SCM_RIGHTS)
    • Book Reference: “The Linux Programming Interface” by Kerrisk Ch. 57
  3. POSIX Message Queues
    • mq_open(), mq_send(), mq_receive(), mq_close(), mq_unlink()
    • Priority-based ordering (not FIFO for different priorities!)
    • Persist in kernel until explicitly removed
    • Book Reference: “APUE” by Stevens Ch. 15.7
  4. Shared Memory + Semaphores
    • shm_open(), mmap(), shm_unlink()
    • sem_open(), sem_wait(), sem_post(), sem_close(), sem_unlink()
    • Fastest IPC but requires careful synchronization
    • Book Reference: “APUE” by Stevens Ch. 15.9

5.5 Questions to Guide Your Design

Before implementing, think through these:

  1. Unified API
    • How do you abstract over different mechanisms?
    • What operations are common to all backends?
    • How do you handle backend-specific features?
  2. Shared Memory Challenges
    • How do you handle variable-size messages in a ring buffer?
    • How do you coordinate multiple readers/writers?
    • What happens if a reader is slow?
  3. Resource Cleanup
    • What happens if a process crashes without cleanup?
    • How do you detect stale IPC resources?
    • What cleanup runs on SIGINT/SIGTERM?
  4. Message Format
    • How do you frame messages (length-prefix? delimiter?)?
    • How do you handle binary vs text data?
    • What metadata do messages need?

5.6 Thinking Exercise

Compare IPC Mechanisms

Fill in this table and understand why each cell has its value:

Mechanism      | Relationship | Direction | Message   | Persist | Speed
---------------|--------------|-----------|-----------|---------|--------
Pipe           | Parent-child | One-way   | Stream    | No      | Fast
FIFO           | Any          | One-way   | Stream    | File    | Fast
UNIX Socket    | Any          | Two-way   | Both      | File    | Fast
Msg Queue      | Any          | Two-way   | Message   | Kernel  | Medium
Shared Mem     | Any          | Two-way   | Custom    | Name    | Fastest

Discussion Questions:
- Why is shared memory fastest?
  (No kernel copy for data transfer---just synchronization)

- Why do pipes only work parent-child?
  (FDs must be inherited through fork())

- What makes UNIX sockets preferred for daemons?
  (Bidirectional, can pass FDs, familiar API, decent performance)

- When would you choose message queues over sockets?
  (Need priority ordering, want kernel-managed persistence,
   need notification on message arrival via mq_notify)

5.7 Hints in Layers

Hint 1: Abstract Interface

typedef struct {
    int (*connect)(const char *channel);
    int (*send)(int fd, const void *buf, size_t len);
    int (*recv)(int fd, void *buf, size_t len);
    void (*close)(int fd);
} ipc_backend_t;

/* Usage */
ipc_backend_t *backend = get_backend("socket");
int fd = backend->connect("weather");
backend->send(fd, msg, len);
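
One plausible way to implement get_backend() is a static registry keyed by the name field from the fuller struct in Section 4.3; the backend instances (pipe_backend, socket_backend, ...) are assumed to be defined in their respective backend_*.c files:

#include <stddef.h>
#include <string.h>

extern ipc_backend_t pipe_backend, fifo_backend, socket_backend,
                     mqueue_backend, shm_backend;

ipc_backend_t *get_backend(const char *name)
{
    static ipc_backend_t *registry[] = {
        &pipe_backend, &fifo_backend, &socket_backend,
        &mqueue_backend, &shm_backend,
    };
    for (size_t i = 0; i < sizeof(registry) / sizeof(registry[0]); i++)
        if (strcmp(registry[i]->name, name) == 0)
            return registry[i];
    return NULL;   /* unknown backend name */
}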

Hint 2: Shared Memory Ring Buffer

typedef struct {
    sem_t mutex;
    sem_t items;      /* Count of items in buffer */
    sem_t spaces;     /* Count of empty slots */
    size_t head;
    size_t tail;
    char data[RING_SIZE];
} shm_ring_t;

/* Create shared memory */
int fd = shm_open("/ipchub.channel", O_CREAT | O_RDWR, 0666);
ftruncate(fd, sizeof(shm_ring_t));
shm_ring_t *ring = mmap(NULL, sizeof(shm_ring_t),
                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
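
Because these semaphores live inside the shared region, they are unnamed and must be initialized exactly once, by the creating process, with the process-shared flag set; a sketch of that step (the created flag and RING_SLOTS are placeholders):

/* Run once by the creator, right after the mmap() above. */
if (created) {   /* e.g. shm_open() succeeded with O_CREAT | O_EXCL */
    sem_init(&ring->mutex,  1 /* pshared */, 1);
    sem_init(&ring->items,  1 /* pshared */, 0);           /* ring starts empty */
    sem_init(&ring->spaces, 1 /* pshared */, RING_SLOTS);  /* all slots free */
    ring->head = ring->tail = 0;
}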

Hint 3: UNIX Domain Socket Setup

/* Server side */
int fd = socket(AF_UNIX, SOCK_STREAM, 0);
struct sockaddr_un addr;
memset(&addr, 0, sizeof(addr));
addr.sun_family = AF_UNIX;
strncpy(addr.sun_path, "/tmp/myipc.sock", sizeof(addr.sun_path) - 1);
unlink(addr.sun_path);  /* Remove any existing socket file */
bind(fd, (struct sockaddr *)&addr, sizeof(addr));
listen(fd, 10);

/* Client side */
int fd = socket(AF_UNIX, SOCK_STREAM, 0);
struct sockaddr_un addr;
memset(&addr, 0, sizeof(addr));
addr.sun_family = AF_UNIX;
strncpy(addr.sun_path, "/tmp/myipc.sock", sizeof(addr.sun_path) - 1);
connect(fd, (struct sockaddr *)&addr, sizeof(addr));

Hint 4: Passing File Descriptors

Use sendmsg()/recvmsg() with SCM_RIGHTS control message:

/* Send a file descriptor */
struct msghdr msg = {0};
struct iovec iov;
struct cmsghdr *cmsg;
char buf[CMSG_SPACE(sizeof(int))];
char dummy = 'F';            /* At least one byte of real data must accompany the FD */
int fd_to_pass = ...;

iov.iov_base = &dummy;
iov.iov_len = 1;
msg.msg_iov = &iov;
msg.msg_iovlen = 1;
msg.msg_control = buf;
msg.msg_controllen = sizeof(buf);

cmsg = CMSG_FIRSTHDR(&msg);
cmsg->cmsg_level = SOL_SOCKET;
cmsg->cmsg_type = SCM_RIGHTS;
cmsg->cmsg_len = CMSG_LEN(sizeof(int));
memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

sendmsg(socket_fd, &msg, 0);
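
The receiving side mirrors this with recvmsg(); a hedged sketch with error handling omitted:

/* Receive a file descriptor */
struct msghdr msg = {0};
struct iovec iov;
struct cmsghdr *cmsg;
char buf[CMSG_SPACE(sizeof(int))];
char dummy;
int received_fd = -1;

iov.iov_base = &dummy;        /* matches the one data byte sent above */
iov.iov_len = 1;
msg.msg_iov = &iov;
msg.msg_iovlen = 1;
msg.msg_control = buf;
msg.msg_controllen = sizeof(buf);

recvmsg(socket_fd, &msg, 0);

cmsg = CMSG_FIRSTHDR(&msg);
if (cmsg && cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_RIGHTS)
    memcpy(&received_fd, CMSG_DATA(cmsg), sizeof(int));
/* received_fd is now a valid descriptor in this process's FD table */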

5.8 The Interview Questions They’ll Ask

  1. “Compare pipes, sockets, and shared memory for IPC.”
    • Pipes: Simple, unidirectional, parent-child only
    • Sockets: Bidirectional, any processes, can pass FDs
    • Shared memory: Fastest, requires synchronization, any processes
  2. “How do you synchronize shared memory access?”
    • Use semaphores (sem_wait/sem_post) or mutexes (pthread_mutex)
    • Need mutex for buffer access
    • Need counting semaphores for producer-consumer coordination
  3. “What’s the advantage of UNIX domain sockets over TCP localhost?”
    • No network stack overhead (no checksums, routing, etc.)
    • Can pass file descriptors between processes
    • Filesystem-based access control
    • About 2x faster in practice
  4. “How would you pass a file descriptor to another process?”
    • Use sendmsg() with SCM_RIGHTS ancillary data
    • Only works over UNIX domain sockets
    • Kernel duplicates the FD into receiving process’s table
  5. “What IPC mechanism would you choose for a database connection pool?”
    • UNIX domain sockets for the control channel
    • Shared memory for the actual data buffers
    • Semaphores for coordination
    • Example: PostgreSQL does exactly this
  6. “How does a ring buffer work for shared memory IPC?”
    • Circular buffer with head (write) and tail (read) pointers
    • Producer waits for space, writes at head, signals item available
    • Consumer waits for item, reads at tail, signals space available
    • Mutex protects pointer updates
  7. “What are the cleanup requirements for each IPC mechanism?”
    • Pipes: Close file descriptors (automatic on exit)
    • FIFOs: Close FDs + unlink file if creator
    • Sockets: Close FD + unlink socket file if server
    • MQueues: mq_close + mq_unlink if creator
    • Shared memory: munmap + shm_unlink if creator

5.9 Books That Will Help

Topic              │ Book                                          │ Chapter
───────────────────┼───────────────────────────────────────────────┼──────────────────────────
All IPC mechanisms │ “APUE” by Stevens                             │ Ch. 15
POSIX IPC          │ “The Linux Programming Interface” by Kerrisk  │ Ch. 51-55
UNIX sockets       │ “UNIX Network Programming, Vol 1” by Stevens  │ Ch. 15
Shared memory      │ “The Linux Programming Interface”             │ Ch. 48-49
Semaphores         │ “The Linux Programming Interface”             │ Ch. 53
Ring buffers       │ Any operating systems textbook                │ Producer-Consumer chapter

5.10 Implementation Phases

Phase 1: UNIX Socket Backend (Days 1-2)

  • Implement socket creation, bind, listen, accept
  • Implement connect, send, recv, close
  • Test with simple echo client/server
  • Add message framing (length-prefix)

Phase 2: Hub Core (Days 3-4)

  • Create hub daemon with event loop
  • Accept connections, track clients
  • Implement subscribe/unsubscribe
  • Implement publish routing

Phase 3: FIFO Backend (Day 5)

  • Create named pipes for channels
  • Handle open ordering (reader before writer)
  • Test with hub

Phase 4: Message Queue Backend (Days 6-7)

  • Implement mq_* wrapper functions
  • Handle priority (use 0 for FIFO order)
  • Test with hub

Phase 5: Shared Memory Backend (Days 8-10)

  • Implement ring buffer with semaphores
  • Handle variable-size messages
  • Test synchronization thoroughly

Phase 6: Benchmark Tool (Days 11-12)

  • Measure throughput for each backend
  • Measure latency (using timestamps)
  • Generate comparison report
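
For the latency measurement, one common approach is to timestamp each message with CLOCK_MONOTONIC just before sending and subtract on receipt; a small sketch of a timing helper (the name now_ns is an assumption):

#include <stdint.h>
#include <time.h>

/* Nanoseconds from an arbitrary but monotonic starting point. */
static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* Sender embeds now_ns() in the payload; the receiver computes:   */
/*     double latency_us = (now_ns() - sent_ns) / 1000.0;          */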

Phase 7: Polish & Cleanup (Days 13-14)

  • Signal handling for cleanup
  • Error handling everywhere
  • Documentation and testing

5.11 Key Implementation Decisions

Decision         │ Option A          │ Option B            │ Recommendation
─────────────────┼───────────────────┼─────────────────────┼──────────────────────────────────────────
Event loop       │ select()          │ poll() or epoll()   │ poll() for simplicity, epoll() for scale
Message framing  │ Length prefix     │ Delimiter           │ Length prefix (handles binary)
Ring buffer size │ Fixed             │ Configurable        │ Configurable with sensible default (1MB)
Semaphore type   │ Named (sem_open)  │ Unnamed (sem_init)  │ Named for cross-process visibility
Error handling   │ Return codes      │ errno + return      │ errno + return (POSIX convention)
Channel names    │ Arbitrary strings │ Validated           │ Validate (alphanumeric, max length)

6. Testing Strategy

6.1 Unit Tests

/* Test ring buffer */
void test_ring_buffer(void) {
    shm_ring_t *ring = create_ring_buffer(1024);

    /* Test empty */
    assert(ring_empty(ring) == true);
    assert(ring_full(ring) == false);

    /* Test single write/read */
    char msg[32] = "Hello";   /* sized for the wrap-around test below */
    assert(ring_write(ring, msg, strlen(msg) + 1) == 0);
    assert(ring_empty(ring) == false);

    char buf[64];
    size_t len = ring_read(ring, buf, sizeof(buf));
    assert(len == strlen("Hello") + 1);
    assert(strcmp(buf, "Hello") == 0);
    assert(ring_empty(ring) == true);

    /* Test wrap-around */
    for (int i = 0; i < 200; i++) {
        snprintf(msg, sizeof(msg), "Message %d", i);
        ring_write(ring, msg, strlen(msg) + 1);
        ring_read(ring, buf, sizeof(buf));
        assert(strcmp(buf, msg) == 0);
    }

    destroy_ring_buffer(ring);
}

/* Test UNIX socket backend */
void test_socket_backend(void) {
    ipc_backend_t *backend = get_backend("socket");

    /* Start server in child process */
    pid_t pid = fork();
    if (pid == 0) {
        int server_fd = backend->init_server("/tmp/test.sock");  /* assumes a server-side init hook beyond the minimal interface */
        int client_fd = accept(server_fd, NULL, NULL);
        char buf[64];
        backend->recv(client_fd, buf, sizeof(buf));
        backend->send(client_fd, buf, strlen(buf) + 1);
        backend->close(client_fd);
        _exit(0);
    }

    /* Client */
    usleep(100000);  /* Wait for server */
    int fd = backend->connect("/tmp/test.sock");
    backend->send(fd, "Hello", 6);
    char buf[64];
    backend->recv(fd, buf, sizeof(buf));
    assert(strcmp(buf, "Hello") == 0);
    backend->close(fd);

    waitpid(pid, NULL, 0);
    unlink("/tmp/test.sock");
}

6.2 Integration Tests

#!/bin/bash
# test_pubsub.sh

set -e

# Start hub
./myipchub &
HUB_PID=$!
sleep 1

# Start subscriber
./myipcsub --backend socket --channel test > /tmp/sub.out &
SUB_PID=$!
sleep 0.5

# Publish message
echo "Hello World" | ./myipcpub --backend socket --channel test

# Wait for message to be received
sleep 0.5

# Check output
if grep -q "Hello World" /tmp/sub.out; then
    echo "PASS: Message received"
else
    echo "FAIL: Message not received"
    exit 1
fi

# Cleanup
kill $SUB_PID $HUB_PID 2>/dev/null
rm -f /tmp/sub.out

echo "All pubsub tests passed!"

6.3 Edge Cases to Test

  1. Connection Handling
    • Client connects before hub is ready
    • Client disconnects mid-message
    • Hub restarts with clients connected
    • Maximum connections reached
  2. Message Handling
    • Empty message
    • Maximum size message
    • Binary data with null bytes
    • Rapid fire messages
  3. Shared Memory
    • Ring buffer full (producer blocks)
    • Ring buffer empty (consumer blocks)
    • Multiple producers to same buffer
    • Multiple consumers from same buffer
  4. Resource Limits
    • Maximum message queue depth
    • Maximum shared memory size
    • Maximum file descriptors
  5. Cleanup
    • SIGINT during operation
    • SIGTERM during operation
    • Process crash (kill -9)
    • Check for orphaned IPC resources

6.4 Verification Commands

# Trace system calls for hub
strace -f ./myipchub 2>&1 | tee strace.log

# Check for IPC resource usage
ipcs -a  # List all IPC resources
ipcs -m  # Shared memory
ipcs -s  # Semaphores
ipcs -q  # Message queues

# List POSIX IPC objects
ls -la /dev/shm/             # Shared memory
ls -la /dev/mqueue/          # Message queues (if mounted)

# Check for socket files
ls -la /tmp/*.sock

# Memory leak detection
valgrind --leak-check=full ./myipchub

# Check file descriptor usage
ls -la /proc/$(pgrep myipchub)/fd/

# Performance testing
./myipcbench --iterations 100000 --message-size 1024

7. Common Pitfalls & Debugging

Problem 1: “Shared memory corruption”

  • Why: Missing synchronization between producers/consumers
  • Fix: Use semaphores around all shared memory access
  • Test: Run with TSAN (Thread Sanitizer): gcc -fsanitize=thread ...

Problem 2: “FIFO blocks forever”

  • Why: No reader when opening for write (or vice versa)
  • Fix: Open with O_NONBLOCK, or ensure reader opens first
  • Debug: Check with ls -la /tmp/ipchub.fifo.* and lsof
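
A sketch of the non-blocking writer-side open (the ENXIO case is the key detail; the path is an example):

#include <errno.h>
#include <fcntl.h>

/* Open a FIFO for writing without blocking until a reader appears. */
int open_fifo_writer(const char *path)
{
    int fd = open(path, O_WRONLY | O_NONBLOCK);
    if (fd < 0 && errno == ENXIO) {
        /* No process has the FIFO open for reading yet; retry later or report. */
        return -1;
    }
    return fd;
}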

Problem 3: “Message queue full”

  • Why: mq_maxmsg limit hit
  • Fix: Increase limit (requires root) or use mq_send with timeout
  • Check: cat /proc/sys/fs/mqueue/msg_max

Problem 4: “Orphaned IPC resources”

  • Why: Process crashed without cleanup
  • Fix: Use shm_unlink(), sem_unlink(), mq_unlink(), unlink() on exit
  • Cleanup: Run ipcrm or manually delete from /dev/shm/

Problem 5: “Permission denied on shared memory”

  • Why: Created with wrong permissions
  • Fix: Use 0666 mode in shm_open() or set umask appropriately
  • Debug: Check ls -la /dev/shm/

Problem 6: “mq_open fails with ENOSYS”

  • Why: POSIX message queues not enabled in kernel
  • Fix: Mount mqueue filesystem: mount -t mqueue none /dev/mqueue
  • Alt: Use System V message queues instead

Problem 7: “Semaphore not shared between processes”

  • Why: Using sem_init() without shared memory, or wrong flags
  • Fix: Use named semaphores (sem_open) or place in shared memory with PTHREAD_PROCESS_SHARED
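
If you place a pthread mutex in shared memory instead of using named semaphores, it must be explicitly marked process-shared; a minimal sketch:

#include <pthread.h>

/* mtx points into a mmap'd shared region; run once by the creating process. */
void init_shared_mutex(pthread_mutex_t *mtx)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(mtx, &attr);
    pthread_mutexattr_destroy(&attr);
}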

Problem 8: “Ring buffer reads corrupted data”

  • Why: Reading more data than was written, or pointer wrap issue
  • Fix: Store message length in buffer, careful modulo arithmetic
  • Debug: Add logging for head/tail positions

8. Extensions & Challenges

8.1 Easy Extensions

  1. Add persistence
    • Save messages to disk for durability
    • Replay messages to new subscribers
  2. Add channel patterns
    • Support wildcard subscriptions: weather.*
    • Implement pattern matching in router
  3. Add authentication
    • Verify client identity using SO_PEERCRED (see the sketch after this list)
    • Implement simple token-based auth
  4. Add statistics
    • Track messages per channel
    • Track bandwidth per backend
    • Expose via stats command
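
For extension 3, SO_PEERCRED on a connected UNIX domain socket reveals the peer's PID/UID/GID; a Linux-specific sketch (macOS uses LOCAL_PEERCRED instead):

#define _GNU_SOURCE            /* for struct ucred */
#include <stdio.h>
#include <sys/socket.h>

/* conn_fd is an accepted UNIX domain socket connection. */
void print_peer_identity(int conn_fd)
{
    struct ucred cred;
    socklen_t len = sizeof(cred);
    if (getsockopt(conn_fd, SOL_SOCKET, SO_PEERCRED, &cred, &len) == 0)
        printf("peer pid=%d uid=%u gid=%u\n",
               (int)cred.pid, (unsigned)cred.uid, (unsigned)cred.gid);
}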

8.2 Advanced Challenges

  1. Zero-copy message passing
    • Use vmsplice() and splice() for large messages
    • Compare performance with memcpy approach
  2. Multi-process hub with shared state
    • Multiple hub processes for redundancy
    • Shared subscriber list in shared memory
    • Leader election
  3. Protocol buffer serialization
    • Define message schema
    • Generate C code with protobuf-c
    • Efficient binary serialization
  4. Network-transparent IPC
    • Extend to TCP sockets
    • Same API for local and remote
  5. Reliable delivery
    • Implement acknowledgment protocol
    • Retry failed deliveries
    • Dead letter queue

8.3 Research Topics

  1. D-Bus architecture
    • How does D-Bus handle message routing?
    • What are bus names and object paths?
  2. ZeroMQ patterns
    • REQ/REP, PUB/SUB, PUSH/PULL, DEALER/ROUTER
    • How does ZeroMQ abstract transports?
  3. Lock-free ring buffers
    • Single-producer single-consumer without mutexes
    • Memory barriers and atomic operations
  4. io_uring for IPC
    • Can io_uring improve IPC performance?
    • Submission/completion queue design

9. Real-World Connections

9.1 Production Systems Using This

System        │ IPC Mechanisms               │ Why
──────────────┼──────────────────────────────┼────────────────────────────────────
D-Bus         │ UNIX sockets                 │ Standard Linux desktop IPC
PostgreSQL    │ UNIX sockets + shared memory │ Client connections + buffer pool
Redis         │ UNIX sockets                 │ 2x faster than TCP for local use
Docker daemon │ UNIX sockets                 │ dockerd to CLI communication
nginx         │ Shared memory                │ Worker coordination
Chrome        │ Pipes + shared memory        │ Multi-process architecture
systemd       │ UNIX sockets                 │ Service activation, journald

9.2 How the Pros Do It

D-Bus (Desktop Bus):

D-Bus Architecture

┌────────────────────────────────────────────────────────┐
│                    D-Bus Daemon                        │
│                                                        │
│  ┌──────────────────────────────────────────────────┐ │
│  │               Message Bus                         │ │
│  │                                                   │ │
│  │   Bus Names: org.freedesktop.Notifications       │ │
│  │              org.gnome.Shell                     │ │
│  │              org.kde.StatusNotifierWatcher       │ │
│  │                                                   │ │
│  │   Object Paths: /org/freedesktop/Notifications   │ │
│  │   Interfaces: org.freedesktop.Notifications      │ │
│  │                                                   │ │
│  └──────────────────────────────────────────────────┘ │
│                        ▲                               │
│                        │ UNIX Domain Sockets           │
│            ┌───────────┼───────────┐                  │
│            │           │           │                  │
└────────────┼───────────┼───────────┼──────────────────┘
             ▼           ▼           ▼
        ┌─────────┐ ┌─────────┐ ┌─────────┐
        │  App 1  │ │  App 2  │ │  App 3  │
        │ (sends) │ │(receives)│ │(receives)│
        └─────────┘ └─────────┘ └─────────┘

Features:
- Type-safe messages with introspection
- Security policy enforcement
- Automatic service activation
- Signal (broadcast) and method call semantics

PostgreSQL shared memory:

PostgreSQL Shared Buffers

┌─────────────────────────────────────────────────────────────┐
│                    Shared Memory Region                      │
│                                                              │
│  ┌──────────────────────────────────────────────────────┐   │
│  │              Buffer Pool (shared_buffers)             │   │
│  │                                                       │   │
│  │   ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐          │   │
│  │   │ Buf │ │ Buf │ │ Buf │ │ Buf │ │ ... │          │   │
│  │   │  1  │ │  2  │ │  3  │ │  4  │ │     │          │   │
│  │   └─────┘ └─────┘ └─────┘ └─────┘ └─────┘          │   │
│  │                                                       │   │
│  └──────────────────────────────────────────────────────┘   │
│                                                              │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐  │
│  │ Lock Table  │  │ Proc Array  │  │ WAL Buffers         │  │
│  └─────────────┘  └─────────────┘  └─────────────────────┘  │
│                                                              │
└─────────────────────────────────────────────────────────────┘
        ▲                    ▲                    ▲
        │                    │                    │
   ┌────┴────┐          ┌────┴────┐          ┌────┴────┐
   │ Backend │          │ Backend │          │ Backend │
   │ Process │          │ Process │          │ Process │
   └─────────┘          └─────────┘          └─────────┘

Synchronization:
- LWLocks (lightweight locks) for buffer access
- Spin locks for short critical sections
- Semaphores for process waiting

9.3 Reading the Source

Start with:

  1. libevent/libuv: Event loop libraries using multiple I/O mechanisms
    • src/evutil.c: Socket utilities
    • Shows portable IPC abstraction
  2. Redis: Simple codebase with UNIX socket support
    • src/anet.c: Networking abstraction
    • src/unix.c: UNIX socket specifics
  3. PostgreSQL: Comprehensive shared memory usage
    • src/backend/storage/ipc/: IPC subsystem
    • src/backend/storage/lmgr/: Lock manager
  4. systemd: Modern Linux IPC
    • src/libsystemd/sd-bus/: D-Bus implementation
    • src/journal/: journald IPC

10. Resources

10.1 Man Pages

# Pipes and FIFOs
man 2 pipe
man 2 pipe2
man 3 mkfifo
man 7 fifo
man 7 pipe

# UNIX Domain Sockets
man 2 socket
man 2 bind
man 2 connect
man 2 accept
man 7 unix
man 7 socket

# POSIX Message Queues
man 7 mq_overview
man 3 mq_open
man 3 mq_send
man 3 mq_receive
man 3 mq_close
man 3 mq_unlink

# POSIX Shared Memory
man 7 shm_overview
man 3 shm_open
man 3 shm_unlink
man 2 mmap
man 2 munmap

# POSIX Semaphores
man 7 sem_overview
man 3 sem_open
man 3 sem_wait
man 3 sem_post
man 3 sem_close
man 3 sem_unlink

# Passing File Descriptors
man 2 sendmsg
man 2 recvmsg
man 7 unix  # Look for SCM_RIGHTS

10.2 Online Resources

  • Beej’s Guide to UNIX IPC: https://beej.us/guide/bgipc/
  • Linux man-pages project: https://man7.org/linux/man-pages/
  • LWN.net IPC articles: Search for “IPC” on lwn.net
  • D-Bus specification: https://dbus.freedesktop.org/doc/dbus-specification.html

10.3 Book Chapters

Book                                          │ Chapters             │ Focus
──────────────────────────────────────────────┼──────────────────────┼──────────────────────────
“APUE” by Stevens                             │ Ch. 15               │ All IPC mechanisms
“The Linux Programming Interface” by Kerrisk  │ Ch. 44-55            │ Comprehensive POSIX IPC
“UNIX Network Programming, Vol 1” by Stevens  │ Ch. 15               │ UNIX domain sockets
“UNIX Network Programming, Vol 2” by Stevens  │ All                  │ IPC deep dive
“Operating Systems: Three Easy Pieces”        │ Concurrency chapters │ Synchronization concepts

11. Self-Assessment Checklist

Before considering this project complete, verify:

  • I can explain when to use each IPC mechanism
  • I understand why shared memory is fastest
  • I can implement a thread-safe ring buffer
  • I know how to pass file descriptors between processes
  • I understand semaphore semantics (wait/post/value)
  • I can diagnose and clean up orphaned IPC resources
  • My implementations handle all error conditions
  • I can answer all the interview questions confidently
  • My code passes all tests with zero Valgrind errors
  • All backends achieve expected throughput in benchmarks
  • Resource cleanup works on normal exit and signals
  • The hub handles client disconnects gracefully

12. Submission / Completion Criteria

This project is complete when:

  1. Functional Requirements Met:
    • Hub daemon starts and manages channels
    • All 5 backends implemented and working
    • Publishers can send to channels
    • Subscribers receive from channels
    • Multiple subscribers per channel work
  2. Backends Working:
    • Pipe backend (for internal use)
    • FIFO backend
    • UNIX socket backend
    • POSIX message queue backend
    • Shared memory backend with ring buffer
  3. Performance Requirements Met:
    • Shared memory: >1GB/sec
    • UNIX sockets: >500MB/sec
    • Benchmark tool generates comparison report
  4. Resource Management:
    • All IPC resources cleaned up on exit
    • SIGINT/SIGTERM handled
    • No orphaned resources after crash (manual cleanup script OK)
  5. Quality Requirements Met:
    • No memory leaks (verified with Valgrind)
    • No race conditions (verified with TSAN)
    • Compiles without warnings
  6. Documentation:
    • README with architecture description
    • Usage examples for all tools
    • Benchmark results documented

Stretch Goals (Optional):

  • File descriptor passing support
  • Channel patterns (wildcards)
  • Persistence layer
  • Web dashboard for monitoring