Project 11: Inter-Process Communication Toolkit
Implement pipes, shared memory, and Unix domain sockets, plus a local chat demo.
Quick Reference
| Attribute | Value |
|---|---|
| Difficulty | Intermediate |
| Time Estimate | 12-18 hours |
| Main Programming Language | C |
| Alternative Programming Languages | Rust, Go |
| Coolness Level | High |
| Business Potential | Medium (systems tooling) |
| Prerequisites | process model, synchronization |
| Key Topics | pipes, shared memory, unix sockets |
1. Learning Objectives
By completing this project, you will:
- Implement IPC demos using pipes, shared memory, and Unix sockets.
- Compare latency and throughput across IPC mechanisms.
- Build a multi-client Unix-domain socket chat server.
- Explain synchronization needs for shared memory.
2. All Theory Needed (Per-Concept Breakdown)
IPC Mechanisms and Synchronization
Fundamentals
Processes are isolated by default, so they need IPC mechanisms to communicate. Pipes provide a byte stream between related processes. Shared memory lets two or more processes access the same memory region, but it requires synchronization to avoid races. Unix domain sockets provide bidirectional communication between unrelated processes using the socket API. Each mechanism trades off performance, complexity, and flexibility. Synchronization primitives such as mutexes or semaphores are required whenever multiple processes access shared memory concurrently and at least one of them writes.
Deep Dive into the concept
Pipes are the simplest IPC: a kernel-managed buffer with a read end and a write end. When a process writes to the pipe, the data is copied into the kernel buffer and later copied into the reader’s buffer. Pipes are unidirectional and usually used between parent and child processes. They are easy to use but involve copying and limited buffering. When the write end is closed, the read end returns EOF.
Shared memory is the fastest IPC because it avoids copying. The OS maps the same physical pages into multiple processes. You can create a shared memory object with shm_open, size it with ftruncate, and map it with mmap. But because multiple processes can write concurrently, you must synchronize access. POSIX semaphores or mutexes in shared memory are common choices. Without synchronization, you can get corrupted data or inconsistent reads.
Unix domain sockets use the same API as TCP sockets but operate entirely within the kernel without network overhead. They support stream or datagram semantics. A server binds to a filesystem path (e.g., /tmp/chat.sock) and listens for connections. Clients connect and send messages. The kernel handles buffering and wakeups, making sockets a flexible choice for multi-client communication. They are often used by system services (systemd, Docker) for local IPC.
Benchmarking IPC highlights trade-offs. Shared memory offers the lowest latency but highest complexity due to synchronization. Pipes are easy but limited to related processes. Unix sockets offer flexibility and easy multi-client handling, but they incur more overhead due to socket buffering and context switching.
How this fits into the projects
This concept informs Section 3.2 and Section 3.7 and is used in Project 7 (shell pipelines) and Project 13 (containers).
Definitions & key terms
- Pipe: unidirectional byte stream IPC.
- Shared memory: memory mapped into multiple processes.
- Unix domain socket: local IPC using socket API.
- Semaphore: synchronization primitive for shared state.
Mental model diagram (ASCII)
Process A --pipe--> kernel buffer --pipe--> Process B
Process A <--> shared memory <--> Process B
Process A <--> unix socket <--> Process B
How it works (step-by-step)
- Set up IPC mechanism (pipe, shm, socket).
- Exchange data between processes.
- Synchronize if shared memory.
- Measure latency/throughput.
Minimal concrete example
int fds[2];
char buf[3] = {0};
pipe(fds);               /* fds[0] = read end, fds[1] = write end */
write(fds[1], "hi", 2);  /* data is copied into the kernel pipe buffer */
read(fds[0], buf, 2);    /* and copied back out into buf */
Common misconceptions
- “Shared memory is always safe”: without locks it is unsafe.
- “Unix sockets are network sockets”: they are local and use filesystem paths.
Check-your-understanding questions
- Why is shared memory faster than pipes?
- When would you choose Unix sockets over pipes?
- What does closing the write end of a pipe do?
Check-your-understanding answers
- It avoids copying through the kernel buffer.
- For unrelated processes or multi-client servers.
- It signals EOF to the reader.
Real-world applications
- Database shared memory buffers.
- Local IPC for system daemons.
Where you’ll apply it
- This project: Section 3.2, Section 3.7, Section 5.10 Phase 2.
- Also used in: Project 7, Project 13.
References
- TLPI Ch. 44-56
- UNIX Network Programming Vol. 1
Key insights
IPC choice is a trade-off between speed, safety, and flexibility.
Summary
By implementing multiple IPC mechanisms, you see how the OS supports different communication patterns.
Homework/Exercises to practice the concept
- Add message framing to the pipe demo.
- Implement shared memory ring buffer.
- Add client disconnect handling in chat server.
Solutions to the homework/exercises
- Prefix messages with a length header.
- Use head/tail indices with mutex.
- Remove client from list on read=0.
3. Project Specification
3.1 What You Will Build
A toolkit with three demos (pipe, shared memory, unix socket) plus a simple chat server/client. Includes a benchmark mode.
3.2 Functional Requirements
- Pipe demo between parent and child.
- Shared memory demo with checksum validation.
- Unix socket chat with multiple clients.
- Benchmark mode comparing latency.
3.3 Non-Functional Requirements
- Performance: benchmark results in <5 seconds.
- Reliability: server handles client disconnects.
- Usability: single CLI entry point, ./ipc_demo pipe|shm|sock.
3.4 Example Usage / Output
$ ./ipc_demo pipe
parent -> child: hello
child -> parent: ack
3.5 Data Formats / Schemas / Protocols
- Chat messages: newline-delimited UTF-8 text.
3.6 Edge Cases
- Client disconnect during broadcast.
- Shared memory read before write.
- Pipe closed early.
3.7 Real World Outcome
3.7.1 How to Run (Copy/Paste)
./ipc_demo pipe --seed 42
./chat_server
./chat_client
3.7.2 Golden Path Demo (Deterministic)
- Use fixed message sequence and seed=42 for benchmark.
3.7.3 If CLI: exact terminal transcript
$ ./ipc_demo shm --seed 42
writer: wrote 4096 bytes
reader: checksum OK
Failure demo (deterministic):
$ ./ipc_demo shm --size 0
error: size must be > 0
Exit codes:
- 0: success
- 2: invalid args
- 3: IPC error
4. Solution Architecture
4.1 High-Level Design
Demo runner -> IPC backend (pipe/shm/socket) -> metrics
4.2 Key Components
| Component | Responsibility | Key Decisions |
|---|---|---|
| Pipe demo | parent/child exchange | fixed message size |
| SHM demo | shared buffer + checksum | POSIX shm + mutex |
| Socket chat | server/client | select() loop |
4.3 Data Structures (No Full Code)
struct shm_block {
pthread_mutex_t lock;
size_t len;
char data[4096];
};
4.4 Algorithm Overview
Key Algorithm: chat broadcast
- accept new client
- read message
- write to all clients
Complexity Analysis:
- Time: O(n) clients per broadcast
- Space: O(n) clients
5. Implementation Guide
5.1 Development Environment Setup
sudo apt-get install build-essential
5.2 Project Structure
project-root/
|-- ipc_demo.c
|-- chat_server.c
|-- chat_client.c
`-- Makefile
5.3 The Core Question You’re Answering
“When processes are isolated, how can they communicate efficiently and safely?”
5.4 Concepts You Must Understand First
- Pipe semantics and EOF.
- Shared memory synchronization.
- Unix socket server/client flow.
5.5 Questions to Guide Your Design
- What message sizes should you use for benchmarks?
- How will you handle partial reads/writes?
- How will you handle client disconnects?
5.6 Thinking Exercise
Compare copy paths for pipe vs shared memory. Where does copying happen?
5.7 The Interview Questions They’ll Ask
- When is shared memory preferred?
- Why are Unix sockets used by daemons?
5.8 Hints in Layers
Hint 1: Start with a pipe demo.
Hint 2: Add shared memory with mutex.
Hint 3: Implement socket chat server.
5.9 Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| IPC | TLPI | 44-56 |
| Unix sockets | UNIX Network Programming | 3-5 |
5.10 Implementation Phases
Phase 1: Pipes (3-4 hours)
Goals: parent/child exchange.
Phase 2: Shared memory (4-6 hours)
Goals: shared buffer with checksum.
Phase 3: Socket chat (5-8 hours)
Goals: multi-client server.
5.11 Key Implementation Decisions
| Decision | Options | Recommendation | Rationale |
|---|---|---|---|
| Server loop | select vs threads | select | simpler |
| SHM sync | mutex vs semaphore | mutex | straightforward |
6. Testing Strategy
6.1 Test Categories
| Category | Purpose | Examples |
|---|---|---|
| Unit | checksum | known payload |
| Integration | chat | two clients |
| Stress | benchmark | 10k messages |
6.2 Critical Test Cases
- Client disconnect mid-message.
- SHM reader before writer.
- Pipe closed on write side.
6.3 Test Data
message="hello" size=5
7. Common Pitfalls & Debugging
7.1 Frequent Mistakes
| Pitfall | Symptom | Solution |
|---|---|---|
| Deadlock in SHM | hangs | ensure lock ordering |
| Partial writes | truncated messages | loop until complete |
| Leaked sockets | too many fds | close on disconnect |
7.2 Debugging Strategies
- Use strace to trace pipe/socket system calls.
- Add verbose logs with timestamps.
7.3 Performance Traps
- Using huge message sizes for benchmark.
8. Extensions & Challenges
8.1 Beginner Extensions
- Add datagram Unix socket demo.
8.2 Intermediate Extensions
- Add shared memory ring buffer.
8.3 Advanced Extensions
- Add authentication to chat server.
9. Real-World Connections
9.1 Industry Applications
- Local IPC for services (systemd, Docker).
9.2 Related Open Source Projects
- PostgreSQL uses shared memory for its buffer pool; Redis offers a Unix domain socket for low-latency local clients.
9.3 Interview Relevance
- IPC trade-offs and synchronization questions.
10. Resources
10.1 Essential Reading
- TLPI Ch. 44-56
10.2 Video Resources
- IPC lectures
10.3 Tools & Documentation
man 2 pipe, man 3 shm_open, man 7 unix
10.4 Related Projects in This Series
- Project 7 (shell pipelines)
- Project 13 (containers)
11. Self-Assessment Checklist
11.1 Understanding
- I can explain IPC trade-offs.
- I can explain shared memory synchronization.
11.2 Implementation
- Demos and chat server work.
11.3 Growth
- I can discuss IPC design choices.
12. Submission / Completion Criteria
Minimum Viable Completion:
- Pipe + shared memory demos.
Full Completion:
- Unix socket chat server.
Excellence (Going Above & Beyond):
- Benchmarking suite and advanced sync.