Project 11: Event-Driven TCP Server with epoll
Build a high-performance event-driven TCP server using Linux epoll that can handle 10,000+ concurrent connections with a single thread, implementing the reactor pattern used by nginx and Redis.
Quick Reference
| Attribute | Value |
|---|---|
| Difficulty | Expert (Level 4) |
| Time Estimate | 2-3 weeks |
| Language | C |
| Prerequisites | Socket programming, non-blocking I/O, TCP fundamentals |
| Key Topics | epoll API, edge/level triggering, reactor pattern, C10K problem, connection state machines |
1. Learning Objectives
After completing this project, you will:
- Understand the C10K problem and why thread-per-connection doesn’t scale
- Master the epoll API (epoll_create, epoll_ctl, epoll_wait)
- Distinguish between edge-triggered (ET) and level-triggered (LT) modes
- Implement the reactor pattern used by production servers like nginx
- Handle non-blocking I/O correctly with EAGAIN/EWOULDBLOCK
- Build per-connection state machines for protocol handling
- Write code that scales to 10,000+ concurrent connections
2. Theoretical Foundation
2.1 Core Concepts
The C10K Problem
In 1999, Dan Kegel posed “The C10K Problem”: how do you handle 10,000 concurrent connections on a single server? The naive approach of one-thread-per-connection fails because:
- Each thread reserves 1-8 MB of stack (8 MB of virtual address space by default on Linux)
- 10,000 threads reserve 10-80 GB of address space for stacks alone, plus per-thread kernel bookkeeping
- Context switching between threads is expensive (roughly 1-10 microseconds each)
- On kernels of that era (pre-2.6), thread scheduling scanned the run queue in O(n)
Thread-Per-Connection Model (Doesn't Scale)
┌─────────────────────────────────────────────────────────┐
│ Server │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Thread 1 │ │ Thread 2 │ │Thread N │ ...10,000 │
│ │ read() │ │ read() │ │ read() │ │
│ │ blocked │ │ blocked │ │ blocked │ │
│ └────┬─────┘ └────┬─────┘ └────┬─────┘ │
│ │ │ │ │
│ Connection 1 Connection 2 Connection N │
│ │
│ Memory: 10-80 GB just for thread stacks! │
│ Context switches: O(n) per event │
└─────────────────────────────────────────────────────────┘
I/O Multiplexing Evolution
UNIX solved this with I/O multiplexing: one thread monitors many file descriptors simultaneously.
Evolution of I/O Multiplexing
┌──────────────────────────────────────────────────────────────────┐
│ │
│ select() (1983) poll() (1986) epoll() (2002) │
│ ┌─────────────────┐ ┌─────────────────┐ ┌────────────────┐│
│ │ fd_set bitmaps │ │ pollfd array │ │ Kernel event ││
│ │ Limited to 1024 │ │ No limit │ │ queue ││
│ │ Copy ALL fds │ │ Copy ALL fds │ │ Copy only ││
│ │ each call │ │ each call │ │ ready fds ││
│ │ O(n) scanning │ │ O(n) scanning │ │ O(1) events ││
│ └─────────────────┘ └─────────────────┘ └────────────────┘│
│ │
│ Slow with many fds Scales to 100K+ │
└──────────────────────────────────────────────────────────────────┘
epoll Architecture
epoll uses a kernel-based event queue with a red-black tree for registered file descriptors:
epoll Internal Structure
┌────────────────────────────────────────────────────────────────┐
│ Kernel │
│ │
│ epoll instance (created by epoll_create) │
│ ┌──────────────────────────────────────────────────────────┐ │
│ │ │ │
│ │ Red-Black Tree Ready List │ │
│ │ (interest list) (ready events) │ │
│ │ ┌─────────┐ ┌───────────────────┐ │ │
│ │ │ fd=5 │ │ fd=5: EPOLLIN │ │ │
│ │ │ / \ │ ───────────> │ fd=8: EPOLLIN │ │ │
│ │ │fd=3 fd=8│ (when events │ fd=12: EPOLLOUT │ │ │
│ │ │ / \ │ occur) └───────────────────┘ │ │
│ │ │ fd=7 fd=12 │ │
│ │ └─────────┘ │ │
│ │ │ │
│ │ O(log n) add/remove O(1) retrieval │ │
│ └──────────────────────────────────────────────────────────┘ │
│ │
└────────────────────────────────────────────────────────────────┘
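In code, that maps onto three calls: epoll_create1() builds the instance, epoll_ctl() edits the interest list, and epoll_wait() drains the ready list. A minimal sketch (listen_fd is assumed to be a bound, listening socket; error handling omitted):

#include <sys/epoll.h>

// Minimal lifecycle: interest list (epoll_ctl) vs. ready list (epoll_wait).
// listen_fd is assumed to be a bound, listening, non-blocking socket.
static void epoll_demo(int listen_fd) {
    int epfd = epoll_create1(0);                    // create the kernel instance

    struct epoll_event ev = {0};
    ev.events  = EPOLLIN;                           // "tell me when it's readable"
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev); // O(log n) insert into interest list

    struct epoll_event ready[64];
    int n = epoll_wait(epfd, ready, 64, -1);        // copies out only the ready list
    for (int i = 0; i < n; i++) {
        // ready[i].data.fd identifies which registered fd fired
    }
}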
Edge-Triggered vs Level-Triggered
This is the most critical concept in epoll:
Level-Triggered (LT) - Default Mode
─────────────────────────────────────────────────────────
Data arrives: [=====DATA=====]
↑ ↑ ↑ ↑
│ │ │ │
epoll_wait(): READY READY READY READY (keeps firing
while data present)
─────────────────────────────────────────────────────────
Edge-Triggered (ET) - EPOLLET Flag
─────────────────────────────────────────────────────────
Data arrives: [=====DATA=====]
↑
│
epoll_wait(): READY (fires ONCE on
state change)
─────────────────────────────────────────────────────────
Edge-Triggered REQUIREMENT:
You MUST read until EAGAIN, or you won't be notified again until NEW data arrives - anything already buffered sits unread!
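At registration time the only difference is the EPOLLET flag; the real difference is the drain discipline it imposes on your handlers. A small sketch (epfd and conn_fd are assumed to exist):

#include <sys/epoll.h>

// Register conn_fd (a non-blocking connected socket) on epfd in either mode.
static int register_fd(int epfd, int conn_fd, int edge_triggered) {
    struct epoll_event ev = {0};
    ev.data.fd = conn_fd;
    // LT (default): epoll_wait keeps reporting the fd while unread data remains.
    // ET (EPOLLET): epoll_wait reports only when new data arrives, so the
    //               handler must drain with read() until EAGAIN.
    ev.events = edge_triggered ? (EPOLLIN | EPOLLET) : EPOLLIN;
    return epoll_ctl(epfd, EPOLL_CTL_ADD, conn_fd, &ev);
}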
The Reactor Pattern
The reactor pattern is the architecture that epoll-based servers use:
Reactor Pattern
┌─────────────────────────────────────────────────────────────────┐
│ │
│ Event Demultiplexer │
│ (epoll_wait) │
│ │ │
│ ┌───────────────┼───────────────┐ │
│ │ │ │ │
│ v v v │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Accept │ │ Read │ │ Write │ │
│ │ Handler │ │ Handler │ │ Handler │ │
│ └────┬─────┘ └────┬─────┘ └────┬─────┘ │
│ │ │ │ │
│ v v v │
│ New Socket Process Data Send Response │
│ Add to epoll State Machine Remove EPOLLOUT │
│ │
└─────────────────────────────────────────────────────────────────┘
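One common way to code the pattern is to store a handler callback in epoll_event.data.ptr so the loop only demultiplexes and dispatches; the names below are illustrative, not part of the project skeleton:

#include <sys/epoll.h>
#include <stdint.h>

typedef struct handler {
    int  fd;
    void (*on_event)(struct handler *self, uint32_t events);
} handler_t;

// Register: store a pointer to the handler instead of the raw fd.
static void register_handler(int epfd, handler_t *h, uint32_t interest) {
    struct epoll_event ev = {0};
    ev.events   = interest;
    ev.data.ptr = h;                              // the pointer replaces a lookup table
    epoll_ctl(epfd, EPOLL_CTL_ADD, h->fd, &ev);
}

// Demultiplex and dispatch: the core of the reactor.
static void reactor_run(int epfd) {
    struct epoll_event events[64];
    for (;;) {
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; i++) {
            handler_t *h = events[i].data.ptr;
            h->on_event(h, events[i].events);     // accept / read / write handler
        }
    }
}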
2.2 Why This Matters
- nginx handles 10,000+ connections per worker process using epoll
- Redis processes 100K+ operations/second with a single-threaded event loop
- Node.js built its entire async model on libuv, which uses epoll on Linux
- Every high-performance network service relies on I/O multiplexing
Understanding epoll is essential for:
- Building scalable network services
- Understanding how async frameworks work internally
- Debugging performance issues in production systems
- Systems programming interviews at top companies
2.3 Historical Context
- 1983: select() introduced in BSD 4.2 - limited to 1024 fds
- 1986: poll() added to System V - no fd limit but still O(n)
- 2000: kqueue added to FreeBSD - O(1) event notification
- 2002: epoll added to Linux 2.5.44 - Linux’s answer to kqueue
- 2004: nginx 0.1.0 publicly released using epoll - proved the model scales
- Today: epoll powers the majority of high-traffic Linux servers
2.4 Common Misconceptions
Misconception 1: “Edge-triggered is always better” False. Edge-triggered is more efficient but more complex. Level-triggered is safer for simple servers and easier to debug. Many production systems use LT mode.
Misconception 2: “epoll is only for sockets” False. epoll works with any file descriptor that supports poll: pipes, FIFOs, device files, inotify, signalfd, timerfd, eventfd.
Misconception 3: “One epoll instance is enough for everything” Not always. For multi-threaded servers, you might want one epoll per thread (with SO_REUSEPORT) to avoid lock contention.
Misconception 4: “EPOLLONESHOT is like edge-triggered” No. EPOLLONESHOT disables the fd after one event, requiring re-arming with EPOLL_CTL_MOD. ET still fires on each state change.
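A short sketch of the EPOLLONESHOT flow from Misconception 4 (epfd and conn_fd assumed): the fd stays registered but disarmed after each delivered event until you re-arm it.

#include <sys/epoll.h>

// EPOLLONESHOT: the fd stays in the interest list but is disarmed after one
// delivered event; it must be explicitly re-armed, unlike EPOLLET.
static void rearm_oneshot(int epfd, int conn_fd) {
    struct epoll_event ev = {0};
    ev.events  = EPOLLIN | EPOLLONESHOT;   // arm for exactly one readable event
    ev.data.fd = conn_fd;
    // The first registration uses EPOLL_CTL_ADD; after each delivered event,
    // re-arm with EPOLL_CTL_MOD or the fd stays silent.
    epoll_ctl(epfd, EPOLL_CTL_MOD, conn_fd, &ev);
}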
3. Project Specification
3.1 What You Will Build
A TCP echo server that:
- Uses epoll for I/O multiplexing
- Handles 10,000+ concurrent connections with a single thread
- Implements edge-triggered mode with proper EAGAIN handling
- Maintains per-connection state for partial reads/writes
- Provides real-time statistics (connections, throughput, latency)
- Handles graceful shutdown on SIGINT/SIGTERM
3.2 Functional Requirements
| ID | Requirement |
|---|---|
| F1 | Accept TCP connections on a configurable port |
| F2 | Echo back any data received from clients |
| F3 | Use epoll in edge-triggered mode |
| F4 | Handle partial reads and writes correctly |
| F5 | Support at least 10,000 concurrent connections |
| F6 | Print statistics: active connections, requests/sec |
| F7 | Graceful shutdown on SIGINT (close all connections cleanly) |
| F8 | Command-line options for port and max events |
3.3 Non-Functional Requirements
| ID | Requirement |
|---|---|
| N1 | Sub-millisecond average latency under load |
| N2 | Handle 50,000+ requests/second |
| N3 | No memory leaks (verified with Valgrind) |
| N4 | Clean file descriptor handling (no leaks) |
| N5 | Minimal CPU usage when idle |
3.4 Example Usage / Output
# 1. Start the server
$ ./myepollserver -p 8080
Starting epoll server on port 8080
Using edge-triggered mode
Max events per wait: 1024
# 2. Run a load test (from another terminal)
$ ./loadtest -c 10000 -r 100000 localhost:8080
Connections: 10000
Requests: 100000
Concurrency: 10000
Results:
Total time: 2.34 seconds
Requests/sec: 42,735
Avg latency: 0.23 ms
P99 latency: 1.2 ms
Errors: 0
# 3. Server output during load test
$ ./myepollserver -p 8080 --stats
[10:00:01] Connections: 10000 active, 0 pending
[10:00:01] Requests: 42735/sec, Bytes: 4.2 MB/sec
[10:00:02] Connections: 10000 active, 0 pending
[10:00:02] Requests: 43102/sec, Bytes: 4.3 MB/sec
...
# 4. Graceful shutdown
^C
Received SIGINT, shutting down...
Closing 10000 connections...
All connections closed.
Final stats: 100000 requests served, 0 errors
3.5 Real World Outcome
After completing this project, you will have:
- A server built on the same raw connection-handling model nginx uses
- Deep understanding of Linux network performance
- Knowledge that directly applies to Redis, nginx, and Node.js internals
- Interview-ready skills for systems programming roles
4. Solution Architecture
4.1 High-Level Design
+------------------------+
Client --------> | Listen Socket |
Connections | (accept new) |
+------------------------+
|
v
+------------------------+
| epoll Instance |
| (event demux) |
+------------------------+
|
+---------------+---------------+
| | |
v v v
+----------+ +----------+ +----------+
| Accept | | Read | | Write |
| Handler | | Handler | | Handler |
+----------+ +----------+ +----------+
| | |
v v v
New fd added Update conn Send from
to epoll state buffer
|
v
+------------------------+
| Connection State |
| (per-fd structure) |
+------------------------+
4.2 Key Components
Connection Structure
typedef struct {
    int    fd;                   // Socket file descriptor
    int    state;                // Connection state (reading/writing)
    char   recv_buf[BUFSIZE];    // Receive buffer
    size_t recv_len;             // Bytes in receive buffer
    char   send_buf[BUFSIZE];    // Send buffer
    size_t send_len;             // Bytes to send
    size_t send_offset;          // Bytes already sent
    time_t last_active;          // For timeout handling
} connection_t;
Event Loop Structure
// Main event loop
while (running) {
    int nfds = epoll_wait(epfd, events, MAX_EVENTS, timeout);
    if (nfds == -1) {
        if (errno == EINTR) continue;
        perror("epoll_wait");
        break;
    }
    for (int i = 0; i < nfds; i++) {
        if (events[i].data.fd == listen_fd) {
            handle_accept(epfd, listen_fd);
        } else {
            handle_client(epfd, events[i].data.fd, events[i].events);
        }
    }
}
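The `running` flag above is the hook for graceful shutdown (requirement F7): a signal handler clears it, epoll_wait returns with EINTR, and the loop exits. A minimal sketch of that wiring (the project may organize it differently):

#include <signal.h>

static volatile sig_atomic_t running = 1;   // checked by the while (running) loop

static void on_signal(int sig) {
    (void)sig;
    running = 0;                            // async-signal-safe: only flip a flag
}

// Call once from main() before entering the event loop.
static void install_signal_handlers(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_signal;
    sigemptyset(&sa.sa_mask);               // no SA_RESTART: epoll_wait returns EINTR
    sigaction(SIGINT,  &sa, NULL);
    sigaction(SIGTERM, &sa, NULL);
}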
4.3 Data Structures
| Structure | Purpose | Implementation |
|---|---|---|
| Connection table | Map fd to connection state | Array indexed by fd |
| Event array | Hold events from epoll_wait | Fixed-size array |
| Statistics | Track performance metrics | Atomic counters |
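The hints in Section 5.7 call get_connection(); one simple way to back it is an fd-indexed array, since the kernel always reuses the lowest free fd number. A sketch, with MAX_CONNS as an assumed compile-time cap:

#include <stdlib.h>

#define MAX_CONNS 65536                      // assumed cap; raise RLIMIT_NOFILE to match

static connection_t *conns[MAX_CONNS];       // index == file descriptor

static connection_t *get_connection(int fd) {
    return (fd >= 0 && fd < MAX_CONNS) ? conns[fd] : NULL;
}

static connection_t *add_connection(int fd) {
    if (fd < 0 || fd >= MAX_CONNS) return NULL;
    connection_t *c = calloc(1, sizeof(*c));
    if (c) { c->fd = fd; conns[fd] = c; }
    return c;
}

static void remove_connection(int fd) {
    free(conns[fd]);
    conns[fd] = NULL;
}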
4.4 Algorithm Overview
Accept Handler
1. Loop: accept() until EAGAIN (edge-triggered requires draining)
2. For each new connection:
a. Set socket non-blocking: fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK), or use accept4() with SOCK_NONBLOCK
b. Allocate connection structure
c. Add to epoll: EPOLLIN | EPOLLET
3. Return
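A sketch of that accept loop, using accept4() with SOCK_NONBLOCK to set the new socket non-blocking in one call (add_connection is the hypothetical table helper sketched in Section 4.3):

#define _GNU_SOURCE                     // for accept4()
#include <sys/socket.h>
#include <sys/epoll.h>
#include <errno.h>
#include <unistd.h>

static void handle_accept(int epfd, int listen_fd) {
    for (;;) {                                          // drain the listen queue
        int fd = accept4(listen_fd, NULL, NULL, SOCK_NONBLOCK);
        if (fd == -1) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                break;                                  // no more pending connections
            if (errno == EINTR)
                continue;
            break;                                      // e.g. EMFILE: give up this round
        }
        if (!add_connection(fd)) {                      // table sketch from Section 4.3
            close(fd);
            continue;
        }
        struct epoll_event ev = {0};
        ev.events  = EPOLLIN | EPOLLET;                 // edge-triggered reads
        ev.data.fd = fd;
        epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
    }
}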
Read Handler (Edge-Triggered)
1. Loop: read() until EAGAIN or error
a. If read returns 0: client closed, cleanup
b. If read returns -1 and errno != EAGAIN: error, cleanup
c. Otherwise: append to recv_buf
2. Process complete messages in recv_buf
3. If response ready: copy to send_buf, add EPOLLOUT
Write Handler
1. Loop: write() from send_buf until EAGAIN
2. If all data sent:
a. Clear send_buf
b. Remove EPOLLOUT from events
3. If partial write: track offset, keep EPOLLOUT
5. Implementation Guide
5.1 Development Environment Setup
# Create project directory
mkdir epoll-server && cd epoll-server
# Verify Linux kernel (epoll is Linux-specific)
uname -r # Needs 2.6+ for epoll, 2.6.27+ for epoll_create1 (any modern Linux)
# Create source files
touch main.c server.h server.c connection.h connection.c
# Compile with optimization for benchmarking
gcc -O2 -Wall -Wextra -o myepollserver main.c server.c connection.c
# For debugging
gcc -g -Wall -Wextra -fsanitize=address -o myepollserver-debug \
main.c server.c connection.c
5.2 Project Structure
epoll-server/
├── Makefile
├── main.c # Entry point, argument parsing
├── server.h # Server API declarations
├── server.c # Event loop, accept, epoll management
├── connection.h # Connection structure and API
├── connection.c # Read/write handlers, buffer management
├── stats.h # Statistics tracking
├── stats.c # Atomic counters, reporting
├── loadtest.c # Simple load testing tool
└── tests/
└── test_server.c
5.3 The Core Question You’re Answering
“How do you handle thousands of concurrent network connections efficiently without creating thousands of threads?”
This question decomposes into:
- How do you monitor many file descriptors simultaneously? (epoll)
- How do you avoid blocking on any single connection? (non-blocking I/O)
- How do you handle partial reads/writes? (per-connection buffers)
- How do you know when to read vs write? (event-driven state machine)
5.4 Concepts You Must Understand First
Before starting, verify you can answer:
| Question | Book Reference |
|---|---|
| What does socket(), bind(), listen(), accept() each do? | UNIX Network Programming Ch. 4 |
| What is the difference between blocking and non-blocking I/O? | TLPI Ch. 63 |
| What does O_NONBLOCK do and how do you set it? | APUE Ch. 14 |
| What is EAGAIN/EWOULDBLOCK and when is it returned? | TLPI Ch. 63 |
| How do file descriptors work in UNIX? | APUE Ch. 3 |
5.5 Questions to Guide Your Design
epoll Management
- When do you add a fd to epoll vs modify vs remove?
- What happens if you call epoll_ctl on a closed fd?
- Should you use EPOLLONESHOT or EPOLLET or both?
Connection Lifecycle
- What state does a connection need to track?
- When does a connection move from “reading” to “writing”?
- How do you detect a half-closed connection?
Buffer Management
- What if the send buffer fills up?
- What if a message spans multiple recv() calls?
- How do you handle buffer exhaustion?
Error Handling
- What errors are recoverable vs fatal?
- How do you handle EINTR during epoll_wait?
- What if accept() fails?
5.6 Thinking Exercise
Trace this scenario step by step:
Client connects, sends "Hello", expects "Hello" echo back
Step 1: epoll_wait() returns
- events[0].data.fd = listen_fd
- events[0].events = EPOLLIN
Step 2: handle_accept()
- accept() returns new_fd = 5
- fcntl(5, F_SETFL, O_NONBLOCK)
- epoll_ctl(epfd, EPOLL_CTL_ADD, 5, {EPOLLIN|EPOLLET, .data.fd=5})
- accept() returns EAGAIN (no more connections)
Step 3: epoll_wait() returns
- events[0].data.fd = 5
- events[0].events = EPOLLIN
Step 4: handle_read(5)
- read(5, buf, sizeof(buf)) returns 5 ("Hello")
- read(5, buf, sizeof(buf)) returns -1, errno=EAGAIN
- Copy "Hello" to send_buf
- epoll_ctl(epfd, EPOLL_CTL_MOD, 5, {EPOLLIN|EPOLLOUT|EPOLLET})
Step 5: epoll_wait() returns
- events[0].data.fd = 5
- events[0].events = EPOLLOUT
Step 6: handle_write(5)
- write(5, "Hello", 5) returns 5
- All data sent
- epoll_ctl(epfd, EPOLL_CTL_MOD, 5, {EPOLLIN|EPOLLET})
Questions:
- What if write() only sent 3 bytes?
- What if the client sends more data before we finish writing?
- What if the client closes before we send?
5.7 Hints in Layers
Hint 1: Basic epoll Setup (Conceptual)
int epfd = epoll_create1(0); // Create epoll instance
struct epoll_event ev;
ev.events = EPOLLIN; // Want to know when readable
ev.data.fd = listen_fd; // Store fd in event data
epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);
Hint 2: Event Loop Structure (More Specific)
#define MAX_EVENTS 1024
struct epoll_event events[MAX_EVENTS];

while (running) {
    int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
    if (n == -1) {
        if (errno == EINTR) continue;
        break;
    }
    for (int i = 0; i < n; i++) {
        if (events[i].data.fd == listen_fd)
            handle_accept();
        else
            handle_client(events[i].data.fd, events[i].events);
    }
}
Hint 3: Edge-Triggered Reading (Technical Details)
// CRITICAL: With EPOLLET, you MUST read until EAGAIN
void handle_read(int fd) {
    connection_t *conn = get_connection(fd);
    while (1) {
        if (conn->recv_len == sizeof(conn->recv_buf)) {
            // Buffer full: process it before reading more, otherwise the
            // read() below would be passed a length of 0 and its return
            // value of 0 would be mistaken for a closed connection
            break;
        }
        ssize_t n = read(fd, conn->recv_buf + conn->recv_len,
                         sizeof(conn->recv_buf) - conn->recv_len);
        if (n == 0) {
            // Client closed connection
            close_connection(fd);
            return;
        }
        if (n == -1) {
            if (errno == EAGAIN || errno == EWOULDBLOCK) {
                // All data read, process it
                break;
            }
            // Real error
            close_connection(fd);
            return;
        }
        conn->recv_len += n;
    }
    // Process received data...
}
Hint 4: Handling Partial Writes
void handle_write(int fd) {
    connection_t *conn = get_connection(fd);
    while (conn->send_offset < conn->send_len) {
        ssize_t n = write(fd,
                          conn->send_buf + conn->send_offset,
                          conn->send_len - conn->send_offset);
        if (n == -1) {
            if (errno == EAGAIN || errno == EWOULDBLOCK) {
                // Socket buffer full, wait for EPOLLOUT
                return;
            }
            close_connection(fd);
            return;
        }
        conn->send_offset += n;
    }
    // All data sent, remove EPOLLOUT interest
    conn->send_len = 0;
    conn->send_offset = 0;
    modify_epoll(fd, EPOLLIN | EPOLLET);
}
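The modify_epoll() call above isn't spelled out elsewhere; a minimal version might look like this, assuming epfd is reachable as a global:

#include <sys/epoll.h>
#include <stdint.h>

// Re-declare fd's interest set in place; epfd is assumed to be a global here.
static int modify_epoll(int fd, uint32_t new_events) {
    struct epoll_event ev = {0};
    ev.events  = new_events;                // e.g. EPOLLIN | EPOLLET, optionally | EPOLLOUT
    ev.data.fd = fd;
    return epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
}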
5.8 The Interview Questions They’ll Ask
- “What’s the difference between select, poll, and epoll?”
- select: fd_set bitmap, limited to FD_SETSIZE (1024), copies entire set each call
- poll: pollfd array, no limit, but still copies and scans entire array
- epoll: kernel event queue, O(1) for events, only returns ready fds
- “Explain edge-triggered vs level-triggered epoll.”
- Level-triggered: notifies as long as condition is true (data available)
- Edge-triggered: notifies only on state change (new data arrived)
- ET requires reading until EAGAIN; LT can read partial data
- ET is more efficient but more complex to program correctly
- “How would you handle a slow client?”
- Buffer outgoing data per-connection
- Add EPOLLOUT when buffer has data
- If buffer fills up, either drop connection or implement backpressure
- Consider timeouts for idle connections
- “What happens if you don’t read all data in edge-triggered mode?”
- epoll_wait will NOT notify you again until NEW data arrives
- Old data sits in the kernel buffer, never read
- Connection effectively deadlocks
- Solution: always read until EAGAIN
- “How does nginx handle 10,000 concurrent connections?”
- Single-threaded event loop with epoll (per worker)
- Edge-triggered mode for efficiency
- Per-connection state machines
- sendfile() for static content (zero-copy)
- Multiple workers with SO_REUSEPORT to scale to multiple cores
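The SO_REUSEPORT approach mentioned in the last answer lets every worker open its own listening socket on the same port, with the kernel load-balancing new connections across them (Linux 3.9+). A sketch of such a listener (error checks trimmed):

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <stdint.h>

// Each worker creates its own listener; the kernel spreads connections across them.
static int make_listener(uint16_t port) {
    int fd = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));   // Linux 3.9+

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(port);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(fd, SOMAXCONN);
    return fd;                              // error checks omitted for brevity
}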
5.9 Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| epoll in depth | The Linux Programming Interface | Ch. 63 |
| Non-blocking I/O | APUE by Stevens | Ch. 14 |
| Socket programming | UNIX Network Programming Vol 1 | Ch. 6 |
| High-performance servers | High Performance Browser Networking | Ch. 1-4 |
6. Testing Strategy
6.1 Unit Tests
// Test connection structure
void test_connection_init(void) {
    connection_t conn;
    connection_init(&conn, 5);
    assert(conn.fd == 5);
    assert(conn.recv_len == 0);
    assert(conn.send_len == 0);
}

// Test buffer operations
void test_buffer_append(void) {
    connection_t conn;
    connection_init(&conn, 5);
    connection_recv(&conn, "Hello", 5);
    assert(conn.recv_len == 5);
    assert(memcmp(conn.recv_buf, "Hello", 5) == 0);
}
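These tests assume connection_init() and connection_recv() helpers roughly like the following (hypothetical signatures, built around the connection_t from Section 4.2):

#include <string.h>

// Hypothetical helpers the tests above assume, matching connection_t from 4.2.
void connection_init(connection_t *conn, int fd) {
    memset(conn, 0, sizeof(*conn));
    conn->fd = fd;
}

// Append len bytes into recv_buf; returns the number of bytes actually stored.
size_t connection_recv(connection_t *conn, const char *data, size_t len) {
    size_t space = sizeof(conn->recv_buf) - conn->recv_len;
    if (len > space)
        len = space;                        // never overflow the fixed buffer
    memcpy(conn->recv_buf + conn->recv_len, data, len);
    conn->recv_len += len;
    return len;
}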
6.2 Integration Tests
# Test basic echo
echo "Hello" | nc localhost 8080
# Should output: Hello
# Test multiple connections
for i in {1..100}; do
echo "Message $i" | nc localhost 8080 &
done
wait
# Test with netcat keepalive
nc localhost 8080
Hello
Hello
World
World
^C
6.3 Load Testing
# Using the custom loadtest tool against the raw echo server
./loadtest -c 10000 -r 100000 localhost:8080
# Using wrk (only after adding the HTTP extension from Section 8;
# the plain echo server doesn't speak HTTP)
wrk -t4 -c1000 -d30s http://localhost:8080/
# Using Apache Bench (same caveat: requires HTTP)
ab -n 100000 -c 1000 http://localhost:8080/
6.4 Edge Cases to Test
| Scenario | Expected Behavior |
|---|---|
| Client closes mid-send | Server cleans up without error |
| Client sends 0 bytes | Connection remains open |
| Rapid connect/disconnect | No fd leak |
| Large message (>buffer) | Handle gracefully |
| epoll_wait interrupted by signal | Returns EINTR, continues |
7. Common Pitfalls & Debugging
| Problem | Symptom | Root Cause | Fix |
|---|---|---|---|
| Missing events | Data never arrives | Forgot EPOLLET requires read until EAGAIN | Always loop read() until EAGAIN |
| Connections stuck | Can’t send data | Forgot to add EPOLLOUT after buffering | Add EPOLLOUT when send_buf non-empty |
| fd leak | Too many open files | Forgot to close() or epoll_ctl(DEL) | Always cleanup both |
| Double-close | Crash/EBADF | close() called twice on the same fd, whose number may already belong to a new connection | Remove from epoll and close() exactly once, then mark the connection slot free |
| Thundering herd | CPU spike on accept | Multiple threads on same listen socket | Use EPOLLEXCLUSIVE or SO_REUSEPORT |
Debugging Tips
- Use strace to see system calls:
  strace -f -e epoll_wait,read,write,close ./myepollserver
- Check open file descriptors:
  ls -la /proc/$(pgrep myepollserver)/fd/
- Print events for debugging:
  void print_events(uint32_t events) {
      if (events & EPOLLIN)  printf("EPOLLIN ");
      if (events & EPOLLOUT) printf("EPOLLOUT ");
      if (events & EPOLLERR) printf("EPOLLERR ");
      if (events & EPOLLHUP) printf("EPOLLHUP ");
      printf("\n");
  }
- Verify non-blocking mode:
  int flags = fcntl(fd, F_GETFL, 0);
  assert(flags & O_NONBLOCK);
8. Extensions & Challenges
| Extension | Difficulty | Concepts Learned |
|---|---|---|
| Add timeout handling | Easy | timerfd with epoll |
| Implement HTTP | Medium | Protocol parsing |
| Multi-threaded with SO_REUSEPORT | Medium | Thread-per-core scaling |
| SSL/TLS support | Hard | OpenSSL with non-blocking |
| Implement kqueue version for BSD | Medium | Cross-platform I/O |
| Add connection pooling | Medium | Resource management |
| Implement EPOLLONESHOT mode | Easy | Alternative event semantics |
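For the timeout extension, timerfd turns a timer into a file descriptor, so idle-connection sweeps ride the same epoll loop. A sketch of a periodic one-second tick:

#include <sys/timerfd.h>
#include <sys/epoll.h>
#include <stdint.h>
#include <unistd.h>
#include <time.h>

// Periodic 1-second tick delivered through the same epoll loop.
static int add_tick_timer(int epfd) {
    int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
    struct itimerspec its = {
        .it_value    = { .tv_sec = 1 },     // first expiry after one second
        .it_interval = { .tv_sec = 1 },     // then every second
    };
    timerfd_settime(tfd, 0, &its, NULL);

    struct epoll_event ev = {0};
    ev.events  = EPOLLIN;
    ev.data.fd = tfd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, tfd, &ev);
    return tfd;
}

// When tfd becomes readable in the event loop:
//   uint64_t expirations;
//   read(tfd, &expirations, sizeof(expirations));   // must consume the counter
// then walk connections and close those whose last_active is too old.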
9. Real-World Connections
How Production Systems Solve This
| System | Approach |
|---|---|
| nginx | epoll + edge-triggered + worker processes |
| Redis | epoll + single-threaded + pipelining |
| Node.js | libuv wrapping epoll (Linux) / kqueue (BSD) |
| HAProxy | epoll + multi-process + seamless reload |
| memcached | epoll + multi-threaded with libevent |
Industry Relevance
- Every high-traffic web service uses epoll or similar
- Understanding epoll is required for Linux systems roles
- The patterns apply to all event-driven programming
- Directly relevant for performance engineering
10. Resources
Primary References
- Kerrisk, M. “The Linux Programming Interface” - Chapter 63
- Stevens, W.R. “UNIX Network Programming, Vol 1” - Chapter 6
- The C10K Problem: http://www.kegel.com/c10k.html
Online Resources
Source Code to Study
- nginx event module: src/event/modules/ngx_epoll_module.c
- Redis ae_epoll: src/ae_epoll.c
- libuv: src/unix/epoll.c
11. Self-Assessment Checklist
Before considering this project complete, verify:
- Server accepts connections and echoes data correctly
- Uses edge-triggered epoll (EPOLLET flag)
- Reads until EAGAIN in all read handlers
- Handles partial writes with EPOLLOUT
- No fd leaks (check with lsof)
- No memory leaks (check with Valgrind)
- Graceful shutdown on SIGINT
- Handles 1,000+ concurrent connections
- Sub-millisecond latency under load
- Can explain ET vs LT tradeoffs
- Code compiles with -Wall -Wextra without warnings
12. Completion Criteria
This project is complete when:
- All functional requirements (F1-F8) are implemented
- Server handles 10,000+ concurrent connections
- Load test shows >10,000 requests/second
- No resource leaks after extended testing
- You can explain the reactor pattern and epoll semantics
- You can extend it to handle a simple protocol (HTTP/1.0)
Deliverables:
- Source code with clear comments
- Makefile with debug and release targets
- README with usage instructions
- Load test results demonstrating performance
- Brief writeup explaining your design decisions
This project teaches the foundation of all high-performance Linux network programming. The patterns you learn here power nginx, Redis, Node.js, and virtually every other high-traffic network service. Master this, and you understand how the internet scales.