Project 3: The TCP Echo Server

Build a TCP server that handles multiple concurrent clients, learning stream handles, buffer allocation strategies, and the complete connection lifecycle.

Quick Reference

Attribute        Value
Difficulty       Expert
Time Estimate    1-2 weeks
Language         C
Prerequisites    Project 1, basic TCP/socket knowledge
Key Topics       TCP networking, streams, concurrent connections, memory management

1. Learning Objectives

By completing this project, you will:

  1. Create a TCP listening server using libuv
  2. Handle multiple concurrent client connections
  3. Implement the alloc_cb/read_cb pattern for stream reading
  4. Write data back to clients using uv_write()
  5. Manage per-client memory allocation and cleanup
  6. Handle client disconnection and errors gracefully
  7. Understand the relationship between uv_tcp_t and uv_stream_t

2. Theoretical Foundation

2.1 Core Concepts

Stream Handles

In libuv, TCP sockets are a type of stream. The inheritance hierarchy:

                    uv_handle_t
                         │
                         ▼
                    uv_stream_t     (abstract base for streams)
                    /    |    \
                   /     |     \
            uv_tcp_t  uv_pipe_t  uv_tty_t
            (TCP)     (pipes)    (terminal)

This means functions like uv_read_start(), uv_write(), and uv_shutdown() work on any stream type.
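
For instance, shutting down a connected TCP client only needs a cast to the base stream type. A minimal sketch (shutdown_client and on_shutdown are placeholder names, not part of the project code):

// uv_shutdown() is declared for uv_stream_t*, so a connected uv_tcp_t
// handle is simply cast to the base type. It flushes pending writes,
// then half-closes the write side of the connection.
void on_shutdown(uv_shutdown_t* req, int status) {
    if (status < 0) {
        fprintf(stderr, "Shutdown error: %s\n", uv_strerror(status));
    }
    free(req);
}

void shutdown_client(uv_tcp_t* client) {
    uv_shutdown_t* req = malloc(sizeof(uv_shutdown_t));
    uv_shutdown(req, (uv_stream_t*)client, on_shutdown);
}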

The Server Lifecycle

┌─────────────────────────────────────────────────────────────────────┐
│                     TCP Server Lifecycle                             │
├─────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  1. CREATE SERVER SOCKET                                            │
│     ┌─────────────────────┐                                         │
│     │ uv_tcp_init()       │ ──► Initialize uv_tcp_t handle          │
│     └──────────┬──────────┘                                         │
│                │                                                     │
│                ▼                                                     │
│  2. BIND TO ADDRESS                                                 │
│     ┌─────────────────────┐                                         │
│     │ uv_tcp_bind()       │ ──► Associate with IP:port              │
│     └──────────┬──────────┘                                         │
│                │                                                     │
│                ▼                                                     │
│  3. START LISTENING                                                 │
│     ┌─────────────────────┐                                         │
│     │ uv_listen()         │ ──► Begin accepting connections         │
│     └──────────┬──────────┘                                         │
│                │                                                     │
│                ▼                                                     │
│  4. CONNECTION ARRIVES (callback fired)                             │
│     ┌─────────────────────┐                                         │
│     │ on_new_connection() │                                         │
│     │   - uv_tcp_init()   │ ──► Create client handle                │
│     │   - uv_accept()     │ ──► Accept the connection               │
│     │   - uv_read_start() │ ──► Start reading from client           │
│     └──────────┬──────────┘                                         │
│                │                                                     │
│                ▼                                                     │
│  5. DATA ARRIVES (callback fired)                                   │
│     ┌─────────────────────┐                                         │
│     │ on_read()           │                                         │
│     │   - Process data    │ ──► For echo: just send it back         │
│     │   - uv_write()      │ ──► Write response                      │
│     └──────────┬──────────┘                                         │
│                │                                                     │
│                ▼                                                     │
│  6. CLIENT DISCONNECTS (nread < 0 in on_read)                       │
│     ┌─────────────────────┐                                         │
│     │ uv_close()          │ ──► Close client handle                 │
│     │ on_close()          │ ──► Free client memory                  │
│     └─────────────────────┘                                         │
│                                                                      │
└─────────────────────────────────────────────────────────────────────┘

The alloc_cb and read_cb Pattern

Reading from streams requires two callbacks:

// Called when libuv needs a buffer to read into
void alloc_cb(uv_handle_t* handle, size_t suggested_size, uv_buf_t* buf) {
    buf->base = malloc(suggested_size);
    buf->len = suggested_size;
}

// Called when data arrives
void on_read(uv_stream_t* stream, ssize_t nread, const uv_buf_t* buf) {
    if (nread > 0) {
        // Process buf->base (nread bytes)
    } else if (nread < 0) {
        // Error or EOF: stop reading and close the stream
    }
    free(buf->base);  // Always free the buffer!
}

Why two callbacks?

  • Gives you control over memory allocation strategy
  • Can use pre-allocated pools, stack memory, etc. (see the sketch below)
  • libuv doesn’t know your memory needs
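
As an illustration of that control, here is a hedged sketch of an alloc callback that hands out one static slab instead of calling malloc on every read (the matching read callback must then copy out what it needs and must not free buf->base):

// Alternative allocation strategy: one static slab shared by every read.
// Safe only because callbacks never run concurrently; the read callback
// copies what it needs and does NOT free the buffer.
static char slab[65536];

void alloc_static(uv_handle_t* handle, size_t suggested_size, uv_buf_t* buf) {
    (void)handle;
    (void)suggested_size;
    buf->base = slab;
    buf->len = sizeof(slab);
}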

Per-Client State

Each connected client needs its own:

  • uv_tcp_t handle (for the socket)
  • Read buffer (provided via alloc_cb)
  • Any application-specific state
┌─────────────────────────────────────────────────────────────────┐
│                    Server with 3 Clients                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌──────────────────┐                                           │
│  │  Server Handle   │ ◄── Listening on port 7000                │
│  │   uv_tcp_t       │                                           │
│  └────────┬─────────┘                                           │
│           │                                                      │
│           │ (accepts connections)                                │
│           │                                                      │
│     ┌─────┴─────┬─────────────┐                                 │
│     ▼           ▼             ▼                                 │
│  ┌──────┐   ┌──────┐     ┌──────┐                               │
│  │Client│   │Client│     │Client│                               │
│  │  1   │   │  2   │     │  3   │                               │
│  │ tcp_t│   │ tcp_t│     │ tcp_t│                               │
│  └──────┘   └──────┘     └──────┘                               │
│  IP: A      IP: B        IP: C                                   │
│                                                                  │
│  Each has own handle, own callbacks, own buffers                 │
│                                                                  │
└─────────────────────────────────────────────────────────────────┘

2.2 Why This Matters

TCP servers are the backbone of:

  • Web servers (HTTP)
  • Database connections
  • Game servers
  • Message queues
  • Microservice communication

Understanding libuv’s TCP model helps you:

  • Build high-performance servers
  • Understand Node.js networking
  • Debug production connection issues

2.3 Historical Context

  • 1980s: BSD sockets API created
  • 1990s: Thread-per-connection model dominates
  • 1999: C10K problem identified
  • 2000s: Event-driven servers emerge (nginx, lighttpd)
  • 2009: Node.js brings event-driven to mainstream
  • Today: Event-driven is the standard for high-performance servers

2.4 Common Misconceptions

Misconception                             Reality
“Each client runs in a thread”            All clients share one thread (event loop)
“I need to call recv() directly”          libuv handles it; you get callbacks
“Callbacks run in parallel”               One callback at a time, in order
“The buffer I malloc is kept by libuv”    You must free it in on_read
“uv_write is synchronous”                 It’s async; data might not be sent yet
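
To make the last row concrete, here is a hedged sketch (send_reply is an illustrative helper, and on_write is the write-completion callback developed later in the hints):

// uv_write() only queues the request. A zero return means "queued", not
// "delivered"; the real result arrives later in on_write(req, status).
void send_reply(uv_stream_t* client, uv_write_t* req, const uv_buf_t* buf) {
    int rc = uv_write(req, client, buf, 1, on_write);
    if (rc != 0) {
        // Could not even queue the write (e.g. the stream is closing).
        // on_write will never run, so free the request (and any buffer
        // attached to it) right here.
        fprintf(stderr, "uv_write failed immediately: %s\n", uv_strerror(rc));
        free(req);
    }
}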

3. Project Specification

3.1 What You Will Build

A TCP server that:

  1. Listens on a configurable port (default: 7000)
  2. Accepts any number of concurrent clients
  3. Echoes any data received back to the sender
  4. Handles client disconnection gracefully

3.2 Functional Requirements

  1. Bind to 0.0.0.0:7000 (all interfaces)
  2. Accept incoming TCP connections
  3. For each client:
    • Read data as it arrives
    • Write the same data back immediately
    • Handle disconnection without crashing
  4. Print status messages (connected, disconnected)

3.3 Non-Functional Requirements

  1. Handle 100+ concurrent connections
  2. No memory leaks on client disconnect
  3. Graceful handling of read/write errors
  4. Clean compilation (no warnings)

3.4 Example Usage / Output

Terminal 1 (Server):

$ ./echo-server
Echo server listening on port 7000...
Client connected
Client connected
Client disconnected
Client connected

Terminal 2 (Client 1):

$ nc localhost 7000
hello
hello
world
world
^C

Terminal 3 (Client 2):

$ nc localhost 7000
testing
testing
123
123

3.5 Real World Outcome

A working multi-client TCP server demonstrating:

  • libuv’s stream abstraction
  • Concurrent connection handling
  • Proper memory management
  • Production-quality server patterns

4. Solution Architecture

4.1 High-Level Design

┌────────────────────────────────────────────────────────────────────┐
│                         main()                                      │
│  ┌───────────────────────────────────────────────────────────────┐ │
│  │ 1. uv_tcp_init(&server)                                       │ │
│  │ 2. uv_tcp_bind(&server, addr)                                 │ │
│  │ 3. uv_listen(&server, backlog, on_new_connection)             │ │
│  │ 4. uv_run(loop)                                               │ │
│  └───────────────────────────────────────────────────────────────┘ │
│                              │                                      │
│                              ▼                                      │
│  ┌───────────────────────────────────────────────────────────────┐ │
│  │ on_new_connection(server, status)                             │ │
│  │   - Allocate new uv_tcp_t for client                          │ │
│  │   - uv_tcp_init(&client)                                      │ │
│  │   - uv_accept(server, client)                                 │ │
│  │   - uv_read_start(client, alloc_buffer, on_read)              │ │
│  └───────────────────────────────────────────────────────────────┘ │
│                              │                                      │
│            ┌─────────────────┴─────────────────┐                   │
│            ▼                                   ▼                   │
│  ┌──────────────────────┐           ┌──────────────────────┐      │
│  │ alloc_buffer()       │           │ on_read()            │      │
│  │   - malloc buffer    │           │   - nread < 0: close │      │
│  │   - return in *buf   │           │   - nread > 0: echo  │      │
│  └──────────────────────┘           │     - uv_write()     │      │
│                                     └──────────┬───────────┘      │
│                                                │                   │
│                                                ▼                   │
│                                     ┌──────────────────────┐      │
│                                     │ on_write()           │      │
│                                     │   - Free write_req   │      │
│                                     │   - Free buffer copy │      │
│                                     └──────────────────────┘      │
│                                                                    │
└────────────────────────────────────────────────────────────────────┘

4.2 Key Components

Component      Type          Purpose
server         uv_tcp_t      Listening socket
client         uv_tcp_t*     Per-client socket (heap allocated)
write_req      uv_write_t*   Per-write request (heap allocated)
alloc_buffer   uv_alloc_cb   Provides buffers for reading
on_read        uv_read_cb    Handles incoming data
on_write       uv_write_cb   Cleans up after write

4.3 Data Structures

// Client context (optional, for complex servers)
typedef struct {
    uv_tcp_t handle;      // Must be first for casting
    char client_ip[64];   // Client address
    int message_count;    // Messages received
} client_t;

// Write request with buffer (for cleanup)
typedef struct {
    uv_write_t req;
    uv_buf_t buf;
} write_req_t;
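
A hedged sketch of how write_req_t could be used: copy the incoming bytes into a buffer owned by the request, so the read buffer can be freed right away (the hints in section 5.7 take a different route and stash the read buffer in req->data instead; echo_copy and on_write_copy are illustrative names):

// Write completion: recover the wrapping struct (req is its first member),
// then free the copied buffer and the request itself.
void on_write_copy(uv_write_t* req, int status) {
    write_req_t* wr = (write_req_t*)req;
    if (status < 0) {
        fprintf(stderr, "Write error: %s\n", uv_strerror(status));
    }
    free(wr->buf.base);
    free(wr);
}

// Echo nread bytes back, copying them so the caller may free its buffer.
void echo_copy(uv_stream_t* client, const char* data, ssize_t nread) {
    write_req_t* wr = malloc(sizeof(write_req_t));
    wr->buf = uv_buf_init((char*)malloc(nread), (unsigned int)nread);
    memcpy(wr->buf.base, data, (size_t)nread);
    uv_write(&wr->req, client, &wr->buf, 1, on_write_copy);
}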

4.4 Algorithm Overview

ALGORITHM: TCP Echo Server

1. SETUP
   - Initialize TCP handle
   - Bind to address
   - Start listening with backlog

2. CONNECTION LOOP (event-driven)
   FOR each connection event:
     - Allocate client handle
     - Accept connection
     - Start reading

3. READ LOOP (per-client, event-driven)
   FOR each read event:
     IF error or EOF:
       - Close client handle
       - Free client memory
     ELSE:
       - Create write request
       - Copy data
       - Write back to client

4. WRITE COMPLETE (per-write)
   - Free write request
   - Free buffer copy

5. Implementation Guide

5.1 Development Environment Setup

# Create project
mkdir echo-server && cd echo-server

# Create Makefile
cat > Makefile << 'EOF'
CC = gcc
CFLAGS = -Wall -Wextra -g $(shell pkg-config --cflags libuv)
LDFLAGS = $(shell pkg-config --libs libuv)

echo-server: main.c
	$(CC) $(CFLAGS) -o $@ $^ $(LDFLAGS)

clean:
	rm -f echo-server

.PHONY: clean
EOF

touch main.c

5.2 Project Structure

echo-server/
├── Makefile
└── main.c

5.3 The Core Question You’re Answering

How do you handle multiple concurrent network connections in a single-threaded event loop?

The answer: Each client gets its own handle, and the event loop multiplexes between them.

5.4 Concepts You Must Understand First

  1. What’s the difference between the server handle and client handles?
    • Server: listens, never sends/receives data
    • Client: represents one connection, reads/writes data
  2. Why do you need to allocate client handles on the heap?
    • Each client needs its own handle
    • Stack allocation would go out of scope
    • Must free() when client disconnects
  3. What does uv_accept() do?
    • Takes a pending connection from the listen backlog
    • Associates it with the client handle
    • Does NOT allocate the client handle (you do that)

5.5 Questions to Guide Your Design

Memory Management:

  • Who allocates the client handle? (You, in on_new_connection)
  • Who frees the client handle? (You, in the close callback)
  • Who allocates the read buffer? (You, in alloc_buffer)
  • Who frees the read buffer? (You, in on_read)

Write Handling:

  • Can you reuse the read buffer for writing? (Careful! It might be freed)
  • Who owns the write request? (You allocate, free in write callback)
  • What if the write fails? (Still need to free request)

Error Handling:

  • What if uv_accept() fails? (Free client, don’t crash)
  • What if uv_write() fails? (Close client, free resources)
  • What does UV_EOF mean? (Client closed connection)

5.6 Thinking Exercise

Trace two concurrent clients:

Time T0: Server starts, listening on port 7000

Time T1: Client A connects
         - on_new_connection() fires
         - client_a handle created
         - uv_accept() links socket to client_a
         - uv_read_start(client_a) begins listening

Time T2: Client B connects
         - on_new_connection() fires (again!)
         - client_b handle created
         - uv_accept() links socket to client_b
         - uv_read_start(client_b) begins listening

Time T3: Client A sends "hello\n"
         - alloc_buffer() called, allocates buffer_a
         - on_read(client_a, 6, buffer_a) fires
         - Echo: uv_write(client_a, "hello\n")
         - free(buffer_a)

Time T4: Client B sends "world\n"
         - alloc_buffer() called, allocates buffer_b
         - on_read(client_b, 6, buffer_b) fires
         - Echo: uv_write(client_b, "world\n")
         - free(buffer_b)

Time T5: Client A disconnects
         - on_read(client_a, UV_EOF, ...) fires
         - uv_close(client_a, on_close)
         - on_close: free(client_a)

Questions:

  1. Are client_a and client_b the same handle? (No!)
  2. What happens if A sends while B’s callback is running? (Queued)
  3. When is buffer_a freed? (In on_read, immediately after use)

5.7 Hints in Layers

Hint 1: Starting Structure

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <uv.h>

#define DEFAULT_PORT 7000
#define DEFAULT_BACKLOG 128

uv_loop_t *loop;

// Forward declarations
void on_new_connection(uv_stream_t *server, int status);
void alloc_buffer(uv_handle_t *handle, size_t suggested_size, uv_buf_t *buf);
void on_read(uv_stream_t *client, ssize_t nread, const uv_buf_t *buf);
void on_write(uv_write_t *req, int status);
void on_close(uv_handle_t *handle);

Hint 2: Main Function

int main() {
    loop = uv_default_loop();

    uv_tcp_t server;
    uv_tcp_init(loop, &server);

    struct sockaddr_in addr;
    uv_ip4_addr("0.0.0.0", DEFAULT_PORT, &addr);

    uv_tcp_bind(&server, (const struct sockaddr*)&addr, 0);

    int r = uv_listen((uv_stream_t*)&server, DEFAULT_BACKLOG, on_new_connection);
    if (r) {
        fprintf(stderr, "Listen error: %s\n", uv_strerror(r));
        return 1;
    }

    printf("Echo server listening on port %d...\n", DEFAULT_PORT);
    return uv_run(loop, UV_RUN_DEFAULT);
}

Hint 3: Connection Handling

void on_new_connection(uv_stream_t *server, int status) {
    if (status < 0) {
        fprintf(stderr, "Connection error: %s\n", uv_strerror(status));
        return;
    }

    // Allocate client handle on heap (freed in on_close)
    uv_tcp_t *client = (uv_tcp_t*)malloc(sizeof(uv_tcp_t));
    uv_tcp_init(loop, client);

    if (uv_accept(server, (uv_stream_t*)client) == 0) {
        printf("Client connected\n");
        uv_read_start((uv_stream_t*)client, alloc_buffer, on_read);
    } else {
        uv_close((uv_handle_t*)client, on_close);
    }
}

Hint 4: Reading and Echoing

void alloc_buffer(uv_handle_t *handle, size_t suggested_size, uv_buf_t *buf) {
    (void)handle;  // Unused
    buf->base = (char*)malloc(suggested_size);
    buf->len = suggested_size;
}

void on_read(uv_stream_t *client, ssize_t nread, const uv_buf_t *buf) {
    if (nread > 0) {
        // Echo back
        uv_write_t *req = (uv_write_t*)malloc(sizeof(uv_write_t));
        uv_buf_t wrbuf = uv_buf_init(buf->base, (unsigned int)nread);
        req->data = buf->base;  // Save for freeing later (in on_write)
        uv_write(req, client, &wrbuf, 1, on_write);
        return;  // Don't free buf->base yet!
    }

    if (nread < 0) {
        if (nread != UV_EOF) {
            fprintf(stderr, "Read error: %s\n", uv_strerror((int)nread));
        }
        printf("Client disconnected\n");
        uv_close((uv_handle_t*)client, on_close);
    }

    free(buf->base);  // Free on error, EOF, or a zero-length read
}

void on_write(uv_write_t *req, int status) {
    if (status) {
        fprintf(stderr, "Write error: %s\n", uv_strerror(status));
    }
    free(req->data);  // Free the buffer we saved
    free(req);        // Free the request
}

void on_close(uv_handle_t *handle) {
    free(handle);  // Free the client handle
}

5.8 The Interview Questions They’ll Ask

  1. “How does libuv handle concurrent connections without threads?”
    • Single event loop polls all sockets
    • When data available, fires callback
    • Callbacks run one at a time (no race conditions)
  2. “What’s the backlog parameter in uv_listen()?”
    • Queue size for pending connections
    • OS accepts before you call uv_accept()
    • Too small: new connections may be refused or time out
  3. “Why do you malloc the client handle?”
    • Each client needs its own handle
    • Must persist across callbacks
    • Stack would go out of scope
  4. “What happens if a callback blocks?”
    • All other clients are blocked
    • No new connections accepted
    • Never block in callbacks!
  5. “How would you limit concurrent connections?”
    • Track count in on_new_connection
    • Refuse if over limit
    • Decrement in on_close
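
A minimal sketch of that counting approach, assuming the loop, on_close, alloc_buffer, and on_read from the hints above (MAX_CLIENTS and the function names are illustrative):

#define MAX_CLIENTS 100

static int active_clients = 0;

// Close callback for counted clients: decrement, then free the handle.
// on_read would pass this to uv_close() instead of the plain on_close.
void on_counted_close(uv_handle_t *handle) {
    active_clients--;
    free(handle);
}

void on_new_connection_limited(uv_stream_t *server, int status) {
    if (status < 0) return;

    uv_tcp_t *client = (uv_tcp_t*)malloc(sizeof(uv_tcp_t));
    uv_tcp_init(loop, client);

    // Accept failed or limit reached: close without counting the client.
    if (uv_accept(server, (uv_stream_t*)client) != 0 ||
        active_clients >= MAX_CLIENTS) {
        uv_close((uv_handle_t*)client, on_close);
        return;
    }

    active_clients++;
    uv_read_start((uv_stream_t*)client, alloc_buffer, on_read);
}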

5.9 Books That Will Help

Topic                 Book                           Chapter
libuv networking      An Introduction to libuv       Networking chapter
TCP fundamentals      TCP/IP Illustrated             Chapters 17-18
Socket programming    UNIX Network Programming       Chapters 4-6
Event-driven design   The Art of Unix Programming    Chapter 7

5.10 Implementation Phases

Phase 1: Accept Connections (2 hours)

Goal: Server accepts connections, prints message, immediately closes.

  • Set up server socket
  • Bind and listen
  • Accept connections
  • Close immediately

Test: nc localhost 7000 connects and immediately disconnects.

Phase 2: Read Data (2 hours)

Goal: Server reads and prints data from clients.

  • Start reading after accept
  • Print received data to server console
  • Don’t echo yet

Test: Send data with nc, see it on server.

Phase 3: Echo Data (3 hours)

Goal: Server echoes data back to client.

  • Implement on_write callback
  • Proper buffer management
  • Handle write errors

Test: Full echo functionality works.

Phase 4: Cleanup (3 hours)

Goal: No memory leaks, graceful disconnect.

  • Close handles properly
  • Free all allocations
  • Test with Valgrind

Test: Connect/disconnect 100 clients, no leaks.

5.11 Key Implementation Decisions

Decision                   Options              Recommendation
Client handle allocation   Stack / heap         Heap (required)
Buffer allocation          Pool / per-read      Per-read (simpler)
Write buffer               Copy / reuse         Copy (safer)
Error handling             Close / reconnect    Close (simpler)
Port                       Hardcoded / argv     Hardcoded (for now)

6. Testing Strategy

Manual Testing

# Terminal 1: Start server
./echo-server

# Terminal 2: Connect with netcat
nc localhost 7000
hello
# Should see "hello" echoed back

# Terminal 3: Another client
nc localhost 7000
world
# Should see "world" echoed back independently

Load Testing

# Multiple concurrent connections
for i in {1..100}; do
    echo "test $i" | nc localhost 7000 &
done
wait

# Stress test with large data
dd if=/dev/urandom bs=1M count=10 | nc localhost 7000 > /dev/null

Memory Testing

# Run under valgrind
valgrind --leak-check=full ./echo-server &
sleep 1

# Connect and disconnect several times
for i in {1..10}; do
    echo "test" | nc localhost 7000
done

# Kill server and check output
kill %1

7. Common Pitfalls & Debugging

Problem               Symptom           Root Cause                  Fix
Double free           Crash             Freeing buffer twice        Track ownership
Memory leak           Grows over time   Not freeing on disconnect   Check all paths
No echo               Data lost         Buffer freed before write   Copy for write
Crash on disconnect   Segfault          Using freed handle          Close properly
Port in use           Bind error        Previous instance running   Kill or change port
Accept fails          Log error         Handle not initialized      Init before accept

Debugging Checklist

# Check if port is free
lsof -i :7000

# Check socket options
ss -tlnp | grep 7000

# Trace system calls
strace -e network ./echo-server

# Memory debugging
valgrind --track-origins=yes ./echo-server

8. Extensions & Challenges

Extension 1: Chat Server

Broadcast messages to all connected clients.

Challenge: Need to track all clients in a list.
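
One way to approach the tracking, as a hedged sketch: a fixed-size array plus a broadcast helper that reuses the on_write cleanup pattern from Hint 4 (all names and sizes here are illustrative):

#define MAX_TRACKED 1024

static uv_tcp_t *clients[MAX_TRACKED];
static int client_count = 0;

void track_client(uv_tcp_t *c) {          // call after a successful uv_accept()
    if (client_count < MAX_TRACKED)
        clients[client_count++] = c;
}

void untrack_client(uv_tcp_t *c) {        // call from the close callback
    for (int i = 0; i < client_count; i++) {
        if (clients[i] == c) {
            clients[i] = clients[--client_count];   // swap with the last entry
            return;
        }
    }
}

// Send a copy of the data to every client except the sender; on_write
// from Hint 4 frees req->data and req when each write completes.
void broadcast(uv_tcp_t *sender, const char *data, ssize_t len) {
    for (int i = 0; i < client_count; i++) {
        if (clients[i] == sender) continue;
        char *copy = (char*)malloc(len);
        memcpy(copy, data, (size_t)len);
        uv_write_t *req = (uv_write_t*)malloc(sizeof(uv_write_t));
        req->data = copy;
        uv_buf_t buf = uv_buf_init(copy, (unsigned int)len);
        uv_write(req, (uv_stream_t*)clients[i], &buf, 1, on_write);
    }
}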

Extension 2: Line Buffering

Only echo complete lines (ending with \n).

Challenge: Need per-client buffer for partial lines.
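
A hedged sketch of the buffering side: a client_t-style struct (handle first, as in section 4.3) extended with a line buffer, fed from on_read (names and sizes are illustrative; overlong lines are simply truncated here):

typedef struct {
    uv_tcp_t handle;        // must stay first so the handle can be cast back
    char     line[4096];
    size_t   used;
} line_client_t;

// Called from on_read with nread > 0: accumulate bytes, act on each '\n'.
void feed_bytes(line_client_t *c, const char *data, ssize_t nread) {
    for (ssize_t i = 0; i < nread; i++) {
        if (c->used < sizeof(c->line))
            c->line[c->used++] = data[i];
        if (data[i] == '\n') {
            // A complete line sits in c->line (c->used bytes): echo it by
            // copying it into a write request, then reset the buffer.
            c->used = 0;
        }
    }
}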

Extension 3: Graceful Shutdown

Handle SIGINT to close all clients cleanly.

Challenge: Need uv_signal_t and client tracking.
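
A hedged sketch of the signal side, assuming the server handle is reachable here (e.g., made file-scope) and clients are tracked as in the Extension 1 sketch above (on_sigint is an illustrative name; SIGINT needs <signal.h>):

uv_signal_t sigint_watcher;

// Close the signal watcher, the listener, and every tracked client.
// uv_run() returns once all handles have been closed.
void on_sigint(uv_signal_t *handle, int signum) {
    (void)signum;
    printf("Shutting down...\n");
    uv_close((uv_handle_t*)handle, NULL);       // the signal watcher itself
    uv_close((uv_handle_t*)&server, NULL);      // stop accepting connections
    for (int i = 0; i < client_count; i++)
        uv_close((uv_handle_t*)clients[i], on_close);
    client_count = 0;
}

// In main(), before uv_run():
//     uv_signal_init(loop, &sigint_watcher);
//     uv_signal_start(&sigint_watcher, on_sigint, SIGINT);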

Extension 4: TLS Support

Add TLS encryption using OpenSSL.

Challenge: Significant complexity increase.

Extension 5: Connection Limiting

Limit to N concurrent connections.

Challenge: Track count, handle edge cases.


9. Real-World Connections

How Node.js Uses This

const net = require('net');

const server = net.createServer((socket) => {
    // Each socket is a libuv stream handle
    socket.on('data', (data) => {
        socket.write(data);  // uv_write under the hood
    });
});

server.listen(7000);  // uv_listen under the hood

Production Server Patterns

Pattern           Description             Use Case
Worker pool       Multiple event loops    CPU-bound work
Prefork           Fork before listen      Share listening socket
Connection pool   Reuse connections       Database clients
Backpressure      Pause reading           Slow consumers
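
As an example of the backpressure row above, a hedged sketch that pauses reading while an echo write is in flight (bp_write_t and echo_with_backpressure are illustrative; like Hint 4, it takes ownership of the read buffer and frees it when the write completes):

typedef struct {
    uv_write_t   req;
    uv_buf_t     buf;        // owns the data being written
    uv_stream_t *client;     // stream to resume once the write finishes
} bp_write_t;

void on_write_resume(uv_write_t *req, int status) {
    bp_write_t *wr = (bp_write_t*)req;          // req is the first member
    if (status == 0) {
        uv_read_start(wr->client, alloc_buffer, on_read);   // resume reading
    }                                           // on error, close the client
    free(wr->buf.base);
    free(wr);
}

// Called from on_read with nread > 0 instead of the plain echo.
void echo_with_backpressure(uv_stream_t *client, const uv_buf_t *buf, ssize_t nread) {
    uv_read_stop(client);                       // pause reading this client
    bp_write_t *wr = (bp_write_t*)malloc(sizeof(bp_write_t));
    wr->client = client;
    wr->buf = uv_buf_init(buf->base, (unsigned int)nread);  // take ownership
    uv_write(&wr->req, client, &wr->buf, 1, on_write_resume);
}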

10. Resources

Documentation

Reference Implementations


11. Self-Assessment Checklist

Before moving to Project 4, verify:

  • Server starts without errors
  • Multiple clients can connect simultaneously
  • Data is echoed correctly
  • Clients can disconnect without crashing server
  • No memory leaks (Valgrind clean)
  • You can explain the alloc_cb pattern
  • You understand handle vs request lifecycle
  • You can trace connection flow on paper

12. Submission / Completion Criteria

Your project is complete when:

  1. Functional: Correctly echoes data to multiple clients
  2. Robust: Handles disconnects and errors gracefully
  3. Clean: No warnings, no memory leaks
  4. Tested: Works with netcat, passes stress test

Bonus: Implement at least one extension.

