Project 16: Teensy Data Logger - SD Card + Sensor Archive

Build a reliable data logger that records sensor data to SD card with integrity checks.

Quick Reference

| Attribute | Value |
|-----------|-------|
| Difficulty | Level 3: Advanced |
| Time Estimate | 20-30 hours |
| Main Programming Language | C++ |
| Alternative Programming Languages | C |
| Coolness Level | Level 4: Hardcore Tech Flex |
| Business Potential | 3. The “Service” Model |
| Prerequisites | C/C++ basics, Teensyduino setup, basic electronics, ability to use a multimeter/logic analyzer |
| Key Topics | SD logging, buffering, data integrity |

1. Learning Objectives

By completing this project, you will:

  1. Explain the core question for this project in your own words.
  2. Implement the main workflow and validate it with measurements.
  3. Handle at least two failure modes and document recovery.
  4. Produce a deterministic report that matches hardware behavior.

2. All Theory Needed (Per-Concept Breakdown)

SD Card Logging, Filesystems, and Data Integrity

Fundamentals

SD cards are block devices with unpredictable latency. Writing data safely requires buffering, filesystem management, and power-loss handling. Teensy 4.1 can use SDIO for speed, but write stalls still occur. This project teaches logging that survives real-world failures.

Deep Dive into the concept

SD cards write in large erase blocks. Small writes can trigger long erase operations, causing millisecond stalls. Logging directly from an ISR risks losing data. A proper pipeline uses RAM buffers, writes in larger chunks, and decouples capture from storage. Filesystems add risk: metadata updates can corrupt if power fails mid-write. A robust log format includes headers, lengths, and checksums so partial logs can be recovered. Pre-allocation and periodic flushes reduce risk. This project builds a deterministic logger with recovery logic and verifies it by simulating power loss.

How this fits into the project

This concept directly drives the implementation choices and validation steps in this project.

Definitions & key terms

  • SDIO: High-speed interface for SD cards.
  • Erase block: Large internal block that must be erased before write.
  • Flush: Operation that forces buffered data to be written.
  • Checksum: Value used to verify data integrity.

Mental model diagram (ASCII)

Sensors -> Buffer -> File writer -> SD card

How it works (step-by-step)

  1. Design a record format with header and checksum.
  2. Implement double-buffered logging.
  3. Write records to SD and flush periodically.
  4. Simulate power loss and verify recovery.
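Step 2 can be sketched entirely on the host: the capture side fills one buffer while the write side drains the other. The `DoubleBuffer` name, the 512-byte size, and the record length are illustrative assumptions, not part of the project spec.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical ping-pong buffer: the capture side (e.g. a timer ISR)
// fills one half while the main loop drains the other to the SD card.
constexpr size_t kBufBytes = 512;

struct DoubleBuffer {
    uint8_t buf[2][kBufBytes];
    size_t fill = 0;       // bytes used in the active (capture) buffer
    int active = 0;        // index of the buffer the capture side fills
    bool pending = false;  // true when a full buffer awaits the writer

    // Capture side: append one record. Returns false when both buffers
    // are occupied, i.e. the write side has stalled too long.
    bool push(const uint8_t* rec, size_t len) {
        if (fill + len > kBufBytes) {
            if (pending) return false;  // drop: writer hasn't drained yet
            pending = true;             // hand the full buffer over
            active ^= 1;
            fill = 0;
        }
        std::memcpy(&buf[active][fill], rec, len);
        fill += len;
        return true;
    }

    // Write side (main loop): copy out the full buffer if one is ready.
    // A real logger would write these bytes to the SD file instead.
    bool drain(std::vector<uint8_t>& out) {
        if (!pending) return false;
        const uint8_t* full = buf[active ^ 1];
        out.insert(out.end(), full, full + kBufBytes);
        pending = false;
        return true;
    }
};
```

On the Teensy, `push` would run in the sampling interrupt and `drain` in `loop()`, so a slow SD write never blocks capture.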

Minimal concrete example

RECORD: [ts][sensor_id][value][crc]
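One way to realize this record is a packed struct whose last field is a CRC-32 over the preceding bytes. The field widths and the `make_record`/`record_valid` helpers are assumptions for illustration; adapt them to your sensors.

```cpp
#include <cstddef>
#include <cstdint>

// A packed realization of RECORD: [ts][sensor_id][value][crc].
#pragma pack(push, 1)
struct Record {
    uint32_t ts;         // timestamp, e.g. from micros()
    uint16_t sensor_id;
    uint32_t value;
    uint32_t crc;        // CRC-32 of the preceding 10 bytes
};
#pragma pack(pop)

// Bitwise CRC-32 (reflected, polynomial 0xEDB88320): slow but
// dependency-free; a table-driven version is the usual optimization.
uint32_t crc32(const uint8_t* data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int b = 0; b < 8; ++b)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

Record make_record(uint32_t ts, uint16_t id, uint32_t value) {
    Record r{ts, id, value, 0};
    r.crc = crc32(reinterpret_cast<const uint8_t*>(&r), offsetof(Record, crc));
    return r;
}

bool record_valid(const Record& r) {
    return r.crc == crc32(reinterpret_cast<const uint8_t*>(&r),
                          offsetof(Record, crc));
}
```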

Common misconceptions

  • SD card writes are instant.
  • Filesystem corruption only happens on bad cards.
  • Flushing after every write is always safe.

Check-your-understanding questions

  • Why do SD writes stall?
  • How does buffering reduce data loss?
  • What is a recoverable log format?

Check-your-understanding answers

  • Internal erase and wear-leveling cause long write delays.
  • Buffers decouple capture from slow writes.
  • A format with headers and checksums can be scanned after a crash.

Real-world applications

  • Data loggers
  • Flight recorders
  • IoT edge devices

Where you’ll apply it

The record format (Section 3.5), the buffering pipeline (Section 4), and the recovery tests (Section 6) all build directly on this concept.

References

  • SD card physical layer spec
  • Embedded logging best practices

Key insights

Reliable logging requires designing for SD card latency and failure.

Summary

Buffering and recovery logic turn unreliable storage into a dependable log.

Homework/Exercises to practice the concept

  • Measure SD write latency distribution with small vs large writes.
  • Recover a log after simulated power loss.

Solutions to the homework/exercises

  • Log write durations and compute percentiles.
  • Scan file for valid headers and rebuild index.
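The header-scan recovery step can be exercised on the host. The frame layout below ([magic][len][payload][checksum]), the magic byte, and the helper names are illustrative stand-ins for the project's real format; in practice a CRC would replace the XOR checksum.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr uint8_t kMagic = 0xA5;  // assumed sync byte, for illustration

// XOR checksum: a placeholder for the real CRC.
uint8_t xsum(const uint8_t* p, size_t n) {
    uint8_t x = 0;
    for (size_t i = 0; i < n; ++i) x ^= p[i];
    return x;
}

// Build one frame: [magic][len][payload...][checksum].
std::vector<uint8_t> make_frame(const std::vector<uint8_t>& payload) {
    std::vector<uint8_t> f{kMagic, static_cast<uint8_t>(payload.size())};
    f.insert(f.end(), payload.begin(), payload.end());
    f.push_back(xsum(f.data(), f.size()));
    return f;
}

// Recovery scan: return offsets of frames whose checksum verifies.
// Garbage bytes are skipped one at a time, so a log truncated mid-write
// still yields every complete frame before and after the damage.
std::vector<size_t> scan_log(const std::vector<uint8_t>& log) {
    std::vector<size_t> offsets;
    size_t i = 0;
    while (i + 2 < log.size()) {
        if (log[i] != kMagic) { ++i; continue; }
        size_t len = log[i + 1];
        size_t end = i + 2 + len + 1;   // header + payload + checksum
        if (end > log.size()) break;    // truncated final frame
        if (xsum(&log[i], 2 + len) == log[i + 2 + len]) {
            offsets.push_back(i);
            i = end;                    // resync after a good frame
        } else {
            ++i;                        // corrupt: slide one byte forward
        }
    }
    return offsets;
}
```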

3. Project Specification

3.1 What You Will Build

Build a reliable data logger that records sensor data to SD card with integrity checks.

3.2 Functional Requirements

  1. Capture sensor data at fixed rate.
  2. Buffer data and write to SD safely.
  3. Include checksums and recovery logic.
  4. Generate log summary report.

3.3 Non-Functional Requirements

  • Performance: Meet the target timing/throughput for the project.
  • Reliability: Detect errors and recover without undefined behavior.
  • Usability: Provide clear logs and a repeatable workflow.

3.4 Example Usage / Output

./P16-teensy-data-logger-sd-card-sensor-archive --run

3.5 Data Formats / Schemas / Protocols

Binary or CSV log with record header + checksum

3.6 Edge Cases

  • SD write latency spikes
  • Power loss during write
  • File system full

3.7 Real World Outcome

You will run the project and see deterministic logs and measurements that match physical hardware behavior.

3.7.1 How to Run (Copy/Paste)

cd project-root
make
./P16-teensy-data-logger-sd-card-sensor-archive --run

3.7.2 Golden Path Demo (Deterministic)

Use a fixed input configuration and a known test signal. Capture output for 60 seconds and verify it matches expected values.

3.7.3 If CLI: exact terminal transcript

$ ./P16-teensy-data-logger-sd-card-sensor-archive --run --seed 42
[INFO] Teensy Data Logger - SD Card + Sensor Archive starting
[INFO] Report saved to data/report.csv
[INFO] Status: OK
$ echo $?
0

3.7.4 Failure Demo (Deterministic)

$ ./P16-teensy-data-logger-sd-card-sensor-archive --run --missing-device
[ERROR] Device not detected
$ echo $?
2

4. Solution Architecture

4.1 High-Level Design

Inputs -> Acquisition -> Processing -> Output/Log

4.2 Key Components

Component Responsibility Key Decisions
Acquisition Configure peripherals and capture data Use stable clock settings
Processing Convert raw data to meaningful values Apply calibration/filters
Output/Log Emit reports and logs CSV for reproducibility

4.3 Data Structures (No Full Code)

struct Sample {
    uint32_t timestamp_us;
    uint32_t value;
    uint32_t flags;
};

4.4 Algorithm Overview

Key Algorithm: Measurement + Report

  1. Initialize hardware and verify configuration.
  2. Capture data and record timestamps.
  3. Compute metrics and write report.

Complexity Analysis:

  • Time: O(n) in samples
  • Space: O(n) for log storage

5. Implementation Guide

5.1 Development Environment Setup

# Arduino IDE + Teensyduino must be installed
# Optional CLI workflow
arduino-cli core update-index
arduino-cli core install teensy:avr

5.2 Project Structure

project-root/
├── src/
│   ├── main.ino
│   ├── hw_config.h
│   └── measurements.cpp
├── tools/
│   └── analyze.py
├── data/
│   └── samples.csv
└── README.md

5.3 The Core Question You’re Answering

“How do I log sensor data without corruption or loss?”

5.4 Concepts You Must Understand First

Stop and research these before coding:

  1. SD logging, buffering, data integrity
  2. Data logging and measurement techniques
  3. Basic timing math and error analysis

5.5 Questions to Guide Your Design

  1. What buffer size prevents data loss?
  2. How will you recover after power loss?
  3. How often will you flush data?
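Question 1 has a back-of-envelope answer: the buffer must absorb everything captured during the worst-case SD stall. Every constant below is an assumption to be replaced with your own measurements from the latency homework.

```cpp
#include <cstddef>

// Worst-case sizing: data produced during the longest write stall.
constexpr size_t kSampleRateHz = 1000;  // capture rate (assumed)
constexpr size_t kRecordBytes  = 16;    // bytes per log record (assumed)
constexpr size_t kWorstStallMs = 250;   // longest observed stall (assumed)
constexpr size_t kSafety       = 2;     // headroom for back-to-back stalls

constexpr size_t min_buffer_bytes() {
    return kSampleRateHz * kRecordBytes * kWorstStallMs / 1000 * kSafety;
}

// Round up to whole 512-byte sectors so SD writes stay block-aligned.
constexpr size_t sector_aligned(size_t n) { return (n + 511) / 512 * 512; }
```

With these numbers the logger needs at least 8000 bytes, i.e. an 8192-byte (16-sector) buffer; the same arithmetic also bounds a sensible flush interval.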

5.6 Thinking Exercise

Design a log record format that can be scanned after a crash.

5.7 The Interview Questions They’ll Ask

  1. Why are SD writes unpredictable?
  2. What is wear leveling?
  3. How do you design for power loss?

5.8 Hints in Layers

  • Pre-allocate log files.
  • Write in large blocks.
  • Add checksums to every record.

5.9 Books That Will Help

| Topic | Book | Chapter |
|-------|------|---------|
| Storage | Making Embedded Systems | Ch. 6 |
| File systems | Embedded Linux Primer | Ch. 7 |
| Reliability | Embedded Systems Handbook | Ch. 9 |

5.10 Implementation Phases

Phase 1: Foundation (6 hours)

Goals:

  • Capture sensor data
  • Write basic log

Tasks:

  1. Capture sensor data
  2. Write basic log

Checkpoint: Logger runs

Phase 2: Core Functionality (10 hours)

Goals:

  • Add buffering
  • Add checksums

Tasks:

  1. Add buffering
  2. Add checksums

Checkpoint: Reliable logging

Phase 3: Polish (6 hours)

Goals:

  • Recovery tests
  • Report summary

Tasks:

  1. Recovery tests
  2. Report summary

Checkpoint: Final logger

5.11 Key Implementation Decisions

| Decision | Options | Recommendation | Rationale |
|----------|---------|----------------|-----------|
| Buffering | Single buffer, double buffer | Double buffer | Avoids data loss during processing |
| Logging format | CSV, binary | CSV | Human-readable while still scriptable |
| Clock speed | Default, overclock | Default | Keeps peripherals in spec |


6. Testing Strategy

6.1 Test Categories

| Category | Purpose | Examples |
|----------|---------|----------|
| Unit Tests | Validate math, parsing, and conversions | Timer math, CRC checks |
| Integration Tests | Verify peripherals and pipelines | DMA -> buffer -> log |
| Edge Case Tests | Handle boundary conditions | Brownout, missing sensor |

6.2 Critical Test Cases

{test_cases}

6.3 Test Data

Use a fixed test input pattern and record outputs to data/report.csv

7. Common Pitfalls & Debugging

7.1 Frequent Mistakes

| Pitfall | Symptom | Solution |
|---------|---------|----------|
{pitfalls}

7.2 Debugging Strategies

{debug_strats}

7.3 Performance Traps

Large buffers improve stability but increase latency. Measure both throughput and jitter to choose the right size.
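Jitter is best summarized with percentiles rather than a mean, since rare multi-millisecond stalls set the buffer requirement. A minimal host-side sketch (floor-index percentile; a real analysis might interpolate):

```cpp
#include <algorithm>
#include <vector>

// Return the p-th percentile of a set of write durations (milliseconds).
// Uses a simple floor-index rank; `samples` is taken by value so the
// caller's measurement log is left unsorted.
double percentile(std::vector<double> samples, double p) {
    std::sort(samples.begin(), samples.end());
    size_t idx = static_cast<size_t>(p / 100.0 * (samples.size() - 1));
    return samples[idx];
}
```

Comparing the 50th and 99th percentiles of write latency directly shows how much buffer headroom a larger block size buys.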


8. Extensions & Challenges

8.1 Beginner Extensions

{ex_begin}

8.2 Intermediate Extensions

{ex_inter}

8.3 Advanced Extensions

{ex_adv}


9. Real-World Connections

9.1 Industry Applications

{industry_apps}

9.2 Open Source Examples

{open_source}

9.3 Interview Relevance

{interview_rel}


10. Resources

10.1 Essential Reading

{resources}

10.2 Video Resources

  • Embedded systems timing walkthrough (YouTube)
  • Teensy hardware deep dive (Conference talk)

10.3 Tools & Documentation

  • Teensyduino: Toolchain for Teensy boards
  • Logic Analyzer: Timing verification
  • Multimeter: Voltage and current measurement

10.4 Related Projects

{related_projects}


11. Self-Assessment Checklist

11.1 Understanding

  • I can explain the main concept without notes.
  • I can explain why the measurements match (or do not match) expectations.
  • I understand at least one tradeoff made in this project.

11.2 Implementation

  • All functional requirements are met.
  • All critical test cases pass.
  • Logs and reports are reproducible.
  • Edge cases are handled.

11.3 Growth

  • I documented lessons learned.
  • I can explain this project in a job interview.
  • I identified one improvement for next iteration.

12. Submission / Completion Criteria

Minimum Viable Completion: {comp_min}

Full Completion: {comp_full}

Excellence (Going Above & Beyond): {comp_ex}