Project 7: Dual-Core Weather Station

Build a dual-core weather station that samples sensors on one core and renders/streams data on the other for smooth, responsive performance.

Quick Reference

| Attribute | Value |
|-----------|-------|
| Difficulty | Level 3: Advanced |
| Time Estimate | 2-3 weeks |
| Main Programming Language | C (Pico SDK) |
| Alternative Programming Languages | MicroPython (limited multicore support) |
| Coolness Level | Level 4: Parallel Power |
| Business Potential | 3. The “Smart Instrument” Tier |
| Prerequisites | Sensor interfacing, timers, basic data structures |
| Key Topics | Multicore synchronization, sensor sampling, data logging, display output |

1. Learning Objectives

By completing this project, you will:

  1. Split real-time tasks across both RP2040 cores.
  2. Implement safe inter-core communication.
  3. Sample sensors deterministically and compute derived metrics.
  4. Log data to storage and stream to a host.
  5. Build a responsive UI (OLED or serial dashboard).

2. All Theory Needed (Per-Concept Breakdown)

2.1 Multicore Synchronization on RP2040

Fundamentals

The RP2040 has two Cortex-M0+ cores that share SRAM and peripherals. You can run tasks on separate cores, but shared data must be synchronized to avoid races. The SDK provides multicore FIFO and spinlocks for communication and mutual exclusion.

Deep Dive into the concept

When two cores access shared memory without coordination, you can get data races, partial writes, or inconsistent reads. The RP2040 includes hardware spinlocks (32 locks) that allow safe critical sections. The multicore FIFO provides a simple message queue between cores. A common pattern is to dedicate core 0 to sensor sampling (real-time) and core 1 to UI/logging (non-real-time). Core 0 pushes new sensor frames into a queue; core 1 consumes them. Because there is no cache, memory coherence is simple, but you must still protect shared buffers when both cores may read/write. You also need to consider interrupt handling: interrupts are per-core, so you should decide which core handles which peripheral. For deterministic sampling, keep interrupts on the sampling core and avoid long blocking operations there.

How this fits in the project

Multicore design is central to §3.2 requirements and §5.10 Phase 2.

Definitions & key terms

  • Spinlock -> hardware mutex for short critical sections
  • FIFO -> inter-core message queue
  • Race condition -> inconsistent result due to concurrent access

Mental model diagram (ASCII)

Core0 (sensors) -> FIFO/queue -> Core1 (UI/log)

How it works (step-by-step)

  1. Initialize multicore FIFO.
  2. Core0 samples sensors and pushes data.
  3. Core1 reads data and updates UI/log.
  4. Spinlocks protect shared buffers if needed.

Minimal concrete example

multicore_fifo_push_blocking(frame_id);

Common misconceptions

  • “Two cores means double speed” -> only if tasks are independent.
  • “No cache means no races” -> shared memory can still be inconsistent.

Check-your-understanding questions

  1. Why use a FIFO instead of shared globals?
  2. What happens if both cores write the same buffer?
  3. Why keep sampling on a dedicated core?

Check-your-understanding answers

  1. FIFO provides ordered, atomic communication.
  2. Data races and corrupted values.
  3. It keeps sampling deterministic and jitter-free.

Real-world applications

  • Audio processing, sensor fusion, UI responsiveness.

Where you’ll apply it

In §3.2 requirement 1 and §5.10 Phase 2, when you split sampling (core 0) from UI/logging (core 1).

References

  • RP2040 Datasheet: multicore and spinlocks

Key insights

Two cores are powerful only if you design clean boundaries and safe data exchange.

Summary

Use FIFO and spinlocks to coordinate cores and keep real-time tasks isolated.

Homework/Exercises to practice the concept

  1. Implement a producer/consumer queue between cores.
  2. Measure latency from core0 sample to core1 display.

Solutions to the homework/exercises

  1. Use FIFO or ring buffer with spinlock.
  2. Expect a few hundred microseconds depending on loop load.

2.2 Sensor Sampling and Derived Metrics

Fundamentals

A weather station typically measures temperature, humidity, pressure, and light. Each sensor has its own interface and sampling constraints. You must schedule reads, apply calibration, and compute derived metrics like dew point or heat index.

Deep Dive into the concept

Sensors update at different rates: temperature changes slowly, pressure changes moderately, and light can change quickly. Sampling all sensors at the same fast rate wastes power and introduces noise. Instead, choose per-sensor intervals and compute derived metrics from stable averages. For example, compute dew point from temperature and humidity using a standard formula; compute pressure trends by comparing moving averages over time. You should also track sensor validity: many sensors return error codes or require warm-up time. If you log data, include timestamps and a validity flag. Sampling on core0 ensures timing stability; core1 can format and display results without interfering. Calibration is critical: raw sensor readings may require offset corrections or linearization. Finally, you must ensure that I2C/SPI transactions do not overlap if sensors share a bus; you may need a bus lock or scheduling.

How this fits in the project

Sampling strategy affects §3.2 requirements and §5.10 Phase 1.

Definitions & key terms

  • Dew point -> temperature at which air becomes saturated
  • Sampling interval -> time between sensor reads
  • Validity flag -> indicates sensor data is trustworthy

Mental model diagram (ASCII)

Sensors -> Sample -> Calibrate -> Derived metrics -> Display/Log

How it works (step-by-step)

  1. Read raw sensor values.
  2. Apply calibration constants.
  3. Compute derived metrics.
  4. Package into a structured sample.

Minimal concrete example

float dew_point = temp_c - ((100 - humidity) / 5.0f);

Common misconceptions

  • “More samples = more accuracy” -> noise increases and power is wasted.
  • “All sensors update instantly” -> some require conversion time.

Check-your-understanding questions

  1. Why should temperature be averaged over multiple samples?
  2. How do you handle a sensor read failure?
  3. What is a derived metric and why compute it?

Check-your-understanding answers

  1. It reduces noise and jitter.
  2. Set a validity flag and keep last known value.
  3. It provides user-meaningful data like dew point.

Real-world applications

  • Weather stations, HVAC monitoring, environmental research.

Where you’ll apply it

In §3.2 requirements 2-3 and §5.10 Phase 1, when you bring up the sensor suite and compute derived metrics.

References

  • Sensor datasheets (BME280, SHT31, etc.)

Key insights

Sampling strategy and calibration matter more than raw sensor specs.

Summary

Use appropriate intervals, calibrate readings, and compute meaningful metrics.

Homework/Exercises to practice the concept

  1. Compute dew point for temp=25C, humidity=50%.
  2. Design a sampling schedule for three sensors with different rates.

Solutions to the homework/exercises

  1. Approx dew point = 25 - ((100 - 50) / 5) = 15C.
  2. Example: temp/humidity every 5s, pressure every 10s, light every 1s.

2.3 Data Logging and Display Pipelines

Fundamentals

Weather stations often log data to SD cards or stream to a host for visualization. Logging must be reliable and non-blocking so it does not interfere with sampling.

Deep Dive into the concept

Logging involves formatting data, storing it, and ensuring data integrity. For SD card logging, you must handle filesystem mounting, file writes, and potential write latency. You should buffer data and write in batches to reduce wear. For display, you can use an OLED or serial UI. Display updates should be rate-limited to avoid flicker and CPU load. A dual-core design helps: core0 samples and produces data frames; core1 logs and updates the display. The UI can present current values, trends, and status indicators (Wi-Fi, SD card). Always include timestamps and file headers so logs can be parsed later.

How this fits in the project

Logging and display are part of §3.2 and §5.10 Phase 3.

Definitions & key terms

  • Batch write -> writing multiple samples at once
  • Log schema -> consistent CSV/JSON format
  • UI refresh rate -> how often display updates

Mental model diagram (ASCII)

Samples -> Queue -+-> Logger -> SD/USB
                  +-> Display

How it works (step-by-step)

  1. Receive sample frames from core0.
  2. Append to log buffer.
  3. Periodically flush to storage.
  4. Update display at fixed rate.

Minimal concrete example

fprintf(log, "%lu,%.2f,%.2f\n", ts, temp, hum);

Common misconceptions

  • “Write each sample immediately” -> causes slowdowns and wear.
  • “Update display every loop” -> wastes CPU and flickers.

Check-your-understanding questions

  1. Why batch writes for SD logging?
  2. What is a reasonable display refresh rate?
  3. How do you ensure log files are parsable?

Check-your-understanding answers

  1. It reduces write latency and wear.
  2. 1-2 Hz is usually enough for weather data.
  3. Use consistent headers and delimiters.

Real-world applications

  • Data loggers, industrial monitoring, scientific instruments.

Where you’ll apply it

In §3.2 requirements 4-5 and §5.10 Phase 3, when you add SD logging and the display pipeline.

References

  • FatFS documentation
  • SD card performance notes

Key insights

Logging is a throughput and reliability problem; treat it like a pipeline.

Summary

Separate sampling from logging/display to keep real-time data stable.

Homework/Exercises to practice the concept

  1. Design a CSV schema for weather data with timestamps.
  2. Estimate log size for 1 sample per minute over 30 days.

Solutions to the homework/exercises

  1. Example: ts,temp_c,humidity,pressure,light_lux
  2. 30 days * 1440 samples/day = 43,200 lines; at ~40-50 bytes per CSV line, roughly 2 MB.

3. Project Specification

3.1 What You Will Build

A dual-core weather station that samples multiple sensors on one core and logs/displays data on the other. Output includes serial logs and optional OLED display.

3.2 Functional Requirements

  1. Multicore split: core0 sampling, core1 display/log.
  2. Sensor suite: at least 3 sensors (temp/humidity/pressure/light).
  3. Derived metrics: compute dew point or heat index.
  4. Logging: write CSV or JSON to SD/serial.
  5. UI: update display with current values and status.

3.3 Non-Functional Requirements

  • Performance: sampling jitter under 5 ms.
  • Reliability: log file integrity over 24 hours.
  • Usability: clear screen layout and readable logs.

3.4 Example Usage / Output

[WS] temp=23.4C hum=41.2% press=1012hPa
[DERIVED] dew=15.0C

3.5 Data Formats / Schemas / Protocols

ts,temp_c,humidity,pressure_hpa,light_lux,dew_point

3.6 Edge Cases

  • Sensor unavailable -> mark as “NA” with validity flag.
  • SD card removed -> fallback to serial logging.
  • Display missing -> continue logging.

3.7 Real World Outcome

A stable weather station dashboard with reliable logs and derived metrics.

3.7.1 How to Run (Copy/Paste)

cmake .. && make -j4
picotool load -f weather_station.uf2
minicom -b 115200 -o -D /dev/ttyACM0

3.7.2 Golden Path Demo (Deterministic)

  • Sample every 5 seconds, log to SD every 30 seconds.
  • Display updates at 1 Hz with stable values.

3.7.3 Failure Demo (Bad Input)

  • Scenario: unplug I2C sensor.
  • Expected result: log [ERROR] sensor missing and continue with others.

3.7.4 If GUI: ASCII wireframe

+---------------------+
| Temp: 23.4C         |
| Hum:  41%           |
| Press: 1012 hPa     |
| Dew:  15.0C         |
| SD: OK  WIFI: N/A   |
+---------------------+

4. Solution Architecture

4.1 High-Level Design

Core0: Sensors -> Sample -> FIFO -> Core1: Display + Logger

4.2 Key Components

| Component | Responsibility | Key Decisions |
|-----------|----------------|---------------|
| Core0 Sampler | Deterministic sensor reads | Fixed schedule |
| Core1 UI | Display/log updates | 1 Hz refresh |
| FIFO Queue | Core communication | Small ring buffer |
| Logger | SD/serial output | Batch writes |

4.3 Data Structures (No Full Code)

typedef struct {
  float temp_c;        // calibrated temperature, C
  float humidity;      // relative humidity, %
  float pressure_hpa;  // barometric pressure, hPa
  float light_lux;     // ambient light, lux
  float dew_point;     // derived from temp_c + humidity
  uint32_t ts;         // sample timestamp
  uint8_t valid_mask;  // one bit per sensor; 0 = stale/invalid
} weather_frame_t;

4.4 Algorithm Overview

Key Algorithm: Producer/Consumer

  1. Core0 samples sensors, computes metrics, pushes frame.
  2. Core1 pops frame, logs to storage, updates display.

Complexity Analysis:

  • Time: O(1) per frame
  • Space: O(N) for queue

5. Implementation Guide

5.1 Development Environment Setup

# Pico SDK + SD card lib (FatFS)

5.2 Project Structure

weather-station/
├── firmware/
│   ├── main.c
│   ├── core0_sampler.c
│   ├── core1_ui.c
│   └── sensors.c
└── README.md

5.3 The Core Question You’re Answering

“How do you use both cores to keep real-time sampling stable while still providing a rich UI?”

5.4 Concepts You Must Understand First

  1. Multicore FIFO and spinlocks
  2. Sensor sampling schedules
  3. Logging and display pipelines

5.5 Questions to Guide Your Design

  1. What data should core0 send to core1?
  2. How will you handle queue overflow?
  3. What refresh rate is appropriate for display?

5.6 Thinking Exercise

Design a queue depth that can absorb 10 seconds of logging delay.

5.7 The Interview Questions They’ll Ask

  1. How do you avoid races between cores?
  2. Why separate sampling and logging tasks?
  3. How do you handle sensor errors without stopping the system?

5.8 Hints in Layers

Hint 1: Start with core0 sampling and core1 idle.
Hint 2: Add FIFO communication with a simple counter.
Hint 3: Add display updates on core1.
Hint 4: Add logging and derived metrics.

5.9 Books That Will Help

| Topic | Book | Chapter |
|-------|------|---------|
| Concurrency | “The Art of Multiprocessor Programming” | Ch. 1 |
| Embedded systems | “Making Embedded Systems” | Ch. 8-10 |
| Sensors | Sensor datasheets | — |

5.10 Implementation Phases

Phase 1: Foundation (3-5 days)

  • Bring up sensors and log values. Checkpoint: stable readings from all sensors.

Phase 2: Core Functionality (5-7 days)

  • Split tasks across cores and add queue. Checkpoint: core0 sampling unaffected by UI.

Phase 3: UI/Logging (4-6 days)

  • Add OLED display and SD logging. Checkpoint: stable display and log integrity.

5.11 Key Implementation Decisions

| Decision | Options | Recommendation | Rationale |
|----------|---------|----------------|-----------|
| Core split | Sample/UI vs alternate | Sample/UI | Clear separation |
| Queue type | FIFO vs shared buffer | FIFO | Simpler and safe |
| Log format | CSV vs JSON | CSV | Compact and easy to parse |


6. Testing Strategy

6.1 Test Categories

| Category | Purpose | Examples |
|----------|---------|----------|
| Unit Tests | Metric calculation | Dew point formula |
| Integration Tests | Multicore sync | FIFO load test |
| Edge Case Tests | Sensor drop | Unplug sensor during run |

6.2 Critical Test Cases

  1. Core load: heavy UI updates do not affect sampling period.
  2. Queue overflow: log warning and drop oldest.
  3. Sensor error: continue with valid sensors.

6.3 Test Data

Temp=25C, Hum=50% -> Dew ~15C

7. Common Pitfalls & Debugging

7.1 Frequent Mistakes

| Pitfall | Symptom | Solution |
|---------|---------|----------|
| Shared buffer race | Corrupted readings | Use spinlocks or FIFO |
| Slow logging | Sampling jitter | Move logging to core1 |
| I2C bus contention | Read failures | Serialize sensor reads |

7.2 Debugging Strategies

  • Toggle GPIO in core0 to measure sampling jitter.
  • Use serial logs to verify queue depth.

7.3 Performance Traps

  • Display updates too frequent can starve logging.

8. Extensions & Challenges

8.1 Beginner Extensions

  • Add min/max tracking for temperature.
  • Add a button to switch display pages.

8.2 Intermediate Extensions

  • Stream data to MQTT in addition to logging.
  • Add wind speed sensor support.

8.3 Advanced Extensions

  • Implement rolling trend graphs on the display.
  • Add OTA update support (Pico W).

9. Real-World Connections

9.1 Industry Applications

  • Weather stations and environmental monitoring.
  • Industrial control dashboards.
  • OpenWeatherStation-style projects.

9.2 Interview Relevance

  • Multicore scheduling and synchronization are advanced topics.

10. Resources

10.1 Essential Reading

  • RP2040 datasheet: multicore and spinlocks
  • Sensor datasheets for calibration

10.2 Video Resources

  • Multicore programming tutorials (RP2040)

10.3 Tools & Documentation

  • FatFS for SD logging
  • Pico SDK multicore docs

11. Self-Assessment Checklist

11.1 Understanding

  • I can explain how the RP2040 cores communicate.
  • I can schedule sensor sampling intervals.
  • I can implement reliable logging.

11.2 Implementation

  • Sampling jitter stays under 5 ms.
  • Display updates are smooth.
  • Log files are consistent and parseable.

11.3 Growth

  • I can extend the station with new sensors.
  • I can explain how core separation improves reliability.

12. Submission / Completion Criteria

Minimum Viable Completion:

  • Sensor readings displayed and logged.

Full Completion:

  • Dual-core split with stable sampling and logging.

Excellence (Going Above & Beyond):

  • Trend graphs and network streaming.

13. Additional Content Rules

13.1 Determinism

Use fixed sampling intervals (e.g., 5 s). Log timestamps in UTC for reproducible datasets.

13.2 Outcome Completeness

  • Success demo: §3.7.2
  • Failure demo: §3.7.3
  • CLI exit codes: host log parser returns 0 success, 2 file open failure, 7 invalid row.

13.3 Cross-Linking

Concept references in §2.x and related projects in §10.4.

13.4 No Placeholder Text

All sections are fully specified for this project.