Project 15: On-Board Sensor Fusion Dashboard

A sensor fusion dashboard that calibrates onboard MEMS sensors and visualizes fused orientation data.

Quick Reference

Attribute Value
Difficulty Level 3: Advanced
Time Estimate 2+ weeks
Main Programming Language C
Alternative Programming Languages C++, Rust, Ada
Coolness Level Level 4: Hardcore Tech Flex
Business Potential 1. The “Resume Gold”
Prerequisites I2C sensor driver, calibration basics, fixed-rate sampling
Key Topics Sensor fusion, calibration, coordinate frames

1. Learning Objectives

By completing this project, you will:

  1. Read and calibrate accelerometer/gyro/magnetometer data.
  2. Implement a complementary filter for orientation estimation.
  3. Visualize raw vs fused data over UART or host dashboard.
  4. Validate stability and drift over time.

2. All Theory Needed (Per-Concept Breakdown)

Sensor Fusion, Calibration, and Coordinate Frames

Fundamentals Sensor fusion combines data from multiple sensors to produce a more reliable estimate of motion or orientation. On the STM32F3DISCOVERY, the onboard accelerometer, gyroscope, and magnetometer provide complementary information. Calibration removes bias and scale error, while coordinate frames define how sensor axes map to the physical world. A simple complementary filter can blend gyro and accelerometer data into a stable orientation estimate.

Deep Dive into the concept MEMS sensors output raw data in sensor-local axes. The axes are often printed on the board silkscreen but must be confirmed. Calibration involves computing offsets (bias) and scale factors so that stationary readings match expected values (e.g., 1 g on Z for an accelerometer). Gyroscope bias accumulates during integration, so a gyro-only angle estimate drifts; accelerometers provide absolute orientation relative to gravity but are noisy under motion. A complementary filter combines these: high-pass the gyro (good for fast changes) and low-pass the accelerometer (good for long-term stability).

Magnetometers provide heading relative to Earth’s magnetic field, but they require hard-iron and soft-iron calibration to correct distortions. Coordinate frames are critical: you must define the board frame, sensor frame, and world frame. A rotation matrix or quaternion converts between them. For this project, a simplified 2D or 3D orientation estimate is enough: compute pitch and roll from the accelerometer, integrate yaw from the gyro, and optionally correct it with the magnetometer.

Sensor fusion is as much about data integrity as math. If your I2C driver is unreliable or your timestamps jitter, the filter will be unstable. A fusion dashboard project therefore forces you to integrate driver reliability, calibration routines, and real-time sampling into a single pipeline. When you visualize the results, show raw data, calibrated data, and fused estimates side by side; that transparency helps debug both algorithm and hardware issues. The goal is not to build a perfect IMU, but to build a measurable, explainable fusion system that you can trust.

How this fits in the project In On-Board Sensor Fusion Dashboard, you calibrate the onboard sensors, define coordinate frames, and implement a complementary filter to visualize fused orientation.

Definitions & key terms

  • Bias -> Constant offset error in sensor output.
  • Scale factor -> Multiplicative correction for sensor readings.
  • Complementary filter -> Filter that blends high-pass and low-pass signals.
  • Coordinate frame -> A reference axis system used to interpret sensor data.
  • Drift -> Accumulated error over time in integrated gyro data.

Mental model diagram (ASCII)

Accel (low-pass) ----> +
                 [Sum] -> Orientation
Gyro (high-pass) ----> +

How it works (step-by-step, with invariants and failure modes)

  1. Read raw accelerometer, gyro, and magnetometer data at fixed intervals.
  2. Calibrate by subtracting bias and applying scale factors.
  3. Compute pitch/roll from accelerometer; integrate gyro for rate.
  4. Blend signals with a complementary filter.
  5. Invariant: timestamps are consistent; failure mode: jitter or mismatched units destabilize the filter.

Minimal concrete example

// Complementary filter (simplified)
angle = alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle;
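
The one-line filter above can be expanded into a self-contained update function (a sketch; the name comp_filter_update is illustrative):

```c
/* One complementary-filter step. angle and accel_angle in degrees,
 * gyro_rate in deg/s, dt in seconds, alpha in [0, 1] (e.g., 0.98). */
float comp_filter_update(float angle, float gyro_rate,
                         float accel_angle, float dt, float alpha)
{
    /* Trust the integrated gyro for fast changes and the accelerometer
     * as the long-term reference. */
    return alpha * (angle + gyro_rate * dt) + (1.0f - alpha) * accel_angle;
}
```

Called at a fixed rate, the estimate converges to the accelerometer angle when the board is still, while short rotations are tracked by the gyro term.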

Common misconceptions

  • “Sensor fusion is just math” ignores driver correctness and calibration.
  • “Gyro alone is enough” ignores drift.
  • “Calibration is one-time” ignores temperature and bias changes.

Check-your-understanding questions

  1. Why does gyro integration drift over time?
  2. How does a complementary filter stabilize orientation?
  3. What is the difference between sensor frame and world frame?

Check-your-understanding answers

  1. Small bias errors accumulate when integrating rate to angle.
  2. It trusts the gyro for fast changes and the accelerometer for long-term stability.
  3. Sensor frame is the sensor’s axes; world frame is the physical reference axes.

Real-world applications

  • IMUs in drones and robotics.
  • Vehicle stability and navigation systems.
  • Mobile device orientation tracking.

References

  • ‘Small Unmanned Aircraft: Theory and Practice’ (sensor fusion basics).
  • Sensor datasheets for calibration procedures.
  • Open-source IMU filters (Madgwick, Mahony) for comparison.

Key insights

  • Fusion quality depends as much on timing and calibration as on the filter equation.

Summary Sensor fusion blends imperfect sensors into a stable estimate. By calibrating data and applying a complementary filter, you can build a reliable orientation dashboard.

Homework/Exercises to practice the concept

  1. Measure accelerometer bias by averaging stationary readings.
  2. Implement a complementary filter and tune alpha for stability.
  3. Plot raw vs calibrated vs fused angles and compare.

Solutions to the homework/exercises

  1. Bias is the average offset from expected 1 g or 0 deg/s values.
  2. Alpha near 0.98 favors gyro for short-term response; adjust for stability.
  3. The fused angle should be smoother than raw gyro integration and more stable than accelerometer alone.
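
Exercise 1 can be sketched as a small averaging routine (illustrative names; assumes the board is held still while sampling):

```c
/* Estimate sensor bias by averaging n stationary samples and subtracting
 * the expected stationary value (1.0 g for the accel Z axis, 0.0 for gyro). */
float estimate_bias(const float *samples, int n, float expected)
{
    float sum = 0.0f;
    for (int i = 0; i < n; i++)
        sum += samples[i];
    return sum / (float)n - expected;
}
```

The returned bias is then subtracted from every subsequent reading; re-running the routine after warm-up catches temperature-dependent shifts.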

I2C Protocol and Sensor Calibration

Fundamentals I2C is a two-wire serial bus used for connecting sensors and peripherals. It uses open-drain lines (SCL and SDA) with pull-up resistors and supports multiple devices on the same bus. Each transaction consists of a start condition, address, data bytes, and a stop condition. Sensors typically expose registers over I2C, so reliable driver code requires understanding the protocol and how to convert raw register values into physical units.

Deep Dive into the concept I2C is a synchronous, multi-master bus with arbitration and clock stretching. In practice on microcontrollers, it is usually used in a single-master configuration. The master generates the clock and the start/stop conditions; devices are addressed by a 7-bit or 10-bit address. Each byte is followed by an ACK/NACK bit. The bus is open-drain, so the lines are pulled high by resistors and devices only pull low. This allows multiple devices to share the same bus without contention.

For sensor drivers, the typical sequence is: send a start, send the device address with the write bit, send the register address, send a repeated start, send the device address with the read bit, then read one or more bytes. Timing matters. If the bus clock is too fast for a sensor, you may get NACKs. If pull-ups are too weak, rise times are too slow. The STM32F3 I2C peripheral supports hardware handling of start/stop and ACK, but you still need to handle error flags like NACK, bus error, or arbitration loss.

Calibration adds another layer. Raw sensor data often includes offset, gain error, and temperature dependence. You must apply calibration constants and scaling factors to convert raw counts into real units (e.g., g, deg/s). A robust driver logs raw values, applies calibration, and provides both to the user. That makes debugging easier and validates the calibration pipeline.

In embedded systems, I2C reliability issues are common. If you see bus lockup, you might need to manually toggle the clock line to recover from a stuck slave. Therefore, your project should include timeouts, retry logic, and a bus recovery procedure. This project teaches both protocol-level correctness and the data integrity pipeline needed to trust sensor values.

How this fits in the project In On-Board Sensor Fusion Dashboard, you implement an I2C sensor driver with error handling and calibration, then log calibrated measurements.

Definitions & key terms

  • Start/Stop -> Bus conditions indicating the beginning and end of a transaction.
  • ACK/NACK -> Acknowledgment bits that indicate successful reception.
  • Clock stretching -> Slave holds SCL low to delay master.
  • Register map -> Documentation of sensor registers and their meanings.
  • Calibration -> Process of correcting raw sensor values using offsets and scale factors.

Mental model diagram (ASCII)

START -> Addr(W) -> Reg -> RESTART -> Addr(R) -> Data -> STOP

How it works (step-by-step, with invariants and failure modes)

  1. Initialize I2C peripheral with correct clock speed and GPIO open-drain mode.
  2. Implement read/write routines with timeouts and error checks.
  3. Read sensor registers and log raw values.
  4. Apply calibration and scaling to produce physical units.
  5. Invariant: every transfer ends with a STOP; failure mode: bus lockup if STOP is missed.

Minimal concrete example

// Pseudocode: read sensor register
i2c_start();
i2c_write(ADDR << 1 | 0);
i2c_write(REG);
i2c_restart();
i2c_write(ADDR << 1 | 1);
uint8_t val = i2c_read_nack();
i2c_stop();
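
To make the timeout-and-retry logic concrete, here is a hedged sketch of a retry wrapper. The single-attempt transfer i2c_read_reg_once() is a stand-in for the real hardware sequence above; it is stubbed here to NACK twice so the retry path is visible:

```c
#include <stdint.h>

#define I2C_OK      0
#define I2C_NACK   -1
#define MAX_RETRIES 3

/* Stub standing in for one hardware transfer (start, addr, reg, restart,
 * read, stop, with a per-attempt timeout). Fails twice, then succeeds. */
static int fail_countdown = 2;

static int i2c_read_reg_once(uint8_t addr, uint8_t reg, uint8_t *val)
{
    (void)addr; (void)reg;
    if (fail_countdown > 0) { fail_countdown--; return I2C_NACK; }
    *val = 0x42;                 /* pretend register contents */
    return I2C_OK;
}

/* Bounded retries around the single-attempt read: transient NACKs are
 * retried, persistent failures are reported to the caller. */
int i2c_read_reg(uint8_t addr, uint8_t reg, uint8_t *val)
{
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
        if (i2c_read_reg_once(addr, reg, val) == I2C_OK)
            return I2C_OK;
        /* real code: short delay and/or bus recovery here */
    }
    return I2C_NACK;
}
```

On real hardware the stub is replaced by the peripheral transfer, and a failed final attempt should trigger the bus-recovery procedure mentioned in the deep dive.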

Common misconceptions

  • “I2C is always reliable” ignores bus errors and stuck lines.
  • “Calibration is optional” ignores sensor biases and drift.
  • “Any pull-up value works” ignores rise time constraints.

Check-your-understanding questions

  1. Why does I2C use open-drain signaling?
  2. What is the purpose of a repeated start?
  3. How do you recover a stuck I2C bus?

Check-your-understanding answers

  1. Open-drain allows multiple devices to share the bus without contention.
  2. It allows changing direction (write to read) without releasing the bus.
  3. Toggle SCL manually to release a stuck slave, then generate a STOP.
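
Answer 3 can be sketched in C. The two GPIO helpers are hypothetical stand-ins for open-drain bit-banging; here a stubbed “slave” releases SDA after three clock pulses so the loop's behavior can be seen off-target:

```c
#include <stdbool.h>

/* Stub: a stuck slave that still has 3 bits of a byte to shift out. */
static int slave_bits_left = 3;

static void scl_pulse(void)   { if (slave_bits_left > 0) slave_bits_left--; }
static bool sda_is_high(void) { return slave_bits_left == 0; }

/* Classic recovery: clock SCL up to 9 times until the stuck slave finishes
 * its byte and releases SDA; the caller then generates a STOP condition. */
bool i2c_bus_recover(void)
{
    for (int i = 0; i < 9; i++) {
        if (sda_is_high())
            return true;
        scl_pulse();
    }
    return sda_is_high();
}
```

Nine pulses cover the worst case of a full data byte plus the ACK bit still pending in the slave's shift register.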

Real-world applications

  • Environmental sensors and IMUs.
  • EEPROM configuration storage.
  • Battery fuel gauge communication.

References

  • NXP I2C-bus specification and user manual.
  • STM32F3 Reference Manual (I2C chapter).
  • Sensor datasheets for register maps and calibration constants.

Key insights

  • Reliable I2C drivers are equal parts protocol correctness and calibrated data handling.

Summary I2C connects your MCU to sensors, but the raw bytes mean nothing without calibration. A robust driver validates the bus, handles errors, and produces trustworthy units.

Homework/Exercises to practice the concept

  1. Implement I2C read with timeout and retry logic.
  2. Capture raw sensor output and compute calibrated units.
  3. Measure rise time on SDA/SCL with different pull-up resistors.

Solutions to the homework/exercises

  1. A timeout prevents infinite blocking; retries can recover transient errors.
  2. Apply offset and scale from datasheet to convert raw counts to units.
  3. Stronger pull-ups reduce rise time but increase power draw.
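
Solution 3 can be quantified. The I2C specification measures rise time between 30% and 70% of VDD, which for an RC charging curve gives t_r = R*C*ln(0.7/0.3) ≈ 0.847*R*C (the component values below are illustrative):

```c
#include <math.h>

/* I2C rise time (30% to 70% of VDD) for bus capacitance c_farads pulled
 * up through r_ohms: t_r = R*C*ln(0.7/0.3). */
double i2c_rise_time_s(double r_ohms, double c_farads)
{
    return r_ohms * c_farads * log(0.7 / 0.3);
}
```

For example, 4.7 kΩ pull-ups on a 100 pF bus give roughly 400 ns, inside the 1000 ns standard-mode limit but outside the 300 ns fast-mode limit, so fast mode would need stronger pull-ups or less bus capacitance.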

Board Schematic Reading and Pin Constraints

Fundamentals A datasheet tells you what the MCU can do, but the board schematic tells you what you can actually use. The STM32F3DISCOVERY connects LEDs, sensors, and debug interfaces to specific pins. Those pins are no longer ‘free’ for arbitrary functions. Reading the schematic lets you see which nets go to headers, which are tied to components, and which are shared. A pin map is only correct when it accounts for these board constraints.

Deep Dive into the concept A schematic is a connectivity map, not just a drawing. Each symbol connects to nets that represent electrical nodes, and each net corresponds to an MCU pin or off-board connector. When you read a board schematic, you trace a pin to see all the components attached to it. If a pin drives an LED through a resistor, you know it has a load and might be shared with a timer output. If a pin connects to an accelerometer, you know that I2C or SPI peripheral is already in use.

The STM32F3DISCOVERY uses a dedicated ST-LINK interface that can occupy SWD pins and sometimes UART pins for virtual COM. Schematic reading also reveals power domains: which pins are powered at 3.3 V, which are analog, and how ground is routed. This matters because analog performance depends on low-noise grounding and clean Vref pins. By combining the schematic with the datasheet, you can build a pin constraint table that lists each pin, its board usage, any alternate functions available, and the conflicts.

In real projects, this process informs both firmware and hardware decisions. If you need an extra PWM output but all timer channels are used by LEDs, you must choose a different timer, reroute on hardware, or change the product requirement. The key is to treat the board as a fixed system with constraints, not a blank MCU. On the STM32F3DISCOVERY, a simple example is the user LEDs on port E: they are easy to use for status, but they also consume GPIOs and timer channels. If you plan to use PWM or advanced timers, you must cross-reference those pins. Thus, schematic reading is not a separate activity; it is part of reliable firmware design.

How this fits in the project In On-Board Sensor Fusion Dashboard, you trace board nets to understand which pins are usable and which are already assigned to onboard hardware, then verify those decisions in firmware.

Definitions & key terms

  • Net -> An electrical connection that ties multiple pins together.
  • Header -> A connector that exposes MCU pins for external use.
  • Pull-up resistor -> A resistor that biases a line high when not actively driven.
  • Load -> Anything attached to a pin that draws current or affects signal integrity.
  • Constraint -> A limitation imposed by board routing or attached components.

Mental model diagram (ASCII)

MCU Pin -> Net -> (LED + Resistor) -> GND
    -> Net -> Header Pin
(two loads share one MCU pin)

How it works (step-by-step, with invariants and failure modes)

  1. Open the board schematic and locate the MCU pinout page.
  2. Trace each pin of interest to see which nets and components it connects to.
  3. Mark pins that are shared with sensors, LEDs, or debug interfaces.
  4. Build a constraint-aware pin map that lists usable pins and conflicts.
  5. Invariant: each firmware pin decision matches the board wiring; failure mode: conflicts with onboard components.

Minimal concrete example

// Example pin map entry
// PE8: LED3 -> TIM1_CH1 possible, but LED load affects PWM brightness
// PA13/PA14: SWD -> reserved for debug

Common misconceptions

  • “If the MCU datasheet lists it, I can use it” ignores board wiring constraints.
  • “LED pins are always free” ignores timer channel sharing and current load.
  • “Debug pins are optional” ignores the need for SWD during development.

Check-your-understanding questions

  1. Why might a PWM output on an LED pin distort a sensor signal?
  2. How can the ST-LINK interface restrict your pin choices?
  3. What is the first step in creating a constraint-aware pin map?

Check-your-understanding answers

  1. Because the LED and resistor create an electrical load and can inject noise or limit voltage swing.
  2. ST-LINK uses SWD pins and sometimes a UART bridge, which reserves those pins.
  3. Start with the board schematic to identify which pins are already connected to components.

Real-world applications

  • Early-stage board bring-up when peripherals do not respond.
  • Pin budget planning in resource-constrained designs.
  • Avoiding electrical conflicts in mixed-signal systems.

References

  • STM32F3DISCOVERY board schematic (net mapping).
  • ‘Practical Electronics for Inventors’ (basic schematic reading).
  • ST application notes on board bring-up and pin multiplexing.

Key insights

  • The board schematic is your ground truth for what pins are truly available.

Summary You cannot assign pins from the datasheet alone. The schematic tells you what the board already uses, and a correct firmware design respects those constraints.

Homework/Exercises to practice the concept

  1. Identify three pins that are reserved by ST-LINK or sensors.
  2. Find one pin that is routed to a header and list two possible alternate functions.
  3. Document the power rail connected to the analog pins.

Solutions to the homework/exercises

  1. SWD pins and sensor bus pins are typically reserved; confirm in the schematic.
  2. A header pin might support UART or timer functions; confirm using the AF table.
  3. Analog pins share AVDD/AGND rails; note the decoupling components shown in the schematic.

3. Project Specification

3.1 What You Will Build

A fusion pipeline that reads onboard sensors, applies calibration, computes orientation, and outputs a real-time dashboard.

3.2 Functional Requirements

  1. Sensor Acquisition: Read accel/gyro/mag data at fixed intervals.
  2. Calibration: Apply bias and scale corrections.
  3. Fusion Filter: Compute orientation using complementary filter.
  4. Dashboard Output: Stream raw and fused data over UART.

3.3 Non-Functional Requirements

  • Performance: Update orientation at 50-100 Hz.
  • Reliability: Stable orientation with minimal drift over 5 minutes.
  • Usability: Clear log format or host-side visualization.

3.4 Example Usage / Output

RAW: ax=0.02 ay=-0.01 az=0.99
GYRO: gx=0.1 gy=0.0 gz=0.0
FUSED: pitch=1.2 roll=-0.4 yaw=10.0

3.5 Data Formats / Schemas / Protocols

Log format: RAW: ax=<g> ay=<g> az=<g> GYRO: gx=<dps> FUSED: pitch=<deg> roll=<deg> yaw=<deg>
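
A sketch of producing one dashboard line in this format with snprintf (the function name is illustrative):

```c
#include <stdio.h>

/* Format the FUSED part of the log line; angles in degrees.
 * Returns the number of characters written (excluding the terminator). */
int format_fused_line(char *buf, size_t len,
                      float pitch, float roll, float yaw)
{
    return snprintf(buf, len, "FUSED: pitch=%.1f roll=%.1f yaw=%.1f",
                    pitch, roll, yaw);
}
```

Fixed-width numeric formatting keeps the stream easy to parse on the host side; a host script can split each line on spaces and '=' to plot the fields.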

3.6 Edge Cases

  • Magnetometer saturation due to nearby metal.
  • Gyro bias causing drift.
  • I2C read errors leading to missing data.
  • Inconsistent sampling intervals destabilizing filter.

3.7 Real World Outcome

You will see stable orientation estimates while moving the board.

3.7.1 How to Run (Copy/Paste)

$ make flash
$ screen /dev/tty.usbmodem* 115200

3.7.2 Golden Path Demo (Deterministic)

  • Keep board flat for 30 seconds; pitch/roll should be near 0.

3.7.3 CLI Transcript (Success)

RAW: ax=0.00 ay=0.01 az=1.00
GYRO: gx=0.0 gy=0.0 gz=0.0
FUSED: pitch=0.2 roll=-0.3 yaw=10.1
RESULT=PASS
# Exit code: 0

3.7.4 Failure Demo (Bad Calibration)

FUSED: pitch=15.0 roll=20.0 (board flat)
RESULT=FAIL
# Exit code: 2

4. Solution Architecture

Sensor drivers feed calibrated data into a complementary filter; results are streamed to a dashboard.

4.1 High-Level Design

Sensors -> Calibration -> Fusion Filter -> Dashboard Output

4.2 Key Components

Component Responsibility Key Decisions
Sensor Drivers Read accel/gyro/mag over I2C Reuse P09 driver patterns
Calibration Module Apply bias and scale factors Store constants in flash
Fusion Filter Compute orientation Complementary filter for simplicity

4.3 Data Structures (No Full Code)

typedef struct {
    float ax, ay, az;
    float gx, gy, gz;
    float pitch, roll, yaw;
} fusion_state_t;

4.4 Algorithm Overview

Complementary Filter

  1. Integrate gyro rates to update angles.
  2. Compute accel-based pitch/roll.
  3. Blend with alpha weighting.

Complexity: O(1) per update.


5. Implementation Guide

5.1 Development Environment Setup

make init
make flash
screen /dev/tty.usbmodem* 115200

5.2 Project Structure

project-root/
|-- src/
|   |-- main.c
|   |-- drivers/
|   `-- app/
|-- include/
|-- Makefile
`-- README.md

5.3 The Core Question You’re Answering

“How do I combine multiple sensors into a stable orientation estimate?”

5.4 Concepts You Must Understand First

  1. Sensor calibration and bias correction.
  2. Coordinate frames and axis mapping.
  3. Complementary filtering.

5.5 Questions to Guide Your Design

  1. What sample rate gives stable orientation?
  2. How will you tune the filter alpha?
  3. How will you visualize raw vs fused data?

5.6 Thinking Exercise

Filter Tuning

alpha=0.98 -> gyro dominates short-term
alpha=0.90 -> accel influences more
Tune for stability vs responsiveness

5.7 The Interview Questions They’ll Ask

  1. Why do you need sensor fusion?
  2. What causes gyro drift?
  3. How does a complementary filter work?

5.8 Hints in Layers

Hint 1: Start with accelerometer-only pitch/roll.
Hint 2: Integrate gyro and compare drift.
Hint 3: Blend with complementary filter and tune alpha.

5.9 Books That Will Help

Topic Book Chapter
Sensor fusion Small Unmanned Aircraft Ch. 7
IMU calibration Sensor datasheet Calibration section

5.10 Implementation Phases

Phase 1: Sensor Acquisition (3-4 days)

Read and log raw sensor data.

Phase 2: Calibration (4 days)

Apply bias and scale corrections.

Phase 3: Fusion and Dashboard (5+ days)

Implement complementary filter and output dashboard.

5.11 Key Implementation Decisions

Decision Options Recommendation Rationale
Fusion Complementary vs Kalman Complementary Simpler and stable
Output UART log vs host UI UART log Minimal tooling required

6. Testing Strategy

6.1 Test Categories

Category Purpose Examples
Unit Tests Calibration math Bias correction checks
Integration Tests Fusion stability Flat board test
Edge Case Tests Magnetic interference Nearby magnet

6.2 Critical Test Cases

  1. Flat Board: Pitch/roll near 0 when flat.
  2. Rotation: Yaw changes with rotation.
  3. Drift: Orientation remains stable over 5 minutes.

6.3 Test Data

Pitch=0.5 deg, Roll=-0.3 deg (flat)

7. Common Pitfalls & Debugging

7.1 Frequent Mistakes

Pitfall Symptom Solution
Uncalibrated sensors Tilted output when flat Apply bias correction
Variable dt Unstable filter Use fixed sampling interval
Mag interference Yaw jumps Calibrate magnetometer and avoid metal

7.2 Debugging Strategies

  • Log raw and calibrated data side by side.
  • Verify axis orientation with known rotations.
  • Tune alpha to reduce drift without excessive lag.

7.3 Performance Traps

Overly complex filters can exceed CPU budget; start simple.


8. Extensions & Challenges

8.1 Beginner Extensions

  • Add a simple ASCII dashboard display.

8.2 Intermediate Extensions

  • Integrate a host-side plotting script.

8.3 Advanced Extensions

  • Implement quaternion-based fusion.

9. Real-World Connections

9.1 Industry Applications

  • Drones: Orientation estimation for flight control.
  • Robotics: IMU-based navigation and stabilization.

9.2 Open-Source Implementations

  • Madgwick filter: Open-source IMU fusion algorithm.
  • Mahony filter: Complementary filter variant for IMUs.

9.3 Interview Relevance

  • Sensor fusion and calibration questions.
  • Orientation estimation trade-offs.

10. Resources

10.1 Essential Reading

  • Small Unmanned Aircraft: Theory and Practice by Randal Beard and Timothy McLain - Sensor fusion fundamentals.
  • Sensor datasheets by Vendor - Calibration and axis definitions.

10.2 Video Resources

  • Complementary filter IMU tutorial.
  • IMU calibration walkthrough.

10.3 Tools & Documentation

  • Serial terminal: View fusion logs.
  • STM32CubeIDE: Build and debug.

11. Self-Assessment Checklist

11.1 Understanding

  • I can explain complementary filtering.
  • I can calibrate sensor biases.
  • I can map sensor axes to board frame.

11.2 Implementation

  • Fusion output stable and reasonable.
  • Raw/calibrated data logged clearly.
  • Sampling rate consistent.

11.3 Growth

  • I can upgrade to quaternion fusion.
  • I can visualize data on a host dashboard.
  • I can explain sensor fusion in interviews.

12. Submission / Completion Criteria

Minimum Viable Completion:

  • Sensor data read and logged.
  • Calibration applied.
  • Basic fusion output.

Full Completion:

  • Stable orientation over 5 minutes.
  • Dashboard output structured.

Excellence (Going Above & Beyond):

  • Quaternion fusion and host visualization.