Hardware Guide

STM32F7 for People Counting with TensorFlow Lite Micro

STMicroelectronics's STM32F7 is a solid choice for people counting using TFLite Micro. The Cortex-M7 core at 216 MHz with 512 KB of SRAM accommodates a 200 KB model with room for application logic, and its DSP extensions speed up the integer arithmetic used by quantized networks.

Hardware Specs

Processor: ARM Cortex-M7 @ 216 MHz
SRAM: 512 KB
Flash: 2 MB
Key features: Double-precision FPU, L1 cache (16 KB I + 16 KB D), ART Accelerator, Chrom-ART (DMA2D), TFT-LCD controller
Connectivity: Ethernet, USB OTG HS/FS
Price range: $8-$15 (chip), $25-$60 (dev board)

Compatibility: Good

At 512 KB, the STM32F7's SRAM is roughly 2.7x the 192 KB minimum needed for people counting. The 200 KB quantized model fits in the tensor arena with enough remaining capacity for input buffers and core application logic; more demanding features (multi-sensor fusion, large protocol stacks) may require careful allocation planning. For firmware and model storage, the 2 MB flash accommodates the TFLite Micro runtime and the 200 KB model, with space remaining for application firmware and basic OTA capability.

At 216 MHz, the Cortex-M7 with its instruction and data caches delivers near-real-time inference for mid-size models, and the ART Accelerator reduces flash access latency during inference. The 512 KB SRAM handles most sensor and audio ML workloads.

For people counting, connect a camera module (e.g., OV2640 via DVP or SPI) to the STM32F7. QVGA (320×240) or lower capture resolution is sufficient for on-device inference; downsample to the model's input size (typically 96×96 or 128×128 pixels) before feeding the neural network.

TFLite Micro's static memory allocation model maps well to the STM32F7's memory architecture: the tensor arena is a fixed buffer defined at compile time, so there is no runtime heap fragmentation risk. The framework's operator coverage includes the convolutional, depthwise-separable, and pooling layers needed for people counting, and model conversion uses the standard TFLite converter with int8 post-training quantization.

At $8-$15 per chip ($25-$60 for dev boards), the STM32F7 is a reasonable investment for people counting deployments.
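To make the static-allocation point concrete, the sketch below shows one way to set up a fixed tensor arena and interpreter. It is a minimal illustration, not a drop-in implementation: the model_tflite symbol, the 350 KB arena size, and the registered operator list are assumptions, and the MicroInterpreter constructor arguments differ slightly between TFLite Micro releases.

    // Minimal TFLite Micro setup sketch for a people-counting model.
    // Assumes model_data.h was generated with xxd (see Getting Started, step 3)
    // and that the TFLite Micro sources are already part of the build.
    #include <cstddef>
    #include <cstdint>
    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
    #include "tensorflow/lite/schema/schema_generated.h"
    #include "model_data.h"  // hypothetical header exposing model_tflite[]

    // Fixed tensor arena; ~350 KB is a placeholder, tune it per model.
    constexpr size_t kArenaSize = 350 * 1024;
    alignas(16) static uint8_t tensor_arena[kArenaSize];

    // Register only the operators the converted model actually uses.
    static tflite::MicroMutableOpResolver<5> op_resolver;

    tflite::MicroInterpreter* SetUpInterpreter() {
      const tflite::Model* model = tflite::GetModel(model_tflite);
      op_resolver.AddConv2D();
      op_resolver.AddDepthwiseConv2D();
      op_resolver.AddAveragePool2D();
      op_resolver.AddReshape();
      op_resolver.AddSoftmax();

      static tflite::MicroInterpreter interpreter(model, op_resolver,
                                                  tensor_arena, kArenaSize);
      if (interpreter.AllocateTensors() != kTfLiteOk) {
        return nullptr;  // Arena too small or an operator is missing.
      }
      return &interpreter;
    }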

Getting Started

  1. Set up STM32F7 development environment

    Install STM32CubeIDE with the latest STM32Cube firmware package. Create a project targeting the STM32F7 and verify basic functionality (blink LED, serial output). For TFLite Micro, clone the framework repository and add it as a library dependency. Ensure the toolchain supports C++11 or later for the ML runtime.
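    A minimal bring-up check might look like the sketch below. The huart3 handle and the LD1_GPIO_Port/LD1_Pin names are placeholders for whatever STM32CubeIDE generates for your board.

        // Bring-up sketch: toggle the user LED and print over UART once per second.
        // Assumes clock, GPIO, and UART init were generated by STM32CubeIDE/CubeMX.
        #include <string.h>
        #include "main.h"  // CubeMX-generated definitions

        extern UART_HandleTypeDef huart3;  // placeholder: your board's VCP UART

        void BlinkAndLog(void) {
          const char msg[] = "STM32F7 alive\r\n";
          while (1) {
            HAL_GPIO_TogglePin(LD1_GPIO_Port, LD1_Pin);
            HAL_UART_Transmit(&huart3, (uint8_t*)msg, strlen(msg), HAL_MAX_DELAY);
            HAL_Delay(1000);
          }
        }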

  2. Collect camera training data

    Connect a camera module (e.g., OV2640 via DVP/SPI) to the STM32F7. Write a data logging sketch that captures camera frames at the target frame rate and streams them out over serial or to an SD card. Collect 1000+ labeled samples across all classes, captured at or near the model input resolution (96×96 or lower).
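    One possible shape for the device side of that logger is sketched below; it assumes the camera driver (DCMI/DMA or SPI) already fills frame_buf, and the frame size, sync bytes, and huart3 handle are placeholders to adapt to your setup.

        // Data-logging sketch: push one downsampled grayscale frame to the host
        // over UART with a small header so a desktop script can reassemble and
        // label it.
        #include "main.h"

        extern UART_HandleTypeDef huart3;  // placeholder UART handle

        #define FRAME_W 160
        #define FRAME_H 120
        static uint8_t frame_buf[FRAME_W * FRAME_H];  // filled by the camera driver

        void SendFrame(void) {
          const uint8_t header[4] = {0xAA, 0x55, FRAME_W, FRAME_H};  // sync marker
          HAL_UART_Transmit(&huart3, (uint8_t*)header, sizeof(header), HAL_MAX_DELAY);
          HAL_UART_Transmit(&huart3, frame_buf, sizeof(frame_buf), HAL_MAX_DELAY);
        }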

  3. Train and quantize model for TFLite Micro

    Build a quantized MobileNet-SSD or YOLO-Tiny in TensorFlow or PyTorch. Apply int8 post-training quantization; this typically reduces model size by 4x with minimal accuracy loss. Convert to .tflite and generate a C array (xxd -i model.tflite > model_data.h). Target model size: under 200 KB, so the weights fit comfortably in the 2 MB flash while the runtime tensor arena and application code share the STM32F7's 512 KB SRAM.
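    For reference, the xxd-generated header is usually consumed as sketched below. The symbol names follow xxd's convention for an input file named model.tflite; adding const and alignment to the generated array keeps the weights in flash rather than having them copied into SRAM at startup.

        // model_data.h after generation (edited to add const and alignment):
        //   alignas(8) const unsigned char model_tflite[] = { /* ... */ };
        //   const unsigned int model_tflite_len = /* size in bytes */;
        #include "model_data.h"

        // Compile-time guard against the model outgrowing its budget.
        static_assert(sizeof(model_tflite) <= 200 * 1024,
                      "people counting model exceeds the 200 KB target");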

  4. Deploy and validate on STM32F7

    Include the TFLite Micro runtime and the compiled model in your STM32CubeIDE project. Allocate a tensor arena of 300-400 KB in a static buffer. Run inference on live camera data and compare predictions against your test set. Log results to serial for desktop validation, and measure inference latency and peak RAM usage to verify they meet application requirements.
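    A validation routine might look like the sketch below; the interpreter comes from a setup routine like the one shown earlier, and the input is assumed to be an int8-quantized 96×96 frame already scaled to the model's quantization parameters.

        // Run one inference and log status, latency, and actual arena usage.
        #include <cstdio>
        #include <cstring>
        #include "main.h"  // HAL handles (UART handle name is a placeholder)
        #include "tensorflow/lite/micro/micro_interpreter.h"

        void RunAndProfile(tflite::MicroInterpreter* interpreter,
                           const int8_t* input_image, size_t input_len,
                           UART_HandleTypeDef* huart) {
          TfLiteTensor* input = interpreter->input(0);
          memcpy(input->data.int8, input_image, input_len);

          uint32_t start = HAL_GetTick();
          TfLiteStatus status = interpreter->Invoke();
          uint32_t elapsed_ms = HAL_GetTick() - start;

          char line[96];
          int n = snprintf(line, sizeof(line),
                           "status=%d latency=%lu ms arena=%u B\r\n",
                           (int)status, (unsigned long)elapsed_ms,
                           (unsigned)interpreter->arena_used_bytes());
          HAL_UART_Transmit(huart, (uint8_t*)line, (uint16_t)n, HAL_MAX_DELAY);
        }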


FAQ

How do I update the people counting model on STM32F7 in production?
Without wireless connectivity, model updates require physical access via USB/JTAG. For field deployments, consider adding a wireless module or using an MCU with built-in connectivity. Always validate model integrity with a checksum before switching to the new version.
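A minimal integrity check before activating a new model image could look like the sketch below; how the checksum is delivered and where expected_crc comes from are application-specific assumptions.

    // Software CRC-32 (IEEE, reflected) over the received model image.
    #include <cstddef>
    #include <cstdint>

    static uint32_t Crc32(const uint8_t* data, size_t len) {
      uint32_t crc = 0xFFFFFFFFu;
      for (size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int b = 0; b < 8; ++b) {
          crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
      }
      return ~crc;
    }

    // Only switch to the new model if its checksum matches the one sent with it.
    bool ModelImageValid(const uint8_t* image, size_t len, uint32_t expected_crc) {
      return Crc32(image, len) == expected_crc;
    }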
What size people counting model fits on STM32F7?
The STM32F7 has 512 KB SRAM and 2 MB flash. A typical people counting model is 200 KB after int8 quantization and is stored in flash; the tensor arena needs 300-400 KB of SRAM at runtime. After the arena is allocated, approximately 112 KB of SRAM remains for application logic, sensor drivers, and the Ethernet stack.
Why choose TFLite Micro over other frameworks for STM32F7?
TFLite Micro has the widest operator coverage and the largest community for Cortex-M7 targets. It supports int8 and float32 models with a static memory allocation model that eliminates heap fragmentation, and the STM32F7's 512 KB SRAM works well with its predictable memory usage. Alternative: Edge Impulse wraps TFLite Micro in a simpler workflow if you prefer cloud-based training.

Build Vision AI Pipelines in ForestHub

Connect cameras to on-device inference — design detection workflows visually and compile to optimized firmware.
