Hardware Guide

STM32F7 for Image Classification with TensorFlow Lite Micro

For image classification, the STM32F7 with TFLite Micro scores Excellent. Its 512 KB internal SRAM (4.0x the required 128 KB) and 216 MHz clock ensure smooth real-time inference on 150 KB models. Hardware DSP extensions boost throughput.

Hardware Specs

Spec           STM32F7
Processor      ARM Cortex-M7 @ 216 MHz
SRAM           512 KB
Flash          2 MB
Key Features   Double-precision FPU, L1 cache (16 KB I + 16 KB D), ART Accelerator, Chrom-ART (DMA2D), TFT-LCD controller
Connectivity   Ethernet, USB OTG HS/FS
Price Range    $8 - $15 (chip), $25 - $60 (dev board)

Compatibility: Excellent

Memory-wise, the STM32F7 offers 512 KB SRAM, 4.0x the 128 KB minimum for image classification. This headroom means the tensor arena for the 150 KB model, camera input buffers, and application logic (camera polling, Ethernet stack, state management) all fit without contention; even after a worst-case 375 KB arena allocation, roughly 137 KB remains for application features.

For firmware and model storage, the 2 MB flash comfortably houses the TFLite Micro runtime, the 150 KB model binary, application firmware, and OTA update partitions for field upgrades, keeping flash usage well within budget.

On compute, the 216 MHz Cortex-M7 with 16 KB instruction and data caches delivers near-real-time inference for mid-size models, and the ART Accelerator reduces flash access latency during inference.

For image capture, connect a camera module (e.g., OV2640) through the DCMI parallel interface, or over SPI for low frame rates. QVGA (320×240) or lower resolution is sufficient for on-device inference; downsample to the model's input size (typically 48×48 to 96×96 pixels) before feeding the neural network, as in the sketch below.

TFLite Micro's static memory allocation model maps well to the STM32F7's memory architecture: you define a fixed tensor arena at compile time, with no runtime heap fragmentation risk. The framework's operator coverage includes the convolutional, depthwise-separable, and pooling layers image classification needs, and model conversion uses the standard TFLite converter with int8 post-training quantization.

At $8-15 per chip ($25-60 for dev boards), the STM32F7 offers strong value for image classification deployments.
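
A minimal sketch of that downsampling step, assuming a QVGA grayscale frame (for example the Y channel of a YUV capture) is already in RAM. Nearest-neighbor scaling is usually adequate at these resolutions; the function name, buffer names, and dimensions here are illustrative:

    #include <stdint.h>

    // Nearest-neighbor downsample of a grayscale QVGA frame to a 96x96
    // model input. Integer stride math keeps floating point out of the
    // hot path. Dimensions are illustrative, not tied to a specific BSP.
    static void downsample_qvga_to_96x96(const uint8_t* src, uint8_t* dst) {
        constexpr int kSrcW = 320, kSrcH = 240;
        constexpr int kDstW = 96,  kDstH = 96;
        for (int y = 0; y < kDstH; ++y) {
            const int sy = y * kSrcH / kDstH;      // source row for this output row
            const uint8_t* row = src + sy * kSrcW;
            for (int x = 0; x < kDstW; ++x) {
                dst[y * kDstW + x] = row[x * kSrcW / kDstW];
            }
        }
    }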

Getting Started

  1. Set up STM32F7 development environment

    Install STM32CubeIDE with the latest STM32Cube firmware package. Create a project targeting your STM32F7 part and verify basic functionality with a check like the one below (blink an LED, print over serial). For TFLite Micro, clone the framework repository and add it as a library dependency. Ensure the toolchain supports C++11 or later for the ML runtime.
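
    A quick sanity-check loop, assuming a CubeMX-generated project; huart3 and the LD1 pin labels are Nucleo-F7 style defaults and vary by board:

        #include "main.h"  // CubeMX-generated; pulls in the STM32 HAL and pin defines

        extern UART_HandleTypeDef huart3;  // UART instance is board-specific

        void sanity_check_loop(void) {
            const char msg[] = "STM32F7 alive\r\n";
            for (;;) {
                // LD1_GPIO_Port / LD1_Pin exist on Nucleo-F7 boards; adjust for yours.
                HAL_GPIO_TogglePin(LD1_GPIO_Port, LD1_Pin);
                HAL_UART_Transmit(&huart3, (uint8_t*)msg, sizeof(msg) - 1, HAL_MAX_DELAY);
                HAL_Delay(500);
            }
        }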

  2. Collect camera training data

    Connect a camera module (e.g., OV2640 via DCMI) to the STM32F7. Write a data-logging routine that captures frames at the target rate and streams them over serial or to an SD card, as in the sketch below. Collect 1000+ labeled samples across all classes, captured at the model input resolution (96×96 or lower).
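
    A logging sketch under the assumption that CubeMX has already configured DCMI with DMA for the OV2640; the handle names hdcmi and huart3 are illustrative, and a desktop script on the other end saves and labels each raw frame:

        #include "main.h"
        #include <stdint.h>

        extern DCMI_HandleTypeDef hdcmi;   // configured by CubeMX for the OV2640
        extern UART_HandleTypeDef huart3;  // logging UART; instance is board-specific

        // One QVGA RGB565 frame: 320*240*2 = 150 KB of the 512 KB SRAM.
        static uint8_t frame[320 * 240 * 2];

        void log_one_frame(void) {
            // Snapshot mode captures a single frame via DMA; length is in 32-bit words.
            HAL_DCMI_Start_DMA(&hdcmi, DCMI_MODE_SNAPSHOT,
                               (uint32_t)frame, sizeof(frame) / 4);
            HAL_Delay(100);  // crude wait; a real logger uses the frame-complete callback
            HAL_DCMI_Stop(&hdcmi);

            // HAL_UART_Transmit takes a 16-bit length, so send the frame in chunks.
            for (uint32_t off = 0; off < sizeof(frame); ) {
                uint16_t n = (sizeof(frame) - off > 4096u)
                                 ? 4096u : (uint16_t)(sizeof(frame) - off);
                HAL_UART_Transmit(&huart3, frame + off, n, HAL_MAX_DELAY);
                off += n;
            }
        }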

  3. Train and quantize model for TFLite Micro

    Build a MobileNetV2 or EfficientNet-Lite classifier in TensorFlow (models trained in PyTorch need an export step before TFLite conversion). Apply int8 post-training quantization; this typically reduces model size by 4x with minimal accuracy loss. Convert to .tflite and generate a C array (xxd -i model.tflite > model_data.h), as illustrated below. Target model size: under 150 KB, so the model binary fits flash alongside the firmware and the runtime tensor arena fits the 512 KB SRAM with room for application code.
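
    For reference, the generated header is just a byte array. A trimmed illustration follows; the symbol names are xxd's defaults for model.tflite, and the const/alignment tweak is a common manual edit, not something xxd emits:

        // model_data.h, generated by `xxd -i model.tflite > model_data.h`.
        // xxd derives symbol names from the file name and emits a plain array:
        //
        //   unsigned char model_tflite[] = { 0x1c, 0x00, 0x00, 0x00, /* ... */ };
        //   unsigned int model_tflite_len = /* model size in bytes */;
        //
        // Adding `const` (plus alignment, commonly recommended for TFLite Micro
        // model data) keeps the ~150 KB array in flash instead of letting
        // startup code copy it into SRAM:
        alignas(16) const unsigned char model_tflite[] = { /* ... */ };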

  4. Deploy and validate on STM32F7

    Include the TFLite Micro runtime and the compiled model in your STM32CubeIDE project. Allocate a tensor arena of 225-375 KB in a static buffer, as in the sketch below. Run inference on live camera frames and compare predictions against your test set, logging results over serial for desktop validation. Measure inference latency and peak arena usage to verify they meet application requirements.
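
    A condensed sketch of this step, assuming the model_tflite array from step 3, a 300 KB arena (within the 225-375 KB range above), and a recent TFLite Micro release; the MicroInterpreter constructor has changed across versions, so treat this as a template rather than copy-paste code:

        #include "main.h"       // HAL_GetTick via the STM32 HAL
        #include "model_data.h" // C array generated in step 3
        #include "tensorflow/lite/micro/micro_interpreter.h"
        #include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
        #include "tensorflow/lite/schema/schema_generated.h"

        // Static 300 KB tensor arena, inside the 225-375 KB budget above.
        constexpr size_t kArenaSize = 300 * 1024;
        alignas(16) static uint8_t tensor_arena[kArenaSize];

        static tflite::MicroMutableOpResolver<5> resolver;
        static tflite::MicroInterpreter* interpreter = nullptr;

        // Call once at startup. Register only the ops your converted model
        // actually uses (check the converter output); this keeps flash down.
        void ml_init(void) {
            resolver.AddConv2D();
            resolver.AddDepthwiseConv2D();
            resolver.AddAveragePool2D();
            resolver.AddFullyConnected();
            resolver.AddSoftmax();
            static tflite::MicroInterpreter interp(
                tflite::GetModel(model_tflite), resolver, tensor_arena, kArenaSize);
            interp.AllocateTensors();  // arena_used_bytes() now reports peak RAM use
            interpreter = &interp;
        }

        // Call per frame with a 96x96 grayscale image; returns latency in ms.
        uint32_t classify_frame(const uint8_t* image_96x96,
                                int8_t* scores, int n_classes) {
            TfLiteTensor* input = interpreter->input(0);
            for (int i = 0; i < 96 * 96; ++i) {
                // int8 models expect inputs shifted by the zero point (often -128).
                input->data.int8[i] = static_cast<int8_t>(image_96x96[i] - 128);
            }
            const uint32_t t0 = HAL_GetTick();
            interpreter->Invoke();
            const uint32_t latency_ms = HAL_GetTick() - t0;
            const TfLiteTensor* output = interpreter->output(0);
            for (int c = 0; c < n_classes; ++c) {
                scores[c] = output->data.int8[c];  // per-class scores; log over serial
            }
            return latency_ms;
        }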

FAQ

What camera resolution works for image classification on STM32F7?
On-device image classification models typically use 48×48 to 96×96 pixel grayscale input. The STM32F7's 512 KB SRAM constrains this: a 96×96 grayscale frame is ~9 KB, while 128×128 RGB would need ~49 KB. The native camera interface (DVP/DCMI) handles frame capture efficiently. Always downsample in firmware before inference.
How do I update the image classification model on STM32F7 in production?
Without wireless connectivity, model updates require physical access via USB/JTAG. For field deployments, consider adding a wireless module or using an MCU with built-in connectivity. Always validate model integrity with a checksum before switching to the new version.
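
A sketch of that integrity check using a small software CRC-32; the STM32F7 also has a hardware CRC peripheral (HAL_CRC_Calculate) if you prefer. The expected value would be shipped alongside the staged model image:

    #include <stdint.h>
    #include <stddef.h>

    // Bitwise CRC-32 (IEEE, reflected, poly 0xEDB88320). Slow but tiny;
    // fine for a one-time check before activating a new model image.
    static uint32_t crc32(const uint8_t* data, size_t len) {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; ++i) {
            crc ^= data[i];
            for (int b = 0; b < 8; ++b) {
                crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
            }
        }
        return ~crc;
    }

    // Accept the staged model only if its CRC matches the shipped value.
    bool model_image_valid(const uint8_t* image, size_t len, uint32_t expected) {
        return crc32(image, len) == expected;
    }
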
What size image classification model fits on STM32F7?
The STM32F7 has 512 KB SRAM and 2 MB flash. A typical image classification model is about 150 KB after int8 quantization and lives in flash; the tensor arena needs 225-300 KB of SRAM at runtime. After a 300 KB arena allocation, approximately 212 KB of SRAM remains for application logic, sensor drivers, and the Ethernet stack.
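
If you want this budget enforced rather than remembered, it can be encoded as a compile-time check; the numbers mirror the estimates above and should be adjusted to your measured usage:

    #include <stdint.h>

    // SRAM budget from the estimates above; tune to measured figures.
    constexpr uint32_t kSramBytes  = 512 * 1024;  // STM32F7 internal SRAM
    constexpr uint32_t kArenaBytes = 300 * 1024;  // tensor arena (upper estimate)
    constexpr uint32_t kAppReserve = 150 * 1024;  // drivers, Ethernet stack, app logic

    static_assert(kArenaBytes + kAppReserve <= kSramBytes,
                  "Tensor arena plus application reserve exceeds 512 KB SRAM");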

Build Image Classification in ForestHub

Design classification pipelines from camera input to edge inference — compile to firmware with ForestHub's visual workflow builder.

Get Started Free