ESP32 for Image Classification with TensorFlow Lite Micro

Running image classification on the ESP32 with TFLite Micro is practical. Its 520 KB of SRAM provides roughly 4.1x the 128 KB minimum for this workload, and the dual-core Xtensa LX6 at 240 MHz supports real-time inference.

Hardware Specs

Spec          ESP32
Processor     Dual-core Xtensa LX6 @ 240 MHz
SRAM          520 KB
Flash         Up to 16 MB (external)
Key Features  Hardware crypto acceleration; ultra-low-power co-processor (ULP)
Connectivity  Wi-Fi 802.11 b/g/n; Bluetooth 4.2 BR/EDR + BLE
Price Range   $2-$5 (chip), $5-$15 (dev board)

Compatibility: Good

With 520 KB of internal SRAM, the ESP32 provides 4.1x the 128 KB minimum for image classification. This headroom means the tensor arena (roughly 225-300 KB at runtime for a typical 150 KB int8 model), camera input buffers, and application logic (camera polling, the Wi-Fi 802.11 b/g/n stack, state management) all fit without contention. Modules with external PSRAM, such as the WROVER series, add 4 MB for larger buffers or data logging.

Flash storage at up to 16 MB comfortably houses the TFLite Micro runtime, the 150 KB model binary, application firmware, and OTA update partitions for field upgrades; flash usage is well within budget for this configuration.

The ESP32's dual-core Xtensa LX6 allows dedicating one core to inference while the other handles Wi-Fi/BLE communication and application logic. The ULP co-processor can handle simple sensor reads during deep sleep, reducing average power consumption in duty-cycled deployments.

Image classification requires camera input, and the ESP32 has no dedicated camera peripheral (DVP/DCMI). DVP sensors such as the OV2640 are instead driven through the I2S peripheral in camera mode (the approach used on ESP32-CAM boards), and SPI-based camera modules may work at reduced frame rates. Evaluate whether this peripheral gap justifies an alternative MCU with native camera support.

TFLite Micro's static memory allocation model maps well to the ESP32's memory architecture: define a fixed tensor arena at compile time, with no runtime heap fragmentation risk. The framework's operator coverage includes the convolutional, depthwise-separable, and pooling layers needed for image classification, and model conversion uses the standard TFLite converter with int8 post-training quantization.

At $2-$5 per chip ($5-$15 for dev boards), the ESP32 is a reasonable investment for image classification deployments, and with 136 PlatformIO-listed boards, hardware availability is excellent.
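
As a minimal sketch of the static tensor-arena pattern described above (current tflite-micro APIs; model_data.h and the exact op list are assumptions matching the steps below):

    #include <cstddef>
    #include <cstdint>
    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
    #include "tensorflow/lite/schema/schema_generated.h"
    #include "model_data.h"  // C array exported with `xxd -i model.tflite`

    // Fixed tensor arena sized at compile time: no heap, no fragmentation.
    constexpr size_t kArenaSize = 250 * 1024;
    alignas(16) static uint8_t tensor_arena[kArenaSize];

    void setup_inference() {
      // Register only the ops the model actually uses to save flash.
      static tflite::MicroMutableOpResolver<5> resolver;
      resolver.AddConv2D();
      resolver.AddDepthwiseConv2D();
      resolver.AddAveragePool2D();
      resolver.AddFullyConnected();
      resolver.AddSoftmax();

      const tflite::Model* model = tflite::GetModel(model_tflite);
      static tflite::MicroInterpreter interpreter(model, resolver,
                                                  tensor_arena, kArenaSize);
      if (interpreter.AllocateTensors() != kTfLiteOk) {
        // Arena too small: fails here at startup, never mid-inference.
      }
    }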

Getting Started

  1. Set up ESP32 development environment

    Install ESP-IDF (recommended for production) or Arduino framework via PlatformIO. Create a project targeting the ESP32 and verify basic functionality (blink LED, serial output). For TFLite Micro, clone the framework repository and add it as a library dependency. Ensure the toolchain supports C++11 or later for the ML runtime.
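
    As a quick sanity check, a minimal Arduino-framework sketch confirms the toolchain, upload path, and UART (GPIO2 drives the onboard LED on many dev boards; adjust for yours):

        #include <Arduino.h>

        // Blink plus serial output: verifies compile, flash, and monitor in one go.
        void setup() {
          Serial.begin(115200);
          pinMode(2, OUTPUT);  // onboard LED on many ESP32 dev boards
        }

        void loop() {
          digitalWrite(2, HIGH);
          delay(500);
          digitalWrite(2, LOW);
          delay(500);
          Serial.println("ESP32 alive");
        }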

  2. Collect camera training data

    Connect a camera module (e.g., OV2640 via DVP or SPI) to the ESP32. Write a data-logging sketch that captures frames at the target rate and streams them over serial or to an SD card. Collect 1000+ labeled samples across all classes, capturing at the model input resolution (96×96 or lower).
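
    A capture-and-dump loop might look like the sketch below, assuming the espressif/esp32-camera driver has already been initialized for grayscale 96×96 frames (esp_camera_init() and the board-specific pin map are elided):

        #include <Arduino.h>
        #include "esp_camera.h"

        // Grab one frame and stream the raw pixels over serial for host-side
        // logging and labeling. Assumes esp_camera_init() has run with
        // PIXFORMAT_GRAYSCALE at FRAMESIZE_96X96.
        void capture_sample() {
          camera_fb_t* fb = esp_camera_fb_get();
          if (fb == nullptr) {
            Serial.println("capture failed");
            return;
          }
          Serial.printf("FRAME %u %u %u\n",
                        (unsigned)fb->width, (unsigned)fb->height, (unsigned)fb->len);
          Serial.write(fb->buf, fb->len);  // raw bytes; pair with a host logger script
          esp_camera_fb_return(fb);        // return the buffer to the driver
        }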

  3. Train and quantize model for TFLite Micro

    Build a quantized MobileNetV2 or EfficientNet-Lite model in TensorFlow or PyTorch. Apply int8 post-training quantization, which typically reduces model size by 4x with minimal accuracy loss. Convert to .tflite and generate a C array (xxd -i model.tflite > model_data.h). Target a model under 150 KB so its runtime tensor arena leaves room in the ESP32's 520 KB SRAM for application code.
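
    The generated header is just a C array plus a length; a trimmed sketch of its shape (byte values illustrative), with an alignment attribute added as a common manual tweak, since TFLite Micro expects an aligned flatbuffer and const keeps the data in flash:

        // model_data.h: output of `xxd -i model.tflite`, contents abbreviated.
        alignas(16) const unsigned char model_tflite[] = {
            0x1c, 0x00, 0x00, 0x00, /* ... ~150 KB of model bytes elided ... */
        };
        const unsigned int model_tflite_len = sizeof(model_tflite);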

  4. Deploy and validate on ESP32

    Include the TFLite Micro runtime and compiled model in your Espressif project. Allocate a tensor arena of 225-300 KB in a static buffer. Run inference on live camera data and compare predictions against your test set. Report results via MQTT or HTTP for remote validation, and measure inference latency and peak RAM usage to verify they meet application requirements.
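
    Latency and actual arena consumption can be read directly off the interpreter; a minimal sketch assuming the interpreter from the setup sketch above (esp_timer_get_time() is ESP-IDF's microsecond clock):

        #include <Arduino.h>
        #include "esp_timer.h"
        #include "tensorflow/lite/micro/micro_interpreter.h"

        // Time one inference and report how much of the static arena was used,
        // so the buffer can be trimmed to measured need plus a safety margin.
        void benchmark(tflite::MicroInterpreter& interpreter) {
          int64_t t0 = esp_timer_get_time();
          TfLiteStatus status = interpreter.Invoke();
          int64_t t1 = esp_timer_get_time();
          Serial.printf("status=%d latency=%lld us arena=%u bytes\n",
                        status, (long long)(t1 - t0),
                        (unsigned)interpreter.arena_used_bytes());
        }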

FAQ

Why choose TFLite Micro over other frameworks for ESP32?
TFLite Micro has the widest operator coverage and the largest community among frameworks targeting the Xtensa LX6. It supports int8 and float32 models with a static memory allocation model that eliminates heap fragmentation, and its predictable memory usage pairs well with the ESP32's 520 KB SRAM. Alternative: Edge Impulse wraps TFLite Micro in a simpler workflow if you prefer cloud-based training.
What size image classification model fits on ESP32?
The ESP32 has 520 KB SRAM and up to 16 MB flash. A typical image classification model is about 150 KB after int8 quantization and is stored in flash; the tensor arena needs 225-300 KB of SRAM at runtime. After the arena is allocated, approximately 220 KB remains for application logic, sensor drivers, and the Wi-Fi 802.11 b/g/n stack.
How do I update the image classification model on ESP32 in production?
Over-the-air (OTA) updates via Wi-Fi: store the model in a dedicated flash partition and update it independently of the main firmware. The ESP32's 16 MB flash supports dual-partition OTA (A/B scheme) for safe rollback. Always validate model integrity with a checksum before switching to the new version.
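
A sketch of the dedicated-partition approach, assuming a custom data partition labeled "model" in the partition table (the label and buffer sizes are illustrative; the esp_partition API is ESP-IDF's):

    #include <cstdint>
    #include <cstring>
    #include "esp_partition.h"

    // Write a downloaded model blob into its own flash partition, then read it
    // back and compare before the firmware switches over to the new model.
    bool store_model(const uint8_t* blob, size_t len) {
      const esp_partition_t* part = esp_partition_find_first(
          ESP_PARTITION_TYPE_DATA, ESP_PARTITION_SUBTYPE_ANY, "model");
      if (part == nullptr || len > part->size) return false;

      // Erase length must be a multiple of the 4 KB flash sector size.
      size_t erase_len = (len + 4095) & ~static_cast<size_t>(4095);
      if (esp_partition_erase_range(part, 0, erase_len) != ESP_OK) return false;
      if (esp_partition_write(part, 0, blob, len) != ESP_OK) return false;

      // Verify by read-back; a CRC32 or SHA-256 over the blob works as well.
      uint8_t check[256];
      for (size_t off = 0; off < len; off += sizeof(check)) {
        size_t n = len - off < sizeof(check) ? len - off : sizeof(check);
        if (esp_partition_read(part, off, check, n) != ESP_OK) return false;
        if (memcmp(check, blob + off, n) != 0) return false;
      }
      return true;
    }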

Build Image Classification in ForestHub

Design classification pipelines from camera input to edge inference — compile to firmware with ForestHub's visual workflow builder.
