Hardware Guide

nRF52840 for Image Classification with TensorFlow Lite Micro

Running image classification on the nRF52840 with TFLite Micro is practical. Its 256 KB of SRAM meets the 128 KB minimum with 2.0x headroom, and the 64 MHz Cortex-M4F core supports real-time inference for appropriately sized models.

Hardware Specs

Processor: ARM Cortex-M4F @ 64 MHz
SRAM: 256 KB
Flash: 1 MB
Key Features: Built-in 9-axis IMU (LSM9DS1) on Arduino Nano 33 BLE, Arduino ecosystem, Ultra-low-power BLE, Built-in microphone (Sense variant)
Connectivity: Bluetooth 5.0 LE, 802.15.4 (Thread/Zigbee), NFC, USB 2.0
Price Range: $5 - $8 (chip), $20 - $35 (dev board)

Compatibility: Good

With 256 KB of internal SRAM, the nRF52840 delivers 2.0x the 128 KB minimum needed for image classification. A 150 KB quantized model and its tensor arena fit with enough remaining capacity for input buffers and core application logic. More demanding features (multi-sensor fusion, large protocol stacks) may require careful allocation planning.

The nRF52840 provides 1 MB of flash memory, which accommodates the TFLite Micro runtime and the 150 KB model, with space remaining for firmware and basic OTA capability.

The nRF52840 is widely used for BLE-connected ML applications. Its 256 KB SRAM handles keyword spotting, gesture recognition, and sensor anomaly detection models, and Zephyr RTOS support plus Edge Impulse's first-class nRF integration streamline the development workflow.

Image classification requires camera input, and the nRF52840 lacks a native camera interface (DVP/DCMI). SPI-based camera modules may work, but with reduced frame rates and extra interface circuitry. Evaluate whether this peripheral gap justifies an alternative MCU with native camera support.

TFLite Micro's static memory allocation model maps well to the nRF52840's memory architecture: you define a fixed tensor arena at compile time, with no runtime heap fragmentation risk. The framework's operator coverage includes the convolutional, depthwise-separable, and pooling layers needed for image classification, and model conversion uses the standard TFLite converter with int8 post-training quantization.

At $5-8 per chip ($20-35 for dev boards), the nRF52840 is a reasonable investment for image classification deployments, and 22 PlatformIO-listed boards provide decent hardware selection.
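
As a concrete illustration, the sketch below shows that static-allocation pattern, assuming a model exported as model_data.h (via xxd, as in step 3 of Getting Started). The 140 KB arena size, the model_tflite symbol name, and the operator list are assumptions to be adjusted for your actual model.

```cpp
// Minimal TFLite Micro setup sketch for the nRF52840 (illustrative sizes and op list).
#include <cstddef>
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

#include "model_data.h"  // int8 model exported with `xxd -i model.tflite`

// Static tensor arena: sized at compile time, no heap allocation at runtime.
// 140 KB is an assumed starting point for a ~150 KB int8 image model; tune it
// using interpreter.arena_used_bytes() on real hardware.
constexpr size_t kTensorArenaSize = 140 * 1024;
alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

// Register only the operators the model needs to keep flash usage down.
static tflite::MicroMutableOpResolver<5> op_resolver;

tflite::MicroInterpreter* setup_interpreter() {
  const tflite::Model* model = tflite::GetModel(model_tflite);  // symbol from model_data.h

  op_resolver.AddConv2D();
  op_resolver.AddDepthwiseConv2D();
  op_resolver.AddAveragePool2D();
  op_resolver.AddSoftmax();
  op_resolver.AddReshape();

  static tflite::MicroInterpreter interpreter(
      model, op_resolver, tensor_arena, kTensorArenaSize);

  // AllocateTensors() carves the arena into activations; it fails if the arena is too small.
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return nullptr;
  }
  return &interpreter;
}
```

Registering only the required operators (rather than the catch-all AllOpsResolver) keeps the binary smaller, which matters within the 1 MB flash budget.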

Getting Started

  1. Set up nRF52840 development environment

    Install nRF Connect SDK (Zephyr-based) or Arduino via PlatformIO. Create a project targeting the nRF52840 and verify basic functionality (blink LED, serial output). For TFLite Micro, clone the framework repository and add it as a library dependency. Ensure the toolchain supports C++11 or later for the ML runtime.

  2. Collect camera training data

    Connect a camera module (e.g., OV2640 via DVP/SPI) to the nRF52840. Write a data logging sketch that captures camera readings at the target sample rate and outputs via serial/SD card. Collect 1000+ labeled samples across all classes. Capture images at the model input resolution (96×96 or lower).

  3. Train and quantize model for TFLite Micro

    Build a quantized MobileNetV2 or EfficientNet-Lite in TensorFlow or PyTorch. Apply int8 post-training quantization — this typically reduces model size by 4x with minimal accuracy loss. Convert to .tflite and generate a C array (xxd -i model.tflite > model_data.h). Target model size: under 150 KB to fit the nRF52840's 256 KB SRAM with room for application code.

  4. Deploy and validate on nRF52840

    Include the TFLite Micro runtime and compiled model in your nRF Connect SDK or PlatformIO project. Allocate the tensor arena in a static buffer sized to fit within the 256 KB SRAM alongside input buffers and application state; start from the model's requirements and tune using the arena usage reported by the interpreter. Run inference on live camera data and compare predictions against your test set, logging results to serial for desktop validation, as in the sketch after this list. Measure inference latency and peak RAM usage to verify they meet application requirements.
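
A minimal validation loop in the Arduino/PlatformIO style might look like the sketch below, reusing the setup_interpreter() helper from the earlier snippet. capture_frame() is a hypothetical stand-in for your camera driver, the 96x96 grayscale shape is a placeholder, and the mapping from raw pixels to the input tensor's int8 quantization is omitted, so treat this as a skeleton rather than a drop-in implementation.

```cpp
// Validation loop sketch (Arduino/PlatformIO style).
#include <Arduino.h>
#include "tensorflow/lite/micro/micro_interpreter.h"

extern tflite::MicroInterpreter* setup_interpreter();   // from the earlier sketch
extern bool capture_frame(uint8_t* dst, size_t len);    // hypothetical camera read

static tflite::MicroInterpreter* interpreter = nullptr;

void setup() {
  Serial.begin(115200);
  interpreter = setup_interpreter();
}

void loop() {
  if (interpreter == nullptr) return;

  // Fill the int8 input tensor with one frame (pixel-to-int8 quantization omitted).
  TfLiteTensor* input = interpreter->input(0);
  if (!capture_frame(reinterpret_cast<uint8_t*>(input->data.int8), input->bytes)) {
    return;
  }

  unsigned long t0 = millis();
  if (interpreter->Invoke() != kTfLiteOk) {
    Serial.println("Invoke failed");
    return;
  }
  unsigned long latency_ms = millis() - t0;

  // Report the top-scoring class and latency over serial for desktop validation.
  // Assumes an output tensor of shape [1, num_classes].
  TfLiteTensor* output = interpreter->output(0);
  int best = 0;
  for (int i = 1; i < output->dims->data[1]; ++i) {
    if (output->data.int8[i] > output->data.int8[best]) best = i;
  }
  Serial.print("class="); Serial.print(best);
  Serial.print(" latency_ms="); Serial.println(latency_ms);

  delay(1000);
}
```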

FAQ

Why choose TFLite Micro over other frameworks for nRF52840?
TFLite Micro has the widest operator coverage and largest community for Cortex-M4F targets. It supports int8 and float32 models with a static memory allocation model that eliminates heap fragmentation. The nRF52840's 256 KB SRAM works well with TFLite Micro's predictable memory usage. Alternative: Edge Impulse wraps TFLite Micro with a simpler workflow if you prefer cloud-based training.
Can nRF52840 run image classification inference in real time?
The nRF52840 runs at 64 MHz with DSP acceleration. Whether this enables real-time image classification depends on your specific model architecture and acceptable latency. A 150 KB int8 model is a reasonable target for this hardware class. Larger models may require duty-cycled inference or model optimization (pruning, distillation). Benchmark your specific model on hardware to validate timing.
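
For precise timing, one option is the Cortex-M4's DWT cycle counter. The sketch below assumes the CMSIS register definitions are available through the Nordic device header (nrf.h) and a 64 MHz core clock; the helper names are illustrative.

```cpp
// Cycle-accurate inference timing on the Cortex-M4F using the DWT cycle counter.
#include <stdint.h>
#include <nrf.h>  // pulls in the CMSIS core definitions (CoreDebug, DWT)
#include "tensorflow/lite/micro/micro_interpreter.h"

void dwt_init(void) {
  CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  // enable the trace/DWT unit
  DWT->CYCCNT = 0;
  DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;             // start counting CPU cycles
}

uint32_t time_invoke_ms(tflite::MicroInterpreter* interpreter) {
  uint32_t start = DWT->CYCCNT;
  interpreter->Invoke();
  uint32_t cycles = DWT->CYCCNT - start;
  return cycles / 64000U;  // 64,000 cycles per millisecond at a 64 MHz core clock
}
```
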
What is the power consumption for image classification on nRF52840?
Power consumption during inference depends on clock configuration, active peripherals, and duty cycle. Consult the nRF52840 datasheet for detailed power profiles at 64 MHz. For battery-powered image classification, use duty cycling: run inference at intervals and enter low-power sleep mode between cycles. Profile your specific workload to estimate battery life accurately.
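
A duty-cycled main loop in a Zephyr-based (nRF Connect SDK) project might look like the sketch below; run_single_inference() is a hypothetical helper and the 30-second interval is illustrative.

```cpp
// Duty-cycled inference sketch for a Zephyr-based (nRF Connect SDK) project.
// While the thread sleeps, Zephyr's idle thread lets the SoC drop into a low-power state.
#include <zephyr/kernel.h>

extern void run_single_inference();  // capture a frame, Invoke(), report the result

int main(void) {
  while (true) {
    run_single_inference();
    k_sleep(K_SECONDS(30));  // sleep between cycles; interval is illustrative
  }
  return 0;
}
```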

Build Image Classification in ForestHub

Design classification pipelines from camera input to edge inference — compile to firmware with ForestHub's visual workflow builder.