
TinyML Getting Started Guide

TinyML is machine learning on microcontrollers — models under 1 MB running inference in milliseconds on devices costing $2-25. Start with an Arduino Nano 33 BLE Sense and Edge Impulse, or an ESP32 with TFLite Micro.

Published 2026-04-01

What is TinyML

TinyML is machine learning that runs on microcontrollers — chips with kilobytes of RAM, megahertz-level clock speeds, and milliwatt power budgets. The “tiny” refers to the model size (typically 10 KB to 500 KB) and the hardware, not the impact.

Practical applications are already in production: keyword detection in smart speakers, gesture recognition in wearables, anomaly detection in industrial motors, and wildlife monitoring in remote sensors.

The key constraint is memory. Where a cloud ML model might use gigabytes of RAM, a TinyML model must fit its weights, activations, and runtime into 64-512 KB of SRAM. This requires aggressive model optimization — quantization, pruning, and architecture choices designed for small footprints.
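Quantization is worth seeing in concrete terms. The sketch below is illustrative only: real converters such as TFLite's add calibration data and per-channel scales, but the core affine int8 mapping that shrinks float32 weights to a quarter of their size looks like this:

```python
# Sketch of affine int8 quantization, the core trick behind shrinking
# float32 weights to 1/4 their size. Real toolchains add calibration
# and per-channel scales; this shows the arithmetic only.

def quantize_int8(values):
    """Map a list of floats onto int8 [-128, 127] with a scale and zero point."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(v / scale + zero_point))) for v in values]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover approximate floats from the int8 representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.8, -0.1, 0.0, 0.3, 0.95]
q, scale, zp = quantize_int8(weights)
restored = dequantize_int8(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q, round(max_err, 4))
```

The round trip loses at most about half a quantization step per weight, which is why int8 models usually give up only a point or two of accuracy.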

Choose Your Hardware

For your first TinyML project, pick one of these proven starting platforms:

Path A: Arduino Nano 33 BLE Sense (Easiest Start)

Why: Built-in 9-axis IMU, microphone, temperature, humidity, pressure, light, and proximity sensors. No external wiring needed. The Arduino IDE is familiar to most hobbyists, and Edge Impulse has direct Arduino integration.

Best first project: Gesture recognition. Wave the board in a pattern, classify the motion. The IMU provides the input data, no additional hardware required.

Specs: ARM Cortex-M4F at 64 MHz, 256 KB SRAM, 1 MB flash. Enough for keyword spotting, gesture recognition, and simple anomaly detection. Not enough for vision.

Path B: ESP32 or ESP32-S3 (More Power, More Flexibility)

Why: More RAM (512 KB), Wi-Fi for sending results to a dashboard, and a large community. The ESP32-S3 adds SIMD instructions and a camera interface for vision tasks.

Best first project: Anomaly detection on ESP32. Read an analog sensor (vibration, current, temperature), detect when readings deviate from normal patterns.

Specs: Dual-core Xtensa at 240 MHz, 512-520 KB SRAM, up to 8 MB PSRAM (S3). Handles all TinyML use cases including basic image classification.
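The anomaly-detection logic for that first project is easy to prototype offline before writing firmware. This Python sketch (class name and thresholds are illustrative, not from any particular library) flags readings that drift more than a few standard deviations from a rolling baseline; the same logic ports directly to C++ on the ESP32:

```python
# Offline sketch of the anomaly-detection logic you would port to the
# ESP32 firmware: keep a rolling baseline of "normal" readings and flag
# samples that deviate by more than `threshold` standard deviations.
from collections import deque
import math

class ZScoreDetector:
    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def update(self, reading):
        """Return True if `reading` is anomalous vs. the rolling baseline."""
        if len(self.window) < 10:          # not enough history yet
            self.window.append(reading)
            return False
        mean = sum(self.window) / len(self.window)
        var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
        std = math.sqrt(var) or 1e-9       # avoid divide-by-zero on flat input
        is_anomaly = abs(reading - mean) / std > self.threshold
        if not is_anomaly:                 # only "normal" data updates the baseline
            self.window.append(reading)
        return is_anomaly

det = ZScoreDetector()
normal = [2.0 + 0.1 * math.sin(i / 5) for i in range(60)]
flags = [det.update(r) for r in normal] + [det.update(9.0)]
print(sum(flags))   # only the final spike is flagged
```

Note that anomalous readings are excluded from the baseline, so a sustained fault does not get absorbed into "normal".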

Path C: STM32 (Industrial / Professional)

Why: ARM Cortex-M ecosystem with professional tooling (STM32CubeIDE, STM32Cube.AI). Better for teams already in the ST ecosystem or building towards production hardware.

Best first project: Predictive maintenance on STM32F4. Connect a vibration sensor, detect bearing degradation patterns.

Specs: Ranges from Cortex-M4F at 80 MHz (STM32L4) to Cortex-M7 at 480 MHz (STM32H7). The F4 at 168 MHz with 192 KB RAM is a solid mid-range choice.

Choose Your Framework

Edge Impulse (Easiest Start)

Edge Impulse is a cloud platform that handles the entire TinyML pipeline:

  1. Data collection — record sensor data directly from your board or upload datasets
  2. Signal processing — built-in DSP blocks for audio (MFCC), motion (spectral analysis), and images
  3. Model training — select a neural network architecture, train in the cloud
  4. Optimization — automatic quantization and profiling for your target MCU
  5. Deployment — export as Arduino library, C++ SDK, or direct firmware binary

The free tier covers individual developers with projects up to 4 hours of training data. No Python or ML knowledge required to start.

TensorFlow Lite Micro (More Control)

TFLite Micro is Google’s open-source runtime for ML on microcontrollers:

  1. Train in Python using TensorFlow or Keras
  2. Convert to .tflite format with quantization
  3. Deploy the model as a C array alongside the TFLite Micro interpreter

You manage the entire pipeline yourself. This gives full control over model architecture, quantization parameters, and memory layout. Requires Python for training and C++ for the firmware.
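Step 3 of that pipeline is typically done with `xxd -i model.tflite > model_data.cc`. An equivalent Python sketch makes the resulting format explicit (function and symbol names here are illustrative):

```python
# Step 3 above embeds the .tflite file in firmware as a C array.
# This sketch produces output equivalent to `xxd -i`, plus the
# alignment and length symbol TFLite Micro examples conventionally use.

def tflite_to_c_array(data: bytes, name: str = "g_model") -> str:
    """Render raw model bytes as a C source snippet."""
    lines = []
    for i in range(0, len(data), 12):
        chunk = data[i:i + 12]
        lines.append("  " + ", ".join(f"0x{b:02x}" for b in chunk) + ",")
    body = "\n".join(lines)
    return (
        f"alignas(8) const unsigned char {name}[] = {{\n{body}\n}};\n"
        f"const unsigned int {name}_len = {len(data)};\n"
    )

# Usage: data = open("model.tflite", "rb").read()
demo = tflite_to_c_array(b"TFL3", name="g_demo_model")
print(demo)
```

The array lives in flash, not SRAM, which is why a 500 KB model can fit on a board with only 256 KB of RAM, so long as activations and the interpreter's arena still fit.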

Your First Project: Step by Step

This walkthrough uses the Arduino Nano 33 BLE Sense with Edge Impulse for gesture recognition. The same concepts apply to any MCU and framework combination.

Step 1: Set Up the Board

Install the Arduino IDE and add the Arduino Mbed OS Nano Boards package. Connect the Nano 33 BLE Sense via USB. Verify the connection:

arduino-cli board list

Step 2: Connect to Edge Impulse

Install the Edge Impulse CLI:

npm install -g edge-impulse-cli

Run the daemon to connect your board:

edge-impulse-daemon

This opens a serial connection and registers your board with your Edge Impulse project.

Step 3: Collect Training Data

In the Edge Impulse web UI, go to Data Acquisition. Record samples:

  • Hold the board and perform gesture A (e.g., circle motion). Record 20-30 samples, 2 seconds each.
  • Repeat for gesture B (e.g., figure-eight motion).
  • Record 20-30 “idle” samples (board sitting still).

Aim for at least 3 minutes of data per class, which at 2 seconds per sample means roughly 90 recordings, so keep collecting beyond the initial 20-30. More data improves accuracy.

Step 4: Build the Model

In the Edge Impulse UI:

  1. Create Impulse: Add a Spectral Analysis processing block and a Classification learning block
  2. Generate Features: Run the spectral analysis — you should see clear clusters per gesture class in the feature explorer
  3. Train: Use the default neural network settings. Training takes 1-5 minutes.
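The Spectral Analysis block can feel like a black box. This miniature version shows the idea: turn a raw sensor window into a few frequency-domain features the classifier can separate. It is a pure-Python illustration only; Edge Impulse adds filtering and log scaling, and a real implementation would use an FFT library rather than a naive DFT:

```python
# Miniature spectral-feature extractor: RMS plus spectral power summed
# into a few coarse frequency bins, via a naive DFT (fine for a 64-sample
# window; use numpy.fft for real workloads).
import math

def spectral_features(signal, n_bins=4):
    """Return [RMS, bin powers...] for one axis of a sensor window."""
    n = len(signal)
    rms = math.sqrt(sum(x * x for x in signal) / n)
    mags = []
    for k in range(1, n // 2):   # positive-frequency half, skipping DC
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        mags.append(math.hypot(re, im))
    per_bin = max(1, len(mags) // n_bins)
    bins = [sum(mags[b * per_bin:(b + 1) * per_bin]) for b in range(n_bins)]
    return [rms] + bins

# A slow circular gesture and a fast shake land in different bins.
slow = [math.sin(2 * math.pi * 2 * t / 64) for t in range(64)]
fast = [math.sin(2 * math.pi * 12 * t / 64) for t in range(64)]
print(spectral_features(slow))
print(spectral_features(fast))
```

Gestures with different dominant frequencies produce clearly different feature vectors, which is why the feature explorer shows separated clusters when your data is good.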

Check the confusion matrix. If accuracy is below 85%, collect more data for the confused classes.
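Per-class recall makes the "collect more data for the confused classes" advice actionable. A small helper (the matrix values below are made up for illustration; rows are true labels and columns predictions, a common convention, but check your tool's orientation):

```python
# Per-class recall from a confusion matrix: the class with the lowest
# recall is the one that needs more training data.

def per_class_recall(matrix, labels):
    """Return {label: fraction of that class's samples predicted correctly}."""
    out = {}
    for i, row in enumerate(matrix):
        total = sum(row)
        out[labels[i]] = row[i] / total if total else 0.0
    return out

labels = ["circle", "figure8", "idle"]
cm = [[18, 4, 0],   # circle is sometimes misread as figure8
      [2, 19, 1],
      [0, 0, 22]]
recall = per_class_recall(cm, labels)
print(recall)
```

Here "circle" falls below the 85% bar while "idle" is perfect, so the fix is more circle samples (or a more distinct gesture), not more idle data.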

Step 5: Deploy to Your Board

Go to Deployment, select Arduino library, and download the .zip. In the Arduino IDE:

  1. Sketch → Include Library → Add .ZIP Library — select the downloaded file
  2. Open the example sketch included in the library
  3. Upload to the board

The serial monitor shows real-time predictions: gesture class and confidence score.
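If you want to consume those predictions from a host-side script, a parser like this works. The line format (`label: score`) is an assumption based on typical Edge Impulse example-sketch output, so adjust the pattern to whatever your serial monitor actually shows:

```python
# Host-side helper: pick the winning class from classifier output lines.
# The "label: score" format is assumed; adapt the regex to your sketch.
import re

LINE = re.compile(r"^\s*(\w+):\s*([01]\.\d+)")

def top_prediction(serial_lines, min_confidence=0.6):
    """Return (label, score) for the most confident class, or None."""
    scores = {}
    for line in serial_lines:
        m = LINE.match(line)
        if m:
            scores[m.group(1)] = float(m.group(2))
    if not scores:
        return None
    label, score = max(scores.items(), key=lambda kv: kv[1])
    return (label, score) if score >= min_confidence else None

demo = ["    circle: 0.91406", "    figure8: 0.05078", "    idle: 0.03516"]
print(top_prediction(demo))
```

Returning None below a confidence floor is a cheap way to suppress jittery predictions between gestures.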

Step 6: Evaluate and Iterate

Test with gestures that were not in the training set. Common issues:

  • Low confidence on correct class: Collect more training data with varied gesture speeds and orientations
  • Confusion between classes: Make the gestures more distinct, or add more training samples for the confused pair
  • False positives during idle: Add more “idle” training data with different board orientations

What to Build Next

After your first project, try scaling complexity:

  • Audio keyword detection: Arduino Nano 33 BLE Sense (voice recognition guide)
  • Vibration anomaly detection: ESP32 + accelerometer (anomaly detection guide)
  • Visual inspection: ESP32-S3 + camera (object detection guide)
  • Predictive maintenance: STM32F4 + vibration sensor (predictive maintenance guide)

Each step increases model complexity and hardware requirements. The workflow stays the same: collect data, train, quantize, deploy, evaluate.

Frequently Asked Questions

What programming language is used for TinyML?
C and C++. Models are trained in Python (TensorFlow, PyTorch) but deployed as C arrays on the microcontroller. The inference runtime (TFLite Micro) is written in C++. You do not need Python on the MCU.

How much does it cost to get started with TinyML?
As little as $5-15 for an ESP32 dev board; an Arduino Nano 33 BLE Sense with built-in sensors costs $20-35. Edge Impulse has a free tier for individual developers, and TFLite Micro is open source.

Is TinyML the same as edge AI?
TinyML is a subset of edge AI. Edge AI covers any on-device inference, from Raspberry Pi to NVIDIA Jetson to microcontrollers. TinyML specifically refers to ML on microcontrollers with milliwatt-level power budgets and kilobyte-level memory constraints.

Can TinyML do image recognition?
Yes, with the right hardware. The ESP32-S3 with its camera interface and PSRAM can run quantized MobileNet V2 for image classification. Simpler MCUs without camera hardware are limited to audio, motion, and other sensor data.

Do I need a machine learning background for TinyML?
Not to start. Edge Impulse abstracts model training behind a web UI: you upload data, select a model architecture, and export a ready-to-compile library. For custom models and optimization, ML knowledge becomes necessary.

From Tutorial to Production

ForestHub turns your TinyML prototype into a deployable workflow. Visual builder, automatic code generation, multi-MCU support.
