Edge AI for Manufacturing

Edge AI in manufacturing runs ML models on microcontrollers at the production line — quality inspection, predictive maintenance, and energy monitoring without sending factory data to external servers. Processing stays local, inference latency stays under 200 ms, and the system works offline.

Published 2026-04-01

Why Manufacturing Needs Edge AI

Manufacturing runs on determinism. Production lines need predictable behavior, predictable timing, and predictable data handling. Cloud AI introduces three variables that conflict with these requirements:

Network latency. A quality inspection camera that sends images to the cloud for classification adds 200-2,000 ms per frame. At line speeds of 60+ parts per minute (under one second per part), the part has already passed the reject gate by the time the cloud responds.

Network reliability. A cloud-dependent quality system goes blind when the factory’s internet connection drops. An edge system keeps running. For manufacturers in rural industrial zones or multi-building campuses, connectivity is a real constraint.

Data sovereignty. Vibration patterns from a CNC machine encode spindle speed, feed rate, and tool engagement profiles. Current signatures reveal motor loading patterns. This is process IP. Many manufacturers — particularly in Germany’s Mittelstand — will not send operational data to external cloud services. Edge AI keeps it on-premises.

Use Case 1: Visual Quality Inspection

The Problem

Manual visual inspection at production line speeds is error-prone. Human inspectors miss 5-15% of defects during sustained operation. The cost of a missed defect escalates at each stage — catching it at station 3 costs $1, at final assembly costs $50, at the customer costs $500+.

The Edge AI Solution

A camera mounted above the production line captures images of each part. An MCU running a classification model labels each image as pass or fail in real time. Failed parts trigger a reject mechanism.

Hardware: ESP32-S3 with a camera module. It is among the most cost-effective MCUs with integrated Wi-Fi, camera interface, and ML acceleration. For simple binary defect detection (part present/absent, label correct/incorrect, obvious surface defects), this is sufficient. Complex defects requiring high-resolution analysis need higher-end edge hardware.

Realistic expectations: MCU-based vision handles binary classification well. It does not handle subtle cosmetic defects, dimensional measurement, or sub-millimeter precision. Know the limits before committing to a deployment.
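
The pass/fail decision logic downstream of the model is worth sketching. A minimal, hypothetical gate in C, assuming the model emits a defect probability in [0.0, 1.0] — the function name and thresholds are illustrative, and a production system would calibrate them against labeled parts:

```c
#include <stdbool.h>

/* Hypothetical pass/fail gate: reject only when the model is confident,
 * and route low-confidence frames to a manual-review bin instead of
 * silently passing them. Thresholds are illustrative. */
#define REJECT_THRESHOLD 0.80f  /* confident defect -> actuate reject gate */
#define REVIEW_THRESHOLD 0.40f  /* uncertain -> flag for manual inspection */

typedef enum { PART_PASS, PART_REVIEW, PART_REJECT } part_verdict_t;

part_verdict_t gate_decision(float defect_probability) {
    if (defect_probability >= REJECT_THRESHOLD) return PART_REJECT;
    if (defect_probability >= REVIEW_THRESHOLD) return PART_REVIEW;
    return PART_PASS;
}
```

The middle "review" band is the practical answer to the limits above: instead of forcing a binary call on subtle defects, uncertain frames go to a human.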

Performance

| Metric                 | ESP32-S3          | STM32H7                 |
| ---------------------- | ----------------- | ----------------------- |
| Frame rate             | 5-10 fps          | 3-7 fps                 |
| Classification latency | 100-200 ms        | 80-150 ms               |
| Image resolution       | 96x96 to 240x240  | 96x96 (DCMI available)  |
| Power consumption      | ~240 mW           | ~400 mW                 |

Estimated ranges — benchmark on target hardware for production. Performance varies with model architecture and optimization.

The STM32H7 has a DCMI parallel camera interface with DMA support, but lacks the integrated PSRAM of the ESP32-S3, making large image buffers more constrained within its 1 MB SRAM.

Use Case 2: Predictive Maintenance

The most proven edge AI use case in manufacturing. See our detailed predictive maintenance guide for the full technical breakdown.

In brief: Mount vibration sensors on critical rotating equipment. Train an anomaly detection model on healthy vibration data. Deploy on an ESP32 or STM32F4. The model flags vibration patterns that deviate from the learned baseline, giving maintenance teams days to weeks of advance warning before failure.

Why manufacturers adopt this first: No process integration needed. The monitoring system is completely independent of the production process. You deploy it on one motor today without changing anything about how the factory operates.

Use Case 3: Energy Monitoring

The Problem

Manufacturing facilities consume significant energy, but most lack per-machine monitoring. Without granular data, efficiency improvements are guesswork.

The Edge AI Solution

Current transformers on individual motor supply cables feed into MCUs running load classification models. The edge device identifies operating states (idle, running, loaded, overloaded) and detects anomalies — excessive startup current, irregular load patterns indicating mechanical binding.
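
Before reaching for a trained model, the state logic is often sketched as plain thresholds on RMS current; a classifier replaces these hand-set boundaries when load patterns overlap. A hypothetical version, with boundary values that would in practice come from commissioning measurements on the actual motor:

```c
/* Hypothetical operating-state classifier driven by RMS motor current.
 * Threshold values are illustrative for a small motor; real boundaries
 * come from measuring the specific machine in each state. */
typedef enum {
    STATE_OFF, STATE_IDLE, STATE_RUNNING, STATE_LOADED, STATE_OVERLOAD
} load_state_t;

load_state_t classify_load(float rms_current_amps) {
    if (rms_current_amps < 0.1f) return STATE_OFF;
    if (rms_current_amps < 1.0f) return STATE_IDLE;
    if (rms_current_amps < 4.0f) return STATE_RUNNING;
    if (rms_current_amps < 6.0f) return STATE_LOADED;
    return STATE_OVERLOAD;
}
```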

Hardware: STM32L4 for battery-powered nodes or ESP32-C3 for nodes with Wi-Fi reporting. Models are typically under 20 KB — well within the smallest ML-capable MCUs.

Practical benefit: Identifying one motor running inefficiently (wrong operating point, worn bearings increasing friction, oversized motor at partial load) can save 5-15% of that motor’s energy consumption. Across a facility with 100+ motors, the savings compound.

Why Edge Beats Cloud in Factories

| Requirement                    | Edge AI                   | Cloud AI                   |
| ------------------------------ | ------------------------- | -------------------------- |
| Line-speed inspection          | Under 200 ms              | 200-2,000 ms latency       |
| Works during internet outage   | Fully offline             | System goes blind          |
| Factory data stays on-premises | Data never leaves device  | Data transmitted to server |
| No per-inference cost at scale | Hardware is sunk cost     | $0.001-0.10 per call       |
| PLC integration                | GPIO, Modbus, local MQTT  | Requires internet roundtrip|

Cloud still has a role in manufacturing AI: model training, fleet-wide dashboarding, historical trend analysis, and model retraining on aggregated data. But the inference — the real-time decision at the production line — belongs at the edge.

Getting Started in a Factory Setting

Step 1: Identify the Highest-Cost Problem

Do not start with the most interesting technical challenge. Start with the problem that costs the most money. Talk to the maintenance manager: “What was your last unplanned downtime event, and what did it cost?”

Step 2: Start with One Machine

Deploy a single monitoring node on the identified machine. ESP32 with Edge Impulse is the fastest path to a working prototype. Edge Impulse handles training, quantization, and export as a ready-to-compile library.

Step 3: Prove the Signal

Collect 2-4 weeks of baseline data before expecting useful predictions. The model needs to learn the machine’s normal operating profile across different loads, shifts, and ambient conditions.

Step 4: Validate with Maintenance

When the model flags an anomaly, have maintenance inspect the machine. Track hit rate: how often does a flagged anomaly correspond to a real issue? Adjust detection thresholds based on this feedback.
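
This feedback loop is simple enough to make concrete. A hypothetical sketch — the band boundaries and step sizes are illustrative tuning choices, not a prescribed rule:

```c
/* Hit rate: fraction of flagged anomalies that maintenance confirmed
 * as real issues. */
float hit_rate(unsigned confirmed, unsigned dismissed) {
    unsigned total = confirmed + dismissed;
    return total ? (float)confirmed / (float)total : 0.0f;
}

/* One possible feedback rule: widen the detection threshold (in sigmas)
 * when false alarms dominate, tighten it when alerts are consistently
 * real. The 0.5/0.9 bands and 10%/5% steps are illustrative. */
float adjust_sigma(float sigma, float rate) {
    if (rate < 0.5f) return sigma * 1.10f;  /* too many false alarms */
    if (rate > 0.9f) return sigma * 0.95f;  /* room for more sensitivity */
    return sigma;
}
```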

Step 5: Scale Horizontally

Once validated on one machine, replicate to similar machines. The same model architecture works — you may need to retrain per machine if operating conditions differ significantly.

Step 6: Integrate with Existing Systems

Connect edge AI alerts to existing infrastructure:

  • PLC integration: Edge node sends anomaly alerts via Modbus TCP or GPIO signal. The PLC continues controlling the machine — edge AI provides advisory input, not control commands.
  • SCADA integration: Edge node publishes to a local MQTT broker. SCADA subscribes to relevant topics. No internet required.
  • CMMS integration: Anomaly events feed into the Computerized Maintenance Management System via REST API or database writes on the local network.
  • Standalone operation: If no integration is needed, the edge node can flash an LED or trigger a local buzzer. Not every deployment needs enterprise integration.
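
For the PLC path, the advisory alert is just a small frame on the wire. A sketch of a Modbus TCP "Write Single Coil" request (function 0x05), which an edge node might send to set an alarm coil — the transaction, unit, and coil address values are site-specific placeholders, and field layout follows the Modbus TCP MBAP header:

```c
#include <stdint.h>

/* One 12-byte Modbus TCP Write Single Coil request. */
typedef struct { uint8_t bytes[12]; } modbus_frame_t;

modbus_frame_t build_write_coil(uint16_t txn_id, uint8_t unit_id,
                                uint16_t coil_addr, int coil_on) {
    /* Per the Modbus spec, 0xFF00 = coil ON, 0x0000 = coil OFF. */
    uint16_t value = coil_on ? 0xFF00u : 0x0000u;
    modbus_frame_t f = {{
        (uint8_t)(txn_id >> 8), (uint8_t)(txn_id & 0xFF), /* transaction id */
        0x00, 0x00,                                       /* protocol id = 0 */
        0x00, 0x06,                                       /* bytes remaining */
        unit_id,
        0x05,                                             /* function: write single coil */
        (uint8_t)(coil_addr >> 8), (uint8_t)(coil_addr & 0xFF),
        (uint8_t)(value >> 8), (uint8_t)(value & 0xFF),
    }};
    return f;
}
```

The frame would be written to a plain TCP socket on port 502; the PLC program decides what, if anything, to do with the coil — consistent with edge AI being advisory, not control.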

Common Misconceptions

“Edge AI replaces our PLC.” No. Edge AI is a monitoring and advisory layer. It does not control machinery. The PLC handles real-time control with its deterministic loop. Edge AI tells maintenance where to look — it does not actuate machines.

“We need thousands of failure examples to train.” No. Anomaly detection models train on normal data only. Collect 1-2 weeks of healthy vibration patterns and the model learns what “normal” looks like. Failures are detected as deviations from that baseline.

“Our machines are too old for AI.” If the machine has a motor, bearing, or any rotating component, a vibration sensor can be mounted externally. Age is irrelevant — the sensor reads the machine’s mechanical signature regardless of the machine’s vintage. Some of the best predictive maintenance ROI comes from monitoring legacy equipment that lacks built-in diagnostics.

Frequently Asked Questions

Is edge AI ready for production in manufacturing?
Yes, for specific use cases. Anomaly detection and vibration-based predictive maintenance are deployed in production today on MCUs like ESP32 and STM32. Visual quality inspection with MCUs is more limited — complex defect detection needs higher-end hardware. Start with sensor-based anomaly detection, not vision.

How does edge AI handle data security in factories?
Data never leaves the device. The MCU reads sensor data, runs inference locally, and outputs only the result — normal, anomaly, or classification label. Raw sensor data that could reveal production parameters or process IP stays on the microcontroller. No cloud transmission means no external attack surface.

Can edge AI work alongside existing SCADA and PLC systems?
Yes. Edge AI nodes are add-on monitoring points, not replacements for PLCs or SCADA. Communication uses standard industrial protocols: MQTT to a local broker, Modbus RTU/TCP for PLC integration, or GPIO signals for relay-based alerts. No changes to existing control systems required.

What is the ROI of edge AI in manufacturing?
Depends on the use case. Predictive maintenance on a single critical machine typically pays back the node cost ($50-300) with the first prevented unplanned downtime event ($5,000-50,000+). Start with the highest-cost failure mode and work backward from downtime costs.

Deploy Edge AI on Your Factory Floor

ForestHub is designed to connect sensors to ML models to action outputs — in a visual workflow. One platform, any MCU, no cloud required for inference.
