Edge Impulse
Edge Impulse is the leading platform for building, training, and deploying machine learning (ML) models to edge devices. It enables developers — from beginners to experts — to collect sensor data, design ML models, and deploy them directly to hardware, including both microcontrollers and Linux-based devices.
Edge Impulse’s platform is optimized for TinyML, helping you run powerful inference workloads on constrained hardware — like a Particle device — with minimal power consumption and low latency.
What Edge Impulse Does
Edge Impulse offers a complete ML development pipeline for edge AI applications:
- Data Acquisition: collect data from sensors (audio, vibration, motion, camera, etc.) and upload it to the cloud.
- Signal Processing & Feature Engineering: use DSP blocks to extract meaningful features from raw data (e.g. MFCC for audio, FFT for vibration).
- Model Training: train ML models using classical ML, deep learning, or pre-built blocks tailored to the data type.
- Model Deployment: export trained models in various formats, including firmware libraries for embedded devices or Docker containers for Linux.
- Live Testing & Validation: test models live on real devices and evaluate accuracy, latency, and power usage.
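To make the Signal Processing & Feature Engineering step concrete, here is a minimal, self-contained sketch of turning a raw vibration window into a small spectral feature vector. The naive DFT below is only a stand-in for Edge Impulse's optimized DSP blocks; the function names and the two features chosen are illustrative assumptions, not the platform's API.

```python
# Sketch of a DSP/feature-engineering step: raw vibration window -> features.
# Naive DFT via the standard library (stands in for an optimized FFT block).
import cmath
import math

def dft_magnitudes(window):
    """Return the magnitude of each DFT bin for a real-valued window."""
    n = len(window)
    return [
        abs(sum(window[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))
        for k in range(n // 2)  # real input: keep the non-redundant half
    ]

def spectral_features(window):
    """Condense the spectrum into a few features a classifier could use."""
    mags = dft_magnitudes(window)
    peak_bin = max(range(len(mags)), key=lambda k: mags[k])
    energy = sum(m * m for m in mags)
    return {"peak_bin": peak_bin, "energy": energy}

# A 64-sample window dominated by a tone at bin 5 (e.g. a bearing resonance).
window = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
print(spectral_features(window))  # peak_bin should be 5
```

In a real pipeline this feature vector, not the raw waveform, is what the trained model consumes, which is why constrained MCUs can classify high-rate signals.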
Edge Impulse with Particle
Edge Impulse integrates directly with Particle’s ecosystem to allow deployment of trained ML models to both microcontroller and Linux-class devices.
▶️ MCU Deployment (Particle Photon 2 / P2 / Boron)
For Particle’s embedded devices, Edge Impulse supports:
- Code generation as C++ firmware libraries
- Auto-generated integration code for Particle Workbench
- Real-time data logging with the Edge Impulse CLI
- Deployment via `particle flash` or over-the-air (OTA) updates
- Built-in support for the TensorFlow Lite Micro runtime
Use cases include:
- Vibration anomaly detection
- Audio classification (e.g. machinery sounds)
- Gesture recognition using accelerometers
- Environmental sensing
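As a sketch of the data-logging path above: the Edge Impulse CLI's data forwarder reads sensor samples from a serial port as plain text, one reading per line with comma-separated axis values. The helper below formats a burst of hypothetical accelerometer readings the way a device might print them over serial; the exact line format expected by your CLI version should be checked against the forwarder docs.

```python
# Hedged sketch: format accelerometer samples as CSV lines, one reading
# per line, the shape the Edge Impulse data forwarder consumes from serial.
def format_for_forwarder(samples):
    """samples: list of (ax, ay, az) tuples -> one CSV line per reading."""
    return "\n".join(f"{ax:.4f},{ay:.4f},{az:.4f}" for ax, ay, az in samples)

# Two hypothetical readings from a device at rest (z-axis near 1 g).
burst = [(0.01, -0.02, 0.98), (0.03, -0.01, 1.01)]
print(format_for_forwarder(burst))
```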
🐧 Linux Deployment (Tachyon / Raspberry Pi / Jetson-class devices)
For Linux-based Particle devices, Edge Impulse supports:
- Exporting ML models as Docker containers
- Model runtimes for object detection, classification, regression
- Access to camera and microphone streams
- Native access to Edge Impulse’s runtime API for live inference
- Integration with Particle’s container-based Application Runtime
This allows developers to run models like:
- Object detection (YOLO, FOMO)
- Industrial condition monitoring
- Smart vision (e.g. detect safety vests or helmets)
- Custom image classification models
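For the object-detection use cases above, a Linux-side consumer mostly post-processes detection results. The sketch below filters detections by confidence; the `bounding_boxes` result shape (label, confidence value, pixel coordinates) mirrors what Edge Impulse's Linux runtimes commonly report, but treat the field names as an assumption and verify them against your exported model's actual output.

```python
# Hedged sketch: filter object-detection results (e.g. FOMO output on a
# Linux-class device) down to confident detections. The JSON shape is an
# assumed/typical one, not a guaranteed API contract.
import json

def confident_detections(result_json, threshold=0.6):
    """Keep only boxes whose confidence meets the threshold."""
    result = json.loads(result_json)
    return [
        (box["label"], box["value"])
        for box in result.get("bounding_boxes", [])
        if box["value"] >= threshold
    ]

sample = json.dumps({
    "bounding_boxes": [
        {"label": "helmet", "value": 0.91, "x": 12, "y": 8, "width": 24, "height": 24},
        {"label": "vest", "value": 0.42, "x": 40, "y": 30, "width": 16, "height": 32},
    ]
})
print(confident_detections(sample))  # only the helmet passes at 0.6
```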
Model Types Supported
Edge Impulse supports a wide range of model workflows, making it suitable for both prototyping and production:
🧠 Pre-Built Base Models
- Optimized for common edge use cases
- Fast deployment with minimal configuration
- Models include audio classification, motion detection, object detection, etc.
🎯 Customized Models (Tuned from Base)
- Start from a base model and fine-tune it using your own dataset
- Adjust parameters and retrain on your own examples
- Useful for improving accuracy in specific environments or edge conditions
🛠 Fully Custom Models
- Upload your own training data
- Use the Edge Impulse Studio or CLI to build a completely new pipeline
- Train models from scratch using your domain-specific knowledge
- Use custom blocks and even import your own TensorFlow or PyTorch models
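One practical detail when uploading your own training data: the Edge Impulse uploader can infer a sample's label from its filename prefix (everything before the first dot), so a file named `faucet.capture_01.wav` is labeled `faucet`. The helper below applies that naming convention; treat the convention as an assumption and confirm it against the uploader documentation for your CLI version.

```python
# Hedged sketch: rename raw captures so the uploader can infer labels
# from the filename prefix (the text before the first dot).
from pathlib import Path

def labeled_name(label, original):
    """Prefix a raw capture's filename with its class label."""
    return f"{label}.{Path(original).name}"

print(labeled_name("faucet", "recordings/capture_01.wav"))  # faucet.capture_01.wav
```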
Output Formats
Edge Impulse supports exporting models in several formats, including:
- C++ SDKs (for MCU/embedded)
- TensorFlow Lite Micro models
- ONNX / TensorFlow models
- Docker Containers (for Linux)
- REST APIs via Edge Impulse Inference API (for serverless inference)
All these options make it easy to pick the deployment method that matches your hardware — and switch between them as your product evolves.
Edge Impulse in the Particle Ecosystem
Particle and Edge Impulse work together to bring AI to your edge deployments:
| Platform | Supported Output | Runtime | Deployment Method |
| --- | --- | --- | --- |
| Photon 2 / P2 | C++ firmware library | TensorFlow Lite Micro | Flash via Workbench / CLI |
| Boron / B-Series | C++ firmware library | TensorFlow Lite Micro | Flash via Workbench / CLI |
| Tachyon / Linux | Docker container | Linux Application Runtime | `particle app push` |
💡 Visit edgeimpulse.com to sign up and start building your first ML model today.