One CLI from 8 KB AVRs to Jetson Orin. No vendor SDK switching. TFLite, ONNX, scikit-learn in — flashable firmware out.
Built by an embedded engineer who got tired of writing the same toolchain glue for every project. Maintained out in the open.
Embedded engineers who already have a trained model and need it running on a real device. ML researchers shipping a demo to a chip. Teams replacing custom toolchain glue with one binary.
What Darml doesn't do: train new models (bring your own), run cloud-only inference (we ship to silicon), or do AutoML (we don't pick architectures, we deploy yours).
AVR for the smallest sensors, Cortex-M for industrial control, Xtensa for connected devices, Linux-class for vision and audio. Edge Impulse covers some of these. STM32Cube.AI covers one. Darml covers them all — through a single uniform pipeline.
No notebooks. No configuration files. No vendor SDKs. The CLI takes your model file and your target board, tells you whether it'll fit, and hands you something flashable.
.tflite, .onnx, or .pkl — Darml auto-detects the format.
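For the curious, format sniffing can be done from file signatures rather than extensions. A minimal sketch in Python (not Darml's actual detector): TFLite flatbuffers carry a "TFL3" identifier at byte offset 4, pickle streams from protocol 2 onward open with the 0x80 PROTO opcode, and ONNX is a bare protobuf with no fixed magic.

    from pathlib import Path

    def sniff_model_format(path: str) -> str:
        head = Path(path).read_bytes()[:16]
        if head[4:8] == b"TFL3":        # flatbuffer file identifier used by TFLite
            return "tflite"
        if head[:1] == b"\x80":         # pickle PROTO opcode (protocol >= 2)
            return "pickle"
        if path.endswith(".onnx"):      # protobuf has no magic; fall back to extension
            return "onnx"
        raise ValueError(f"unrecognized model file: {path}")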
11 supported targets across 5 hardware tiers. darml targets lists them all.
Will it fit in flash? Will it fit in RAM? You find out before the toolchain spins up.
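Roughly what a pre-flight fit check looks like, as a sketch: the target table and overhead figure below are illustrative stand-ins, not Darml's real size estimator or board database.

    import os

    # Example limits only (flash, RAM, in bytes); hypothetical stand-in
    # for the real target database.
    TARGET_LIMITS = {
        "avr-mega328": (32 * 1024, 2 * 1024),
        "stm32f4":     (512 * 1024, 128 * 1024),
    }

    def preflight(model_path: str, target: str, overhead: int = 16 * 1024) -> bool:
        # Weights land in flash; `overhead` is an assumed runtime/glue budget.
        # RAM needs a tensor-arena estimate that a file-size check can't give,
        # so this sketch covers flash only.
        flash, ram = TARGET_LIMITS[target]
        need = os.path.getsize(model_path) + overhead
        print(f"{target}: ~{need} B of {flash} B flash "
              f"-> {'fits' if need <= flash else 'does not fit'}")
        return need <= flash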
firmware.bin for MCUs, .tar.gz for Linux boards. Plus a manifest with target-specific flash instructions.
darml flash <artifact.zip> --port … — auto-detects the right tool: esptool, STM32_Programmer_CLI, avrdude, ssh+docker.
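Under the hood that's a dispatch on the board family, something like the sketch below. The prefixes are assumptions for illustration; the real mapping lives in the source tree.

    # Hypothetical target prefixes, for illustration only.
    FLASHERS = {
        "esp":    "esptool",               # Xtensa connected boards
        "stm32":  "STM32_Programmer_CLI",  # ST Cortex-M boards
        "avr":    "avrdude",               # AVR boards
        "jetson": "ssh+docker",            # Linux-class boards
        "rpi":    "ssh+docker",
    }

    def pick_flasher(target: str) -> str:
        for prefix, tool in FLASHERS.items():
            if target.startswith(prefix):
                return tool
        raise KeyError(f"no flasher known for target {target!r}")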
Every release runs through a 5-model × 11-target acceptance matrix (55 parse + size-check combinations) plus end-to-end firmware builds. These are the actual outputs from the latest passing run.
Plus 73 passing unit tests covering parser, size estimator, pipeline, build cache, license validation, and inference roundtrip (FP32 ONNX vs INT8 TFLite, argmax-stable on ≥ 80% of samples).
Run the same matrix yourself: ./scripts/pre_publish.sh.
Free to start. Self-host the same code your customers are running.
MIT — fully open source. Use it for anything, commercial or not.
We host the build farm. No PlatformIO, no toolchain setup, no maintenance.
Air-gapped, on-prem, offline: run inside your network. For factory floors, secure dev environments, on-prem CI.
Early-access pricing, locked in when you renew. Industrial buyers get a real PO and a one-page security questionnaire response on request.
Dedicated build farm, signed firmware, SOC 2, fleet-management add-ons.
If your question isn't here, hello@darml.dev goes straight to the maintainer.
Edge Impulse is end-to-end (data + training + deployment) but biased toward their cloud and a curated subset of boards. STM32Cube.AI deploys to STM32 only. TFLite-Micro is a runtime, not a build tool — you wire the toolchain yourself. Darml is the missing piece: bring-your-own-model, span every reasonable target, one CLI. If you've already trained your model and don't want to live inside any vendor's ecosystem, that's the gap Darml fills.
We picked one board per performance tier — Cortex-M4 (stm32f4), Cortex-M7 (stm32h7), M55 + NPU (stm32n6); Arduino Uno-class (avr-mega328) and Mega-class (avr-mega2560) — rather than enumerate every variant. The full STM32 family has hundreds of chips (F0 / F1 / F3 / G0 / G4 / L0 / L4 / L5 / U5 / WL / WB / MP1 / N6 …) that differ in core, FPU presence, peripherals, and memory layout, so each one needs a verified runtime template and CI build, not just a board-ID change. We add on demand: open an issue with chip + dev-board name and we'll prioritize.
Yes. The output is a standard PlatformIO project (for MCU targets) or a Dockerfile + Python script (for Linux targets). No precompiled blobs, no obfuscation. The C/C++ glue is generated from templates you can read in the source tree (darml/infrastructure/templates/). Inspect, diff, fork as you want.
On the free Core tier, no — every build runs locally. On Pro Cloud, your model is uploaded to our build farm (EU/Frankfurt), held only for the duration of the build, and deleted when the artifact is downloaded. On Pro Self-Hosted, the entire stack runs in your network; nothing ever leaves.
We do post-training quantization (PTQ) only — no quantization-aware training, no fine-tuning, no knowledge distillation. Expect 1–3 % accuracy drop for dense networks, 3–8 % for typical CNNs, 5–15 % for larger CNNs without per-channel calibration. The verified-builds matrix above includes an inference roundtrip test (FP32 vs INT8) that asserts argmax-stable on ≥ 80 % of samples — if your model deviates more than that, the test fails loudly. Bring your own calibration data via --calibration-data for tighter results.
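The assertion itself is simple. A sketch of that kind of check, with logits_fp32 and logits_int8 standing in for whatever your two inference paths produce over the same samples:

    import numpy as np

    def argmax_stable(logits_fp32: np.ndarray, logits_int8: np.ndarray,
                      threshold: float = 0.80) -> bool:
        # Both arrays: (num_samples, num_classes) outputs over the same inputs.
        agree = (logits_fp32.argmax(axis=1) == logits_int8.argmax(axis=1)).mean()
        print(f"argmax agreement: {agree:.1%} (threshold {threshold:.0%})")
        return bool(agree >= threshold)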
Yes. Darml auto-detects already-quantized .tflite and .onnx (QDQ format) inputs and skips its own quantize step — you'll see a "Model is already quantized; skipping." warning, then convert / compile / package run normally. If you've done QAT or your own PTQ with real calibration data in your training pipeline, hand us the result and we deploy it as-is. The free Core tier is enough to ship if you quantize upstream.
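Detection like that can be read off the model files themselves. A sketch under assumed conditions (QDQ nodes in the ONNX graph, int8 input tensors in a full-integer TFLite model), not Darml's actual check:

    import numpy as np

    def onnx_is_quantized(path: str) -> bool:
        import onnx
        graph = onnx.load(path).graph       # QDQ graphs carry Q/DQ nodes
        return any(n.op_type in ("QuantizeLinear", "DequantizeLinear")
                   for n in graph.node)

    def tflite_is_quantized(path: str) -> bool:
        import tensorflow as tf             # full-integer models expose int8 inputs
        details = tf.lite.Interpreter(model_path=path).get_input_details()
        return any(d["dtype"] == np.int8 for d in details)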
The runtime libraries we wrap (TFLite Micro, emlearn) have been used in regulated environments, but Darml itself is not certified. For functional-safety pipelines, the path is: use Pro Self-Hosted (for traceability), pin a Darml release, and keep the generated code under your normal change-control process. Talk to hello@darml.dev if you need a signed reproducibility statement.
A counter in ~/.darml/counter increments when a build hits the compile step. Failed pre-flight checks (won't-fit) don't count. Rolling 24-hour window. The cap exists to keep the free tier sustainable, not to gate features — every target, every model format, every architecture is available on Core.
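In sketch form, a rolling-window cap works like this. The JSON-list-of-timestamps format below is an assumption for illustration; the actual ~/.darml/counter format may differ.

    import json, time
    from pathlib import Path

    COUNTER = Path.home() / ".darml" / "counter"
    WINDOW = 24 * 60 * 60                   # rolling 24-hour window

    def record_build(cap: int) -> bool:
        # Call this only when a build reaches the compile step, so failed
        # pre-flight (won't-fit) checks never count against the cap.
        now = time.time()
        stamps = json.loads(COUNTER.read_text()) if COUNTER.exists() else []
        stamps = [t for t in stamps if now - t < WINDOW]
        if len(stamps) >= cap:
            return False                    # over the cap for this window
        stamps.append(now)
        COUNTER.parent.mkdir(parents=True, exist_ok=True)
        COUNTER.write_text(json.dumps(stamps))
        return True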
Core is genuinely free software — fork it, ship it, sell derivatives, no strings. Pro contains the parts that take real ongoing work to maintain (auto-quantization across formats, ONNX→TFLite conversion, hosted build farm, license signing). Selling Pro is what funds Core. Standard open-core.
Core stays MIT — fork it, run it forever. Pro Self-Hosted licenses are HMAC-signed and validated locally; they keep working as long as your existing key hasn't expired. The license verification code is in the public Pro client, so you can audit it. We don't phone home.
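For a feel of what local HMAC validation involves, here's a sketch under assumed formats (a "payload.signature" license string carrying an expires_at field, both hypothetical); audit the public Pro client for the real scheme.

    import base64, hashlib, hmac, json, time

    def license_valid(license_str: str, key: bytes) -> bool:
        payload_b64, sig_hex = license_str.rsplit(".", 1)
        expected = hmac.new(key, payload_b64.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, sig_hex):  # constant-time comparison
            return False                                # signature doesn't match
        payload = json.loads(base64.urlsafe_b64decode(payload_b64))
        return payload["expires_at"] > time.time()      # key not yet expired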