The world is moving
on-device.

vl8 is an open-source on-device speech SDK with federated learning. Silicon-agnostic across NVIDIA, Qualcomm, Apple, Intel, and embedded ARM. Private beta with design partners.

An open-source speech SDK, silicon-agnostic by design.

On-device ASR

Whisper-class transcription at sub-100 ms latency on mobile SoCs, fully offline. No cloud dependency, no per-inference cost.

TTS + voice agents

Speech synthesis and low-latency conversational AI on-device. Real-time voice interaction without a round-trip to the cloud.

Federated adaptation

Models personalize per-user without uploading audio. Privacy-preserving by architecture, not policy.
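The core of this pattern is federated averaging: each device trains on its own audio locally and shares only a model update, which a server aggregates. A minimal illustrative sketch (generic FedAvg, not vl8's implementation; all names here are hypothetical):

```python
# Federated averaging sketch: only weight deltas -- never raw audio --
# leave the device. The server computes a sample-weighted average.

def federated_average(client_updates, client_sizes):
    """Weighted average of per-client weight deltas.

    client_updates: list of per-client parameter deltas (lists of floats)
    client_sizes:   local sample count per client, used as the weight
    """
    total = sum(client_sizes)
    n_params = len(client_updates[0])
    avg = [0.0] * n_params
    for update, size in zip(client_updates, client_sizes):
        weight = size / total
        for i, delta in enumerate(update):
            avg[i] += weight * delta
    return avg

# Three devices, each with a 2-parameter update and a local sample count.
updates = [[0.1, -0.2], [0.3, 0.0], [0.2, 0.4]]
sizes = [10, 30, 60]
print(federated_average(updates, sizes))  # ~ [0.22, 0.22]
```

Because the server only ever sees aggregated deltas, per-user audio stays on the device, which is what "privacy-preserving by architecture" refers to.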

Silicon-agnostic runtime

Same SDK across Jetson, Snapdragon, Apple Silicon, Intel, embedded ARM. Our benchmarks don't care which chip wins.

Private beta. Public code shipping soon.

Developers

Join the waitlist for SDK early access.

Request early access

Enterprise / Design Partners

Book a scoping call.

Schedule a call

Research Partnerships

Vision, ranking, and model optimization.

vl8.ai/research →