Show HN: Ava – open-source AI voice assistant that runs in the browser

Hi HN, I built a voice assistant that runs entirely in the browser. No backend, no API calls; everything runs on your device. The goal was to see how far browser-based AI has come, and whether a full voice pipeline can work client-side with acceptable latency.

How it works:

- Speech to text: Whisper tiny.en via WebAssembly
- LLM: Qwen 2.5 0.5B running via a llama.cpp WASM port
- Text to speech: the browser's native SpeechSynthesis API

Responses stream in real time, and TTS starts speaking as soon as a sentence is generated instead of waiting for the full reply (rough sketch of the idea at the end of this post). After the first load it works completely offline; nothing leaves the device.

Why this matters:

- It shows what is now possible with modern browsers, small LLMs, and WASM.

Caveats:

- Requires Chrome or Edge 90+, because SharedArrayBuffer is needed for WASM threading (feature check sketched below)
- Around 380 MB initial download, cached after the first visit (caching sketch below)
- English only for now
- The 0.5B model is limited, but small enough to run locally

Tested on macOS and Linux desktop browsers. I could not get it working reliably on mobile yet due to memory and threading limits. Getting all of this running in the browser took far longer than expected because of many low-level WASM and browser issues.

Demo: https://ava.muthu.co
Source: https://ift.tt/3491stH

I would love feedback, especially from anyone experimenting with local AI or browser-based ML, and ideas for improving performance or mobile support.
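For anyone curious about the sentence-chunked TTS, here is a minimal TypeScript sketch of the idea. It assumes the LLM tokens arrive as an async iterable; the names are illustrative, not Ava's actual code.

    // Flush complete sentences to the browser's speech engine as they appear,
    // so speech starts before the full reply has been generated.
    function speak(sentence: string): void {
      window.speechSynthesis.speak(new SpeechSynthesisUtterance(sentence));
    }

    async function streamToSpeech(tokens: AsyncIterable<string>): Promise<void> {
      let buffer = "";
      for await (const token of tokens) {
        buffer += token;
        // Cut at sentence-final punctuation followed by whitespace.
        let match: RegExpMatchArray | null;
        while ((match = buffer.match(/^(.*?[.!?])\s+(.*)$/s)) !== null) {
          speak(match[1]);
          buffer = match[2];
        }
      }
      if (buffer.trim()) speak(buffer); // flush the trailing fragment
    }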
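On the SharedArrayBuffer caveat: WASM threads only work when the page is cross-origin isolated, which the server opts into with COOP/COEP headers. A small sketch of the runtime check (not necessarily how Ava does it):

    // SharedArrayBuffer is only exposed on cross-origin-isolated pages,
    // i.e. when the server sends:
    //   Cross-Origin-Opener-Policy: same-origin
    //   Cross-Origin-Embedder-Policy: require-corp
    function supportsWasmThreads(): boolean {
      return typeof SharedArrayBuffer !== "undefined" && self.crossOriginIsolated === true;
    }

    if (!supportsWasmThreads()) {
      console.warn("WASM threading unavailable: check browser support and COOP/COEP headers.");
    }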
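And on the offline/caching point, one common approach is the browser Cache API. This is just a sketch with placeholder file names, not Ava's actual asset layout:

    // Download a model file once, then serve it from the cache on later visits.
    const MODEL_CACHE = "ava-models-v1";

    async function fetchModel(url: string): Promise<ArrayBuffer> {
      const cache = await caches.open(MODEL_CACHE);
      let response = await cache.match(url);
      if (!response) {
        response = await fetch(url);            // first visit: network download
        await cache.put(url, response.clone()); // store a copy for offline use
      }
      return response.arrayBuffer();
    }

    // e.g. const weights = await fetchModel("/models/qwen2.5-0.5b-q4.gguf");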