VisionSathi
Offline-first AI assistant for blind users. On-device Moondream 3 (2B params) runs without internet.
About the Project
VisionSathi ('Vision Friend' in Hindi) is an offline-first visual assistant for blind and visually impaired users. It runs a 2B-parameter Moondream 3 vision-language model directly on mobile devices via ONNX Runtime, enabling scene description, text reading, and obstacle detection without an internet connection. The interface is designed accessibility-first and meets WCAG AAA.
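A minimal sketch of what the on-device path could look like with the onnxruntime-react-native bindings; the model path, input name, and tensor shape below are illustrative assumptions, not the project's actual code.

```typescript
import { InferenceSession, Tensor } from 'onnxruntime-react-native';

// Hypothetical local path; the real app would ship quantized weights on-device.
const MODEL_PATH = 'file:///data/models/moondream-2b.onnx';

let session: InferenceSession | null = null;

// Create the session once; loading a 2B-parameter model is expensive.
async function getSession(): Promise<InferenceSession> {
  if (session === null) {
    session = await InferenceSession.create(MODEL_PATH);
  }
  return session;
}

// One forward pass over a preprocessed camera frame. The input name
// ('image') and the 378x378 shape are assumptions; a full VLM pipeline
// also needs a tokenizer and an autoregressive decoding loop.
async function runVisionEncoder(pixels: Float32Array): Promise<Tensor> {
  const s = await getSession();
  const image = new Tensor('float32', pixels, [1, 3, 378, 378]);
  const outputs = await s.run({ image });
  return outputs[s.outputNames[0]];
}
```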
Key Features
- On-device inference with the 2B-parameter Moondream 3 model via ONNX Runtime
- WCAG AAA compliant, with a 7:1 contrast ratio and 64px touch targets
- Multiple modes: Quick Tap (scene), Conversation, OCR, Navigation Assist
- Works completely offline, with no internet dependency
- Full VoiceOver (iOS) and TalkBack (Android) support
- Cloud fallback after a 5s on-device timeout for reliability (sketched below)
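The fallback mentioned above can be expressed as a race between local inference and a 5-second timer; a minimal sketch, assuming a hypothetical cloud endpoint on the FastAPI backend.

```typescript
const LOCAL_TIMEOUT_MS = 5_000;
const CLOUD_URL = 'https://example.com/api/describe'; // placeholder endpoint

function timeout(ms: number): Promise<never> {
  return new Promise((_, reject) =>
    setTimeout(() => reject(new Error('on-device timeout')), ms),
  );
}

// Prefer on-device inference (private, free, offline); fall back to the
// cloud only if the local model has not answered within the timeout.
async function describe(
  frame: Blob,
  runLocal: () => Promise<string>,
): Promise<string> {
  try {
    return await Promise.race([runLocal(), timeout(LOCAL_TIMEOUT_MS)]);
  } catch {
    // A cloud fallback only helps when a network is actually reachable.
    const res = await fetch(CLOUD_URL, { method: 'POST', body: frame });
    if (!res.ok) throw new Error(`cloud fallback failed: ${res.status}`);
    const { description } = (await res.json()) as { description: string };
    return description;
  }
}
```

Racing the local model against a timer means the cloud is never consulted when the device answers in time, which preserves privacy and offline behavior by default.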
Impact
Brings AI-powered visual assistance to people who cannot see, including those without reliable internet access, and demonstrates that on-device ML inference can serve underserved populations.
Tech Stack
React Native, Expo, ONNX Runtime, Moondream 3, FastAPI, Zustand
Metrics
- WCAG AAA compliant
- 7:1 contrast ratio
- On-device 2B VLM
- Fully offline
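As a rough illustration of how these targets map to code, a sketch of an accessible React Native button; the component name, label text, and colors are hypothetical.

```tsx
import React from 'react';
import { Pressable, StyleSheet, Text } from 'react-native';

// Hypothetical action button meeting the stated targets: a 64px minimum
// touch area and a text/background pair well above the 7:1 AAA contrast bar.
export function QuickTapButton({ onPress }: { onPress: () => void }) {
  return (
    <Pressable
      onPress={onPress}
      accessibilityRole="button"
      // Announced by VoiceOver/TalkBack in place of the visual label.
      accessibilityLabel="Describe the scene in front of you"
      style={styles.button}
    >
      <Text style={styles.label}>Quick Tap</Text>
    </Pressable>
  );
}

const styles = StyleSheet.create({
  button: {
    minWidth: 64,
    minHeight: 64, // 64px touch target
    alignItems: 'center',
    justifyContent: 'center',
    backgroundColor: '#1A1A1A', // with white text: roughly 17:1 contrast
    borderRadius: 12,
  },
  label: { color: '#FFFFFF', fontSize: 18 },
});
```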
Interested in this project?
Let's discuss how I can build something similar for you.