Voice Assistant – Local AI Chat with Speech

A simple and privacy-friendly voice assistant that runs entirely on your local machine. It uses powerful open-source tools for speech recognition, text-to-speech, and AI conversation — with no cloud dependencies.

🧩 Features

  • 🎙️ Voice input using OpenAI Whisper
  • 🧠 Text generation using local LLMs via Ollama
  • 🔊 Text-to-speech with Coqui TTS
  • 🔐 100% local — no data leaves your machine
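The features above form a simple listen → think → speak loop. A minimal sketch of that loop is below; the helper functions are placeholders (the real `main.py` may be structured differently), with comments noting where the actual Whisper, Ollama, and Coqui TTS calls would go:

```python
# Sketch of one conversational turn. The three helpers are stand-ins
# for the real speech-recognition, LLM, and text-to-speech calls.

def transcribe(audio_path: str) -> str:
    # Placeholder: would call whisper.load_model(...).transcribe(audio_path)
    return "hello"

def generate_reply(prompt: str) -> str:
    # Placeholder: would call ollama.chat(model="gemma3", messages=[...])
    return f"You said: {prompt}"

def speak(text: str) -> None:
    # Placeholder: would synthesize audio with Coqui TTS and play it
    print(text)

def handle_turn(audio_path: str) -> str:
    """One turn of the assistant: audio in, spoken reply out."""
    user_text = transcribe(audio_path)
    reply = generate_reply(user_text)
    speak(reply)
    return reply
```

Because every stage runs locally, swapping out any one piece (a different Whisper model size, another Ollama model, a different TTS voice) only touches the corresponding helper.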

🚀 Quick Start

1. Install dependencies

Make sure Python 3.8+ is installed.

pip install -r requirements.txt

You’ll also need:

  • Ollama for local LLMs

  • ffmpeg and sox (check your system’s package manager)
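Since missing system tools tend to surface only mid-recording, it can help to check for them up front. A small sketch using Python's standard `shutil.which` (the function name `missing_tools` is just illustrative, not part of the project):

```python
import shutil

def missing_tools(tools):
    """Return the subset of command-line tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

# Warn at startup instead of failing during audio capture.
missing = missing_tools(["ffmpeg", "sox"])
if missing:
    print(f"Missing system dependencies: {', '.join(missing)}")
```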

2. Start Ollama with a model (choose any model you like from https://ollama.com/search)

ollama run gemma3

3. Activate the virtual environment (if you use one, ideally before installing dependencies)

venv\Scripts\activate          (Windows)
source venv/bin/activate       (macOS/Linux)

4. Run the assistant

python main.py
