Making the AI write with you, not for you.
An AI-powered writing assistant that transforms how you create structured documents. This application combines intelligent content generation with intuitive document management, enabling writers to craft professional documents with contextually-aware AI assistance that understands your style, audience, and objectives.
Built on the TalkPipe framework, this tool helps you:
- Break writer's block: Generate initial drafts and ideas for any section
- Maintain consistency: AI understands your document's context, style, and tone across all sections
- Iterate quickly: Multiple generation modes (rewrite, improve, proofread, ideas) let you refine content efficiently
- Stay organized: Structure documents into sections with main points and supporting text
- Work offline: Use local LLMs via Ollama or cloud-based models via OpenAI, Anthropic, and more
- Multi-User Support: JWT-based authentication with per-user document isolation
- Structured Document Creation: Organize your writing into sections with main points and user text
- AI-Powered Generation: Generate contextually-aware paragraph content using advanced language models
- Multiple Generation Modes:
- Rewrite: Complete rewrite with new ideas and improved clarity
- Improve: Polish existing text while maintaining structure
- Proofread: Fix grammar and spelling errors only
- Ideas: Get specific suggestions for enhancement
- Real-time Editing: Dynamic web interface for seamless writing and editing
- Document Management: Save, load, and manage multiple documents with automatic snapshots
- User Preferences: Per-user AI settings, writing style, and environment variables
- Customizable Metadata: Configure writing style, tone, audience, and generation parameters
- Flexible AI Backend: Support for OpenAI (GPT-4, GPT-4o), Anthropic (Claude 3.5 Sonnet, Claude 3 Opus), and Ollama (llama3, mistral, etc.)
- Database Storage: SQLite database with configurable location for easy backup and deployment
- Async Processing: Efficient queuing system for AI generation requests
CI publishes images to GitHub Container Registry:
| Image | `ghcr.io/sandialabs/talkpipe-writing-assistant` |
|---|---|
| Platforms | Linux amd64 and arm64 (release builds). The app runs in a Linux container; on Windows and macOS use Docker Desktop or Podman (they run Linux containers under the hood). |
Tags (typical): `latest` → stable GitHub Release (not marked pre-release); `experimental` → pushes to the `develop` branch or a pre-release GitHub Release; branch names (e.g. `main`) and commit SHAs are also published. Check the package page for the exact tag after a workflow run.
Registry login: This package is public, so you can pull and run without logging in to GHCR. You only need `docker login ghcr.io` / `podman login ghcr.io` if the image is private, your organization requires it, or the pull fails with an authentication error (use a GitHub Personal Access Token with the `read:packages` scope as the password).
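If you do need to authenticate, the standard GHCR login flow looks like this (the `GH_TOKEN` variable name is just an example; any PAT with the `read:packages` scope works):

```shell
# Log in to GHCR with a GitHub Personal Access Token (read:packages scope)
echo "$GH_TOKEN" | docker login ghcr.io -u YOUR_GITHUB_USERNAME --password-stdin
```

Substitute `podman login` for `docker login` if you use Podman.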
- `pull` is optional. `docker run`/`podman run` pulls the image automatically if it is not already local (same on Windows). Use an explicit `docker pull`/`podman pull` only if you want to download separately. Typo note: the subcommand appears only once. Use `podman pull ghcr.io/...` or `podman run ...`, never `podman pull pull ...` (the second `pull` is treated as an image name and triggers errors about `docker.io/library/pull`).
- Run, persisting the database under `/app/data` (the image supplies defaults, including a JWT secret):

  ```shell
  docker run --rm -p 8001:8001 \
    -v wa_data:/app/data \
    ghcr.io/sandialabs/talkpipe-writing-assistant:latest
  ```

  Open http://localhost:8001 or http://127.0.0.1:8001 (use `http`, not `https`). Use `podman run` with the same flags if you use Podman. Volume syntax: `-v` is `host:container`; the path on the container side of the `:` must be absolute. Use `/app/data`, not `.` or a relative path.

  You do not have to pull first; `run` is enough (it will fetch the image if needed). Examples on one line:

  ```shell
  docker run --rm -p 8001:8001 -v wa_data:/app/data ghcr.io/sandialabs/talkpipe-writing-assistant:experimental
  podman run --rm -p 8001:8001 -v wa_data:/app/data ghcr.io/sandialabs/talkpipe-writing-assistant:experimental
  ```

  Optional: download the image ahead of time with `docker pull …` or `podman pull …` (only one `pull` in the command; see the typo note above). Start Docker Desktop or your Podman machine before running. Stop the container with Ctrl+C in that terminal.
- Confirm the app is reachable from the host (while the container is running):

  ```shell
  curl http://127.0.0.1:8001/
  ```

  If this fails, fix networking before blaming the browser. Check `docker ps` or `podman ps` and ensure the PORTS column shows something like `8001->8001` (or `0.0.0.0:8001->8001/tcp`).
- Bind the host port explicitly (helps some Windows / Podman setups):

  ```shell
  docker run --rm -p 127.0.0.1:8001:8001 -v wa_data:/app/data ghcr.io/sandialabs/talkpipe-writing-assistant:experimental
  ```

  Use `podman run` with the same `-p` and `-v` flags if you use Podman.
- Port already in use: map a different host port (here 8080):

  ```shell
  docker run --rm -p 127.0.0.1:8080:8001 -v wa_data:/app/data ghcr.io/sandialabs/talkpipe-writing-assistant:experimental
  ```

  Then open http://127.0.0.1:8080.
- Podman on Windows: if `curl` to `127.0.0.1` still fails, restart the VM (`podman machine stop` then `podman machine start`) or update Podman / Podman Desktop; older builds sometimes break `localhost` port forwarding from the host into the machine.
- Firewall or VPN: allow the container engine through Windows Defender Firewall (private networks) or briefly disconnect the VPN to test.
To build and run from a local clone with Compose (including dev reload), see Using Docker below.
- Python 3.11 or higher
- An AI backend: OpenAI, Anthropic, or Ollama (local)
```shell
pip install talkpipe-writing-assistant
```

After installation, you can start the application immediately:

```shell
writing-assistant
```

Then navigate to http://localhost:8001 in your browser. See the Quick Start section below for next steps.
```shell
# Standard installation
git clone https://github.com/sandialabs/talkpipe-writing-assistant.git
cd talkpipe-writing-assistant
pip install -e .
```

```shell
# Development installation (editable, with dev extras)
git clone https://github.com/sandialabs/talkpipe-writing-assistant.git
cd talkpipe-writing-assistant
pip install -e ".[dev]"
```

The repo includes `uv.lock` so CI and local installs can use the same resolved versions. Install uv, then:
```shell
git clone https://github.com/sandialabs/talkpipe-writing-assistant.git
cd talkpipe-writing-assistant
uv sync --frozen --extra dev
```

Run tests and tools via the project environment, for example `uv run pytest`, or activate the virtualenv (`.venv` on Unix: `source .venv/bin/activate`).

After changing dependencies in `pyproject.toml`, refresh the lockfile with `uv lock` and commit `uv.lock`. To bump versions, use `uv lock --upgrade` or `uv lock --upgrade-package <name>`.
Build and run from the repository (as opposed to the pre-built GHCR image above):
```shell
# Production deployment
docker-compose up talkpipe-writing-assistant

# Development with live reload
docker-compose --profile dev up talkpipe-writing-assistant-dev
```

TL;DR: After `pip install talkpipe-writing-assistant`, just run `writing-assistant` and open http://localhost:8001 in your browser!
After installing with pip, follow these steps to get started:
```shell
writing-assistant
```

The server will start on http://localhost:8001 and display:

```
Writing Assistant Server - Multi-User Edition
Access your writing assistant at: http://localhost:8001/
Register a new account at: http://localhost:8001/register
Login at: http://localhost:8001/login
API documentation: http://localhost:8001/docs
Database: /home/user/.writing_assistant/writing_assistant.db
```
- Open your browser and navigate to `http://localhost:8001/register`
- Enter your email address and password
- Click "Register" to create your account
You need to configure one of the supported AI backends:
Option A: OpenAI (Cloud)
- Get an API key from OpenAI Platform
- Set your API key: `export OPENAI_API_KEY="sk-your-api-key-here"`
- In the web interface: Settings → AI Settings → set Source to `openai` and Model to your model of choice.
Option B: Anthropic (Cloud)
- Get an API key from Anthropic Console
- Set your API key: `export ANTHROPIC_API_KEY="sk-ant-your-api-key-here"`
- In the web interface: Settings → AI Settings → set Source to `anthropic` and Model to your model of choice.
Option C: Ollama (Local, Free)
- Install Ollama from ollama.com
- Pull a model: `ollama pull [model name]`
- Start Ollama: `ollama serve`
- In the web interface: Settings → AI Settings → set Source to `ollama` and Model to `[model name]`
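As a concrete example, assuming the `llama3.1:8b` model (a hypothetical choice; any tag from the Ollama library works the same way):

```shell
ollama pull llama3.1:8b
ollama serve
```

Then set Model to `llama3.1:8b` in the AI Settings.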
- Click "Create New Document"
- Add a title and sections, leaving a blank line between sections.
- Click "Generate" on any section to create AI-assisted content
- Save your work with the "Save Document" button
That's it! You're ready to use the AI writing assistant.
```shell
# Default: http://localhost:8001
writing-assistant

# Custom port
writing-assistant --port 8080

# Custom host and port
writing-assistant --host 0.0.0.0 --port 8080

# Enable auto-reload for development
writing-assistant --reload

# Custom database location
writing-assistant --db-path /path/to/database.db

# Disable custom environment variables from UI (security)
writing-assistant --disable-custom-env-vars

# Initialize database without starting server
writing-assistant --init-db

# You can also use environment variables
WRITING_ASSISTANT_PORT=8080 writing-assistant
WRITING_ASSISTANT_RELOAD=true writing-assistant
WRITING_ASSISTANT_DB_PATH=/path/to/database.db writing-assistant
```

When the server starts, it will display:
- The URL to access the application
- Registration and login URLs
- API documentation URL
- Database location
Authentication: The application uses JWT-based multi-user authentication with FastAPI Users. Each user has their own account with secure password storage. New users can register through the web interface at `/register`, and existing users log in at `/login`.
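For programmatic access, the login flow issues a JWT that you send as a bearer token on subsequent requests. A hedged sketch of that flow (the `/auth/jwt/login` path is FastAPI Users' default and an assumption here; confirm the actual routes at http://localhost:8001/docs):

```shell
# Obtain a token (FastAPI Users' default JWT login route; verify the path at /docs)
curl -X POST http://localhost:8001/auth/jwt/login \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "username=you@example.com&password=your-password"

# Use the returned access_token on later requests
curl http://localhost:8001/docs -H "Authorization: Bearer <access_token>"
```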
Configure the application with these environment variables:
| Variable | Description | Default |
|---|---|---|
| `WRITING_ASSISTANT_HOST` | Server host address | `localhost` |
| `WRITING_ASSISTANT_PORT` | Server port number | `8001` |
| `WRITING_ASSISTANT_RELOAD` | Enable auto-reload (development) | `false` |
| `WRITING_ASSISTANT_DB_PATH` | Database file location | `~/.writing_assistant/writing_assistant.db` |
| `WRITING_ASSISTANT_SECRET` | JWT secret key for authentication | Auto-generated (change in production) |
| `TALKPIPE_OLLAMA_SERVER_URL` | Ollama server URL for local models | `http://localhost:11434` |
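The usual precedence for CLI tools of this kind is: explicit flag, then environment variable, then built-in default. A minimal sketch of that lookup for the port setting (illustrative only, not this project's actual code):

```python
import os

def resolve_port(cli_port=None):
    """Resolve the server port: CLI flag > env var > default (illustrative sketch)."""
    if cli_port is not None:
        return int(cli_port)
    return int(os.environ.get("WRITING_ASSISTANT_PORT", "8001"))

os.environ.pop("WRITING_ASSISTANT_PORT", None)
print(resolve_port())        # default applies
os.environ["WRITING_ASSISTANT_PORT"] = "8080"
print(resolve_port())        # env var applies
print(resolve_port(9000))    # flag wins
```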
Security Options:
- `--disable-custom-env-vars`: prevents users from configuring environment variables through the browser interface
- Use this for shared deployments or when you want centralized credential management
- Environment variables must be set at the server level (via shell environment)
- The Environment Variables section will be hidden in the UI
Configure document metadata:
- AI Source: `openai`, `anthropic`, or `ollama`
- Model: e.g., `gpt-4`, `claude-3-5-sonnet-20241022`, or `llama3.1:8b`
- Writing style: formal, casual, technical, etc.
- Target audience: general public, experts, students, etc.
- Tone: neutral, persuasive, informative, etc.
- Word limit: approximate words per paragraph
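As an illustration of what this metadata amounts to in code, here is a hypothetical sketch (the real `Metadata` model lives in `core/definitions.py` and may differ in names and fields):

```python
from dataclasses import dataclass

@dataclass
class Metadata:
    """Hypothetical sketch of per-document generation settings."""
    ai_source: str = "ollama"            # "openai", "anthropic", or "ollama"
    model: str = "llama3.1:8b"           # backend-specific model name
    writing_style: str = "formal"
    target_audience: str = "general public"
    tone: str = "neutral"
    word_limit: int = 150                # approximate words per paragraph

# Override only the fields you care about; the rest keep their defaults.
meta = Metadata(ai_source="openai", model="gpt-4", tone="persuasive")
```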
Documents are stored in an SQLite database with multi-user isolation:
Default Location: `~/.writing_assistant/writing_assistant.db`
Custom Location: Use `--db-path` or `WRITING_ASSISTANT_DB_PATH` to specify an alternative location
Features:
- Per-user document isolation (users only see their own documents)
- Automatic snapshot management (keeps 10 most recent versions)
- User-specific preferences (AI settings, writing style, etc.)
- Cascade deletion (removing a user deletes all their documents)
Backup: Simply copy the database file to create a backup. The database can be moved to a different location using the `--db-path` option.
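A minimal backup-and-restore sketch, assuming the default database path (stop the server first so the SQLite file is not mid-write; the backup directory is just an example):

```shell
# Back up the database file (with the server stopped)
mkdir -p ~/backups
cp ~/.writing_assistant/writing_assistant.db ~/backups/writing_assistant.db.bak

# Restore by pointing the server at the copy
writing-assistant --db-path ~/backups/writing_assistant.db.bak
```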
```
src/writing_assistant/
├── __init__.py          # Package initialization and version
├── core/                # Core business logic
│   ├── __init__.py
│   ├── callbacks.py     # AI text generation functionality
│   ├── definitions.py   # Data models (Metadata)
│   └── segments.py      # TalkPipe segment registration
└── app/                 # Web application
    ├── __init__.py
    ├── main.py          # FastAPI application and API endpoints
    ├── server.py        # Application entry point
    ├── static/          # CSS and JavaScript assets
    └── templates/       # Jinja2 HTML templates
```
- Metadata: Configuration for writing style, audience, tone, and AI settings
- Section: Individual document sections with async text generation and queuing
- Document: Complete document with sections, metadata, and snapshot management
- Callbacks: AI text generation using TalkPipe with context-aware prompting
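The Callbacks component assembles context-aware prompts from the document metadata and each section's main points. A rough sketch of that idea (illustrative only; the actual prompting and function names live in `core/callbacks.py` and may differ):

```python
def build_prompt(style, audience, tone, main_points, word_limit=150):
    """Assemble a generation prompt from metadata and a section's main points.

    Illustrative sketch; not the project's actual prompt template.
    """
    points = "\n".join(f"- {p}" for p in main_points)
    return (
        f"Write one paragraph of about {word_limit} words in a {style} style "
        f"for {audience}, with a {tone} tone, covering these points:\n{points}"
    )

prompt = build_prompt("formal", "experts", "neutral",
                      ["JWT-based auth", "SQLite storage"])
print(prompt)
```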
"Port already in use"
- Change the port: `writing-assistant --port 8080`
- Or kill the process using the port

"Cannot save document" or "Database error"
- Check write permissions to the database directory (default: `~/.writing_assistant/`)
- Ensure the directory exists: `mkdir -p ~/.writing_assistant`
- Try a different database location: `writing-assistant --db-path /tmp/test.db`
- Initialize the database manually: `writing-assistant --init-db`
"Authentication failed" or "Invalid credentials"
- Double-check your email and password
- Register a new account if you haven't already
- The database may have been reset - check the database location
"Cannot connect to database"
- Verify the database file exists and is not corrupted
- Check file permissions on the database file
- Try initializing a new database: `writing-assistant --db-path /tmp/new.db --init-db`
This project is licensed under the Apache License 2.0. See the LICENSE file for details.
Built with TalkPipe, a flexible framework for AI pipeline construction developed at Sandia National Laboratories.