Docker + Ollama: decide local LLM architecture for self-hosted users #1590

@SebConejo

Description

Context

Manifest's Docker image is our primary self-hosted distribution. Many users will want to run local models via Ollama alongside Manifest. Today we have a docker compose --profile ollama up option that installs an empty Ollama container (no models), which isn't very useful.

We need to decide the right architecture for enabling local LLMs while keeping Manifest's scope clean.

Current state

  • docker compose --profile ollama up -d starts an Ollama container with zero models
  • If no Ollama is detected, the UI suggests installing it via the profile — but users still have to pull models manually afterwards
  • Ollama runs as a separate container on the same Docker network, Manifest connects via OLLAMA_HOST
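For reference, the current layout roughly corresponds to a compose file like the following (a sketch only — service and image names here are assumptions, not the actual file):

```yaml
# Illustrative sketch of the current state: Ollama behind a compose
# profile, Manifest reaching it over the shared Docker network.
services:
  manifest:
    image: mnfst/manifest            # assumed image name
    environment:
      # Manifest connects to the sibling container by service name
      - OLLAMA_HOST=http://ollama:11434

  ollama:
    image: ollama/ollama
    profiles: ["ollama"]             # only starts with --profile ollama
    ports:
      - "11434:11434"
```

With this layout, `docker compose --profile ollama up -d` brings up both containers, but the Ollama server starts with no models pulled.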

The question

What's the right approach for helping users run Manifest + Ollama locally? The goal: users should be able to easily use Manifest with local Ollama models, while we stay firmly within Manifest's scope (routing, not LLM provider management).

Options to consider (non-exhaustive):

  1. Keep the profile, add model pulling — --profile ollama installs Ollama and also pre-pulls a default model (e.g., llama3.2). More opinionated, but it works out of the box.

  2. Keep the profile empty, improve the UI — Keep installing Ollama empty, but improve the onboarding UX to guide users through pulling models (e.g., show a "run ollama pull llama3.2" command in the setup wizard).

  3. Remove the profile entirely, just document — Don't bundle Ollama at all. In the UI, when no local provider is detected, show a link to Ollama's Docker docs and explain how to add it to the same Docker network. Users who already have Ollama running just set OLLAMA_HOST.

  4. Something else? — e.g., a setup wizard step that asks "do you want local models?" and handles the docker-compose extension dynamically.
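As a point of comparison for option 1, the pre-pull could be a one-shot sidecar in the same profile (a sketch under assumptions — service names and the default model are illustrative, and `ollama pull` honors OLLAMA_HOST to target the server container):

```yaml
# Sketch of option 1: a short-lived container that pulls a default
# model into the running Ollama server, then exits.
services:
  ollama:
    image: ollama/ollama
    profiles: ["ollama"]

  ollama-pull:
    image: ollama/ollama
    profiles: ["ollama"]
    depends_on:
      - ollama
    environment:
      # Point the CLI at the sibling server rather than a local daemon
      - OLLAMA_HOST=http://ollama:11434
    # Runs once on "compose up", pulls the model, then exits
    entrypoint: ["ollama", "pull", "llama3.2"]
```

The trade-off is exactly the one named above: one more moving part in the compose file, in exchange for a working model on first boot.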

Constraints

  • Target users are technical (self-hosters running Docker)
  • We should stay as far from Ollama's scope as possible — Manifest routes, it doesn't manage LLM installations
  • The solution should work for users who already have Ollama running (not just fresh installs)
  • Keep it simple: fewer moving parts = fewer support issues
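For the "already have Ollama running" constraint, pointing Manifest at a host-level Ollama is a single environment variable — a sketch, assuming a Linux host and Docker's host-gateway mapping (the image name is an assumption):

```shell
# Assumes Ollama already listens on the host at the default port 11434.
# The --add-host mapping is needed on Linux for host.docker.internal.
docker run -d \
  --add-host host.docker.internal:host-gateway \
  -e OLLAMA_HOST=http://host.docker.internal:11434 \
  mnfst/manifest   # assumed image name
```

Any of the options above should leave this path working unchanged.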

Assigned to

@brunobuddy — please share your recommendation on which direction to take.
