# Models Reference
Arian Amiramjadi edited this page Dec 24, 2025
All models are defined as constants in `ai/models.go`:

```go
// OpenAI
ai.ModelGPT52         // "openai/gpt-5.2"
ai.ModelGPT52Pro      // "openai/gpt-5.2-pro"
ai.ModelGPT51         // "openai/gpt-5.1"
ai.ModelGPT5          // alias of ModelGPT52 (back-compat)
ai.ModelGPT5Mini      // "openai/gpt-5-mini"
ai.ModelGPT5Nano      // "openai/gpt-5-nano"
ai.ModelGPT51Codex    // "openai/gpt-5.1-codex"
ai.ModelGPT51CodexMax // "openai/gpt-5.1-codex-max"
ai.ModelGPT5Codex     // alias of ModelGPT51CodexMax (back-compat)
ai.ModelGPT5CodexBase // "openai/gpt-5-codex"
ai.ModelGPT41         // "openai/gpt-4.1"
ai.ModelGPT4o         // "openai/gpt-4o"
ai.ModelGPT4oMini     // "openai/gpt-4o-mini"
ai.ModelO3            // "openai/o3"
ai.ModelO4Mini        // "openai/o4-mini"
ai.ModelO1            // "openai/o1"
ai.ModelO1Mini        // "openai/o1-mini"

// Anthropic
ai.ModelClaudeOpus   // "anthropic/claude-opus-4.5"
ai.ModelClaudeSonnet // "anthropic/claude-sonnet-4.5"
ai.ModelClaudeHaiku  // "anthropic/claude-haiku-4.5"

// Google
// Gemini 3 (preview)
ai.ModelGemini3Pro   // "google/gemini-3-pro-preview"
ai.ModelGemini3Flash // "google/gemini-3-flash-preview"
// Gemini 2.5 (stable)
ai.ModelGemini25Pro       // "google/gemini-2.5-pro"
ai.ModelGemini25Flash     // "google/gemini-2.5-flash"
ai.ModelGemini25FlashLite // "google/gemini-2.5-flash-lite"
// Gemini 2.0 (OpenRouter uses -001 model IDs)
ai.ModelGemini2Flash     // "google/gemini-2.0-flash-001"
ai.ModelGemini2FlashLite // "google/gemini-2.0-flash-lite-001"
// Back-compat alias (maps to Gemini 2.5 Pro)
ai.ModelGemini2Pro // alias of ModelGemini25Pro

// xAI
ai.ModelGrok41Fast // "x-ai/grok-4.1-fast"
ai.ModelGrok3      // "x-ai/grok-3"
ai.ModelGrok3Mini  // "x-ai/grok-3-mini"

// Alibaba
ai.ModelQwen3Next // "qwen/qwen3-next"
ai.ModelQwen3     // "qwen/qwen-3-235b"

// Meta
ai.ModelLlama4 // "meta-llama/llama-4-maverick"

// Mistral
ai.ModelMistralLarge // "mistralai/mistral-large"
```

Convenience functions that return a `*Builder`:
| Function | Model | Notes |
|---|---|---|
| `ai.GPT5()` | `openai/gpt-5.2` | General purpose, 400K context |
| `ai.GPT5Codex()` | `openai/gpt-5.1-codex-max` | Most intelligent agentic coding model (Codex) |
| `ai.GPT4o()` | `openai/gpt-4o` | Multimodal |
| `ai.GPT4oMini()` | `openai/gpt-4o-mini` | Fast & cheap |
| `ai.O1()` | `openai/o1` | Reasoning model |
| `ai.Claude()` | `anthropic/claude-opus-4.5` | Top coding & safety |
| `ai.ClaudeSonnet()` | `anthropic/claude-sonnet-4.5` | Best balance of intelligence/speed/cost |
| `ai.ClaudeHaiku()` | `anthropic/claude-haiku-4.5` | Fastest Claude (near-frontier) |
| `ai.GeminiPro()` | `google/gemini-3-pro-preview` | Gemini 3 Pro (preview) |
| `ai.Gemini()` | `google/gemini-3-flash-preview` | Gemini 3 Flash (preview) |
| `ai.Gemini25Pro()` | `google/gemini-2.5-pro` | Gemini 2.5 Pro (stable) |
| `ai.Gemini25Flash()` | `google/gemini-2.5-flash` | Gemini 2.5 Flash (stable) |
| `ai.Gemini25FlashLite()` | `google/gemini-2.5-flash-lite` | Gemini 2.5 Flash-Lite (stable) |
| `ai.Gemini2Flash()` | `google/gemini-2.0-flash-001` | Gemini 2.0 Flash (OpenRouter -001) |
| `ai.Gemini2FlashLite()` | `google/gemini-2.0-flash-lite-001` | Gemini 2.0 Flash-Lite (OpenRouter -001) |
| `ai.GrokFast()` | `x-ai/grok-4.1-fast` | 2M context, optimized |
| `ai.Grok()` | `x-ai/grok-3` | Real-time knowledge |
| `ai.Qwen()` | `qwen/qwen3-next` | Long context specialist |
| `ai.Llama()` | `meta-llama/llama-4-maverick` | Open weights |
| `ai.Mistral()` | `mistralai/mistral-large` | European AI |
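The constants and the convenience functions are two views of the same IDs: each shortcut simply seeds a builder with one of the string constants. Here is a minimal, self-contained sketch of that relationship; the `Builder`, `Use`, and `ClaudeSonnet` definitions below are hypothetical stand-ins, not the library's actual source:

```go
package main

import "fmt"

// Model constants are plain strings (hypothetical re-declaration).
const ModelClaudeSonnet = "anthropic/claude-sonnet-4.5"

// Builder stands in for the library's *Builder type.
type Builder struct{ Model string }

// Use creates a builder for an arbitrary model ID.
func Use(model string) *Builder { return &Builder{Model: model} }

// ClaudeSonnet mirrors a shortcut like ai.ClaudeSonnet():
// a thin wrapper over Use with the matching constant.
func ClaudeSonnet() *Builder { return Use(ModelClaudeSonnet) }

func main() {
	fmt.Println(ClaudeSonnet().Model) // anthropic/claude-sonnet-4.5
	fmt.Println(Use("openai/gpt-4-turbo").Model)
}
```

Because the shortcuts are only thin wrappers, anything expressible with a shortcut is equally expressible by passing the constant (or a raw ID string) to `ai.Use`.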
All OpenAI model constants in `models.go` also have matching shortcut functions in `ai.go`:

```go
ai.GPT52()
ai.GPT52Pro()
ai.GPT51()
ai.GPT5Mini()
ai.GPT5Nano()
ai.GPT51Codex()
ai.GPT51CodexMax()
ai.GPT5CodexBase()
ai.GPT5ChatLatest()
ai.ChatGPT4oLatest()
ai.GPT41()
ai.O1Mini()
ai.O1Pro()
ai.O1Preview()
ai.O3()
ai.O3Mini()
ai.O3Pro()
ai.O3DeepResearch()
ai.O4Mini()
ai.O4MiniDeepResearch()
ai.GPTRealtime()
ai.GPTRealtimeMini()
ai.GPTAudio()
ai.GPTAudioMini()
ai.GPT4oMiniTTS()
ai.GPT4oTranscribe()
ai.GPT4oMiniTranscribe()
ai.GPTImage15()
ai.GPTImage1()
ai.GPTImage1Mini()
ai.ChatGPTImageLatest()
ai.GPTOSS120B()
ai.GPTOSS20B()
ai.Sora2()
ai.Sora2Pro()
```

For models not in the shortcuts:
```go
// Use any OpenRouter model ID
ai.Use("openai/gpt-4-turbo").Ask("Hello")

// If you're using the OpenRouter provider, bare OpenAI model IDs also work:
ai.Use("gpt-5-mini").Ask("Hello") // normalized to openai/gpt-5-mini

// Or use Model() on an existing builder
ai.New(ai.Model("anthropic/claude-3-opus")).Ask("Hello")
```

Get metadata about a model:
```go
info := ai.Models[ai.ModelGPT5]
fmt.Println(info.Name)        // "GPT-5.2"
fmt.Println(info.Provider)    // "OpenAI"
fmt.Println(info.Description) // "General purpose, 400K context"
```

Switch models on an existing builder:

```go
builder := ai.GPT5().System("You are helpful")

// Switch to Claude
builder.Model(ai.ModelClaudeOpus).Ask("Hello")

// Or by string
builder.UseModel("anthropic/claude-sonnet-4.5").Ask("Hello")
```
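The metadata lookup implies a registry keyed by model ID. The sketch below is a self-contained stand-in for what `ai.Models` might look like, with the field names taken from the snippet above; the map contents here are illustrative entries copied from this page, not the library's full table:

```go
package main

import "fmt"

// ModelInfo mirrors the fields read in the metadata example above.
type ModelInfo struct {
	Name        string
	Provider    string
	Description string
}

// Models is a hypothetical stand-in for ai.Models, keyed by model ID.
var Models = map[string]ModelInfo{
	"openai/gpt-5.2": {
		Name:        "GPT-5.2",
		Provider:    "OpenAI",
		Description: "General purpose, 400K context",
	},
	"anthropic/claude-sonnet-4.5": {
		Name:        "Claude Sonnet",
		Provider:    "Anthropic",
		Description: "Best balance of intelligence/speed/cost",
	},
}

func main() {
	// Enumerate every registered model with its provider.
	for id, info := range Models {
		fmt.Printf("%-28s %s\n", id, info.Provider)
	}
}
```

A plain `map[string]ModelInfo` makes any metadata query a single lookup, and ranging over it is enough to build a model picker or a provider filter.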