Commit 9c4d49b (1 parent: 9718d21)

chore: update default LLM models

Updated the default model versions for OpenAI to gpt-5.4-mini and Gemini to gemini-3.1-flash-lite-preview across the codebase and documentation.

Signed-off-by: Chmouel Boudjnah <chmouel@redhat.com>

File tree: 9 files changed, +20 -19 lines

config/300-repositories.yaml

Lines changed: 2 additions & 2 deletions
@@ -398,8 +398,8 @@ spec:
     Model specifies which LLM model to use for this role (optional).
     You can specify any model supported by your provider.
     If not specified, provider-specific defaults are used:
-    - OpenAI: gpt-5-mini
-    - Gemini: gemini-2.5-flash-lite
+    - OpenAI: gpt-5.4-mini
+    - Gemini: gemini-3.1-flash-lite-preview
   type: string
 name:
   description: Name is a unique identifier for this analysis role

docs/content/docs/api/settings.md

Lines changed: 2 additions & 2 deletions
@@ -264,8 +264,8 @@ Defines the base prompt template that Pipelines-as-Code sends to the LLM.
 {{< param name="roles[].model" type="string" id="param-roles-model" >}}
 Specifies the LLM model for this role. If omitted, Pipelines-as-Code uses provider-specific defaults:

-- OpenAI: `gpt-5-mini`
-- Gemini: `gemini-2.5-flash-lite`
+- OpenAI: `gpt-5.4-mini`
+- Gemini: `gemini-3.1-flash-lite-preview`
 {{< /param >}}

 {{< param name="roles[].on_cel" type="string" id="param-roles-on-cel" >}}

docs/content/docs/guides/llm-analysis/_index.md

Lines changed: 4 additions & 3 deletions
@@ -26,8 +26,8 @@ Additional output destinations (`check-run` and `annotation`) and structured JSO

 Pipelines-as-Code supports two LLM providers:

-- **OpenAI** -- Default model: `gpt-5-mini`
-- **Google Gemini** -- Default model: `gemini-2.5-flash-lite`
+- **OpenAI** -- Default model: `gpt-5.4-mini`
+- **Google Gemini** -- Default model: `gemini-3.1-flash-lite-preview`

 You can specify any model your chosen provider supports. See [Model Selection]({{< relref "/docs/guides/llm-analysis/model-and-triggers#model-selection" >}}) for guidance on choosing the right model.

@@ -53,7 +53,7 @@ spec:
       key: "token"
   roles:
     - name: "failure-analysis"
-      model: "gpt-5-mini" # Optional: specify model (uses provider default if omitted)
+      model: "gpt-5.4-mini" # Optional: specify model (uses provider default if omitted)
       prompt: |
         You are a DevOps expert. Analyze this failed pipeline and:
         1. Identify the root cause

@@ -129,3 +129,4 @@ When you set `commit_content: true`, Pipelines-as-Code includes the following fi
 - Pipelines-as-Code **intentionally excludes email addresses** from the commit context to protect personally identifiable information (PII) when sending data to external LLM providers.
 - Fields appear only if your Git provider makes them available. Some providers supply limited information (for example, Bitbucket Cloud provides only the author name).
 - Author and committer may be the same person or different (for example, when using `git commit --amend` or rebasing).
+asing).

docs/content/docs/guides/llm-analysis/model-and-triggers.md

Lines changed: 3 additions & 3 deletions
@@ -9,8 +9,8 @@ This page explains how to choose the right LLM model for each analysis role and

 Each analysis role can specify a different model. Choosing the right model lets you balance cost against analysis depth. If you do not specify a model, Pipelines-as-Code uses provider-specific defaults:

-- **OpenAI**: `gpt-5-mini`
-- **Gemini**: `gemini-2.5-flash-lite`
+- **OpenAI**: `gpt-5.4-mini`
+- **Gemini**: `gemini-3.1-flash-lite-preview`

 ### Specifying Models

@@ -37,7 +37,7 @@ settings:
         model: "gpt-5"
         prompt: "Analyze security failures..."

-      # Use default model (gpt-5-mini) for general analysis
+      # Use default model (gpt-5.4-mini) for general analysis
       - name: "general-failure"
         # No model specified - uses provider default
         prompt: "Analyze this failure..."

pkg/apis/pipelinesascode/v1alpha1/types_llm.go

Lines changed: 2 additions & 2 deletions
@@ -63,8 +63,8 @@ type AnalysisRole struct {
 	// Model specifies which LLM model to use for this role (optional).
 	// You can specify any model supported by your provider.
 	// If not specified, provider-specific defaults are used:
-	// - OpenAI: gpt-5-mini
-	// - Gemini: gemini-2.5-flash-lite
+	// - OpenAI: gpt-5.4-mini
+	// - Gemini: gemini-3.1-flash-lite-preview
 	// +optional
 	Model string `json:"model,omitempty"`

pkg/llm/providers/gemini/client.go

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ import (

 const (
 	defaultBaseURL = "https://generativelanguage.googleapis.com/v1beta"
-	defaultModel   = "gemini-2.5-flash-lite"
+	defaultModel   = "gemini-3.1-flash-lite-preview"
 )

 func init() {

pkg/llm/providers/openai/client.go

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ import (

 const (
 	defaultBaseURL = "https://api.openai.com/v1"
-	defaultModel   = "gpt-5-mini"
+	defaultModel   = "gpt-5.4-mini"
 )

 func init() {
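Neither client hunk shows how `defaultModel` is consumed. A plausible sketch of the fallback the docs describe, assuming the client substitutes the constant when a role leaves `model` empty (the `resolveModel` helper here is hypothetical and not part of the diff):

```go
package main

import "fmt"

// Mirrors the constant updated in pkg/llm/providers/openai/client.go.
const defaultModel = "gpt-5.4-mini"

// resolveModel is a hypothetical helper illustrating the documented
// behavior: an empty role model falls back to the provider default,
// while an explicit model always wins.
func resolveModel(roleModel string) string {
	if roleModel == "" {
		return defaultModel
	}
	return roleModel
}

func main() {
	fmt.Println(resolveModel(""))      // provider default: gpt-5.4-mini
	fmt.Println(resolveModel("gpt-5")) // explicit override: gpt-5
}
```

Keeping the fallback in one place means a future default bump, like this commit, touches a single constant per provider rather than every call site.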

pkg/llm/providers/openai/client_test.go

Lines changed: 4 additions & 4 deletions
@@ -209,7 +209,7 @@ func TestAnalyzeSuccess(t *testing.T) {
 	ID:      "chatcmpl-123",
 	Object:  "chat.completion",
 	Created: 1234567890,
-	Model:   "gpt-5-mini",
+	Model:   "gpt-5.4-mini",
 	Choices: []openaiChoice{
 		{
 			Index: 0,

@@ -500,7 +500,7 @@ func TestAnalyzeWithContext(t *testing.T) {
 	ID:      "chatcmpl-123",
 	Object:  "chat.completion",
 	Created: 1234567890,
-	Model:   "gpt-5-mini",
+	Model:   "gpt-5.4-mini",
 	Choices: []openaiChoice{
 		{
 			Index: 0,

@@ -523,7 +523,7 @@ func TestAnalyzeWithContext(t *testing.T) {
 	var reqBody openaiRequest
 	err := json.NewDecoder(req.Body).Decode(&reqBody)
 	assert.NilError(t, err)
-	assert.Equal(t, reqBody.Model, "gpt-5-mini")
+	assert.Equal(t, reqBody.Model, "gpt-5.4-mini")
 	assert.Equal(t, len(reqBody.Messages), 1)

 	body, err := json.Marshal(mockResponse)

@@ -570,7 +570,7 @@ func TestRequestMarshaling(t *testing.T) {
 	var reqBody openaiRequest
 	err := json.NewDecoder(req.Body).Decode(&reqBody)
 	assert.NilError(t, err)
-	assert.Equal(t, reqBody.Model, "gpt-5-mini")
+	assert.Equal(t, reqBody.Model, "gpt-5.4-mini")
 	assert.Equal(t, len(reqBody.Messages), 1)
 	assert.Equal(t, reqBody.Messages[0].Role, "user")
 	assert.Equal(t, reqBody.MaxCompletionTokens, 100)
samples/repository-llm.yaml

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@
 # Each role can specify a different model. You can use any model supported by your provider.
 # - Consult OpenAI docs: https://platform.openai.com/docs/models
 # - Consult Gemini docs: https://ai.google.dev/gemini-api/docs/models/gemini
-# If no model is specified, provider defaults are used (gpt-5-mini for OpenAI, gemini-2.5-flash-lite for Gemini)
+# If no model is specified, provider defaults are used (gpt-5.4-mini for OpenAI, gemini-3.1-flash-lite-preview for Gemini)

 apiVersion: v1
 kind: Namespace
