Updated the default model versions for OpenAI to gpt-5.4-mini and Gemini
to gemini-3.1-flash-lite-preview across the codebase and documentation.
Signed-off-by: Chmouel Boudjnah <chmouel@redhat.com>
You can specify any model your chosen provider supports. See [Model Selection]({{< relref "/docs/guides/llm-analysis/model-and-triggers#model-selection" >}}) for guidance on choosing the right model.
```diff
@@ -53,7 +53,7 @@ spec:
         key: "token"
     roles:
       - name: "failure-analysis"
-        model: "gpt-5-mini"  # Optional: specify model (uses provider default if omitted)
+        model: "gpt-5.4-mini"  # Optional: specify model (uses provider default if omitted)
         prompt: |
           You are a DevOps expert. Analyze this failed pipeline and:
           1. Identify the root cause
```
```diff
@@ -129,3 +129,4 @@ When you set `commit_content: true`, Pipelines-as-Code includes the following fi
 - Pipelines-as-Code **intentionally excludes email addresses** from the commit context to protect personally identifiable information (PII) when sending data to external LLM providers.
 - Fields appear only if your Git provider makes them available. Some providers supply limited information (for example, Bitbucket Cloud provides only the author name).
+- Author and committer may be the same person or different (for example, when using `git commit --amend` or rebasing).
```
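The notes above describe what `commit_content: true` sends to the provider. As a minimal sketch of enabling the flag, assuming it sits at the role level alongside `model` and `prompt` (the exact placement is not shown in this diff, so treat it as an assumption):

```yaml
roles:
  - name: "failure-analysis"
    model: "gpt-5.4-mini"
    commit_content: true  # assumed placement; includes commit title, message,
                          # and author name (email addresses are always excluded)
    prompt: |
      You are a DevOps expert. Analyze this failed pipeline.
```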
docs/content/docs/guides/llm-analysis/model-and-triggers.md (+3 −3)
```diff
@@ -9,8 +9,8 @@ This page explains how to choose the right LLM model for each analysis role and
 
 Each analysis role can specify a different model. Choosing the right model lets you balance cost against analysis depth. If you do not specify a model, Pipelines-as-Code uses provider-specific defaults:
 
-- **OpenAI**: `gpt-5-mini`
-- **Gemini**: `gemini-2.5-flash-lite`
+- **OpenAI**: `gpt-5.4-mini`
+- **Gemini**: `gemini-3.1-flash-lite-preview`
 
 ### Specifying Models
 
```
```diff
@@ -37,7 +37,7 @@ settings:
         model: "gpt-5"
         prompt: "Analyze security failures..."
 
-    # Use default model (gpt-5-mini) for general analysis
+    # Use default model (gpt-5.4-mini) for general analysis
```
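Conversely, a role can simply omit the `model:` field to pick up the provider default (`gpt-5.4-mini` for OpenAI, `gemini-3.1-flash-lite-preview` for Gemini). A hedged sketch, using a hypothetical role name:

```yaml
roles:
  - name: "general-analysis"  # hypothetical role name
    # no model: field, so the provider default (gpt-5.4-mini for OpenAI) applies
    prompt: "Summarize the most likely cause of this pipeline failure."
```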