
Commit 34cb0e8

Merge branch 'main' into issue-2664-same-repo-shortcut-comment-sender
2 parents: bcb6701 + 2dc12b5

28 files changed: +1338 / -1972 lines

.golangci.yml
Lines changed: 2 additions & 0 deletions

@@ -90,6 +90,8 @@ linters:
       path: _test\.go
     - path: pkg/resolve/resolve.go
       text: don't use `init` function
+    - path: pkg/llm/providers/
+      text: don't use `init` function
     - linters:
         - revive
       path: pkg/errors/

.tekton/linter.yaml
Lines changed: 1 addition & 1 deletion

@@ -281,7 +281,7 @@ spec:
           target=x86_64-unknown-linux-gnu
         fi
         version=$(curl -H "Authorization: Bearer ${HUB_TOKEN}" -L -s https://api.github.com/repos/zizmorcore/zizmor/releases/latest | python3 -c 'import sys, json; print(json.load(sys.stdin)["tag_name"])')
-        curl -sH "Authorization: Bearer ${HUB_TOKEN}" -L "https://github.com/zizmorcore/zizmor/releases/download/${version}/zizmor-${target}.tar.gz" | tar -xz -C /tmp/ --strip-components=1 -f-
+        curl -sH "Authorization: Bearer ${HUB_TOKEN}" -L "https://github.com/zizmorcore/zizmor/releases/download/${version}/zizmor-${target}.tar.gz" | tar -xz -C /tmp/ -f-
         /tmp/zizmor .github/workflows/

     - name: ruff-lint
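The dropped `--strip-components=1` flag matters because it changes where archive members land on extraction: with the flag, tar removes the first path element of every member name; without it, members extract at their recorded paths. A minimal sketch of that semantic (assuming GNU tar behavior; `stripComponents` is a hypothetical helper written for illustration, not code from this repository):

```go
package main

import (
	"fmt"
	"strings"
)

// stripComponents mimics tar's --strip-components=N: it drops the first N
// path elements from a member name. A member with no elements left over is
// skipped entirely, which is how tar treats it on extract.
func stripComponents(name string, n int) string {
	parts := strings.Split(name, "/")
	if len(parts) <= n {
		return "" // member fully consumed: nothing would be extracted
	}
	return strings.Join(parts[n:], "/")
}

func main() {
	// A tarball with a top-level directory needs stripping so the binary
	// lands at /tmp/zizmor rather than /tmp/<dir>/zizmor.
	fmt.Println(stripComponents("zizmor-x86_64-unknown-linux-gnu/zizmor", 1)) // zizmor
	// A tarball whose members sit at the archive root must not be stripped,
	// or the member name is consumed and the binary never appears.
	fmt.Println(stripComponents("zizmor", 1)) // empty string
}
```

This is consistent with the diff above: the pipeline still runs `/tmp/zizmor` afterwards, which only works if the release tarball's binary now sits at the archive root.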

config/300-repositories.yaml
Lines changed: 2 additions & 2 deletions

@@ -398,8 +398,8 @@ spec:
                 Model specifies which LLM model to use for this role (optional).
                 You can specify any model supported by your provider.
                 If not specified, provider-specific defaults are used:
-                - OpenAI: gpt-5-mini
-                - Gemini: gemini-2.5-flash-lite
+                - OpenAI: gpt-5.4-mini
+                - Gemini: gemini-3.1-flash-lite-preview
               type: string
             name:
               description: Name is a unique identifier for this analysis role

docs/content/docs/api/settings.md
Lines changed: 2 additions & 2 deletions

@@ -264,8 +264,8 @@ Defines the base prompt template that Pipelines-as-Code sends to the LLM.
 {{< param name="roles[].model" type="string" id="param-roles-model" >}}
 Specifies the LLM model for this role. If omitted, Pipelines-as-Code uses provider-specific defaults:

-- OpenAI: `gpt-5-mini`
-- Gemini: `gemini-2.5-flash-lite`
+- OpenAI: `gpt-5.4-mini`
+- Gemini: `gemini-3.1-flash-lite-preview`
 {{< /param >}}

 {{< param name="roles[].on_cel" type="string" id="param-roles-on-cel" >}}

docs/content/docs/guides/llm-analysis/_index.md
Lines changed: 4 additions & 3 deletions

@@ -26,8 +26,8 @@ Additional output destinations (`check-run` and `annotation`) and structured JSO

 Pipelines-as-Code supports two LLM providers:

-- **OpenAI** -- Default model: `gpt-5-mini`
-- **Google Gemini** -- Default model: `gemini-2.5-flash-lite`
+- **OpenAI** -- Default model: `gpt-5.4-mini`
+- **Google Gemini** -- Default model: `gemini-3.1-flash-lite-preview`

 You can specify any model your chosen provider supports. See [Model Selection]({{< relref "/docs/guides/llm-analysis/model-and-triggers#model-selection" >}}) for guidance on choosing the right model.

@@ -53,7 +53,7 @@ spec:
           key: "token"
     roles:
       - name: "failure-analysis"
-        model: "gpt-5-mini"  # Optional: specify model (uses provider default if omitted)
+        model: "gpt-5.4-mini"  # Optional: specify model (uses provider default if omitted)
         prompt: |
           You are a DevOps expert. Analyze this failed pipeline and:
           1. Identify the root cause
@@ -129,3 +129,4 @@ When you set `commit_content: true`, Pipelines-as-Code includes the following fi
 - Pipelines-as-Code **intentionally excludes email addresses** from the commit context to protect personally identifiable information (PII) when sending data to external LLM providers.
 - Fields appear only if your Git provider makes them available. Some providers supply limited information (for example, Bitbucket Cloud provides only the author name).
 - Author and committer may be the same person or different (for example, when using `git commit --amend` or rebasing).
+asing).

docs/content/docs/guides/llm-analysis/model-and-triggers.md
Lines changed: 3 additions & 3 deletions

@@ -9,8 +9,8 @@ This page explains how to choose the right LLM model for each analysis role and

 Each analysis role can specify a different model. Choosing the right model lets you balance cost against analysis depth. If you do not specify a model, Pipelines-as-Code uses provider-specific defaults:

-- **OpenAI**: `gpt-5-mini`
-- **Gemini**: `gemini-2.5-flash-lite`
+- **OpenAI**: `gpt-5.4-mini`
+- **Gemini**: `gemini-3.1-flash-lite-preview`

 ### Specifying Models

@@ -37,7 +37,7 @@ settings:
         model: "gpt-5"
         prompt: "Analyze security failures..."

-      # Use default model (gpt-5-mini) for general analysis
+      # Use default model (gpt-5.4-mini) for general analysis
       - name: "general-failure"
         # No model specified - uses provider default
         prompt: "Analyze this failure..."

pkg/apis/pipelinesascode/v1alpha1/types_llm.go
Lines changed: 2 additions & 2 deletions

@@ -63,8 +63,8 @@ type AnalysisRole struct {
 	// Model specifies which LLM model to use for this role (optional).
 	// You can specify any model supported by your provider.
 	// If not specified, provider-specific defaults are used:
-	// - OpenAI: gpt-5-mini
-	// - Gemini: gemini-2.5-flash-lite
+	// - OpenAI: gpt-5.4-mini
+	// - Gemini: gemini-3.1-flash-lite-preview
 	// +optional
 	Model string `json:"model,omitempty"`
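The `+optional` Model field implies a fallback step at analysis time: an empty string resolves to the provider default documented above. A minimal sketch of that resolution, assuming the defaults from this commit (the `defaultModel` function and the provider name strings are hypothetical, not the code base's actual implementation):

```go
package main

import "fmt"

// defaultModel returns the model to use for an analysis role: the role's own
// Model value if set, otherwise the provider-specific default documented in
// the AnalysisRole comment (gpt-5.4-mini for OpenAI,
// gemini-3.1-flash-lite-preview for Gemini).
func defaultModel(provider, model string) string {
	if model != "" {
		return model // explicit role-level model always wins
	}
	switch provider {
	case "openai":
		return "gpt-5.4-mini"
	case "gemini":
		return "gemini-3.1-flash-lite-preview"
	}
	return "" // unknown provider: no default to fall back to
}

func main() {
	fmt.Println(defaultModel("openai", ""))      // gpt-5.4-mini
	fmt.Println(defaultModel("gemini", ""))      // gemini-3.1-flash-lite-preview
	fmt.Println(defaultModel("openai", "gpt-5")) // gpt-5
}
```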
