fix: ensure LLM analysis comments are updated #2680
chmouel wants to merge 1 commit into tektoncd:main from
Conversation
Embedded a unique marker within the PR comment using an HTML comment tag. This allowed the system to correctly identify and update existing AI analysis comments rather than creating a new entry on subsequent PipelineRun executions.
Code Review
This pull request implements a mechanism to update existing LLM analysis comments on pull requests by embedding a unique HTML marker. It also adds an integration test for Gitea to verify that comments are correctly updated rather than duplicated upon re-triggering. The review feedback suggests refining the marker string to prevent partial matches and scoping the test's comment retrieval to the specific pull request to ensure test reliability.
```go
updateMarker := fmt.Sprintf("llm-analysis-%s", result.Role)

// Format the comment with LLM analysis, embedding the marker as an HTML
// comment so listCommentsByMarker can find and update it on subsequent runs.
comment := fmt.Sprintf("<!-- %s -->\n## 🤖 AI Analysis - %s\n\n%s\n\n---\n*Generated by Pipelines-as-Code LLM Analysis*",
	updateMarker, result.Role, result.Response.Content)
```
The updateMarker used for identifying existing comments is too generic. If one role name is a prefix of another (e.g., "analysis" and "analysis-detailed"), the provider's regex search might match and update the wrong comment. Including the HTML comment delimiters in the updateMarker itself ensures a more precise match and prevents accidental overwrites of unrelated comments.
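The prefix hazard the reviewer describes is easy to reproduce with a plain substring lookup. The sketch below is hypothetical: `findByMarker` and the sample comment bodies stand in for the provider's marker search and are not the actual Pipelines-as-Code code.

```go
package main

import (
	"fmt"
	"strings"
)

// findByMarker mimics a naive marker lookup: it returns the index of the
// first comment body that contains the marker as a substring, or -1.
func findByMarker(bodies []string, marker string) int {
	for i, b := range bodies {
		if strings.Contains(b, marker) {
			return i
		}
	}
	return -1
}

func main() {
	comments := []string{
		"<!-- llm-analysis-analysis-detailed -->\ndetailed report",
		"<!-- llm-analysis-analysis -->\nshort report",
	}

	// Bare role marker: "llm-analysis-analysis" is a prefix of
	// "llm-analysis-analysis-detailed", so the wrong comment matches first.
	fmt.Println(findByMarker(comments, "llm-analysis-analysis")) // 0 (wrong comment)

	// Delimited marker: the closing " -->" rules out the partial match.
	fmt.Println(findByMarker(comments, "<!-- llm-analysis-analysis -->")) // 1 (correct)
}
```

Including the `<!-- … -->` delimiters in the marker itself makes the match exact without needing anchored regexes.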
Suggested change:

```diff
-updateMarker := fmt.Sprintf("llm-analysis-%s", result.Role)
-// Format the comment with LLM analysis, embedding the marker as an HTML
-// comment so listCommentsByMarker can find and update it on subsequent runs.
-comment := fmt.Sprintf("<!-- %s -->\n## 🤖 AI Analysis - %s\n\n%s\n\n---\n*Generated by Pipelines-as-Code LLM Analysis*",
-	updateMarker, result.Role, result.Response.Content)
+updateMarker := fmt.Sprintf("<!-- llm-analysis-%s -->", result.Role)
+// Format the comment with LLM analysis, embedding the marker so
+// listCommentsByMarker can find and update it on subsequent runs.
+comment := fmt.Sprintf("%s\n## 🤖 AI Analysis - %s\n\n%s\n\n---\n*Generated by Pipelines-as-Code LLM Analysis*",
+	updateMarker, result.Role, result.Response.Content)
```
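The update-or-create flow that the embedded marker enables can be sketched as follows. This is a minimal model: the `store` type and `postAnalysis` are hypothetical stand-ins for the git provider API, not the real `CreateComment`/`listCommentsByMarker` implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// store is a toy in-memory stand-in for a provider's PR comment API.
type store struct{ bodies []string }

func (s *store) create(body string)        { s.bodies = append(s.bodies, body) }
func (s *store) update(i int, body string) { s.bodies[i] = body }

// findMarker returns the index of the first body containing marker, or -1.
func (s *store) findMarker(marker string) int {
	for i, b := range s.bodies {
		if strings.Contains(b, marker) {
			return i
		}
	}
	return -1
}

// postAnalysis embeds the delimited marker in the comment body so a later
// run can find the existing comment and update it instead of duplicating it.
func postAnalysis(s *store, role, content string) {
	marker := fmt.Sprintf("<!-- llm-analysis-%s -->", role)
	body := fmt.Sprintf("%s\n## 🤖 AI Analysis - %s\n\n%s", marker, role, content)
	if i := s.findMarker(marker); i >= 0 {
		s.update(i, body)
		return
	}
	s.create(body)
}

func main() {
	s := &store{}
	postAnalysis(s, "pr-complexity-rating", "first run")
	postAnalysis(s, "pr-complexity-rating", "after /retest") // updates in place
	fmt.Println(len(s.bodies))                               // 1
}
```

The key point is that the marker must live in the comment body itself: the lookup only ever sees bodies, so a marker passed solely as an argument can never be found on the next run.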
```go
comments, _, err := topts.GiteaCNX.Client().ListRepoIssueComments(
	topts.PullRequest.Base.Repository.Owner.UserName,
	topts.PullRequest.Base.Repository.Name,
	forgejo.ListIssueCommentOptions{})
```
Using ListRepoIssueComments retrieves all comments across the entire repository, which can lead to non-deterministic test results if other issues or pull requests in the same repository contain similar markers. It is safer and more efficient to use ListIssueComments with the specific pull request index to ensure the test only considers comments relevant to the current test case.
Suggested change:

```diff
-comments, _, err := topts.GiteaCNX.Client().ListRepoIssueComments(
-	topts.PullRequest.Base.Repository.Owner.UserName,
-	topts.PullRequest.Base.Repository.Name,
-	forgejo.ListIssueCommentOptions{})
+comments, _, err := topts.GiteaCNX.Client().ListIssueComments(
+	topts.PullRequest.Base.Repository.Owner.UserName,
+	topts.PullRequest.Base.Repository.Name,
+	topts.PullRequest.Index,
+	forgejo.ListIssueCommentOptions{})
```
🤖 AI Analysis - pr-complexity-rating

📊 PR Review Complexity

Overall difficulty: Easy

Summary

This PR fixes a bug where AI analysis results generated multiple comments on a Pull Request instead of updating a single persistent comment. It introduces a hidden HTML marker within the comment body to allow the system to uniquely identify and overwrite previous analysis outputs.

Suggested reviewers focus

*Generated by Pipelines-as-Code LLM Analysis*
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@ Coverage Diff @@
##             main    #2680   +/-   ##
==========================================
- Coverage   58.86%   58.86%   -0.01%
==========================================
  Files         206      206
  Lines       20329    20330       +1
==========================================
  Hits        11967    11967
- Misses       7589     7590       +1
  Partials      773      773
```

☔ View full report in Codecov by Sentry.
@chmouel: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This solution has some flaws; although it works, I'm working on a redesign to solve this properly. Disabled the AI analysis for now.
📝 Description of the Change

When a PipelineRun triggers an LLM analysis and a subsequent `/retest` runs again, the AI analysis comment was being duplicated on the PR rather than updated in place.

The root cause was that the unique marker used by `CreateComment`/`listCommentsByMarker` to identify existing comments was embedded in the `updateMarker` argument but not in the comment body itself. The provider's lookup scans comment bodies for the marker string, so without the marker present in the body it could never find the existing comment to update.

Fix: Embed the marker as an HTML comment (`<!-- llm-analysis-<role> -->`) at the top of the formatted comment body. This allows `listCommentsByMarker` to locate the existing comment on subsequent runs and update it instead of creating a new one.

The golden test file is updated to include the marker, and the Gitea E2E test is extended to verify that after a `/retest` exactly one LLM analysis comment exists on the PR.

🔗 Linked GitHub Issue
Fixes https://redhat.atlassian.net/browse/SRVKP-11549
🧪 Testing Strategy
🤖 AI Assistance
✅ Submitter Checklist
- The commit message prefix (`fix:`, `feat:`) matches the "Type of Change" I selected above.
- Run `make test` and `make lint` locally to check for and fix any issues.