"Copilot encountered an error and was unable to review this pull request." #190036
-
Select Topic Area: Question
Copilot Feature Area: Copilot in GitHub

I hope this is the right place to submit this question. I've been using GitHub Copilot's code review feature on a paid personal/individual subscription. I'm using it to review pull requests that I myself open against my own private repository. At first it worked fine, but shortly afterward it started failing 50% of the time with this error. Over time the failure rate increased, and it's now failing more than 90% of the time. There are runners stuck executing code review tasks for as long as 6 hours before they eventually time out.

I tried submitting a ticket, but the only options for submitting Copilot tickets are "Billing, Signup, or activation" and "Privacy and data protection". GitHub's help agent suggested I file it as a privacy concern, but it's been days and it was never answered, so I'm not sure that was the right place.

As far as I know, I have not exceeded any quota, nor have I submitted so many requests as to trigger rate limiting. It seems to happen regardless of whether the PR is large or small. Is there anything I should check? Is this a known issue? The Copilot code reviews were a really nice feature before they stopped working.
Replies: 3 comments 1 reply
-
Since you're on an Individual subscription and seeing those 6-hour hangs, you aren't hitting a usage quota; you're hitting a context timeout. Basically, the Copilot review agent is getting lost in your repo's dependency graph or a massive lockfile, and it doesn't have a graceful way to give up.

Immediate workarounds:

Convert to draft: toggle the PR to a draft and back to "Ready for review". This often forces the backend to kill any "zombie" processes attached to that specific PR ID and start a fresh runner.

Audit your .gitignore: if you have large files or minified assets tracked in Git, Copilot will try to "read" them during the review. Ensure your workspace is lean.

The small-batch test: try a 1-line change in a new PR. If that fails too, your account's specific worker queue is likely corrupted and needs a manual reset from GitHub staff.
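The .gitignore audit above can be done mechanically: list what Git is actually tracking, by size. A minimal sketch, assuming git and standard coreutils are installed; the throwaway demo repo and file names are made up purely so the commands are runnable anywhere:

```shell
# Sketch: find the largest files Git is tracking, since those are what a
# review agent would try to read. Demo repo and file names are illustrative.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q
head -c 500000 /dev/zero > vendor.bundle.js   # stand-in for a minified asset
echo 'print("hi")' > app.py                    # a normal source file
git add .

# The audit itself: tracked files by size in KB, largest first
git ls-files -z | xargs -0 du -k | sort -rn | head -n 5
```

Anything unexpectedly near the top (bundles, lockfiles, data dumps) is a candidate for .gitignore or Git LFS.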
-
I've found that if I go into Actions and start canceling the damn things, sometimes they'll generate output anyway.
-
Yeah, this sounds frustrating, especially since it used to work and then degraded over time. A few things based on similar issues people have run into with Copilot code review:

Long-running jobs (hours) followed by eventual timeouts usually point to stuck workers/runners. If it were quota or rate limits, you'd typically see immediate failures, not hanging jobs.

If it only happens in one repo, it could be repo size/history. If you have multiple review jobs stuck, cancel any pending/running ones; sometimes old jobs clog the queue and cause cascading failures.

The increasing failure rate you described (>90%) points toward a service-side degradation. Given that it happens on both small and large PRs, this doesn't sound like a usage problem; it looks more like a Copilot service reliability issue.

And yeah, there's a gap right now: there isn't a clear support category for "Copilot feature not working". If you contact support again, label it as "Other / Technical issue" (if available) and include example PR links. You're definitely not alone; this kind of "works → degrades → mostly fails" pattern usually isn't user-side 👍
The "Zombie Runner" Problem
When a review hangs for 6 hours, it usually means the runner is trying to index files that should have been ignored (think node_modules, dist, or massive .json data files). Because it's a private repo, the agent tries extra hard to "understand" the local context to give a good review, but it eventually hits a recursion limit and just spins its wheels until the backend kills the …
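Concretely, keeping that kind of path out of Git is a .gitignore job. A minimal sketch, with illustrative names; adjust the patterns for your own stack:

```
# Dependencies and build output: noise to a code reviewer
node_modules/
dist/

# Large generated artifacts (names illustrative)
*.min.js
*.bundle.js
data/*.json
```

One caveat: .gitignore only stops new files from being tracked. Files already committed stay in the index until you run `git rm --cached <path>`, so the review agent will keep seeing them until then.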