feat(kernelLogWatcher): enable revive kmsg parser if channel closed #1004
daveoy wants to merge 2 commits into kubernetes:master
Conversation
|
Hi @daveoy. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
|
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: daveoy. The full list of commands accepted by this bot can be found here. Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment. |
|
should close #1003 |
|
/ok-to-test |
|
Is CI okay? Looks like the same failure on my other PRs marked ok to test.... |
|
Yeah, I haven't had time to look into them. Do you have time to help debug and fix? /retest |
|
Yes I can take a look |
|
@daveoy: The following test failed, say `/retest` to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here. |
wangzhen127 left a comment
Can you add some tests as well?
	return nil
}

// revive ourselves if the kmsg channel is closed
I am curious about the error you observe when the channel is closed. Do you see the "Kmsg channel closed" log? Is it always associated with high load on the node?
		return
	case msg, ok := <-kmsgs:
		if !ok {
			if val, ok := k.cfg.PluginConfig["revive"]; ok && val == "true" {
Can you add a sample config for using revive?
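For reference, a config fragment enabling the proposed option might look like the following. This is an illustrative sketch based on the shape of node-problem-detector's kernel-monitor config: the `revive` key under `pluginConfig` is the new flag read by this PR, while the surrounding field values are assumptions, not taken from the PR itself.

```json
{
  "plugin": "kmsg",
  "pluginConfig": {
    "revive": "true"
  },
  "logPath": "/dev/kmsg",
  "lookback": "5m",
  "source": "kernel-monitor"
}
```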
			if val, ok := k.cfg.PluginConfig["revive"]; ok && val == "true" {
				k.reviveMyself()
			}
			klog.Error("Kmsg channel closed")
This log line should be before the revive.
	case msg, ok := <-kmsgs:
		if !ok {
			if val, ok := k.cfg.PluginConfig["revive"]; ok && val == "true" {
				k.reviveMyself()
Before the revive, add a log line saying that we are starting to revive.
// enter the watch loop again
func (k *kernelLogWatcher) reviveMyself() {
	// if k.reviveCount >= reviveRetries {
	// 	klog.Errorf("Failed to revive kmsg parser after %d retries", reviveRetries)
For the retries, is there any time boundary needed? For example, if the revive happens 10 times over a year, it may be fine. But if revive happens 10 times over a minute, something is wrong.
// close the old kmsg parser and create a new one
// enter the watch loop again
func (k *kernelLogWatcher) reviveMyself() {
	// if k.reviveCount >= reviveRetries {
|
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
|
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten |
|
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close |
|
@k8s-triage-robot: Closed this PR. Details: In response to this: |
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
This PR adds a revival mechanism for a recurring issue I'm facing.
In Kubernetes, when a node is under significant load, the connection to /dev/kmsg can be closed unexpectedly.
Instead of exiting the watcher and restarting the whole pod (which clears any conditions set by NPD while the new pod's problem daemons initialize), I would like to revive the kmsg channel and continue execution.
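The mechanism described above can be modeled in miniature: a watcher drains messages from a channel, and when the channel closes with revive enabled, it swaps in a fresh channel and keeps consuming instead of exiting. This is a simplified simulation, not the node-problem-detector code; `watch` and `reopen` are illustrative names, and real code would use `klog` and the kmsg parser rather than `fmt` and string channels.

```go
package main

import "fmt"

// watch drains msgs until the channel closes. If revive is enabled and a
// reopen function is available, it logs the closure, obtains a fresh
// channel, and continues watching; otherwise it exits like the old code.
func watch(msgs <-chan string, revive bool, reopen func() <-chan string, out *[]string) {
	for {
		msg, ok := <-msgs
		if !ok {
			if revive && reopen != nil {
				fmt.Println("kmsg channel closed; reviving kmsg parser")
				msgs = reopen() // swap in a fresh channel and keep looping
				reopen = nil    // single revive in this sketch
				continue
			}
			fmt.Println("kmsg channel closed; exiting watcher")
			return
		}
		*out = append(*out, msg)
	}
}

func main() {
	// First "kmsg" stream delivers one message, then closes unexpectedly.
	first := make(chan string, 1)
	first <- "oom-kill"
	close(first)

	// Revival produces a second stream that picks up where we left off.
	reopen := func() <-chan string {
		second := make(chan string, 1)
		second <- "ext4 error"
		close(second)
		return second
	}

	var seen []string
	watch(first, true, reopen, &seen)
	fmt.Println(seen) // [oom-kill ext4 error]
}
```

The key property, matching the PR's motivation, is that messages arriving after the revival are still consumed by the same watcher process, so conditions already set by NPD are never wiped by a pod restart.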