Restart to handle read-only Vertica nodes #113
Conversation
pkg/controllers/podfacts.go
Outdated
} else {
	pf.upNode = true
	pf.readOnly = false
What happens here with respect to line 373?
This checks whether we are on a server without the read-only state, so it will always be false here. To avoid the duplication, I suppose I could set it only once in this function.
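A minimal sketch of the single-assignment refactor suggested above. Only the `upNode` and `readOnly` fields come from the diff; `PodFact` as a standalone type, `setNodeState`, `serverHasReadOnly`, and the state strings are assumed names for illustration, not the actual operator code.

```go
package main

import "fmt"

// PodFact mirrors the two fields visible in the diff hunk above.
type PodFact struct {
	upNode   bool
	readOnly bool
}

// setNodeState records whether the node is up and, on servers that support
// it (11.0SP2+), whether it is read-only. readOnly is assigned exactly once,
// avoiding the duplication flagged in the review.
func (pf *PodFact) setNodeState(state string, serverHasReadOnly bool) {
	// Pre-11.0SP2 servers never report a read-only state, so this stays
	// false for them.
	pf.readOnly = serverHasReadOnly && state == "READ ONLY"
	// An 11.0SP2+ node that lost cluster quorum is still up, just read-only.
	pf.upNode = state == "UP" || pf.readOnly
}

func main() {
	pf := &PodFact{}
	pf.setNodeState("READ ONLY", true)
	fmt.Printf("upNode=%v readOnly=%v\n", pf.upNode, pf.readOnly)
}
```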
Other than my comment, it looks good.
looks good, thanks
This is a change to the restart processing to handle the new read-only state in 11.0SP2. If you lose cluster quorum, nodes that were up stay up but are put into read-only state. The operator needs to kill those processes before it can do a full cluster restart.
This collects the read-only state (11.0SP2 only), then treats read-only nodes as down nodes when we restart. The restart reconciler already had code that sends a kill signal to running vertica processes.
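A hedged sketch of that idea: read-only pods are folded into the set of pods needing a restart, so the reconciler's existing kill-and-restart path covers them. `findPodsNeedingRestart`, the `name` field, and the sample pod names are hypothetical; only `upNode` and `readOnly` appear in this PR.

```go
package main

import "fmt"

// PodFact carries the two fields from the diff; everything else here is
// illustrative.
type PodFact struct {
	name     string
	upNode   bool
	readOnly bool
}

// findPodsNeedingRestart returns every pod that should go through the full
// cluster restart: pods that are down, plus pods that stayed up in read-only
// mode after quorum loss (their vertica process gets killed first, reusing
// the reconciler's existing kill path).
func findPodsNeedingRestart(pods []PodFact) []PodFact {
	var out []PodFact
	for _, pf := range pods {
		if !pf.upNode || pf.readOnly {
			out = append(out, pf)
		}
	}
	return out
}

func main() {
	pods := []PodFact{
		{name: "v-db-sc1-0", upNode: true},                 // healthy, skipped
		{name: "v-db-sc1-1", upNode: false},                // down, restarted
		{name: "v-db-sc1-2", upNode: true, readOnly: true}, // read-only, restarted
	}
	for _, pf := range findPodsNeedingRestart(pods) {
		fmt.Println("restart:", pf.name)
	}
}
```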
I also renamed the restart testcase to restart-sanity. This was done so that we can run just this testcase within kuttl. The kuttl --test option matches all test cases whose name contains the given argument. So, trying to run with
kubectl kuttl test --test restart
would match auto-restart-vertica, restart-node-multi-sc, and restart. The new name means we can run just that single test via kubectl kuttl test --test restart-sanity