Avoid infinite reconcile if subclusters share svc #408
This fixes a bug that occurs when two subclusters share the same service object. Subclusters can share a service object if they have the same value for spec.subclusters[].serviceName. The service object carries a label for the subcluster name (vertica.com/subcluster-name). When more than one subcluster shared the service object, we would continually update that label by cycling through the subcluster names. This resulted in an infinite reconcile loop: we would change the service object, that would trigger another reconcile, which would cause another change to the service object, and so on. A sketch of the idempotent labeling approach is below.
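The following is a minimal, hypothetical sketch of the idea (not the operator's actual code): only rewrite the subcluster-name label when its current value does not already belong to one of the subclusters that share the service. Names such as `reconcileSvcLabel`, `Subcluster`, and `Service` are illustrative.

```go
package main

import "fmt"

const SubclusterNameLabel = "vertica.com/subcluster-name"

type Subcluster struct {
	Name        string
	ServiceName string
}

type Service struct {
	Name   string
	Labels map[string]string
}

// reconcileSvcLabel sets the subcluster-name label only when the current
// value does not already refer to one of the subclusters sharing the
// service. Once any valid owner is recorded, later reconciles leave the
// object untouched instead of cycling through the sharing subclusters.
func reconcileSvcLabel(svc *Service, sharing []Subcluster) bool {
	if svc.Labels == nil {
		svc.Labels = map[string]string{}
	}
	cur := svc.Labels[SubclusterNameLabel]
	for _, sc := range sharing {
		if sc.Name == cur {
			return false // already labeled with a valid owner; no update
		}
	}
	// No valid owner recorded yet; pick a deterministic one (the first).
	svc.Labels[SubclusterNameLabel] = sharing[0].Name
	return true
}

func main() {
	svc := &Service{Name: "main-svc", Labels: map[string]string{}}
	subclusters := []Subcluster{
		{Name: "sc1", ServiceName: "main-svc"},
		{Name: "sc2", ServiceName: "main-svc"},
	}
	fmt.Println(reconcileSvcLabel(svc, subclusters)) // true: label set to sc1
	fmt.Println(reconcileSvcLabel(svc, subclusters)) // false: sc1 is still a valid owner
}
```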
A secondary fix in this PR relates to the handling of annotations. When we compare the expected and current service objects, we don't handle annotations correctly. The annotations can be an empty map or a nil map. Technically the two are equivalent, but our comparison flags them as different, so we always try to update the service object. The k8s client is smart enough not to change the service object in this case, so we just end up doing a no-op update, but we can save some time by treating them as the same.
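As a rough illustration of that comparison, a helper like the hypothetical `annotationsEqual` below treats a nil map and an empty map as equal before falling back to a deep comparison. The helper name and shape are assumptions for this sketch, not the operator's actual API.

```go
package main

import (
	"fmt"
	"reflect"
)

// annotationsEqual treats a nil map and an empty map as equivalent, so a
// service created without annotations does not look different from an
// expected object that was built with an empty annotation map.
func annotationsEqual(expected, current map[string]string) bool {
	if len(expected) == 0 && len(current) == 0 {
		return true
	}
	return reflect.DeepEqual(expected, current)
}

func main() {
	var nilMap map[string]string
	emptyMap := map[string]string{}
	fmt.Println(annotationsEqual(nilMap, emptyMap))                      // true: no update needed
	fmt.Println(annotationsEqual(map[string]string{"a": "1"}, emptyMap)) // false: real difference
}
```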