Add Vsphere Failure domains reconciling changes #9239
Conversation
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #9239      +/-   ##
==========================================
+ Coverage   72.35%   72.41%   +0.05%
==========================================
  Files         587      589       +2
  Lines       46132    46354     +222
==========================================
+ Hits        33380    33565     +185
- Misses      11003    11030      +27
- Partials     1749     1759      +10

☔ View full report in Codecov by Sentry.
Force-pushed from e6c5e94 to 6c51e9e.
/retest-required
Force-pushed from 6c51e9e to 34df77e.
Force-pushed from 230b9bf to 2d97d1b.
Force-pushed from 2d97d1b to 7ac7afd.
Force-pushed from 3e94fb2 to 118e5de.
@@ -1083,6 +1083,12 @@ func TestGetAndValidateClusterConfig(t *testing.T) {
			wantCluster: nil,
			wantErr:     true,
		},
		{
			testName: "valid cluster config with vsphere failure domains disabled",
			fileName: "testdata/vsphere_cluster_valid_failuredomains.yaml",
Can we use wantCluster instead of fileName? We want to stop managing file names as they are harder to maintain.
The file is used to generate the input cluster; it is not used for checking the cluster. This is how the current tests work. Removing the dependency on files and using a cluster directly would require refactoring all the other tests in this file. We can keep this for now and do it later as part of a refactoring.
@@ -88,6 +90,44 @@ func (v *Validator) ValidateVCenterConfig(ctx context.Context, datacenterConfig
	return nil
}

// ValidateFailureDomains validates the provided list of failure domains.
func (v *Validator) ValidateFailureDomains(vsphereClusterSpec *Spec) error {
Do we have any validations on the fields themselves?
Not in this change. I have an action item to validate that each field exists in vSphere: https://github.com/aws/eks-anywhere-internal/issues/3004
Force-pushed from 118e5de to b58253e.
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rajeshvenkata

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
@rajeshvenkata Is a similar solution planned for the control plane (distributing nodes into different failure domains/AZs)?
@pawcykca Thanks for the feedback and your interest in the failure domains feature. We have just launched failure domain support for worker nodes in the new release, v0.22. We have a few questions regarding control plane support.
@rajeshvenkata thanks for the answer ;-)

The definition could use the same parameters as the worker node failure domains:

=== Example for 2 AZ (failure domains) ===
Datastore: …
EKSA Control Plane: …

=== Example for 3 AZ (failure domains) ===
Datastore: …
EKSA Control Plane: …
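To make the request above concrete, here is a minimal, hypothetical sketch of how a 2-AZ definition might look if it reused the same failure domain fields as the worker node support. The failureDomains fields on VSphereDatacenterConfig mirror what this PR adds for workers, but the controlPlaneConfiguration.failureDomains field does not exist today and is shown purely as an assumption about what control plane support could look like; all names and paths are placeholders.

```yaml
# Hypothetical sketch only: the control plane part is an assumption, and the
# exact field names may differ from the eventual schema.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereDatacenterConfig
metadata:
  name: my-datacenter
spec:
  datacenter: dc-1
  server: vcenter.example.com
  network: /dc-1/network/vm-network
  failureDomains:
  - name: az-1
    computeCluster: /dc-1/host/cluster-az1
    resourcePool: /dc-1/host/cluster-az1/Resources
    datastore: /dc-1/datastore/datastore-az1
    folder: /dc-1/vm/az1
    network: /dc-1/network/az1-network
  - name: az-2
    computeCluster: /dc-1/host/cluster-az2
    resourcePool: /dc-1/host/cluster-az2/Resources
    datastore: /dc-1/datastore/datastore-az2
    folder: /dc-1/vm/az2
    network: /dc-1/network/az2-network
---
# Assumed extension: control plane nodes spread across the two failure domains.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
spec:
  controlPlaneConfiguration:
    count: 3
    failureDomains:   # hypothetical field, mirroring worker node groups
    - az-1
    - az-2
```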
@pawcykca Thank you for sharing the info and the use cases. We will let you know if we have any further questions.
Issue #, if available:
https://github.com/aws/eks-anywhere-internal/issues/2995
Description of changes:
Add implementation changes for vSphere failure domains
Testing (if applicable):
Created a vSphere cluster with worker node groups using failure domains and verified that the VMs are built from the failure domain configs.
Failure domain definitions in the cluster spec: worker node group 1 uses fd-1 and worker node group 2 uses fd-2.
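The tested spec itself is not reproduced here; as a rough, hedged illustration, the worker node group side of such a spec might look like the sketch below. Field names and values are assumptions based on the PR description (fd-1, fd-2), not an excerpt of the actual tested cluster spec.

```yaml
# Hypothetical illustration of worker node groups pinned to failure domains;
# names are placeholders, not copied from the tested spec.
spec:
  workerNodeGroupConfigurations:
  - name: md-0
    count: 2
    machineGroupRef:
      kind: VSphereMachineConfig
      name: my-cluster-worker-0
    failureDomains:
    - fd-1          # worker node group 1 placed in failure domain fd-1
  - name: md-1
    count: 2
    machineGroupRef:
      kind: VSphereMachineConfig
      name: my-cluster-worker-1
    failureDomains:
    - fd-2          # worker node group 2 placed in failure domain fd-2
```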
vSphere VM from the worker node group with fd-1 settings

vSphere VM from the worker node group with fd-2 settings

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.