
feat(kubernetes): Support multiple kubectl versions #6134

Merged
17 commits merged into spinnaker:master on Feb 1, 2024

Conversation

@jervi (Contributor) commented Jan 8, 2024

Currently it is hard to support clusters that run different Kubernetes versions. Because kubectl executables can be defined per cluster, we can overcome this limitation by installing multiple kubectl versions and then defining which binary to use in the cluster config.
The kubectl versions are stored without the patch version, so that we can upgrade to newer versions without having to change the configuration. We also symlink `kubectl` to the current default version so as not to break existing configuration.

To opt in to a newer version, define it like this in the cluster config: `kubectlExecutable: /usr/local/bin/kubectl-1.28`
E.g.:
```
kubernetes:
  enabled: true
  primaryAccount: my-cluster
  accounts:
    - name: my-cluster
      kubeconfigFile: configserver:k8s/my-cluster.yml
      cacheThreads: 1
      cachingPolicies: []
      checkPermissionsOnStartup: true
      customResources: []
      dockerRegistries: []
      kinds: []
      liveManifestCalls: false
      namespaces: []
      oAuthScopes: []
      omitKinds: []
      omitNamespaces: []
      onlySpinnakerManaged: false
      providerVersion: V2
      kubectlExecutable: /usr/local/bin/kubectl-1.28
```
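For context, the per-account `kubectlExecutable` setting is the existing mechanism this change builds on. Below is a minimal sketch of how such a setting could be resolved, with the symlinked default as the fallback; the class and method names are illustrative, not the actual clouddriver code.

```
/** Illustrative sketch only; not the actual clouddriver implementation. */
class KubectlExecutableResolver {

  // The plain "kubectl" name is a symlink to the current default version, so existing configs keep working.
  private static final String DEFAULT_EXECUTABLE = "kubectl";

  /** Returns the per-account override when configured, otherwise the symlinked default. */
  static String resolve(String configuredExecutable) {
    return (configuredExecutable == null || configuredExecutable.isBlank())
        ? DEFAULT_EXECUTABLE
        : configuredExecutable;
  }

  public static void main(String[] args) {
    System.out.println(resolve("/usr/local/bin/kubectl-1.28")); // account with an explicit override
    System.out.println(resolve(null)); // account without an override falls back to "kubectl"
  }
}
```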
@dbyron-sf (Contributor):

This looks great, thank you! Can I convince you to take a look at adding some mechanism for testing these additional kubectl versions, perhaps by teaching the integration tests to have yet another dimension for the kubectl version? KubernetesCluster is also likely relevant.

jervi added 3 commits January 9, 2024 09:17
This includes adding Kubernetes 1.29 and removing 1.19 and 1.20
To adhere to the Kubernetes version skew policy
@jervi (Contributor, Author) commented Jan 10, 2024

Thanks @dbyron-sf! I've taken a stab at it now. I opted not to add another dimension to the matrix, because testing every combination would result in a huge number of builds. It would also go against the Kubernetes version skew policy, which is what I really wanted to address here in the first place. So instead I added a lookup table of hard-coded kubectl versions for each Kubernetes version, to be used in the tests. This means the integration tests will run with a kubectl version that matches the Kubernetes version. WDYT?
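To make that concrete, a minimal sketch of such a lookup table follows. The class and method names are illustrative (the real mapping lived in the integration-test sources, per the discussion below), and the version pairs are just examples taken from elsewhere in this PR.

```
import java.util.Map;

/** Illustrative sketch of a hard-coded kubectl-per-Kubernetes-version lookup for the integration tests. */
class KubectlVersionLookup {

  // Default kubectl version used when no mapping exists for the Kubernetes version under test.
  static final String DEFAULT_KUBECTL_VERSION = "1.22.17";

  // Keyed by the Kubernetes minor version of the kind cluster; values are matching kubectl versions.
  private static final Map<String, String> KUBECTL_BY_K8S_VERSION =
      Map.of(
          "1.29", "1.29.1",
          "1.28", "1.28.6",
          "1.27", "1.27.10");

  static String kubectlVersionFor(String kubernetesMinorVersion) {
    return KUBECTL_BY_K8S_VERSION.getOrDefault(kubernetesMinorVersion, DEFAULT_KUBECTL_VERSION);
  }
}
```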

@jervi (Contributor, Author) commented Jan 11, 2024

Seems like the branch protections need to be updated as well, once this is ready to be merged.

@jervi (Contributor, Author) commented Jan 22, 2024

@dbyron-sf Any chance you can have another look at this? 😄

@dbyron-sf (Contributor):
Sorry, I've been slammed with other things. Definitely have this on my radar though.

- "kindest/node:v1.21.1@sha256:69860bda5563ac81e3c0057d654b5253219618a22ec3a346306239bba8cfa1a6"
- "kindest/node:v1.20.7@sha256:cbeaf907fc78ac97ce7b625e4bf0de16e3ea725daf6b04f930bd14c67c671ff9"
- "kindest/node:v1.19.11@sha256:07db187ae84b4b7de440a73886f008cf903fcf5764ba8106a9fd5243d6f32729"
# Consider updating the kubectl version mapping in BaseTest.java when adding new kubernetes versions
@dbyron-sf (Contributor):

The mapping is in KubernetesCluster.java, right? And doesn't this comment belong more in the Dockerfiles?

@jervi (Contributor, Author):

You are correct; I'll update the comment to reference KubernetesCluster.java instead. But no, the mapping is only used for integration tests (and lives within the integration test sources), so it is independent of what is defined in the Dockerfile (unless we implement the changes you mention here, of course).

@dbyron-sf (Contributor):

Aaah right, these tests download kubectl; they don't depend on what's in the docker image.

```
 *
 * @return The kubectl version to use.
 */
private static String getKubectlVersion() {
```
@dbyron-sf (Contributor) commented Jan 25, 2024:

I'm torn about this. Part of me really likes it. Part of me thinks we end up testing something farther away from what we ship, because the default version of kubectl is the only one that's used by default.

Does it make sense to change the logic e.g. here to dynamically choose the kubectl binary that matches the target cluster version? Which I suppose would then mean on startup / periodically we'd have to query target clusters for their version?
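To make the suggestion concrete, here is a rough, purely hypothetical sketch of "choose the kubectl binary that matches the target cluster version" by picking from the kubectl-&lt;minor&gt; binaries already installed on the image. The paths, fallback, and the idea of querying the cluster for its server version are assumptions; nothing like this is implemented in this PR.

```
import java.nio.file.Files;
import java.nio.file.Path;

/** Hypothetical sketch of the idea discussed above; not part of this PR. */
class KubectlBinarySelector {

  /**
   * Given a cluster's reported server version (e.g. "1.28.4"), prefer an installed
   * /usr/local/bin/kubectl-<major>.<minor> binary and fall back to the default symlink.
   * The server version would come from querying each target cluster on startup or periodically.
   */
  static String selectFor(String serverVersion) {
    String[] parts = serverVersion.split("\\.");
    String minor = parts[0] + "." + parts[1];
    Path versioned = Path.of("/usr/local/bin/kubectl-" + minor);
    return Files.isExecutable(versioned) ? versioned.toString() : "/usr/local/bin/kubectl";
  }
}
```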

@jervi (Contributor, Author):

I'd love that feature! It would mean that the scope of this PR grows substantially, however... Do you want me to give it a go? What k8s versions would we support then? Also I haven't found a good way to automatically download the latest patch version of a given kubectl release. It would be awesome to not have to manually maintain that kubectl list in all of the dockerfiles and the deb packaging code.

@dbyron-sf (Contributor):

I wasn't thinking about dynamically downloading kubectl binaries based on clusters we discover. I suppose that's a possibility, but it could be tough to keep compliance folks happy.

I was more thinking about choosing from among the ones we've got. It probably does belong in another PR.

For this PR, what do you think of continuing to test the default kubectl version against all the k8s cluster versions, and then one more round of testing using the mapping here, and if we're very lucky, removing dups? That's way less than every client x every server, but also doesn't reduce the coverage we're getting today.

Then if we ever do dynamically choose the k8s version, we could drop the default round of testing.

One other thought that's been rolling around: I'd love to have another step in the PR build GitHub Action that invokes a new Gradle Test task to exercise the just-built docker image with testcontainers. I'm not sure what the magic is to learn the name and tag, or perhaps we get to specify the tag and then know what to tell Gradle.

With this, we could potentially adjust these tests to use the kubectl binaries in the docker image and not have to download them.
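For what it's worth, a bare-bones sketch of such a Testcontainers-based check against the just-built image might look like the following; the system property, image tag, and port are placeholders, not anything this PR defines.

```
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

class ClouddriverImageSmokeTest {

  @Test
  void justBuiltImageStarts() {
    // The build would pass in the freshly built tag, e.g. via a system property set by the Gradle task.
    String image = System.getProperty("clouddriver.image", "clouddriver:local");
    try (GenericContainer<?> clouddriver =
        new GenericContainer<>(DockerImageName.parse(image)).withExposedPorts(7002)) {
      clouddriver.start();
      // The bundled kubectl binaries could then be exercised via clouddriver.execInContainer(...).
    }
  }
}
```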

@jervi (Contributor, Author):

> For this PR, what do you think of continuing to test the default kubectl version against all the k8s cluster versions, and then one more round of testing using the mapping here, and if we're very lucky, removing dups? That's way less than every client x every server, but also doesn't reduce the coverage we're getting today.

I've removed the in-code kubectl selection stuff and reverted to just using GitHub Actions. It's a bit cleaner (albeit with a bit of duplication in the action matrix), and it achieves what you describe above: one round of testing with the default kubectl version and another round using the version that corresponds to the k8s version.

jervi added 5 commits January 26, 2024 00:01
…ons definition

All kind versions will be tested with the default kubectl version (currently 1.22.17), as well as the version that corresponds to the k8s version.
Comment on lines +24 to +40
```
include:
  - kubectl-version: 1.29.1
    kubernetes-image: "kindest/node:v1.29.0@sha256:eaa1450915475849a73a9227b8f201df25e55e268e5d619312131292e324d570"
  - kubectl-version: 1.28.6
    kubernetes-image: "kindest/node:v1.28.0@sha256:b7a4cad12c197af3ba43202d3efe03246b3f0793f162afb40a33c923952d5b31"
  - kubectl-version: 1.27.10
    kubernetes-image: "kindest/node:v1.27.3@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72"
  - kubectl-version: 1.26.13
    kubernetes-image: "kindest/node:v1.26.6@sha256:6e2d8b28a5b601defe327b98bd1c2d1930b49e5d8c512e1895099e4504007adb"
  - kubectl-version: 1.25.16
    kubernetes-image: "kindest/node:v1.25.11@sha256:227fa11ce74ea76a0474eeefb84cb75d8dad1b08638371ecf0e86259b35be0c8"
  - kubectl-version: 1.24.17
    kubernetes-image: "kindest/node:v1.24.15@sha256:7db4f8bea3e14b82d12e044e25e34bd53754b7f2b0e9d56df21774e6f66a70ab"
  - kubectl-version: 1.23.17
    kubernetes-image: "kindest/node:v1.23.17@sha256:59c989ff8a517a93127d4a536e7014d28e235fb3529d9fba91b3951d461edfdb"
  - kubectl-version: 1.21.14
    kubernetes-image: "kindest/node:v1.21.14@sha256:8a4e9bb3f415d2bb81629ce33ef9c76ba514c14d707f9797a01e3216376ba093"
```
@jervi (Contributor, Author) commented Jan 30, 2024:

A bit ugly to repeat all the kindest images here, but that was the best way I could find to conditionally add to an existing matrix. If anyone has a better solution, I'm listening. Relevant documentation.

@dbyron-sf (Contributor):

Yeah, it's clunky, but if it works, it's ok with me. And from the looks of the PR checks that ran, it does:
(screenshot of the passing PR checks)

@dbyron-sf (Contributor):
LGTM, and thanks for your persistence here. I'll update the PR check requirements and get this in.

@dbyron-sf added the `ready to merge` (Approved and ready for a merge) label on Feb 1, 2024
@mergify bot added the `auto merged` (Merged automatically by a bot) label on Feb 1, 2024
@mergify bot merged commit 1a787e3 into spinnaker:master on Feb 1, 2024
23 checks passed
@jervi deleted the multiple_kubectl_versions branch on February 22, 2024 11:09
Labels: auto merged, ready to merge, target-release/1.34