The steps listed on this page describe a way to modify a running Kubernetes cluster deployed with `acs-engine` on Azure. These steps have only been tested with changes targeting actual Azure resources. Changes made to the Kubernetes configuration itself have not been tested yet.

## `generate` and `deploy`

These are the common steps (unless described otherwise) you'll have to run after modifying an existing `apimodel.json` file. A condensed end-to-end sketch follows the list below.

* Modify the `apimodel.json` file located in the `_output/<clustername>` folder
* Run `acs-engine generate --api-model _output/<clustername>/apimodel.json`. This will update the `azuredeploy*` files needed for the new ARM deployment. These files are also located in the `_output` folder.
* Apply the changes by manually starting an ARM deployment. From within the `_output/<clustername>` folder, run

      az group deployment create --template-file azuredeploy.json --parameters azuredeploy.parameters.json --resource-group "<my-resource-group>"

  To use the `az` CLI tools you have to log in first. More info can be found here: https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli?view=azure-cli-latest

  _Note: I use `az group deployment create` instead of `acs-engine deploy` because the latter seems to assume you are deploying a new cluster and, as a result, overwrites your private SSH keys located in the `_output` folder._

* Grab a coffee
* Profit!
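
For reference, the whole sequence condenses to something like this (a minimal sketch; `<clustername>` and `<my-resource-group>` are placeholders for your own values, and you can of course use any editor instead of `vi`):

    az login                                               # authenticate the az CLI first
    vi _output/<clustername>/apimodel.json                 # make your changes
    acs-engine generate --api-model _output/<clustername>/apimodel.json
    cd _output/<clustername>
    az group deployment create \
      --template-file azuredeploy.json \
      --parameters azuredeploy.parameters.json \
      --resource-group "<my-resource-group>"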

## Common scenarios (tested)

### Adding a node pool

Add (or copy) an entry in the `agentPoolProfiles` array.
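
For example, a new pool entry could look something like the following (the name, count and VM size are illustrative values only; keep the fields consistent with the pools already present in your `apimodel.json`), after which you run `generate` and `deploy` as described above:

    {
      "name": "newpool",
      "count": 3,
      "vmSize": "Standard_D2_v2",
      "availabilityProfile": "AvailabilitySet"
    }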

### Removing a node pool

* Delete the related entry from the `agentPoolProfiles` section in the `_output/<clustername>/apimodel.json` file
* [Drain](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) the pool's nodes from inside Kubernetes (see the sketch after this list)
* `generate` and `deploy` (see above)
* Delete the VMs and related resources (disks, NICs, availability set) from the Azure portal
* Remove the pool from the original `apimodel.json` file
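
The drain and clean-up steps could look roughly like this (a sketch only; the node and VM names are illustrative, and deleting the VMs through the portal as described above works just as well):

    # Find the nodes that belong to the pool you are removing
    kubectl get nodes

    # Drain each node of that pool and remove it from Kubernetes
    kubectl drain <node name> --ignore-daemonsets --delete-local-data
    kubectl delete node <node name>

    # Optionally delete a VM from the CLI instead of the portal
    az vm delete --resource-group "<my-resource-group>" --name <vm name>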

### Resizing a node pool

Use the `acs-engine scale` command:

    acs-engine scale --location westeurope --subscription-id "xxx" --resource-group "<my-resource-group>" \
      --deployment-dir ./_output/<clustername> --node-pool <nodepool name> --new-node-count <desired number of nodes> --master-FQDN <fqdn of the master lb>

**Remember to also update your original `apimodel.json` file (used for the first deployment), or else you will end up with the original number of VMs the next time you use the `generate` command described above.**

### Resizing the VMs in an existing agent pool

* Modify the `vmSize` in the relevant `agentPoolProfiles` section
* `generate` and `deploy` (see above)

**Important: the default ARM deployment won't drain your Kubernetes nodes properly before 'rebooting' them. Please [drain](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) them manually before deploying the change.**
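
A rough sketch of draining every node in one pool before you deploy (this assumes the usual acs-engine naming scheme, where the pool name is part of the node name; double-check with `kubectl get nodes` first):

    # Drain every node whose name contains the pool name before starting the deployment
    for node in $(kubectl get nodes -o name | grep "<nodepool name>" | cut -d/ -f2); do
      kubectl drain "$node" --ignore-daemonsets --delete-local-data
    done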