RKE2, also known as RKE Government, is Rancher's next-generation Kubernetes distribution. This Ansible playbook installs RKE2 for both the control plane and workers.

Deployment environment must have Ansible 2.9.0+

Usage
-----

Create a new directory based on one of the sample inventory directories within the `docs` directory:

```bash
cp -R ./docs/basic_sample_inventory ./inventory
```

Second, edit `inventory/hosts.yaml` to match the system information gathered above. For example:

```yaml
---
rke2_cluster:
  children:
    rke2_servers:
      hosts:
        server0.example.com:
    rke2_agents:
      hosts:
        agent0.example.com:
```

If needed, you can also create `inventory/group_vars/rke2_agents.yml` and `inventory/group_vars/rke2_servers.yml` to match your environment.

Start provisioning of the cluster using the following command:
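
A typical invocation would look something like the sketch below; the playbook name `site.yml` is an assumption here, so use the entry-point playbook shipped at the root of this repository:

```bash
# Run from the repository root; adjust the playbook name and inventory path as needed.
ansible-playbook site.yml -i inventory/hosts.yaml
```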

> [!NOTE]
> More detailed information can be found [here](./docs/README.md)

Tarball Install/Air-Gap Install
-------------------------------
Air-Gap/Tarball install information can be found [here](./docs/tarball_install.md)

Kubeconfig
----------

The root user will have `kubeconfig` and `kubectl` made available. To access your cluster, log in to any server node; `kubectl` will be available for use immediately.
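
For example, as root on any server node (RKE2 places the admin kubeconfig at `/etc/rancher/rke2/rke2.yaml` by default):

```bash
# Run as root on a server node; kubectl and the kubeconfig are already available.
kubectl get nodes
```
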
Available configurations
------------------------

Variables should be set in `inventory/group_vars/rke2_agents.yml` and `inventory/group_vars/rke2_servers.yml`.

> [!NOTE]
> More detailed information can be found [here](./docs/README.md)

Uninstall RKE2
---------------
Note: Uninstalling RKE2 deletes the cluster data and all of the scripts.

The official documentation for fully uninstalling the RKE2 cluster can be found in the [RKE2 Documentation](https://docs.rke2.io/install/uninstall/).

If you used this module to create the cluster and RKE2 was installed via yum, then you can attempt to run this command to remove all cluster data and all RKE2 scripts.
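
The repository's own command is not reproduced here; as a hedged sketch based on the RKE2 uninstall documentation linked above, RPM-based (yum) installs ship an `rke2-uninstall.sh` script that can be run on each node:

```bash
# Run on every cluster node (assumes an RPM/yum-based RKE2 install).
# The script is typically installed to /usr/bin for RPM installs.
sudo rke2-uninstall.sh
```
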
There are two methods for consuming this repository: one is to simply clone the repository and edit it as necessary, the other is to import it as a collection. Both options are detailed below.

> [!NOTE]
> If you are looking for airgap or tarball installation instructions, please go [here](./tarball_install.md)

The simplest method for using this repository (as detailed in the main README.md) is to simply clone it and edit it as necessary.

## Importing
The second method for using this project is to import it as a collection in your own `requirements.yml`, as this repository does contain a `galaxy.yml`. To import it, add the following to your `requirements.yml`:
```yaml
collections:
  - name: rancherfederal.rke2-ansible
```

Then you can call the RKE2 role in a play like so:
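
The play itself is elided in this extract; below is a minimal sketch, assuming the collection exposes a role named `rke2` (verify the fully-qualified role name against the installed collection):

```yaml
---
- name: Deploy RKE2
  hosts: rke2_cluster
  become: true
  roles:
    # Assumed fully-qualified role name; adjust to match the installed collection.
    - role: rancherfederal.rke2-ansible.rke2
```
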
# Defining Your Cluster
This repository is not intended to be opinionated, and as a result it is important that you have read and understood the [RKE2 docs](https://docs.rke2.io/) before moving forward. This documentation is not intended to be an exhaustive explanation of all possible RKE2 configuration options; it is up to the end user to ensure their options are valid.

## Minimal Cluster Inventory
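
The inventory file itself is elided in this extract; below is a minimal sketch based on the `inventory/hosts.yaml` example shown earlier in these docs:

```yaml
---
rke2_cluster:
  children:
    rke2_servers:
      hosts:
        server0.example.com:
    rke2_agents:
      hosts:
        agent0.example.com:
```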

This is the simplest possible inventory file and will deploy the latest available version of RKE2.

## Structuring Your Variable Files

Configurations and variables can become lengthy and unwieldy; as a general rule, it is best to move variables into a `group_vars` folder.

```
./inventory
├── Cluster_A
│   └── group_vars
```

## Enabling SELinux

Enabling SELinux in the playbook requires `selinux: true` to be set in either the cluster, group, or host level config profiles (please see [Special Variables](#special-variables) for more info). Generally this should be set at the cluster level and can be done like so:

__hosts.yml:__
```yaml
---
all:
  vars:
    cluster_rke2_config:   # assumed placement; adjust to the config profile level you are using
      selinux: true
```

For more information please see the [RKE2 documentation](https://docs.rke2.io/).

## Enabling CIS Modes

Enabling the CIS tasks in the playbook requires a CIS profile to be added to the Ansible variables file. This can be placed in either the cluster or group level config profiles (please see [Special Variables](#special-variables) for more info). Below is an example in which the CIS profile is set at the group level; this ensures all server nodes run the CIS hardening profile tasks.

__hosts.yml:__
```yaml
rke2_cluster:
  children:
    rke2_servers:
      vars:
        group_rke2_config:
          profile: cis   # assumed profile value; use the CIS profile name appropriate to your RKE2 version
```

## Special Variables

There are three levels at which RKE2 config variables can be placed: `cluster`, `group`, and `host`.


- `rke2_cluster.children.rke2_agents.vars.hosts.<host>.host_rke2_config`: Defines a list of node labels for a specific agent node

> [!NOTE]
> Through the rest of these docs you may see references to `rke2_servers.yml`; this is the group vars file for rke2_servers and is functionally equivalent to `rke2_cluster.children.rke2_servers.vars`. References to `rke2_agents.yml` are functionally equivalent to `rke2_cluster.children.rke2_agents.vars`.

It is important to understand that these variables are not special in the sense that they enable or disable certain functions in the RKE2 role, with one notable exception being the `profile` key. These variables are special in the sense that they will be condensed into a single config file on each node. Each node will end up with a merged config file composed of `cluster_rke2_config`, `group_rke2_config`, and `host_rke2_config`.
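
As an illustrative sketch (all values below are hypothetical), a server node with settings defined at all three levels would end up with a single merged RKE2 config file (`/etc/rancher/rke2/config.yaml` by default) along these lines:

```yaml
# From cluster_rke2_config (cluster level)
cni: cilium
# From group_rke2_config (server group level)
profile: cis
# From host_rke2_config (this host only)
node-label:
  - example-label=true
```
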
### Defining the RKE2 Version

A version of RKE2 can be selected to be installed via the `all.vars.rke2_install_version` variable; please see the RKE2 repository for available [releases](https://github.com/rancher/rke2/releases).

#### Example
__group_vars/all.yml:__
```yaml
---
all:
  vars:
    rke2_install_version: v1.29.12+rke2r1
```
### Defining a PSA Config

In order to define a PSA (Pod Security Admission) config, server nodes will need to have the `rke2_pod_security_admission_config_file_path` variable defined; then the `pod-security-admission-config-file` option will need to be defined in the rke2_config variable at the relevant level (please see [RKE2 Config Variables](#rke2-config-variables)).

#### Example
Below is an example of how this can be defined at the server group level (`rke2_cluster.children.rke2_servers.vars`):
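
The original example is not included in this extract; below is a minimal sketch using the variable names described above, where both file paths are hypothetical and the exact semantics of the path variable should be checked against the role defaults:

```yaml
rke2_cluster:
  children:
    rke2_servers:
      vars:
        # Hypothetical path to the PSA config file; verify how the role expects this variable to be used.
        rke2_pod_security_admission_config_file_path: "files/pod-security-admission-config.yaml"
        group_rke2_config:
          # Hypothetical location of the PSA config on each server node.
          pod-security-admission-config-file: /etc/rancher/rke2/pod-security-admission-config.yaml
```
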
If you have a cluster that needs extra manifests to be deployed, or the cluster needs a critical component to be configured, RKE2's "HelmChartConfig" is an available option (among others). The Ansible repository supports the use of these configuration files. Simply place the Helm chart configs in a folder, give Ansible the path to the folder, and Ansible will enumerate the files and place them on the first server node.

There are two variables that control the deployment of manifests to the server nodes:

- `rke2_manifest_config_directory`

The first variable is used to deploy manifests to the server nodes before starting the RKE2 server process; this ensures critical components (like the CNI) can be configured when the RKE2 server process starts. The second ensures applications are deployed after the RKE2 server process starts. There are examples of both below.

#### Pre-Deploy Example

The example used is configuring Cilium with the kube-proxy replacement enabled (a fairly common use case):

> [!WARNING]
> If this option is used you must provide a `become` password, and this must be the password for the local host running the Ansible playbook. The playbook is looking for this directory on the localhost, and will run as root. This imposes some limitations: if you are using an SSH password to log in to remote systems (typical for STIG'd clusters) the `become` password must be the same for the cluster nodes AND localhost.

__group_vars/rke2_servers.yml:__

For this example to work, kube-proxy needs to be disabled and the Cilium CNI needs to be enabled.
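
The full example is not included in this extract; below is a minimal sketch. The manifest directory path is hypothetical, and the Cilium HelmChartConfig manifest itself would live inside that directory:

```yaml
---
group_rke2_config:
  cni: cilium                # enable the Cilium CNI
  disable-kube-proxy: true   # disable the bundled kube-proxy so Cilium can replace it
# Hypothetical local directory holding the Cilium HelmChartConfig manifest(s)
# to be copied to the first server node before the RKE2 server process starts.
rke2_manifest_config_directory: "manifests/pre-deploy/"
```
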
# Examples

There are two examples provided in this folder, `basic_sample_inventory` and `advanced_sample_inventory`. The basic example is the simplest possible example; the advanced example combines all of the options explained above in one example.