# feat: Document how to deploy with ceph-csi against multiple ceph clusters #1127

Open: wants to merge 3 commits into `main`.
Changes from all commits: `docs/canonicalk8s/charm/howto/ceph-csi.md` (105 additions, 8 deletions).

[Ceph] can be used to hold Kubernetes persistent volumes and is the recommended
storage solution for {{product}}.

The ``ceph-csi`` provisioner attaches the Ceph volumes to Kubernetes workloads.

## Prerequisites

This guide assumes an existing {{product}} cluster.
See the [charm installation] guide for more details.

In case of localhost/LXD Juju clouds, please make sure that the K8s units are
deployed as virtual machines and that an adequate amount of resources is
allocated.

## Deploying Ceph

Deploy a Ceph cluster containing one monitor and one storage unit
(OSD). In this example, a limited amount of resources is allocated.

```
juju deploy -n 1 ceph-mon \
    --constraints "cores=2 mem=4G root-disk=16G" \
    --config monitor-count=1 \
    --config expected-osd-count=1
juju deploy -n 1 ceph-osd \
    --constraints "cores=2 mem=4G root-disk=16G" \
    --storage osd-devices=1G,1 --storage osd-journals=1G,1
juju integrate ceph-osd:mon ceph-mon:osd
```

If using LXD, configure the OSD units to use VM containers by adding the
If using LXD, configure the OSD unit to use VM containers by adding the
constraint: ``virt-type=virtual-machine``.
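
As a sketch, the earlier ``ceph-osd`` deployment with the VM constraint added
would look like this (the resource constraints are unchanged from above):

```
juju deploy -n 1 ceph-osd \
    --constraints "virt-type=virtual-machine cores=2 mem=4G root-disk=16G" \
    --storage osd-devices=1G,1 --storage osd-journals=1G,1
```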

Once the units are ready, deploy ``ceph-csi``. By default, this enables
the ``ceph-xfs`` and ``ceph-ext4`` storage classes.

To verify the deployment, wait for the ``pv-writer-test`` pod, which writes
data to a Ceph-backed persistent volume, to complete:

```
sudo k8s kubectl wait pod/pv-writer-test \
    --for=jsonpath='{.status.phase}'="Succeeded" \
    --timeout 2m
```

## Relate to multiple Ceph clusters

So far, this guide has demonstrated how to integrate with a single Ceph
cluster, represented by a single `ceph-mon` application. However, {{product}}
supports multiple Ceph clusters: the same `ceph-mon`, `ceph-osd`, and
`ceph-csi` charms can be deployed again as separate Juju applications with
different names.

Deploy an alternate Ceph cluster containing one monitor and one storage unit
(OSD), again limiting the resources allocated.

```
juju deploy -n 1 ceph-mon ceph-mon-alt \
    --constraints "cores=2 mem=4G root-disk=16G" \
    --config monitor-count=1 \
    --config expected-osd-count=1
juju deploy -n 1 ceph-osd ceph-osd-alt \
    --constraints "cores=2 mem=4G root-disk=16G" \
    --storage osd-devices=1G,1 --storage osd-journals=1G,1
juju deploy ceph-csi ceph-csi-alt \
    --config provisioner-replicas=1
juju integrate ceph-csi-alt k8s:ceph-k8s-info
juju integrate ceph-csi-alt ceph-mon-alt:client
juju integrate ceph-osd-alt:mon ceph-mon-alt:osd
```
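
Once deployed, the alternate applications can be monitored with standard Juju
tooling until they settle, for example:

```
juju status ceph-mon-alt ceph-osd-alt ceph-csi-alt
```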

These applications use the same charms but represent new application
instances: a new Ceph cluster via `ceph-mon-alt` and `ceph-osd-alt`, and a new
integration with Kubernetes via `ceph-csi-alt`.

Some Kubernetes resources collide in this deployment style. The admin will
notice the `ceph-csi-alt` application in the blocked state, with a status
message detailing the resource conflicts it detects, for example:

`10 Kubernetes resource collisions (action: list-resources)`
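
The blocked state is visible in the model status:

```
juju status ceph-csi-alt
```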

List the collisions by running an action on the charm:

```
juju run ceph-csi-alt/leader list-resources
```

### Resolving collisions

#### Namespace collisions

Many of the Kubernetes resources managed by the `ceph-csi` charm live in an
associated namespace. Change the configuration of the `ceph-csi-alt`
application so that it does not collide with `ceph-csi`:

```
juju exec k8s/leader -- k8s kubectl create namespace ceph-csi-alt
juju config ceph-csi-alt namespace=ceph-csi-alt
```

After this, the number of collisions between the two applications drops, but
there may still be collisions to investigate.
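
Re-running the `list-resources` action shows which collisions remain:

```
juju run ceph-csi-alt/leader list-resources
```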

#### Storage Class collisions

The StorageClass resources managed by the `ceph-csi` charm are cluster-wide
and have no namespace.

For each of the supported StorageClass types, there is an independent name
formatter:

* `ext4`, see [ceph-ext4-storage-class-name-formatter]
* `xfs`, see [ceph-xfs-storage-class-name-formatter]
* `cephfs`, see [cephfs-storage-class-name-formatter]

Each formatter has similar but distinct formatting rules, so take care to plan
the storage class names accordingly. For example:

```
juju config ceph-csi-alt cephfs-storage-class-name-formatter="cephfs-{name}-{app}"
```
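
Assuming each formatter accepts the same `{name}` and `{app}` tokens shown
above, a sketch that separates all three storage class types for the alternate
application in a single step could look like:

```
juju config ceph-csi-alt \
    ceph-ext4-storage-class-name-formatter="ceph-ext4-{name}-{app}" \
    ceph-xfs-storage-class-name-formatter="ceph-xfs-{name}-{app}" \
    cephfs-storage-class-name-formatter="cephfs-{name}-{app}"
```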

#### RBAC collisions

The RBAC resources managed by the `ceph-csi` charm are also cluster-wide and
have no namespace. Two such resources are `ClusterRole` and
`ClusterRoleBinding`.

The charm can be configured to generate distinct names for these resources:
the Juju admin can format the names of these objects using a custom formatter.
See the [ceph-rbac-name-formatter] docs for more details:

```
juju config ceph-csi-alt ceph-rbac-name-formatter="{name}-{app}"
```
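
To confirm the result, the cluster-scoped RBAC objects can be listed; a sketch
(the exact names depend on the configured formatter):

```
juju exec k8s/leader -- k8s kubectl get clusterroles,clusterrolebindings | grep ceph
```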

<!-- LINKS -->

[charm installation]: ./charm
[Ceph]: https://docs.ceph.com/
[ceph-rbac-name-formatter]: https://charmhub.io/ceph-csi/configurations?channel=latest/edge#ceph-rbac-name-formatter
[ceph-ext4-storage-class-name-formatter]: https://charmhub.io/ceph-csi/configurations?channel=latest/edge#ceph-ext4-storage-class-name-formatter
[ceph-xfs-storage-class-name-formatter]: https://charmhub.io/ceph-csi/configurations?channel=latest/edge#ceph-xfs-storage-class-name-formatter
[cephfs-storage-class-name-formatter]: https://charmhub.io/ceph-csi/configurations?channel=latest/edge#cephfs-storage-class-name-formatter