Merge pull request #335 from Sherlock113/storageword
Update storage config wording
commit f2f77ed9ec
---
## Overview
Persistent volumes are a **must** for installing KubeSphere. [KubeKey](https://github.com/kubesphere/kubekey) lets you install KubeSphere on different storage systems through its [add-on mechanism](https://github.com/kubesphere/kubekey/blob/v1.0.0/docs/addons.md). The general steps for installing KubeSphere with KubeKey on Linux are:
1. Install Kubernetes.
2. Install the **add-on** plugin for KubeSphere.
3. Install KubeSphere with [ks-installer](https://github.com/kubesphere/ks-installer).

In the KubeKey configuration file, `spec.persistence.storageClass` of `ClusterConfiguration` needs to be set so that ks-installer can create a PersistentVolumeClaim (PVC) for KubeSphere. If it is left empty, the **default StorageClass** (the one whose annotation `storageclass.kubernetes.io/is-default-class` is set to `true`) will be used.
```yaml
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
...
```
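Only the keys mentioned above matter for storage; a minimal sketch of that part of `spec` (the rest of the file is omitted) looks like this:

```yaml
spec:
  persistence:
    # Name of the StorageClass that ks-installer uses when it creates PVCs for KubeSphere.
    # Leave it empty ("") to fall back to the cluster's default StorageClass.
    storageClass: ""
```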
Therefore, an available StorageClass **must** be installed in Step 2 above. This includes:

- The StorageClass itself
- The storage plugin for the StorageClass, if necessary

This tutorial introduces **KubeKey add-on configurations** for some commonly used storage plugins. The examples assume that `spec.persistence.storageClass` is left empty, so the StorageClass installed by the add-on is set as the default. Other storage systems can be configured in a similar way.
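For orientation, a KubeKey add-on entry generally takes the shape below. The field names follow the add-on mechanism document linked above, and every name, repository, and file path here is a placeholder, so treat this as a sketch rather than a ready-to-use configuration:

```yaml
addons:
- name: example-storage                       # placeholder add-on name
  namespace: kube-system                      # namespace the chart is installed into
  sources:
    chart:
      name: example-chart                     # placeholder Helm chart name
      repo: https://charts.example.com        # placeholder chart repository
      valuesFile: /root/example-values.yaml   # local chart config, like the files saved in the sections below
```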
## QingCloud CSI
If you plan to install KubeSphere on [QingCloud](https://www.qingcloud.com/), [QingCloud CSI](https://github.com/yunify/qingcloud-csi) can be chosen as the underlying storage plugin. The following is an example of the KubeKey add-on configuration for QingCloud CSI installed from a **Helm chart that includes a StorageClass**.
### Chart Config
```yaml
config:
  ...
sc:
  isDefaultClass: true
```
If you want to configure more values, see [chart configuration for QingCloud CSI](https://github.com/kubesphere/helm-charts/tree/master/src/test/csi-qingcloud#configuration).
### Add-on Config
Save the above chart config locally (e.g. `/root/csi-qingcloud.yaml`). The add-on config for QingCloud CSI can then look like this:
```yaml
addons:
- name: csi-qingcloud
  ...
```
## NFS Client
With an NFS server, you can choose [NFS-client Provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client) as the storage plugin. NFS-client Provisioner creates PersistentVolumes dynamically. The following is an example of the KubeKey add-on configuration for NFS-client Provisioner installed from a **Helm chart that includes a StorageClass**.
### Chart Config
```yaml
nfs:
  ...
storageClass:
  defaultClass: false
```
If you want to configure more values, see [chart configuration for nfs-client](https://github.com/kubesphere/helm-charts/tree/master/src/main/nfs-client-provisioner#configuration).
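The part of the chart config elided above points the provisioner at your NFS export. A minimal sketch, assuming the chart's standard `nfs.server` and `nfs.path` keys (both values below are placeholders):

```yaml
nfs:
  server: 192.168.0.2      # placeholder: address of your NFS server
  path: /mnt/kubesphere    # placeholder: exported directory on that server
storageClass:
  defaultClass: false
```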
### Add-on Config
Save the above chart config locally (e.g. `/root/nfs-client.yaml`). The add-on config for NFS-client Provisioner can then look like this:
```yaml
addons:
- name: nfs-client
  ...
```
## Ceph
With a Ceph server, you can choose [Ceph RBD](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) or [Ceph CSI](https://github.com/ceph/ceph-csi) as the underlying storage plugin. Ceph RBD is an in-tree storage plugin on Kubernetes, and Ceph CSI is a Container Storage Interface (CSI) driver for RBD and CephFS.
### Which Plugin to Select for Ceph
Ceph CSI RBD is the preferred choice if you work with a **14.0.0 (Nautilus)+** Ceph cluster, for the following reasons:
- The in-tree plugin will be deprecated in the future.
- Ceph RBD only works on Kubernetes with **hyperkube** images, and **hyperkube** images were [deprecated since Kubernetes 1.17](https://github.com/kubernetes/kubernetes/pull/85094).
- Ceph CSI has more features such as cloning, expanding and snapshots.
### Ceph CSI RBD
Ceph-CSI needs to be installed on Kubernetes v1.14.0+ and works with a 14.0.0 (Nautilus)+ Ceph cluster.
For details about compatibility, see [Ceph CSI Support Matrix](https://github.com/ceph/ceph-csi#support-matrix).
The following is an example of the KubeKey add-on configuration for Ceph CSI RBD installed from a **Helm chart**.
As the StorageClass is not included in the chart, a StorageClass needs to be configured in the add-on config.
#### Chart Config
```yaml
csiConfig:
  ...
  - "192.168.0.9:6789" # <--TobeReplaced-->
  - "192.168.0.10:6789" # <--TobeReplaced-->
```
If you want to configure more values, see [chart configuration for ceph-csi-rbd](https://github.com/ceph/ceph-csi/tree/master/charts/ceph-csi-rbd).
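For reference, a complete `csiConfig` entry pairs the monitor list with the ID of the Ceph cluster they belong to. A minimal sketch with placeholder values (check the chart configuration linked above for the exact layout):

```yaml
csiConfig:
  - clusterID: "my-ceph-cluster-id"   # placeholder: ID of your Ceph cluster
    monitors:
      - "192.168.0.9:6789"            # placeholder monitor addresses
      - "192.168.0.10:6789"
```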
#### StorageClass (including secret)
```yaml
apiVersion: v1
kind: Secret
...
```
#### Add-on Config
Save the above chart config and StorageClass locally (e.g. `/root/ceph-csi-rbd.yaml` and `/root/ceph-csi-rbd-sc.yaml`). The add-on configuration can then be set as follows:
```yaml
addons:
- name: ceph-csi-rbd
  ...
```
### Ceph RBD
KubeKey will never use **hyperkube** images. Hence, in-tree Ceph RBD may not work on Kubernetes installed by KubeKey. However, if your Ceph cluster is earlier than 14.0.0, which means Ceph CSI cannot be used, [rbd provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/rbd) can be used as a substitute. Its format is the same as that of the [in-tree Ceph RBD](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd).
The following is an example of the KubeKey add-on configuration for rbd provisioner installed from a **Helm chart that includes a StorageClass**.
#### Chart Config
```yaml
ceph:
  ...
sc:
  isDefault: false
```
If you want to configure more values, see [chart configuration for rbd-provisioner](https://github.com/kubesphere/helm-charts/tree/master/src/test/rbd-provisioner#configuration).
#### Add-on Config
Save the above chart config locally (e.g. `/root/rbd-provisioner.yaml`). The add-on config for rbd provisioner can then look like this:
```yaml
addons:
- name: rbd-provisioner
  ...
```
## Glusterfs
[Glusterfs](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs) is an in-tree storage plugin in Kubernetes. Hence, **only a StorageClass** needs to be installed.
The following is an example of KubeKey add-on configurations for glusterfs.
### StorageClass (including secret)
```yaml
apiVersion: v1
kind: Secret
...
```
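Most of this section is elided above. For orientation, an in-tree GlusterFS StorageClass that references such a Secret generally looks like the sketch below; the parameter names come from the Kubernetes StorageClass documentation linked above, and the Heketi endpoint, user, and Secret name are placeholders:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # set as default, matching this tutorial's assumption
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.0.2:8080"   # placeholder: Heketi REST endpoint
  restauthenabled: "true"
  restuser: "admin"                    # placeholder Heketi user
  secretName: "heketi-secret"          # placeholder: name of the Secret above
  secretNamespace: "kube-system"
```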
## Multi-Storage
If you intend to install more than one storage plugin, please either set only one of them as the default or set `spec.persistence.storageClass` of `ClusterConfiguration` to the name of the StorageClass you want KubeSphere to use. Otherwise, [ks-installer](https://github.com/kubesphere/ks-installer) will not know which StorageClass to use.
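For example, with several StorageClasses installed, you can pin KubeSphere to one of them in the `ClusterConfiguration`; the name below is a placeholder:

```yaml
spec:
  persistence:
    storageClass: my-storage-class   # placeholder: the StorageClass KubeSphere should use
```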