From 37f594ec6008858c2995ceccb9e74428a0ed7906 Mon Sep 17 00:00:00 2001 From: Sherlock113 Date: Tue, 23 Mar 2021 10:24:53 +0800 Subject: [PATCH] Add cn storage section Signed-off-by: Sherlock113 --- .../blogs/install-kubernetes-using-kubekey.md | 2 +- .../ha-configuration.md | 2 +- .../installing-on-linux/introduction/intro.md | 2 +- .../introduction/multioverview.md | 2 +- .../install-gluster-fs.md | 2 +- .../install-nfs-client.md | 2 +- .../install-kubesphere-on-azure-vms.md | 2 +- .../install-kubesphere-on-qingcloud-vms.md | 4 +- .../project-user-guide/storage/volumes.md | 2 +- .../docs/quick-start/all-in-one-on-linux.md | 2 +- .../ha-configuration.md | 2 +- .../installing-on-linux/introduction/intro.md | 2 +- .../introduction/multioverview.md | 2 +- .../introduction/storage-configuration.md | 267 ---------------- .../install-kubesphere-on-vmware-vsphere.md | 4 +- .../_index.md | 7 + .../install-ceph-csi-rbd.md | 128 ++++++++ .../install-gluster-fs.md | 301 ++++++++++++++++++ .../install-nfs-client.md | 273 ++++++++++++++++ .../install-qingcloud-csi.md | 278 ++++++++++++++++ .../understand-persistent-storage.md | 45 +++ .../install-kubesphere-on-ali-ecs.md | 4 +- .../install-kubesphere-on-azure-vms.md | 2 +- .../install-kubesphere-on-huaweicloud-ecs.md | 2 +- .../install-kubesphere-on-qingcloud-vms.md | 4 +- .../project-user-guide/storage/volumes.md | 2 +- .../docs/quick-start/all-in-one-on-linux.md | 2 +- 27 files changed, 1056 insertions(+), 291 deletions(-) delete mode 100644 content/zh/docs/installing-on-linux/introduction/storage-configuration.md create mode 100644 content/zh/docs/installing-on-linux/persistent-storage-configurations/_index.md create mode 100644 content/zh/docs/installing-on-linux/persistent-storage-configurations/install-ceph-csi-rbd.md create mode 100644 content/zh/docs/installing-on-linux/persistent-storage-configurations/install-gluster-fs.md create mode 100644 content/zh/docs/installing-on-linux/persistent-storage-configurations/install-nfs-client.md create mode 100644 content/zh/docs/installing-on-linux/persistent-storage-configurations/install-qingcloud-csi.md create mode 100644 content/zh/docs/installing-on-linux/persistent-storage-configurations/understand-persistent-storage.md diff --git a/content/en/blogs/install-kubernetes-using-kubekey.md b/content/en/blogs/install-kubernetes-using-kubekey.md index 0eb3f62e9..590ef2d6b 100644 --- a/content/en/blogs/install-kubernetes-using-kubekey.md +++ b/content/en/blogs/install-kubernetes-using-kubekey.md @@ -161,7 +161,7 @@ You can use KubeKey to install a specified Kubernetes version. The dependency th - `worker`: worker node names. - You can provide more values in this configuration file, such as `addons`. KubeKey can install all [addons](https://github.com/kubesphere/kubekey/blob/release-1.0/docs/addons.md) that can be installed as a YAML file or Chart file. For example, KubeKey does not install any storage plugin for Kubernetes by default, but you can [add your own storage systems](https://kubesphere.io/docs/installing-on-linux/introduction/storage-configuration/), including NFS Client, Ceph, and Glusterfs. For more information about the configuration file, see [Kubernetes Cluster Configurations](https://kubesphere.io/docs/installing-on-linux/introduction/vars/) and [this file](https://github.com/kubesphere/kubekey/blob/release-1.0/docs/config-example.md). + You can provide more values in this configuration file, such as `addons`. 
KubeKey can install all [addons](https://github.com/kubesphere/kubekey/blob/release-1.0/docs/addons.md) that can be installed as a YAML file or Chart file. For example, KubeKey does not install any storage plugin for Kubernetes by default, but you can [add your own storage systems](https://kubesphere.io/docs/installing-on-linux/persistent-storage-configurations/understand-persistent-storage/), including NFS Client, Ceph, and Glusterfs. For more information about the configuration file, see [Kubernetes Cluster Configurations](https://kubesphere.io/docs/installing-on-linux/introduction/vars/) and [this file](https://github.com/kubesphere/kubekey/blob/release-1.0/docs/config-example.md). 6. Save the file when you finish editing and execute the following command to install Kubernetes: diff --git a/content/en/docs/installing-on-linux/high-availability-configurations/ha-configuration.md b/content/en/docs/installing-on-linux/high-availability-configurations/ha-configuration.md index d7841763b..01e49c752 100644 --- a/content/en/docs/installing-on-linux/high-availability-configurations/ha-configuration.md +++ b/content/en/docs/installing-on-linux/high-availability-configurations/ha-configuration.md @@ -163,7 +163,7 @@ For more information about different fields in this configuration file, see [Kub ### Persistent storage plugin configurations -For a production environment, you need to prepare persistent storage and configure the storage plugin (e.g. CSI) in `config-sample.yaml` to define which storage service you want to use. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/introduction/storage-configuration/). +For a production environment, you need to prepare persistent storage and configure the storage plugin (e.g. CSI) in `config-sample.yaml` to define which storage service you want to use. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/). ### Enable pluggable components (Optional) diff --git a/content/en/docs/installing-on-linux/introduction/intro.md b/content/en/docs/installing-on-linux/introduction/intro.md index 6baa6559f..a1aa49719 100644 --- a/content/en/docs/installing-on-linux/introduction/intro.md +++ b/content/en/docs/installing-on-linux/introduction/intro.md @@ -56,7 +56,7 @@ KubeSphere has decoupled some core feature components since v2.1.0. These compon ## Storage Configurations -KubeSphere allows you to configure persistent storage services both before and after installation. Meanwhile, KubeSphere supports a variety of open-source storage solutions (for example, Ceph and GlusterFS) as well as commercial storage products. Refer to [Persistent Storage Configurations](../storage-configuration) for detailed instructions regarding how to configure the storage class before you install KubeSphere. +KubeSphere allows you to configure persistent storage services both before and after installation. Meanwhile, KubeSphere supports a variety of open-source storage solutions (for example, Ceph and GlusterFS) as well as commercial storage products. Refer to [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/) for detailed instructions regarding how to configure the storage class before you install KubeSphere. 
For more information about how to set different storage classes for your workloads after you install KubeSphere, see [Persistent Volumes and Storage Classes](../../../cluster-administration/persistent-volume-and-storage-class/). diff --git a/content/en/docs/installing-on-linux/introduction/multioverview.md b/content/en/docs/installing-on-linux/introduction/multioverview.md index 7ce2bda8f..f8ca0613d 100644 --- a/content/en/docs/installing-on-linux/introduction/multioverview.md +++ b/content/en/docs/installing-on-linux/introduction/multioverview.md @@ -252,7 +252,7 @@ List all your machines under `hosts` and add their detailed information as above #### addons -You can customize persistent storage plugins (e.g. NFS Client, Ceph RBD, and GlusterFS) by specifying storage under the field `addons` in `config-sample.yaml`. For more information, see [Persistent Storage Configurations](../storage-configuration). +You can customize persistent storage plugins (e.g. NFS Client, Ceph RBD, and GlusterFS) by specifying storage under the field `addons` in `config-sample.yaml`. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/). {{< notice note >}} diff --git a/content/en/docs/installing-on-linux/persistent-storage-configurations/install-gluster-fs.md b/content/en/docs/installing-on-linux/persistent-storage-configurations/install-gluster-fs.md index 019835621..b3a62798c 100644 --- a/content/en/docs/installing-on-linux/persistent-storage-configurations/install-gluster-fs.md +++ b/content/en/docs/installing-on-linux/persistent-storage-configurations/install-gluster-fs.md @@ -18,7 +18,7 @@ Ubuntu 16.04 is used as an example in this tutorial. ## Prerequisites -You have set up your GlusterFS cluster and configured Heketi. For more information, see [Set up a GlusterFS Server](../../api-reference/storage-system-installation/glusterfs-server/). +You have set up your GlusterFS cluster and configured Heketi. For more information, see [Set up a GlusterFS Server](../../../api-reference/storage-system-installation/glusterfs-server/). ## Step 1: Configure the Client Machine diff --git a/content/en/docs/installing-on-linux/persistent-storage-configurations/install-nfs-client.md b/content/en/docs/installing-on-linux/persistent-storage-configurations/install-nfs-client.md index c05f32d78..7315b75da 100644 --- a/content/en/docs/installing-on-linux/persistent-storage-configurations/install-nfs-client.md +++ b/content/en/docs/installing-on-linux/persistent-storage-configurations/install-nfs-client.md @@ -16,7 +16,7 @@ Ubuntu 16.04 is used as an example in this tutorial. ## Prerequisites -You must have an NFS server ready providing external storage services. Make sure you have created and exported a directory on the NFS server which your permitted client machines can access. For more information, see [Set up an NFS Server](../../api-reference/storage-system-installation/nfs-server/). +You must have an NFS server ready providing external storage services. Make sure you have created and exported a directory on the NFS server which your permitted client machines can access. For more information, see [Set up an NFS Server](../../../api-reference/storage-system-installation/nfs-server/). 
## Step 1: Configure the Client Machine diff --git a/content/en/docs/installing-on-linux/public-cloud/install-kubesphere-on-azure-vms.md b/content/en/docs/installing-on-linux/public-cloud/install-kubesphere-on-azure-vms.md index 2f71297db..2ffb163d6 100644 --- a/content/en/docs/installing-on-linux/public-cloud/install-kubesphere-on-azure-vms.md +++ b/content/en/docs/installing-on-linux/public-cloud/install-kubesphere-on-azure-vms.md @@ -208,7 +208,7 @@ The public load balancer is used directly instead of an internal load balancer d ### Persistent Storage Plugin Configurations -See [Persistent Storage Configurations](../../../installing-on-linux/introduction/storage-configuration/) for details. +See [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/) for details. ### Configure the Network Plugin diff --git a/content/en/docs/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md b/content/en/docs/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md index 37ee9541e..098b6160e 100644 --- a/content/en/docs/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md +++ b/content/en/docs/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md @@ -270,7 +270,7 @@ For testing or development, you can skip this part. KubeKey will use the integra - QingStor CSI - More plugins will be supported in future releases -Make sure you have configured the storage plugin before you get started. KubeKey will create a StorageClass and persistent volumes for related workloads during the installation. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/introduction/storage-configuration/). +Make sure you have configured the storage plugin before you get started. KubeKey will create a StorageClass and persistent volumes for related workloads during the installation. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/). ### Step 7: Enable pluggable components (Optional) @@ -338,6 +338,6 @@ To verify if the cluster is highly available, you can turn off an instance on pu [Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/) -[Persistent Storage Configurations](../../../installing-on-linux/introduction/storage-configuration/) +[Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/) [Enable Pluggable Components](../../../pluggable-components/) \ No newline at end of file diff --git a/content/en/docs/project-user-guide/storage/volumes.md b/content/en/docs/project-user-guide/storage/volumes.md index 58ea081f0..6a42bb421 100644 --- a/content/en/docs/project-user-guide/storage/volumes.md +++ b/content/en/docs/project-user-guide/storage/volumes.md @@ -42,7 +42,7 @@ All the volumes that are created on the **Volumes** page are PersistentVolumeCla ![volume-creation-method](/images/docs/project-user-guide/volume-management/volumes/volume-creation-method.jpg) - - **Create a volume by StorageClass**. You can configure storage classes both [before](../../../installing-on-linux/introduction/storage-configuration/) and [after](../../../cluster-administration/persistent-volume-and-storage-class/) the installation of KubeSphere. + - **Create a volume by StorageClass**. 
You can configure storage classes both [before](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/) and [after](../../../cluster-administration/persistent-volume-and-storage-class/) the installation of KubeSphere. - **Create a volume by VolumeSnapshot**. To use a snapshot to create a volume, you must create a volume snapshot first. diff --git a/content/en/docs/quick-start/all-in-one-on-linux.md b/content/en/docs/quick-start/all-in-one-on-linux.md index a3290fe53..e357616ca 100644 --- a/content/en/docs/quick-start/all-in-one-on-linux.md +++ b/content/en/docs/quick-start/all-in-one-on-linux.md @@ -144,7 +144,7 @@ To create a Kubernetes cluster with KubeSphere installed, refer to the following - Supported Kubernetes versions: *v1.15.12*, *v1.16.13*, *v1.17.9* (default), *v1.18.6*. - For all-in-one installation, generally speaking, you do not need to change any configuration. - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed. KubeKey will install Kubernetes only. If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed. -- KubeKey will install [OpenEBS](https://openebs.io/) to provision LocalPV for the development and testing environment by default, which is convenient for new users. For other storage classes, see [Persistent Storage Configurations](../../installing-on-linux/introduction/storage-configuration/). +- KubeKey will install [OpenEBS](https://openebs.io/) to provision LocalPV for the development and testing environment by default, which is convenient for new users. For other storage classes, see [Persistent Storage Configurations](../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/). 
{{}} diff --git a/content/zh/docs/installing-on-linux/high-availability-configurations/ha-configuration.md b/content/zh/docs/installing-on-linux/high-availability-configurations/ha-configuration.md index 0f38fb64b..5b93bd0b6 100644 --- a/content/zh/docs/installing-on-linux/high-availability-configurations/ha-configuration.md +++ b/content/zh/docs/installing-on-linux/high-availability-configurations/ha-configuration.md @@ -164,7 +164,7 @@ spec: ### 持久化存储插件配置 -在生产环境中,您需要准备持久化存储并在 `config-sample.yaml` 中配置存储插件(例如 CSI),以明确您想使用哪一种存储服务。有关更多信息,请参见[持久化存储配置](../../../installing-on-linux/introduction/storage-configuration/)。 +在生产环境中,您需要准备持久化存储并在 `config-sample.yaml` 中配置存储插件(例如 CSI),以明确您想使用哪一种存储服务。有关更多信息,请参见[持久化存储配置](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/)。 ### 启用可插拔组件(可选) diff --git a/content/zh/docs/installing-on-linux/introduction/intro.md b/content/zh/docs/installing-on-linux/introduction/intro.md index d3b67597d..206939f43 100644 --- a/content/zh/docs/installing-on-linux/introduction/intro.md +++ b/content/zh/docs/installing-on-linux/introduction/intro.md @@ -56,7 +56,7 @@ KubeSphere 为用户提供轻量级安装程序 [KubeKey](https://github.com/kub ## 存储配置 -您可以在 KubeSphere 安装前或安装后配置持久化储存服务。同时,KubeSphere 支持各种开源存储解决方案(例如 Ceph 和 GlusterFS)以及商业存储产品。有关在安装 KubeSphere 之前配置存储类型的详细说明,请参考[持久化存储配置](../storage-configuration)。 +您可以在 KubeSphere 安装前或安装后配置持久化储存服务。同时,KubeSphere 支持各种开源存储解决方案(例如 Ceph 和 GlusterFS)以及商业存储产品。有关在安装 KubeSphere 之前配置存储类型的详细说明,请参考[持久化存储配置](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/)。 有关如何在安装 KubeSphere 之后配置存储类型,请参考[持久卷和存储类型](../../../cluster-administration/persistent-volume-and-storage-class/)。 diff --git a/content/zh/docs/installing-on-linux/introduction/multioverview.md b/content/zh/docs/installing-on-linux/introduction/multioverview.md index 0c2692a5b..0c3422c4f 100644 --- a/content/zh/docs/installing-on-linux/introduction/multioverview.md +++ b/content/zh/docs/installing-on-linux/introduction/multioverview.md @@ -254,7 +254,7 @@ spec: #### addons -您可以在 `config-sample.yaml` 的 `addons` 字段下指定存储,从而自定义持久化存储插件,例如 NFS 客户端、Ceph RBD、GlusterFS 等。有关更多信息,请参见[持久化存储配置](../../../installing-on-linux/introduction/storage-configuration/)。 +您可以在 `config-sample.yaml` 的 `addons` 字段下指定存储,从而自定义持久化存储插件,例如 NFS 客户端、Ceph RBD、GlusterFS 等。有关更多信息,请参见[持久化存储配置](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/)。 {{< notice note >}} diff --git a/content/zh/docs/installing-on-linux/introduction/storage-configuration.md b/content/zh/docs/installing-on-linux/introduction/storage-configuration.md deleted file mode 100644 index dd4ceab3a..000000000 --- a/content/zh/docs/installing-on-linux/introduction/storage-configuration.md +++ /dev/null @@ -1,267 +0,0 @@ ---- -title: "持久化存储配置" -keywords: 'Kubernetes, KubeSphere, 存储, 存储卷, PVC, KubeKey, 插件' -description: '持久化存储配置' -linkTitle: "持久化存储配置" -weight: 3170 ---- - -## 概述 -安装 KubeSphere 时**必须**有持久化存储卷。[KubeKey](https://github.com/kubesphere/kubekey) 通过 [Add-on 机制](https://github.com/kubesphere/kubekey/blob/v1.0.0/docs/addons.md)可以在不同的存储系统上安装 KubeSphere。在 Linux 上使用 KubeKey 安装 KubeSphere 的一般步骤是: - -1. 安装 Kubernetes。 -2. 安装 KubeSphere 的 **Add-on** 插件。 -3. 
使用 [ks-installer](https://github.com/kubesphere/ks-installer) 安装 KubeSphere。 - -在 KubeKey 配置中,需要设置 `ClusterConfiguration` 的 `spec.persistence.storageClass`,使 ks-installer 为 KubeSphere 创建 PersistentVolumeClaim (PVC)。如果此处为空,将使用**默认 StorageClass**(注解 `storageclass.kubernetes.io/is-default-class` 等于 `true`)。 -``` yaml -apiVersion: installer.kubesphere.io/v1alpha1 -kind: ClusterConfiguration -spec: - persistence: - storageClass: "" -... -``` - -因此,在前述的步骤 2 中**必须**安装一个可用的 StorageClass。它包括: -- StorageClass 本身 -- StorageClass 的存储插件(如有必要) - -本教程介绍了一些常用存储插件的 **KubeKey Add-on 配置**。如果 `spec.persistence.storageClass` 为空,将安装默认 StorageClass。如果您想配置其他存储系统,请参考下面的内容。 - -## QingCloud CSI -如果您打算在[青云QingCloud](https://www.qingcloud.com/) 上安装 KubeSphere,可以选择 [QingCloud CSI](https://github.com/yunify/qingcloud-csi) 作为底层存储插件。下面是使用**带有 StorageClass 的 Helm Chart** 安装 QingCloud CSI 的 KubeKey Add-on 配置示例。 - -### Chart 配置 -```yaml -config: - qy_access_key_id: "MBKTPXWCIRIEDQYQKXYL" # Replace it with your own key id. - qy_secret_access_key: "cqEnHYZhdVCVif9qCUge3LNUXG1Cb9VzKY2RnBdX" # Replace it with your own access key. - zone: "pek3a" # Lowercase letters only. -sc: - isDefaultClass: true # Set it as the default storage class. -``` -您需要创建该 Chart 配置文件,并手动输入上面的值。 - -#### 密钥 (Key) - -要获取 `qy_access_key_id` 和 `qy_secret_access_key` 的值,请登录[青云QingCloud](https://console.qingcloud.com/login) 的 Web 控制台,参考下方截图先创建一个密钥。密钥创建之后会存储在 csv 文件中,下载该 csv 文件。 - -![access-key](/images/docs/zh-cn/installing-on-linux/introduction/persistent-storage-configurations/access-key.PNG) - -#### 可用区 (Zone) - -字段 `zone` 指定云存储卷部署的位置。在青云QingCloud 平台上,您必须先选择一个可用区,然后才能创建存储卷。 - -![storage-zone](/images/docs/zh-cn/installing-on-linux/introduction/persistent-storage-configurations/storage-zone.PNG) - -请确保您在 `zone` 中指定的值匹配下表中列出的区域 (Region) ID: - -| 可用区 | 区域 ID | -| ------------------------------------------- | ----------------------- | -| Shanghai1-A/Shanghai1-B | sh1a/sh1b | -| Beijing3-A/Beijing3-B/Beijing3-C/Beijing3-D | pek3a/pek3b/pek3c/pek3d | -| Guangdong2-A/Guangdong2-B | gd2a/gd2b | -| Asia-Pacific 2-A | ap2a | - -如果您想配置更多值,请参见 [QingCloud CSI Chart 配置](https://github.com/kubesphere/helm-charts/tree/master/src/test/csi-qingcloud#configuration)。 - -### Add-on 配置 -将上面的 Chart 配置文件保存至本地(例如 `/root/csi-qingcloud.yaml`)。QingCloud CSI Add-on 配置可设为: -```yaml -addons: -- name: csi-qingcloud - namespace: kube-system - sources: - chart: - name: csi-qingcloud - repo: https://charts.kubesphere.io/test - values: /root/csi-qingcloud.yaml -``` - -## NFS Client -通过 NFS 服务器,您可以选择 [NFS-client Provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client) 作为存储插件。NFS-client Provisioner 会动态创建 PersistentVolume。下面是使用**带有 StorageClass 的 Helm Chart** 安装 NFS-client Provisioner 的 KubeKey Add-on 配置示例。 - -### Chart 配置 -```yaml -nfs: - server: "192.168.0.27" # <--ToBeReplaced-> - path: "/mnt/csi/" # <--ToBeReplaced-> -storageClass: - defaultClass: false -``` -如果您想配置更多值,请参见 [NFS-client Chart 配置](https://github.com/kubesphere/helm-charts/tree/master/src/main/nfs-client-provisioner#configuration)。 - -### Add-on 配置 -将上面的 Chart 配置文件保存至本地(例如 `/root/nfs-client.yaml`)。NFS-client Provisioner 的 Add-on 配置可设为: -```yaml -addons: -- name: nfs-client - namespace: kube-system - sources: - chart: - name: nfs-client-provisioner - repo: https://charts.kubesphere.io/main - values: /root/nfs-client.yaml -``` - -## Ceph -通过 Ceph 服务器,您可以选择 [Ceph RBD](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) 或 [Ceph CSI](https://github.com/ceph/ceph-csi) 
作为底层存储插件。Ceph RBD 是 Kubernetes 上的一个树内 (in-tree) 存储插件,Ceph CSI 是 CephFS RBD 的容器存储接口 (CSI) 驱动。 - -### 为 Ceph 选择一种插件 -如果您使用 **14.0.0 (Nautilus)+** Ceph 集群,建议优先选择 Ceph CSI RBD。部分理由如下: -- 树内插件在未来会被弃用。 -- Ceph RBD 仅能在使用 **hyperkube** 镜像的 Kubernetes 上运行,而 **hyperkube** 镜像[自 Kubernetes 1.17 开始已经被弃用](https://github.com/kubernetes/kubernetes/pull/85094)。 -- Ceph CSI 具有更多功能,例如克隆、扩容和快照。 - -### Ceph CSI RBD -Ceph-CSI 需要安装在 1.14.0 以上版本的 Kubernetes 上,并与 14.0.0 (Nautilus)+ Ceph 集群一同运行。有关兼容性的详细信息,请参见 [Ceph CSI 支持矩阵](https://github.com/ceph/ceph-csi#support-matrix)。 - -下面是使用 **Helm Chart** 安装 Ceph CSI RBD 的 KubeKey Add-on 配置示例。由于 Chart 中未包含 StorageClass,需要在 Add-on 配置文件中配置一个 StorageClass。 - -#### Chart 配置 - -```yaml -csiConfig: - - clusterID: "cluster1" - monitors: - - "192.168.0.8:6789" # <--TobeReplaced--> - - "192.168.0.9:6789" # <--TobeReplaced--> - - "192.168.0.10:6789" # <--TobeReplaced--> -``` -如果您想配置更多值,请参见 [Ceph CSI RBD Chart 配置](https://github.com/ceph/ceph-csi/tree/master/charts/ceph-csi-rbd)。 - -#### StorageClass(包括 Secret) -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: csi-rbd-secret - namespace: kube-system -stringData: - userID: admin - userKey: "AQDoECFfYD3DGBAAm6CPhFS8TQ0Hn0aslTlovw==" # <--ToBeReplaced--> - encryptionPassphrase: test_passphrase ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: csi-rbd-sc - annotations: - storageclass.beta.kubernetes.io/is-default-class: "true" - storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]' -provisioner: rbd.csi.ceph.com -parameters: - clusterID: "cluster1" - pool: "rbd" # <--ToBeReplaced--> - imageFeatures: layering - csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret - csi.storage.k8s.io/provisioner-secret-namespace: kube-system - csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret - csi.storage.k8s.io/controller-expand-secret-namespace: kube-system - csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret - csi.storage.k8s.io/node-stage-secret-namespace: kube-system - csi.storage.k8s.io/fstype: ext4 -reclaimPolicy: Delete -allowVolumeExpansion: true -mountOptions: - - discard -``` - -#### Add-on 配置 -将上面的 Chart 配置文件和 StorageClass 文件保存至本地(例如 `/root/ceph-csi-rbd.yaml` 和 `/root/ceph-csi-rbd-sc.yaml`)。Add-on 配置可以设置为: -```yaml -addons: -- name: ceph-csi-rbd - namespace: kube-system - sources: - chart: - name: ceph-csi-rbd - repo: https://ceph.github.io/csi-charts - values: /root/ceph-csi-rbd.yaml -- name: ceph-csi-rbd-sc - sources: - yaml: - path: - - /root/ceph-csi-rbd-sc.yaml -``` - -### Ceph RBD -KubeKey 不使用 **hyperkube** 镜像。因此,树内 Ceph RBD 可能无法在由 KubeKey 安装的 Kubernetes 上运行。不过,如果您的 Ceph 集群版本低于 14.0.0,即无法使用 Ceph CSI,则可以使用 [RBD Provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/rbd) 来替代 Ceph RBD。它的格式与[树内 Ceph RBD](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) 相同。下面是使用**带有 StorageClass 的 Helm Chart** 安装 RBD Provisioner 的 KubeKey Add-on 配置示例。 - -#### Chart 配置 -```yaml -ceph: - mon: "192.168.0.12:6789" # <--ToBeReplaced--> - adminKey: "QVFBS1JkdGRvV0lySUJBQW5LaVpSKzBRY2tjWmd6UzRJdndmQ2c9PQ==" # <--ToBeReplaced--> - userKey: "QVFBS1JkdGRvV0lySUJBQW5LaVpSKzBRY2tjWmd6UzRJdndmQ2c9PQ==" # <--ToBeReplaced--> -sc: - isDefault: false -``` -如果您想配置更多值,请参见 [RBD Provisioner Chart 配置](https://github.com/kubesphere/helm-charts/tree/master/src/test/rbd-provisioner#configuration)。 - -#### Add-on 配置 -将上面的 Chart 配置文件保存至本地(例如 `/root/rbd-provisioner.yaml`)。RBD Provisioner 的 Add-on 配置可设为: -```yaml -- name: 
rbd-provisioner - namespace: kube-system - sources: - chart: - name: rbd-provisioner - repo: https://charts.kubesphere.io/test - values: /root/rbd-provisioner.yaml -``` - -## Glusterfs -[Glusterfs](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs) 是 Kubernetes 的一个树内存储插件。因此,**只**需要安装 **StorageClass**。下面是 Glusterfs 的 KubeKey Add-on 配置示例。 - -### StorageClass(包括 Secret) -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: heketi-secret - namespace: kube-system -type: kubernetes.io/glusterfs -data: - key: "MTIzNDU2" # <--ToBeReplaced--> ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - annotations: - storageclass.beta.kubernetes.io/is-default-class: "true" - storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]' - name: glusterfs -parameters: - clusterid: "21240a91145aee4d801661689383dcd1" # <--ToBeReplaced--> - gidMax: "50000" - gidMin: "40000" - restauthenabled: "true" - resturl: "http://192.168.0.14:8080" # <--ToBeReplaced--> - restuser: admin - secretName: heketi-secret - secretNamespace: kube-system - volumetype: "replicate:2" # <--ToBeReplaced--> -provisioner: kubernetes.io/glusterfs -reclaimPolicy: Delete -volumeBindingMode: Immediate -allowVolumeExpansion: true -``` - -### Add-on 配置 -将上面的 StorageClass YAML 文件保存至本地(例如 **/root/glusterfs-sc.yaml**)。Add-on 配置可以设置为: -```yaml -- addon -- name: glusterfs - sources: - yaml: - path: - - /root/glusterfs-sc.yaml -``` - -## OpenEBS/LocalVolumes -[OpenEBS](https://github.com/openebs/openebs) 动态本地 PV Provisioner 可以在节点上使用唯一的 HostPath(目录)来创建 Kubernetes 本地持久化存储卷,以持久存储数据。若用户没有专门的存储系统,可以用它方便地上手 KubeSphere。如果 **KubeKey** Add-on 的配置中**没有默认 StorageClass**,则会安装 OpenEBS/LocalVolumes。 - -## 多个存储 -如果您想安装多个存储插件,请只将其中一个设置为默认,或者在 `ClusterConfiguration` 的 `spec.persistence.storageClass` 中设置您想让 KubeSphere 使用的 StorageClass 名称。否则,[ks-installer](https://github.com/kubesphere/ks-installer) 将不清楚使用哪一个 StorageClass。 \ No newline at end of file diff --git a/content/zh/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md b/content/zh/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md index b5694d7f5..a9a73d323 100644 --- a/content/zh/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md +++ b/content/zh/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md @@ -407,7 +407,7 @@ spec: registry: registryMirrors: [] insecureRegistries: [] - addons: [] # add your persistent storage and LoadBalancer plugin configuration here if you have, see https://kubesphere.io/docs/installing-on-linux/introduction/storage-configuration/ + addons: [] ··· # 其它配置可以在安装后之后根据需要进行修改 @@ -417,7 +417,7 @@ spec: 如本文开头的前提条件所说,对于生产环境,我们建议您准备持久性存储,可参考以下说明进行配置。若搭建开发和测试环境,您可以跳过这小节,直接使用默认集成的 OpenEBS 的 LocalPV 存储。 -继续编辑上述`config-sample.yaml`文件,找到`[addons]`字段,这里支持定义任何持久化存储的插件或客户端,如 NFS Client、Ceph、GlusterFS、CSI,根据您自己的持久化存储服务类型,并参考 [持久化存储服务](../../introduction/storage-configuration/) 中对应的示例 yaml 文件进行设置。 +继续编辑上述`config-sample.yaml`文件,找到`[addons]`字段,这里支持定义任何持久化存储的插件或客户端,如 NFS Client、Ceph、GlusterFS、CSI,根据您自己的持久化存储服务类型,并参考 [持久化存储服务](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/) 中对应的示例 yaml 文件进行设置。 #### 执行创建集群 diff --git a/content/zh/docs/installing-on-linux/persistent-storage-configurations/_index.md b/content/zh/docs/installing-on-linux/persistent-storage-configurations/_index.md new file mode 100644 index 000000000..887475801 --- /dev/null +++ 
b/content/zh/docs/installing-on-linux/persistent-storage-configurations/_index.md @@ -0,0 +1,7 @@ +--- +linkTitle: "持久化存储配置" +weight: 3300 + +_build: + render: false +--- diff --git a/content/zh/docs/installing-on-linux/persistent-storage-configurations/install-ceph-csi-rbd.md b/content/zh/docs/installing-on-linux/persistent-storage-configurations/install-ceph-csi-rbd.md new file mode 100644 index 000000000..b913e217f --- /dev/null +++ b/content/zh/docs/installing-on-linux/persistent-storage-configurations/install-ceph-csi-rbd.md @@ -0,0 +1,128 @@ +--- +title: "安装 Ceph" +keywords: 'KubeSphere, Kubernetes, Ceph, installation, configurations, storage' +description: 'How to create a KubeSphere create with Ceph providing storage services.' +linkTitle: "安装 Ceph" +weight: 3350 +--- + +With a Ceph server, you can choose [Ceph RBD](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) or [Ceph CSI](https://github.com/ceph/ceph-csi) as the underlying storage plugin. Ceph RBD is an in-tree storage plugin on Kubernetes, and Ceph CSI is a Container Storage Interface (CSI) driver for RBD, CephFS. + +### Which plugin to select for Ceph + +Ceph CSI RBD is the preferred choice if you work with **14.0.0 (Nautilus)+** Ceph cluster. Here are some reasons: + +- The in-tree plugin will be deprecated in the future. +- Ceph RBD only works on Kubernetes with **hyperkube** images, and **hyperkube** images were + [deprecated since Kubernetes 1.17](https://github.com/kubernetes/kubernetes/pull/85094). +- Ceph CSI has more features such as cloning, expanding and snapshots. + +### Ceph CSI RBD + +Ceph-CSI needs to be installed on v1.14.0+ Kubernetes, and work with 14.0.0 (Nautilus)+ Ceph Cluster. +For details about compatibility, see [Ceph CSI Support Matrix](https://github.com/ceph/ceph-csi#support-matrix). + +The following is an example of KubeKey add-on configurations for Ceph CSI RBD installed by **Helm Charts**. +As the StorageClass is not included in the chart, a StorageClass needs to be configured in the add-on config. + +#### Chart configurations + +```yaml +csiConfig: + - clusterID: "cluster1" + monitors: + - "192.168.0.8:6789" # <--TobeReplaced--> + - "192.168.0.9:6789" # <--TobeReplaced--> + - "192.168.0.10:6789" # <--TobeReplaced--> +``` + +If you want to configure more values, see [chart configuration for ceph-csi-rbd](https://github.com/ceph/ceph-csi/tree/master/charts/ceph-csi-rbd). 
+ +#### StorageClass (including secret) + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: csi-rbd-secret + namespace: kube-system +stringData: + userID: admin + userKey: "AQDoECFfYD3DGBAAm6CPhFS8TQ0Hn0aslTlovw==" # <--ToBeReplaced--> +--- +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: csi-rbd-sc + annotations: + storageclass.beta.kubernetes.io/is-default-class: "true" + storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]' +provisioner: rbd.csi.ceph.com +parameters: + clusterID: "cluster1" + pool: "rbd" # <--ToBeReplaced--> + imageFeatures: layering + csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret + csi.storage.k8s.io/provisioner-secret-namespace: kube-system + csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret + csi.storage.k8s.io/controller-expand-secret-namespace: kube-system + csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret + csi.storage.k8s.io/node-stage-secret-namespace: kube-system + csi.storage.k8s.io/fstype: ext4 +reclaimPolicy: Delete +allowVolumeExpansion: true +mountOptions: + - discard +``` + +#### Add-on configurations + +Save the above chart config and StorageClass locally (e.g. `/root/ceph-csi-rbd.yaml` and `/root/ceph-csi-rbd-sc.yaml`). The add-on configuration can be set like: + +```yaml +addons: +- name: ceph-csi-rbd + namespace: kube-system + sources: + chart: + name: ceph-csi-rbd + repo: https://ceph.github.io/csi-charts + values: /root/ceph-csi-rbd.yaml +- name: ceph-csi-rbd-sc + sources: + yaml: + path: + - /root/ceph-csi-rbd-sc.yaml +``` + +### Ceph RBD + +KubeKey will never use **hyperkube** images. Hence, in-tree Ceph RBD may not work on Kubernetes installed by KubeKey. However, if your Ceph cluster is lower than 14.0.0 which means Ceph CSI can't be used, [rbd provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/rbd) can be used as a substitute for Ceph RBD. Its format is the same with [in-tree Ceph RBD](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd). +The following is an example of KubeKey add-on configurations for rbd provisioner installed by **Helm Charts including a StorageClass**. + +#### Chart configurations + +```yaml +ceph: + mon: "192.168.0.12:6789" # <--ToBeReplaced--> + adminKey: "QVFBS1JkdGRvV0lySUJBQW5LaVpSKzBRY2tjWmd6UzRJdndmQ2c9PQ==" # <--ToBeReplaced--> + userKey: "QVFBS1JkdGRvV0lySUJBQW5LaVpSKzBRY2tjWmd6UzRJdndmQ2c9PQ==" # <--ToBeReplaced--> +sc: + isDefault: false +``` + +If you want to configure more values, see [chart configuration for rbd-provisioner](https://github.com/kubesphere/helm-charts/tree/master/src/test/rbd-provisioner#configuration). + +#### Add-on configurations + +Save the above chart config locally (e.g. `/root/rbd-provisioner.yaml`). 
The add-on config for rbd provisioner cloud be like: + +```yaml +- name: rbd-provisioner + namespace: kube-system + sources: + chart: + name: rbd-provisioner + repo: https://charts.kubesphere.io/test + values: /root/rbd-provisioner.yaml +``` diff --git a/content/zh/docs/installing-on-linux/persistent-storage-configurations/install-gluster-fs.md b/content/zh/docs/installing-on-linux/persistent-storage-configurations/install-gluster-fs.md new file mode 100644 index 000000000..345de9417 --- /dev/null +++ b/content/zh/docs/installing-on-linux/persistent-storage-configurations/install-gluster-fs.md @@ -0,0 +1,301 @@ +--- +title: "安装 GlusterFS" +keywords: 'KubeSphere, Kubernetes, GlusterFS, installation, configurations, storage' +description: 'How to create a KubeSphere create with GlusterFS providing storage services.' +linkTitle: "安装 GlusterFS" +weight: 3340 +--- + +[GlusterFS](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs) is an in-tree storage plugin in Kubernetes. Hence, you only need to install the storage class. + +This tutorial demonstrates how to use KubeKey to set up a KubeSphere cluster and configure GlusterFS to provide storage services. + +{{< notice note >}} + +Ubuntu 16.04 is used as an example in this tutorial. + +{{}} + +## Prerequisites + +You have set up your GlusterFS cluster and configured Heketi. For more information, see [Set up a GlusterFS Server](../../../api-reference/storage-system-installation/glusterfs-server/). + +## Step 1: Configure the Client Machine + +You need to install the GlusterFS client package on all your client machines. + +1. Install `software-properties-common`. + + ```bash + apt-get install software-properties-common + ``` + +2. Add the community GlusterFS PPA. + + ```bash + add-apt-repository ppa:gluster/glusterfs-7 + ``` + +3. Make sure you are using the latest package. + + ```bash + apt-get update + ``` + +4. Install the GlusterFS client. + + ```bash + apt-get install glusterfs-server -y + ``` + +5. Verify your GlusterFS version. + + ```bash + glusterfs -V + ``` + +## Step 2: Create a Configuration File for GlusterFS + +The separate configuration file contains all parameters of GlusterFS storage which will be used by KubeKey during installation. + +1. Go to one of the nodes (taskbox) where you want to download KubeKey later and run the following command to create a configuration file. + + ``` + vi glusterfs-sc.yaml + ``` + + An example configuration file (include a Heketi Secret): + + ```yaml + apiVersion: v1 + kind: Secret + metadata: + name: heketi-secret + namespace: kube-system + type: kubernetes.io/glusterfs + data: + key: "MTIzNDU2" # Replace it with your own key. Base64 coding. + --- + apiVersion: storage.k8s.io/v1 + kind: StorageClass + metadata: + annotations: + storageclass.beta.kubernetes.io/is-default-class: "true" + storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]' + name: glusterfs + parameters: + clusterid: "21240a91145aee4d801661689383dcd1" # Replace it with your own GlusterFS cluster ID. + gidMax: "50000" + gidMin: "40000" + restauthenabled: "true" + resturl: "http://192.168.0.2:8080" # The Gluster REST service/Heketi service url which provision gluster volumes on demand. Replace it with your own. + restuser: admin + secretName: heketi-secret + secretNamespace: kube-system + volumetype: "replicate:3" # Replace it with your own volume type. 
+ provisioner: kubernetes.io/glusterfs + reclaimPolicy: Delete + volumeBindingMode: Immediate + allowVolumeExpansion: true + ``` + + {{< notice note >}} + + - Use the field `storageclass.beta.kubernetes.io/is-default-class` to set `glusterfs` as your default storage class. If it is `false`, KubeKey will install OpenEBS as the default storage class. + - For more information about parameters in the storage class manifest, see [the Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs). + + {{}} + +2. Save the file. + +## Step 3: Download KubeKey + +Follow the steps below to download [KubeKey](../kubekey) on the taskbox. + +{{< tabs >}} + +{{< tab "Good network connections to GitHub/Googleapis" >}} + +Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly. + +```bash +curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh - +``` + +{{}} + +{{< tab "Poor network connections to GitHub/Googleapis" >}} + +Run the following command first to make sure you download KubeKey from the correct zone. + +```bash +export KKZONE=cn +``` + +Run the following command to download KubeKey: + +```bash +curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh - +``` + +{{< notice note >}} + +After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below. + +{{}} + +{{}} + +{{}} + +{{< notice note >}} + +The commands above download the latest release (v1.0.1) of KubeKey. You can change the version number in the command to download a specific version. + +{{}} + +Make `kk` executable: + +```bash +chmod +x kk +``` + +## Step 4: Create a Cluster + +1. Specify a Kubernetes version and a KubeSphere version that you want to install. For example: + + ```bash + ./kk create config --with-kubernetes v1.17.9 --with-kubesphere v3.0.0 + ``` + + {{< notice note >}} + + - Supported Kubernetes versions: v1.15.12, v1.16.13, v1.17.9 (default), v1.18.6. + + - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later. + - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed. + + {{}} + +2. A default file `config-sample.yaml` will be created if you do not customize the name. Edit the file. + + ```bash + vi config-sample.yaml + ``` + + ```yaml + ... 
+ metadata: + name: sample + spec: + hosts: + - {name: client1, address: 192.168.0.5, internalAddress: 192.168.0.5, user: ubuntu, password: Testing123} + - {name: client2, address: 192.168.0.6, internalAddress: 192.168.0.6, user: ubuntu, password: Testing123} + - {name: client3, address: 192.168.0.7, internalAddress: 192.168.0.7, user: ubuntu, password: Testing123} + roleGroups: + etcd: + - client1 + master: + - client1 + worker: + - client2 + - client3 + controlPlaneEndpoint: + domain: lb.kubesphere.local + address: "" + port: "6443" + kubernetes: + version: v1.17.9 + imageRepo: kubesphere + clusterName: cluster.local + network: + plugin: calico + kubePodsCIDR: 10.233.64.0/18 + kubeServiceCIDR: 10.233.0.0/18 + registry: + registryMirrors: [] + insecureRegistries: [] + addons: + - name: glusterfs + namespace: kube-system + sources: + yaml: + path: + - /root/glusterfs-sc.yaml + ... + ``` + +3. Pay special attention to the field of `addons`, under which you must provide the information of the storage class to be created as well as the Heketi Secret. For more information about each parameter in this file, see [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/#2-edit-the-configuration-file). + +4. Save the file and execute the following command to install Kubernetes and KubeSphere: + + ```bash + ./kk create cluster -f config-sample.yaml + ``` + +5. When the installation finishes, you can inspect installation logs with the following command: + + ```bash + kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f + ``` + + Expected output: + + ```bash + ##################################################### + ### Welcome to KubeSphere! ### + ##################################################### + + Console: http://192.168.0.4:30880 + Account: admin + Password: P@88w0rd + + NOTES: + 1. After you log into the console, please check the + monitoring status of service components in + "Cluster Management". If any service is not + ready, please wait patiently until all components + are up and running. + 2. Please change the default password after login. + + ##################################################### + https://kubesphere.io 20xx-xx-xx xx:xx:xx + ##################################################### + ``` + +## Step 5: Verify Installation + +You can verify that GlusterFS has been successfully installed either from the command line or from the KubeSphere web console. + +### Command line + +Run the following command to check your storage class. + +```bash +kubectl get sc +``` + +Expected output: + +```bash +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +glusterfs (default) kubernetes.io/glusterfs Delete Immediate true 104m +``` + +### KubeSphere console + +1. Log in to the web console as `admin` with the default account and password at `:30880`. Click **Platform** in the top left corner and select **Clusters Management**. + +3. Go to **Volumes** under **Storage**, and you can see PVCs in use. + + ![volumes-in-use](/images/docs/installing-on-linux/persistent-storage-configurations/glusterfs-client/volumes-in-use.png) + + {{< notice note >}} + + For more information about how to create volumes on the KubeSphere console, see [Volumes](../../../project-user-guide/storage/volumes/). + + {{}} + +3. On the **Storage Classes** page, you can see the storage class available in your cluster. 
+ + ![storage-class-available](/images/docs/installing-on-linux/persistent-storage-configurations/glusterfs-client/storage-class-available.png) \ No newline at end of file diff --git a/content/zh/docs/installing-on-linux/persistent-storage-configurations/install-nfs-client.md b/content/zh/docs/installing-on-linux/persistent-storage-configurations/install-nfs-client.md new file mode 100644 index 000000000..45734d97a --- /dev/null +++ b/content/zh/docs/installing-on-linux/persistent-storage-configurations/install-nfs-client.md @@ -0,0 +1,273 @@ +--- +title: "安装 NFS Client" +keywords: 'KubeSphere, Kubernetes, storage, installation, configurations, NFS' +description: 'Use KubeKey to set up a KubeSphere cluster and configure NFS storage.' +linkTitle: "安装 NFS Client" +weight: 3330 +--- + +This tutorial demonstrates how to set up a KubeSphere cluster and configure NFS storage. + +{{< notice note >}} + +Ubuntu 16.04 is used as an example in this tutorial. + +{{}} + +## Prerequisites + +You must have an NFS server ready providing external storage services. Make sure you have created and exported a directory on the NFS server which your permitted client machines can access. For more information, see [Set up an NFS Server](../../../api-reference/storage-system-installation/nfs-server/). + +## Step 1: Configure the Client Machine + +Install `nfs-common` on all of the clients. It provides necessary NFS functions while you do not need to install any server components. + +1. Execute the following command to make sure you are using the latest package. + + ```bash + sudo apt-get update + ``` + +2. Install `nfs-common` on all the clients. + + ```bash + sudo apt-get install nfs-common + ``` + +3. Go to one of the client machines (taskbox) where you want to download KubeKey later. Create a configuration file that contains all the necessary parameters of your NFS server which will be referenced by KubeKey during installation. + + ```bash + vi nfs-client.yaml + ``` + + An example configuration file: + + ```yaml + nfs: + server: "192.168.0.2" # This is the server IP address. Replace it with your own. + path: "/mnt/demo" # Replace the exported directory with your own. + storageClass: + defaultClass: false + ``` + + {{< notice note >}} + + - If you want to configure more values, see [chart configurations for NFS-client](https://github.com/kubesphere/helm-charts/tree/master/src/main/nfs-client-provisioner#configuration). + - The `storageClass.defaultClass` field controls whether you want to set the storage class of NFS-client Provisioner as the default one. If you input `false` for it, KubeKey will install [OpenEBS](https://github.com/openebs/openebs) to provide local volumes, while they are not provisioned dynamically as you create workloads on your cluster. After you install KubeSphere, you can change the default storage class on the console directly. + + {{}} + +4. Save the file. + +## Step 2: Download KubeKey + +Follow the steps below to download [KubeKey](../kubekey) on the taskbox. + +{{< tabs >}} + +{{< tab "Good network connections to GitHub/Googleapis" >}} + +Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly. + +```bash +curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh - +``` + +{{}} + +{{< tab "Poor network connections to GitHub/Googleapis" >}} + +Run the following command first to make sure you download KubeKey from the correct zone. 
+ +```bash +export KKZONE=cn +``` + +Run the following command to download KubeKey: + +```bash +curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh - +``` + +{{< notice note >}} + +After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below. + +{{}} + +{{}} + +{{}} + +{{< notice note >}} + +The commands above download the latest release (v1.0.1) of KubeKey. You can change the version number in the command to download a specific version. + +{{}} + +Make `kk` executable: + +```bash +chmod +x kk +``` + +## Step 3: Create a Cluster + +1. Specify a Kubernetes version and a KubeSphere version that you want to install. For example: + + ```bash + ./kk create config --with-kubernetes v1.17.9 --with-kubesphere v3.0.0 + ``` + + {{< notice note >}} + + - Supported Kubernetes versions: v1.15.12, v1.16.13, v1.17.9 (default), v1.18.6. + + - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later. + - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed. + + {{}} + +4. A default file `config-sample.yaml` will be created if you do not customize the name. Edit the file. + + ```bash + vi config-sample.yaml + ``` + + ```yaml + ... + metadata: + name: sample + spec: + hosts: + - {name: client1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123} + - {name: client2, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123} + - {name: client3, address: 192.168.0.5, internalAddress: 192.168.0.5, user: ubuntu, password: Testing123} + roleGroups: + etcd: + - client1 + master: + - client1 + worker: + - client2 + - client3 + controlPlaneEndpoint: + domain: lb.kubesphere.local + address: "" + port: "6443" + kubernetes: + version: v1.17.9 + imageRepo: kubesphere + clusterName: cluster.local + network: + plugin: calico + kubePodsCIDR: 10.233.64.0/18 + kubeServiceCIDR: 10.233.0.0/18 + registry: + registryMirrors: [] + insecureRegistries: [] + addons: + - name: nfs-client + namespace: kube-system + sources: + chart: + name: nfs-client-provisioner + repo: https://charts.kubesphere.io/main + values: /home/ubuntu/nfs-client.yaml # Use the path of your own NFS-client configuration file. + ... + ``` + +5. Pay special attention to the field of `addons`, under which you must provide the information of NFS-client. For more information about each parameter in this file, see [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/#2-edit-the-configuration-file). + +6. Save the file and execute the following command to install Kubernetes and KubeSphere: + + ```bash + ./kk create cluster -f config-sample.yaml + ``` + +7. When the installation finishes, you can inspect installation logs with the following command: + + ```bash + kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f + ``` + + Expected output: + + ```bash + ##################################################### + ### Welcome to KubeSphere! ### + ##################################################### + + Console: http://192.168.0.3:30880 + Account: admin + Password: P@88w0rd + + NOTES: + 1. 
After you log into the console, please check the + monitoring status of service components in + "Cluster Management". If any service is not + ready, please wait patiently until all components + are up and running. + 2. Please change the default password after login. + + ##################################################### + https://kubesphere.io 20xx-xx-xx xx:xx:xx + ##################################################### + ``` + +## Step 4: Verify Installation + +You can verify that NFS-client has been successfully installed either from the command line or from the KubeSphere web console. + +### Command line + +1. Run the following command to check your storage class. + + ```bash + kubectl get sc + ``` + + Expected output: + + ```bash + NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE + local (default) openebs.io/local Delete WaitForFirstConsumer false 16m + nfs-client cluster.local/nfs-client-nfs-client-provisioner Delete Immediate true 16m + ``` + + {{< notice note >}} + + If you set `nfs-client` as the default storage class, OpenEBS will not be installed by KubeKey. + + {{}} + +2. Run the following command to check the statuses of Pods. + + ```bash + kubectl get pod -n kube-system + ``` + + Note that `nfs-client` is installed in the namespace `kube-system`. Expected output (exclude irrelevant Pods): + + ```bash + NAME READY STATUS RESTARTS AGE + nfs-client-nfs-client-provisioner-6fc95f4f79-92lsh 1/1 Running 0 16m + ``` + +### KubeSphere console + +1. Log in to the web console as `admin` with the default account and password at `:30880`. Click **Platform** in the top left corner and select **Clusters Management**. + +2. Go to **Pods** in **Application Workloads** and select `kube-system` from the project drop-down list. You can see that the Pod of `nfs-client` is up and running. + + ![nfs-pod](/images/docs/installing-on-linux/persistent-storage-configurations/nfs-client/nfs-pod.png) + +3. Go to **Storage Classes** under **Storage**, and you can see available storage classes in your cluster. + + ![nfs-storage-class](/images/docs/installing-on-linux/persistent-storage-configurations/nfs-client/nfs-storage-class.png) + + {{< notice note >}} + + For more information about how to create volumes on the KubeSphere console, see [Volumes](../../../project-user-guide/storage/volumes/). + + {{}} \ No newline at end of file diff --git a/content/zh/docs/installing-on-linux/persistent-storage-configurations/install-qingcloud-csi.md b/content/zh/docs/installing-on-linux/persistent-storage-configurations/install-qingcloud-csi.md new file mode 100644 index 000000000..112cdbc9b --- /dev/null +++ b/content/zh/docs/installing-on-linux/persistent-storage-configurations/install-qingcloud-csi.md @@ -0,0 +1,278 @@ +--- +title: "安装 QingCloud CSI" +keywords: 'KubeSphere, Kubernetes, QingCloud CSI, installation, configurations, storage' +description: 'How to create a KubeSphere create with QingCloud CSI providing storage services.' +linkTitle: "安装 QingCloud CSI" +weight: 3320 +--- + +If you plan to install KubeSphere on [QingCloud](https://www.qingcloud.com/), [QingCloud CSI](https://github.com/yunify/qingcloud-csi) can be chosen as the underlying storage plugin. + +This tutorial demonstrates how to use KubeKey to set up a KubeSphere cluster and configure QingCloud CSI to provide storage services. + +## Prerequisites + +Your cluster nodes are created on [QingCloud Platform](https://intl.qingcloud.com/). 
+ +## Step 1: Create Access Keys on QingCloud Platform + +To make sure the platform can create cloud disks for your cluster, you need to provide the access key (`qy_access_key_id` and `qy_secret_access_key`) in a separate configuration file of QingCloud CSI. + +1. Log in to the web console of [QingCloud](https://console.qingcloud.com/login) and select **Access Key** from the drop-down list in the top right corner. + + ![access-key](/images/docs/installing-on-linux/introduction/persistent-storage-configuration/access-key.jpg) + +2. Click **Create** to generate keys. Download the key after it is created, which is stored in a csv file. + +## Step 2: Create a Configuration File for QingCloud CSI + +The separate configuration file contains all parameters of QingCloud CSI which will be used by KubeKey during installation. + +1. Go to one of the nodes (taskbox) where you want to download KubeKey later and run the following command to create a configuration file. + + ``` + vi csi-qingcloud.yaml + ``` + + An example configuration file: + + ```yaml + config: + qy_access_key_id: "MBKTPXWCIRIEDQYQKXYL" # Replace it with your own key id. + qy_secret_access_key: "cqEnHYZhdVCVif9qCUge3LNUXG1Cb9VzKY2RnBdX" # Replace it with your own access key. + zone: "pek3a" # Lowercase letters only. + sc: + isDefaultClass: true # Set it as the default storage class. + ``` + +2. The field `zone` specifies where your cloud disks are created. On QingCloud Platform, you must select a zone before you create them. + + ![storage-zone](/images/docs/installing-on-linux/introduction/persistent-storage-configuration/storage-zone.jpg) + + Make sure the value you specify for `zone` matches the region ID below: + + | Zone | Region ID | + | ------------------------------------------- | ----------------------- | + | Shanghai1-A/Shanghai1-B | sh1a/sh1b | + | Beijing3-A/Beijing3-B/Beijing3-C/Beijing3-D | pek3a/pek3b/pek3c/pek3d | + | Guangdong2-A/Guangdong2-B | gd2a/gd2b | + | Asia-Pacific 2-A | ap2a | + + If you want to configure more values, see [chart configuration for QingCloud CSI](https://github.com/kubesphere/helm-charts/tree/master/src/test/csi-qingcloud#configuration). + +3. Save the file. + +## Step 3: Download KubeKey + +Follow the steps below to download [KubeKey](../kubekey) on the taskbox. + +{{< tabs >}} + +{{< tab "Good network connections to GitHub/Googleapis" >}} + +Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly. + +```bash +curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh - +``` + +{{}} + +{{< tab "Poor network connections to GitHub/Googleapis" >}} + +Run the following command first to make sure you download KubeKey from the correct zone. + +```bash +export KKZONE=cn +``` + +Run the following command to download KubeKey: + +```bash +curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh - +``` + +{{< notice note >}} + +After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below. + +{{}} + +{{}} + +{{}} + +{{< notice note >}} + +The commands above download the latest release (v1.0.1) of KubeKey. You can change the version number in the command to download a specific version. + +{{}} + +Make `kk` executable: + +```bash +chmod +x kk +``` + +## Step 4: Create a Cluster + +1. Specify a Kubernetes version and a KubeSphere version that you want to install. 
For example: +
+ ```bash + ./kk create config --with-kubernetes v1.17.9 --with-kubesphere v3.0.0 + ``` +
+ {{< notice note >}} +
+ - Supported Kubernetes versions: v1.15.12, v1.16.13, v1.17.9 (default), v1.18.6. +
+ - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later. + - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed. +
+ {{}} +
+2. A default file `config-sample.yaml` will be created if you do not customize the name. Edit the file. +
+ ```bash + vi config-sample.yaml + ``` +
+ ```yaml + ... + metadata: + name: sample + spec: + hosts: + - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, user: root, password: Testing123} + - {name: node1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: root, password: Testing123} + - {name: node2, address: 192.168.0.4, internalAddress: 192.168.0.4, user: root, password: Testing123} + roleGroups: + etcd: + - master + master: + - master + worker: + - node1 + - node2 + controlPlaneEndpoint: + domain: lb.kubesphere.local + address: "" + port: "6443" + kubernetes: + version: v1.17.9 + imageRepo: kubesphere + clusterName: cluster.local + network: + plugin: calico + kubePodsCIDR: 10.233.64.0/18 + kubeServiceCIDR: 10.233.0.0/18 + registry: + registryMirrors: [] + insecureRegistries: [] + addons: + - name: csi-qingcloud + namespace: kube-system + sources: + chart: + name: csi-qingcloud + repo: https://charts.kubesphere.io/test + values: /root/csi-qingcloud.yaml + ... + ``` +
+3. Pay special attention to the `addons` field, under which you must provide the information about QingCloud CSI. For more information about each parameter in this file, see [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/#2-edit-the-configuration-file). +
+ {{< notice note >}} +
+ KubeKey will install QingCloud CSI using Helm charts, together with its StorageClass. +
+ {{}} +
+4. Save the file and execute the following command to install Kubernetes and KubeSphere: +
+ ```bash + ./kk create cluster -f config-sample.yaml + ``` +
+5. When the installation finishes, you can inspect installation logs with the following command: +
+ ```bash + kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f + ``` +
+ Expected output: +
+ ```bash + ##################################################### + ### Welcome to KubeSphere! ### + ##################################################### +
+ Console: http://192.168.0.3:30880 + Account: admin + Password: P@88w0rd +
+ NOTES: + 1. After you log into the console, please check the + monitoring status of service components in + "Cluster Management". If any service is not + ready, please wait patiently until all components + are up and running. + 2. Please change the default password after login. +
+ ##################################################### + https://kubesphere.io 20xx-xx-xx xx:xx:xx + ##################################################### + ``` +
+## Step 5: Verify Installation +
+You can verify that QingCloud CSI has been successfully installed either from the command line or from the KubeSphere web console. +
+### Command line +
+1. Run the following command to check your storage class. 
+ + ```bash + kubectl get sc + ``` +
+ Expected output: +
+ ```bash + NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE + csi-qingcloud (default) disk.csi.qingcloud.com Delete WaitForFirstConsumer true 28m + ``` +
+2. Run the following command to check the statuses of Pods. +
+ ```bash + kubectl get pod -n kube-system + ``` +
+ Note that `csi-qingcloud` is installed in the namespace `kube-system`. Expected output (other irrelevant Pods excluded): +
+ ```bash + NAME READY STATUS RESTARTS AGE + csi-qingcloud-controller-f95dcddfb-2gfck 5/5 Running 0 28m + csi-qingcloud-node-7dzz8 2/2 Running 0 28m + csi-qingcloud-node-k4hsj 2/2 Running 0 28m + csi-qingcloud-node-sptdb 2/2 Running 0 28m + ``` +
+### KubeSphere console +
+1. Log in to the web console as `admin` with the default account and password at `<NodeIP>:30880`. Click **Platform** in the top left corner and select **Cluster Management**. +
+2. Go to **Pods** in **Application Workloads** and select `kube-system` from the project drop-down list. You can see that the Pods of `csi-qingcloud` are up and running. +
+ ![qingcloud-csi-pod](/images/docs/installing-on-linux/persistent-storage-configurations/qingcloud-csi/qingcloud-csi-pod.png) +
+3. Go to **Storage Classes** under **Storage**, and you can see available storage classes in your cluster. +
+ ![qingcloud-csi-storage-class](/images/docs/installing-on-linux/persistent-storage-configurations/qingcloud-csi/qingcloud-csi-storage-class.png) +
+ {{< notice note >}} +
+ For more information about how to create volumes on the KubeSphere console, see [Volumes](../../../project-user-guide/storage/volumes/). +
+ {{}} \ No newline at end of file diff --git a/content/zh/docs/installing-on-linux/persistent-storage-configurations/understand-persistent-storage.md b/content/zh/docs/installing-on-linux/persistent-storage-configurations/understand-persistent-storage.md new file mode 100644 index 000000000..4861c056f --- /dev/null +++ b/content/zh/docs/installing-on-linux/persistent-storage-configurations/understand-persistent-storage.md @@ -0,0 +1,45 @@ +--- +title: "理解持久化存储" +keywords: 'KubeSphere, Kubernetes, 存储, 安装, 配置' +description: '理解持久化存储' +linkTitle: "理解持久化存储" +weight: 3310 +--- +
+Persistent volumes are a **must** for installing KubeSphere. When you use [KubeKey](../../../installing-on-linux/introduction/kubekey/) to set up a KubeSphere cluster, you can install different storage systems as [add-ons](https://github.com/kubesphere/kubekey/blob/v1.0.0/docs/addons.md). The general steps of installing KubeSphere with KubeKey on Linux are: +
+1. Install Kubernetes. +2. Install any provided add-ons. +3. Install KubeSphere with [ks-installer](https://github.com/kubesphere/ks-installer). +
+In the second step, an available StorageClass **must** be installed. It includes: +
+- The StorageClass itself +- The storage plugin for the StorageClass, if necessary +
+{{< notice note >}} +
+Some storage systems require you to prepare a storage server in advance to provide external storage services. +
+{{}} +
+## How Does KubeKey Install Different Storage Systems +
+KubeKey creates [a configuration file](../../../installing-on-linux/introduction/multioverview/#2-edit-the-configuration-file) (`config-sample.yaml` by default) for your cluster, which contains all the necessary parameters you can define for different resources, including various add-ons. Different storage systems, such as NFS storage and GlusterFS, can also be installed as add-ons using Helm charts or YAML. 
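For example, a Helm-chart-based storage add-on is declared under the `addons` field roughly as follows. This is a minimal sketch that mirrors the `csi-qingcloud` entry used in the QingCloud CSI tutorial of this chapter; the chart repository and the path of the values file depend on your own environment:

```yaml
addons:
- name: csi-qingcloud                            # Add-on name.
  namespace: kube-system                         # Namespace in which the storage plugin is installed.
  sources:
    chart:
      name: csi-qingcloud                        # Helm chart of the storage plugin.
      repo: https://charts.kubesphere.io/test    # Chart repository.
      values: /root/csi-qingcloud.yaml           # Separate configuration file holding the plugin parameters.
```

KubeKey applies such add-on entries right after Kubernetes is installed, so the StorageClass is already available when ks-installer starts to deploy KubeSphere.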
To let KubeKey install these storage systems in the desired way, you must provide it with the necessary configurations. +
+There are generally two ways to do so. +
+1. Enter the necessary parameters directly under the `addons` field in `config-sample.yaml`. +2. Create a separate configuration file for your add-on that lists all the necessary parameters, and provide the path of that file in `config-sample.yaml` so that KubeKey can reference it during installation. +
+For more information, see [add-ons](https://github.com/kubesphere/kubekey/blob/v1.0.0/docs/addons.md). +
+## Default Storage Class +
+KubeKey supports the installation of different storage plugins and storage classes. No matter which storage system you install, you can specify in its configuration file whether it is the default storage class. If KubeKey detects that no default storage class is specified, it will install [OpenEBS](https://github.com/openebs/openebs) by default. +
+The OpenEBS Dynamic Local PV provisioner creates Kubernetes Local Persistent Volumes that use a unique HostPath (directory) on the node to persist data, which makes it easy for users to get started with KubeSphere when they have no specific storage system. +
+## Multi-storage Solutions +
+If you intend to install more than one storage plugin, only one of them can be set as the default storage class. Otherwise, KubeKey will not be able to determine which storage class to use. \ No newline at end of file diff --git a/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-ali-ecs.md b/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-ali-ecs.md index 70578a6f7..c0dc72aa5 100644 --- a/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-ali-ecs.md +++ b/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-ali-ecs.md @@ -206,7 +206,7 @@ metadata: registry: registryMirrors: [] insecureRegistries: [] - addons: [] # add your persistent storage and LoadBalancer plugin configuration here if you have, see https://kubesphere.io/docs/installing-on-linux/introduction/storage-configuration/ + addons: [] ··· 
# 其它配置可以在安装后之后根据需要进行修改 @@ -218,7 +218,7 @@ metadata: {{< notice note >}} 
- 继续编辑上述 `config-sample.yaml` 文件,找到 `[addons]` 字段,这里支持定义任何持久化存储的插件或客户端,如 CSI ( -alibaba-cloud-csi-driver)、NFS Client、Ceph、GlusterFS,您可以根据您自己的持久化存储服务类型,并参考 [持久化存储服务](https://kubesphere.com.cn/docs/installing-on-linux/introduction/storage-configuration/) 中对应的示例 yaml 文件进行设置。 +alibaba-cloud-csi-driver)、NFS Client、Ceph、GlusterFS,您可以根据您自己的持久化存储服务类型,并参考 [持久化存储服务](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/) 中对应的示例 yaml 文件进行设置。 - 只需要将 CSI 存储插件安装时需要 apply 的所有 yaml 文件在 `[addons]` 中列出即可,注意预先参考 [Alibaba Cloud Kubernetes CSI Plugin](https://github.com/kubernetes-sigs/alibaba-cloud-csi-driver#alibaba-cloud-kubernetes-csi-plugin),选择您需要的存储类型的 CSI 插件,如 Cloud Disk CSI Plugin、NAS CSI Plugin、NAS CSI Plugin、OSS CSI Plugin,然后在 CSI 的相关 yaml 中配置对接阿里云的相关信息。 {{}} diff --git a/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-azure-vms.md b/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-azure-vms.md index b60d128db..4323fe01f 100644 --- a/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-azure-vms.md +++ b/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-azure-vms.md @@ -209,7 +209,7 @@ The public load balancer is used directly instead of 
an internal load balancer d ### Persistent Storage Plugin Configuration -See [Storage Configuration](../storage-configuration) for details. +See [Storage Configuration](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/) for details. ### Configure the Network Plugin diff --git a/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-huaweicloud-ecs.md b/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-huaweicloud-ecs.md index 8aa9ee8d1..c804042f9 100644 --- a/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-huaweicloud-ecs.md +++ b/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-huaweicloud-ecs.md @@ -273,7 +273,7 @@ spec: 如本文开头的前提条件所说,对于生产环境,我们建议您准备持久性存储,可参考以下说明进行配置。若搭建开发和测试,您可以直接使用默认集成的 OpenEBS 准备 LocalPV,则可以跳过这小节。 {{< notice note >}} -如果您有已有存储服务端,例如华为云可使用 [弹性文件存储(SFS)](https://support.huaweicloud.com/productdesc-sfs/zh-cn_topic_0034428718.html) 来作为存储服务。继续编辑上述 `config-sample.yaml` 文件,找到 `[addons]` 字段,这里支持定义任何持久化存储的插件或客户端,如 CSI、NFS Client、Ceph、GlusterFS,您可以根据您自己的持久化存储服务类型,并参考 [持久化存储服务](https://kubesphere.com.cn/docs/installing-on-linux/introduction/storage-configuration/) 中对应的示例 yaml 文件进行设置。 +如果您有已有存储服务端,例如华为云可使用 [弹性文件存储(SFS)](https://support.huaweicloud.com/productdesc-sfs/zh-cn_topic_0034428718.html) 来作为存储服务。继续编辑上述 `config-sample.yaml` 文件,找到 `[addons]` 字段,这里支持定义任何持久化存储的插件或客户端,如 CSI、NFS Client、Ceph、GlusterFS,您可以根据您自己的持久化存储服务类型,并参考 [持久化存储服务](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/) 中对应的示例 yaml 文件进行设置。 {{}} ### 执行命令创建集群 diff --git a/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md b/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md index 8184b0a9c..ffbcfdfd4 100644 --- a/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md +++ b/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md @@ -269,7 +269,7 @@ spec: - QingStor CSI - 未来版本将支持更多插件 -请确保在安装前配置了存储插件。在安装过程中,KubeKey 将为相关的工作负载创建 StorageClass 和持久卷。有关更多信息,请参见[持久化存储配置](../../../installing-on-linux/introduction/storage-configuration/)。 +请确保在安装前配置了存储插件。在安装过程中,KubeKey 将为相关的工作负载创建 StorageClass 和持久卷。有关更多信息,请参见[持久化存储配置](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/)。 ### 步骤 7:启用可插拔组件(可选) @@ -337,6 +337,6 @@ https://kubesphere.io 2020-08-13 10:50:24 [Kubernetes 集群配置](../../../installing-on-linux/introduction/vars/) -[持久化存储配置](../../../installing-on-linux/introduction/storage-configuration/) +[持久化存储配置](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/) [启用可插拔组件](../../../pluggable-components/) \ No newline at end of file diff --git a/content/zh/docs/project-user-guide/storage/volumes.md b/content/zh/docs/project-user-guide/storage/volumes.md index ef1a6e22b..f874ad428 100644 --- a/content/zh/docs/project-user-guide/storage/volumes.md +++ b/content/zh/docs/project-user-guide/storage/volumes.md @@ -42,7 +42,7 @@ weight: 10310 ![volume-creation-method](/images/docs/zh-cn/project-user-guide/volume-management/volumes/volume-creation-method.jpg) - - **通过存储类型**:您可以在 KubeSphere [安装前](../../../installing-on-linux/introduction/storage-configuration/)或[安装后](../../../cluster-administration/persistent-volume-and-storage-class/)配置存储类型。 + - **通过存储类型**:您可以在 KubeSphere 
[安装前](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/)或[安装后](../../../cluster-administration/persistent-volume-and-storage-class/)配置存储类型。 - **通过存储卷快照创建**:如需通过快照创建存储卷,您必须先创建存储卷快照。 5. 选择**通过存储类型**。有关通过存储卷快照创建存储卷的更多信息,请参阅[存储卷快照](../volume-snapshots/)。 diff --git a/content/zh/docs/quick-start/all-in-one-on-linux.md b/content/zh/docs/quick-start/all-in-one-on-linux.md index 93ea7ed87..7bbcbda5b 100644 --- a/content/zh/docs/quick-start/all-in-one-on-linux.md +++ b/content/zh/docs/quick-start/all-in-one-on-linux.md @@ -146,7 +146,7 @@ chmod +x kk - 支持的 Kubernetes 版本:*v1.15.12*, *v1.16.13*, *v1.17.9* (默认), *v1.18.6*。 - 一般来说,对于 All-in-One 安装,您无需更改任何配置。 - 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,KubeKey 将只安装 Kubernetes。如果您添加标志 `--with-kubesphere` 时不指定 KubeSphere 版本,则会安装最新版本的 KubeSphere。 -- KubeKey 会默认安装 [OpenEBS](https://openebs.io/) 为开发和测试环境提供 LocalPV 以方便新用户。对于其他存储类型,请参见[持久化存储配置](../../installing-on-linux/introduction/storage-configuration/)。 +- KubeKey 会默认安装 [OpenEBS](https://openebs.io/) 为开发和测试环境提供 LocalPV 以方便新用户。对于其他存储类型,请参见[持久化存储配置](../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/)。 {{}}