From 636960592760481bcd8ee8045614bcd207445f34 Mon Sep 17 00:00:00 2001 From: FeynmanZhou Date: Wed, 2 Sep 2020 13:46:29 +0800 Subject: [PATCH] update download link for v3.0.0, sync /en to /zh Signed-off-by: FeynmanZhou --- .../install-ks-on-huawei-cce.md | 4 +- .../install-kubesphere-on-aks.md | 4 +- .../install-kubesphere-on-do.md | 4 +- .../install-kubesphere-on-eks.md | 23 +- .../install-kubesphere-on-gke.md | 4 +- .../install-kubesphere-on-oke.md | 4 +- .../introduction/overview.md | 8 +- .../install-ks-on-linux-airgapped.md | 216 +---------- .../introduction/multioverview.md | 77 ++-- .../install-kubesphere-on-vmware-vsphere.md | 84 +++-- .../public-cloud/install-ks-on-azure-vms.md | 37 +- .../kubesphere-on-qingcloud-instance.md | 45 ++- .../enable-multicluster/agent-connection.md | 2 +- .../enable-multicluster/direct-connection.md | 2 +- .../en/docs/pluggable-components/app-store.md | 6 +- .../pluggable-components/auditing-logs.md | 6 +- .../en/docs/pluggable-components/devops.md | 6 +- .../en/docs/pluggable-components/logging.md | 6 +- .../docs/pluggable-components/service-mesh.md | 6 +- .../docs/quick-start/all-in-one-on-linux.md | 88 ++--- .../enable-pluggable-components.md | 6 +- .../quick-start/minimal-kubesphere-on-k8s.md | 14 +- content/zh/docs/_index.md | 24 +- .../docs/installing-on-kubernetes/_index.md | 4 +- .../hosted-kubernetes/_index.md | 2 +- .../hosted-kubernetes/all-in-one.md | 116 ------ .../complete-installation.md | 76 ---- .../install-ks-on-huawei-cce.md | 78 ++-- .../install-ks-on-tencent-tke.md | 103 ------ .../install-kubesphere-on-aks.md | 131 +++++++ .../install-kubesphere-on-do.md | 126 +++++++ .../install-kubesphere-on-eks.md | 172 +++++++++ .../install-kubesphere-on-gke.md | 132 +++++++ .../install-kubesphere-on-huaweicloud-cce.md | 9 + .../install-kubesphere-on-oke.md | 152 ++++++++ .../hosted-kubernetes/master-ha.md | 152 -------- .../hosted-kubernetes/multi-node.md | 176 --------- .../storage-configuration.md | 157 -------- .../introduction/_index.md | 2 +- .../introduction/intro.md | 93 ----- .../introduction/overview.md | 76 ++++ .../introduction/port-firewall.md | 33 -- .../introduction/prerequisites.md | 54 +++ .../introduction/vars.md | 107 ------ .../on-prem-kubernetes/_index.md | 6 +- .../install-ks-on-linux-airgapped.md | 216 +---------- .../uninstalling/_index.md | 7 + .../uninstalling-kubesphere-from-k8s.md} | 4 +- content/zh/docs/installing-on-linux/_index.md | 2 +- .../cluster-operation/_index.md | 7 + .../cluster-operation/add-new-nodes.md | 66 ++++ .../cluster-operation/remove-nodes.md | 28 ++ .../introduction/_index.md | 4 +- .../installing-on-linux/introduction/intro.md | 103 +++--- .../introduction/multioverview.md | 299 ++++++++++++++++ .../introduction/port-firewall.md | 41 ++- .../introduction/storage-configuration.md | 127 +++++++ .../installing-on-linux/introduction/vars.md | 119 ++----- .../installing-on-linux/on-premise/_index.md | 7 - .../install-ks-on-linux-airgapped.md | 224 ------------ .../installing-on-linux/on-premises/_index.md | 9 + .../install-ks-on-linux-airgapped.md | 0 .../install-kubesphere-on-vmware-vsphere.md | 336 ++++++++++-------- .../public-cloud/_index.md | 4 +- .../public-cloud/all-in-one.md | 116 ------ .../public-cloud/complete-installation.md | 76 ---- .../public-cloud/install-ks-on-azure-vms.md | 240 +++++++++++++ .../install-ks-on-huaweicloud-ecs.md | 263 -------------- .../install-ks-on-linux-airgapped.md | 224 ------------ .../install-kubesphere-on-ali-ecs.md | 276 -------------- 
.../kubesphere-on-qingcloud-instance.md | 310 ++++++++++++++++ .../public-cloud/master-ha.md | 152 -------- .../public-cloud/multi-node.md | 176 --------- .../public-cloud/storage-configuration.md | 157 -------- .../uninstalling/_index.md | 10 + .../uninstalling-kubesphere-and-Kubernetes.md | 26 ++ content/zh/docs/introduction/_index.md | 27 +- content/zh/docs/introduction/advantages.md | 133 ++++--- content/zh/docs/introduction/features.md | 158 +++++--- content/zh/docs/introduction/scenarios.md | 105 ++++++ .../docs/introduction/what-is-kubesphere.md | 42 ++- .../zh/docs/multicluster-management/_index.md | 10 +- .../enable-multicluster/_index.md | 7 + .../enable-multicluster/agent-connection.md | 214 +++++++++++ .../enable-multicluster/direct-connection.md | 160 +++++++++ .../retrieve-kubeconfig.md | 42 +++ .../import-cloud-hosted-k8s/_index.md | 7 + .../import-aliyun-ack.md | 10 + .../import-cloud-hosted-k8s/import-aws-eks.md | 10 + .../import-on-prem-k8s/_index.md | 7 + .../import-on-prem-k8s/import-kubeadm-k8s.md | 10 + .../introduction/_index.md | 7 + .../introduction/kubefed-in-kubesphere.md | 12 + .../introduction/overview.md | 16 + .../multicluster-management/release-v210.md | 10 - .../multicluster-management/release-v211.md | 8 - .../multicluster-management/release-v300.md | 10 - .../remove-cluster/_index.md | 7 + .../remove-cluster/kubefed-in-kubesphere.md | 10 + .../zh/docs/pluggable-components/app-store.md | 144 ++++++++ .../pluggable-components/auditing-logs.md | 203 +++++++++++ .../zh/docs/pluggable-components/devops.md | 141 ++++++++ .../zh/docs/pluggable-components/logging.md | 196 ++++++++++ .../docs/pluggable-components/release-v200.md | 12 +- .../pluggable-components/release-v2001.md | 92 +++++ .../docs/pluggable-components/release-v201.md | 19 - .../docs/pluggable-components/release-v202.md | 40 --- .../docs/pluggable-components/release-v210.md | 155 -------- .../docs/pluggable-components/release-v211.md | 122 ------- .../docs/pluggable-components/release-v300.md | 4 +- .../docs/pluggable-components/service-mesh.md | 150 ++++++++ .../application-workloads/_index.md | 2 +- .../configuration/_index.md | 4 +- .../grayscale-release/_index.md | 4 +- .../project-administration/_index.md | 6 +- .../docs/project-user-guide/storage/_index.md | 4 +- content/zh/docs/quick-start/_index.md | 28 +- .../docs/quick-start/all-in-one-on-linux.md | 180 +++++++++- .../create-workspace-and-project.md | 252 ++++++++++++- .../quick-start/enable-pluggable-compoents.md | 8 - .../enable-pluggable-components.md | 152 ++++++++ .../quick-start/minimal-kubesphere-on-k8s.md | 58 ++- content/zh/docs/release/release-v300.md | 221 ++++++------ 123 files changed, 5154 insertions(+), 4327 deletions(-) delete mode 100644 content/zh/docs/installing-on-kubernetes/hosted-kubernetes/all-in-one.md delete mode 100644 content/zh/docs/installing-on-kubernetes/hosted-kubernetes/complete-installation.md delete mode 100644 content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-tencent-tke.md create mode 100644 content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md create mode 100644 content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md create mode 100644 content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md create mode 100644 content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke.md create mode 100644 
content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md create mode 100644 content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke.md delete mode 100644 content/zh/docs/installing-on-kubernetes/hosted-kubernetes/master-ha.md delete mode 100644 content/zh/docs/installing-on-kubernetes/hosted-kubernetes/multi-node.md delete mode 100644 content/zh/docs/installing-on-kubernetes/hosted-kubernetes/storage-configuration.md delete mode 100644 content/zh/docs/installing-on-kubernetes/introduction/intro.md create mode 100644 content/zh/docs/installing-on-kubernetes/introduction/overview.md delete mode 100644 content/zh/docs/installing-on-kubernetes/introduction/port-firewall.md create mode 100644 content/zh/docs/installing-on-kubernetes/introduction/prerequisites.md delete mode 100644 content/zh/docs/installing-on-kubernetes/introduction/vars.md create mode 100644 content/zh/docs/installing-on-kubernetes/uninstalling/_index.md rename content/{en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-linux-airgapped.md => zh/docs/installing-on-kubernetes/uninstalling/uninstalling-kubesphere-from-k8s.md} (99%) create mode 100644 content/zh/docs/installing-on-linux/cluster-operation/_index.md create mode 100644 content/zh/docs/installing-on-linux/cluster-operation/add-new-nodes.md create mode 100644 content/zh/docs/installing-on-linux/cluster-operation/remove-nodes.md create mode 100644 content/zh/docs/installing-on-linux/introduction/multioverview.md create mode 100644 content/zh/docs/installing-on-linux/introduction/storage-configuration.md delete mode 100644 content/zh/docs/installing-on-linux/on-premise/_index.md delete mode 100644 content/zh/docs/installing-on-linux/on-premise/install-ks-on-linux-airgapped.md create mode 100644 content/zh/docs/installing-on-linux/on-premises/_index.md rename content/zh/docs/{installing-on-kubernetes/hosted-kubernetes => installing-on-linux/on-premises}/install-ks-on-linux-airgapped.md (100%) rename content/zh/docs/installing-on-linux/{on-premise => on-premises}/install-kubesphere-on-vmware-vsphere.md (54%) delete mode 100644 content/zh/docs/installing-on-linux/public-cloud/all-in-one.md delete mode 100644 content/zh/docs/installing-on-linux/public-cloud/complete-installation.md create mode 100644 content/zh/docs/installing-on-linux/public-cloud/install-ks-on-azure-vms.md delete mode 100644 content/zh/docs/installing-on-linux/public-cloud/install-ks-on-huaweicloud-ecs.md delete mode 100644 content/zh/docs/installing-on-linux/public-cloud/install-ks-on-linux-airgapped.md delete mode 100644 content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-ali-ecs.md create mode 100644 content/zh/docs/installing-on-linux/public-cloud/kubesphere-on-qingcloud-instance.md delete mode 100644 content/zh/docs/installing-on-linux/public-cloud/master-ha.md delete mode 100644 content/zh/docs/installing-on-linux/public-cloud/multi-node.md delete mode 100644 content/zh/docs/installing-on-linux/public-cloud/storage-configuration.md create mode 100644 content/zh/docs/installing-on-linux/uninstalling/_index.md create mode 100644 content/zh/docs/installing-on-linux/uninstalling/uninstalling-kubesphere-and-Kubernetes.md create mode 100644 content/zh/docs/introduction/scenarios.md create mode 100644 content/zh/docs/multicluster-management/enable-multicluster/_index.md create mode 100644 content/zh/docs/multicluster-management/enable-multicluster/agent-connection.md create mode 100644 
content/zh/docs/multicluster-management/enable-multicluster/direct-connection.md create mode 100644 content/zh/docs/multicluster-management/enable-multicluster/retrieve-kubeconfig.md create mode 100644 content/zh/docs/multicluster-management/import-cloud-hosted-k8s/_index.md create mode 100644 content/zh/docs/multicluster-management/import-cloud-hosted-k8s/import-aliyun-ack.md create mode 100644 content/zh/docs/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md create mode 100644 content/zh/docs/multicluster-management/import-on-prem-k8s/_index.md create mode 100644 content/zh/docs/multicluster-management/import-on-prem-k8s/import-kubeadm-k8s.md create mode 100644 content/zh/docs/multicluster-management/introduction/_index.md create mode 100644 content/zh/docs/multicluster-management/introduction/kubefed-in-kubesphere.md create mode 100644 content/zh/docs/multicluster-management/introduction/overview.md delete mode 100644 content/zh/docs/multicluster-management/release-v210.md delete mode 100644 content/zh/docs/multicluster-management/release-v211.md delete mode 100644 content/zh/docs/multicluster-management/release-v300.md create mode 100644 content/zh/docs/multicluster-management/remove-cluster/_index.md create mode 100644 content/zh/docs/multicluster-management/remove-cluster/kubefed-in-kubesphere.md create mode 100644 content/zh/docs/pluggable-components/app-store.md create mode 100644 content/zh/docs/pluggable-components/auditing-logs.md create mode 100644 content/zh/docs/pluggable-components/devops.md create mode 100644 content/zh/docs/pluggable-components/logging.md create mode 100644 content/zh/docs/pluggable-components/release-v2001.md delete mode 100644 content/zh/docs/pluggable-components/release-v201.md delete mode 100644 content/zh/docs/pluggable-components/release-v202.md delete mode 100644 content/zh/docs/pluggable-components/release-v210.md delete mode 100644 content/zh/docs/pluggable-components/release-v211.md create mode 100644 content/zh/docs/pluggable-components/service-mesh.md delete mode 100644 content/zh/docs/quick-start/enable-pluggable-compoents.md create mode 100644 content/zh/docs/quick-start/enable-pluggable-components.md diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-huawei-cce.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-huawei-cce.md index 074a7bbd3..4c38da865 100644 --- a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-huawei-cce.md +++ b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-huawei-cce.md @@ -71,8 +71,8 @@ For how to set up or cancel a default StorageClass, refer to Kubernetes official Use [ks-installer](https://github.com/kubesphere/ks-installer) to deploy KubeSphere on an existing Kubernetes cluster. It is suggested that you install it in minimal size. 
```bash -$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml -$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml +$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml +$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml ``` diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md index aa0e3f9dd..e2bb0b57e 100644 --- a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md +++ b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md @@ -69,11 +69,11 @@ All the other Resources will be placed in MC_KubeSphereRG_KuberSphereCluster_wes ## Deploy KubeSphere on AKS To start deploying KubeSphere, use the following command. ```bash -kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml +kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml ``` Download cluster-configuration.yaml as shown below and customize the configuration as needed. You can also enable pluggable components by setting the `enabled` property to `true` in this file. ```bash -wget https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml +wget https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml ``` As `metrics-server` is already installed on AKS, you need to disable the component in the cluster-configuration.yaml file by changing `true` to `false` for `enabled`. ```bash diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md index 3999583dd..704665fdc 100644 --- a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md +++ b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md @@ -44,7 +44,7 @@ Now that the cluster is ready, you can install KubeSphere following these steps: - Install KubeSphere using kubectl. The following command is only for the default minimal installation. ```bash - kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml + kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml ``` - Create a local cluster-configuration.yaml. @@ -53,7 +53,7 @@ Now that the cluster is ready, you can install KubeSphere following these steps: vi cluster-configuration.yaml ``` -- Copy all the content in this [file](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to your local cluster-configuration.yaml. +- Copy all the content in this [file](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it to your local cluster-configuration.yaml. - Save the file when you finish.
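While editing the local file, you can also switch on optional components before applying it. The excerpt below is only a sketch of what such an edit looks like; the field names follow the ks-installer `ClusterConfiguration` layout assumed for v3.0.0, and your downloaded copy is the authoritative reference:

```yaml
# Illustrative excerpt of a local cluster-configuration.yaml, not the full file.
spec:
  openpitrix:        # App Store component
    enabled: false   # change to true to enable it before applying the file
  devops:
    enabled: false   # other pluggable components follow the same on/off pattern
```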
Execute the following command to start installation: diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md index e6439a47e..9113157e3 100644 --- a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md +++ b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md @@ -1,5 +1,5 @@ --- -title: "Deploy KubeSphere on EKS" +title: "Deploy KubeSphere on AWS EKS" keywords: 'Kubernetes, KubeSphere, EKS, Installation' description: 'How to install KubeSphere on EKS' @@ -71,14 +71,14 @@ When your cluster provisioning is complete (usually between 10 and 15 minutes), - Config node group ![config-node-group](/images/docs/eks/config-node-grop.png) -{{< notice note >}} - - Supported Kubernetes versions for KubeSphere 3.0.0: 1.15.x, 1.16.x, 1.17.x, 1.18.x. - - Ubuntu is used for the operating system here as an example. For more information on supported systems, see Overview. - - 3 nodes are included in this example. You can add more nodes based on your own needs especially in a production environment. - - The machine type t3.medium (2 vCPU, 4GB memory) is for minimal installation. If you want to enable pluggable components or use the cluster for production, please select a machine type with more resources. - - For other settings, you can change them as well based on your own needs or use the default value. +{{< notice note >}} +- Supported Kubernetes versions for KubeSphere 3.0.0: 1.15.x, 1.16.x, 1.17.x, 1.18.x. +- Ubuntu is used for the operating system here as an example. For more information on supported systems, see Overview. +- 3 nodes are included in this example. You can add more nodes based on your own needs especially in a production environment. +- The machine type t3.medium (2 vCPU, 4GB memory) is for minimal installation. If you want to enable pluggable components or use the cluster for production, please select a machine type with more resources. +- For other settings, you can change them as well based on your own needs or use the default value. -{{}} +{{}} - When the EKS cluster is ready, you can connect to the cluster with kubectl. ## Configure kubectl @@ -111,13 +111,13 @@ For more information, see the help page with the aws eks update-kubeconfig help - Install KubeSphere using kubectl. The following command is only for the default minimal installation. ```bash -kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml +kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml ``` ![minimal-install](/images/docs/eks/minimal-install.png) - Apply cluster-configuration.yaml. ```shell -kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml +kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml ``` ![config-install](/images/docs/eks/config-install.png) @@ -165,9 +165,8 @@ kubectl get svc -nkubesphere-system - Log in to the console with the default account and password (`admin/P@88w0rd`). In the cluster overview page, you can see the dashboard as shown in the following image.
-![eks-cluster](/images/docs/eks/esk-kubesphere-ok.png) +![gke-cluster](https://ap3.qingstor.com/kubesphere-website/docs/gke-cluster.png) ## Enable Pluggable Components (Optional) The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see Enable Pluggable Components for more details. - diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke.md index 38a29f1ff..82191080d 100644 --- a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke.md +++ b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke.md @@ -48,7 +48,7 @@ This guide walks you through the steps of deploying KubeSphere on [Google Kubern - Install KubeSphere using kubectl. The following command is only for the default minimal installation. ```bash -kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml +kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml ``` - Create a local cluster-configuration.yaml. @@ -57,7 +57,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/maste vi cluster-configuration.yaml ``` -- Copy all the content in this [file](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to your local cluster-configuration.yaml. Navigate to `metrics_server`, and change `true` to `false` for `enabled`. +- Copy all the content in this [file](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it to your local cluster-configuration.yaml. Navigate to `metrics_server`, and change `true` to `false` for `enabled`. ![change-metrics-server](https://ap3.qingstor.com/kubesphere-website/docs/true-false.png) diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke.md index 4997b1468..b9acfbddf 100644 --- a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke.md +++ b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke.md @@ -68,11 +68,11 @@ If you do not copy and execute the command above, you cannot proceed with the st - Install KubeSphere using kubectl. The following command is only for the default minimal installation. 
```bash -kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml +kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml ``` ```bash -kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml +kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml ``` - Inspect the logs of installation: diff --git a/content/en/docs/installing-on-kubernetes/introduction/overview.md b/content/en/docs/installing-on-kubernetes/introduction/overview.md index addc4a040..2352c730f 100644 --- a/content/en/docs/installing-on-kubernetes/introduction/overview.md +++ b/content/en/docs/installing-on-kubernetes/introduction/overview.md @@ -26,16 +26,16 @@ After you make sure your existing Kubernetes cluster meets all the requirements, - Execute the following commands to start installation: ```bash -kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml +kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml ``` ```bash -kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml +kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml ``` {{< notice note >}} -If your server has trouble accessing GitHub, you can copy the content in [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml) and [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) respectively and past it to local files. You then can use `kubectl apply -f` for the local files to install KubeSphere. +If your server has trouble accessing GitHub, you can copy the content in [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml) and [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) respectively and paste them into local files. You can then use `kubectl apply -f` on the local files to install KubeSphere. {{}} @@ -47,7 +47,7 @@ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app= {{< notice tip >}} -In some environments, you may find the installation process stopped by issues related to `metrics_server`, as some cloud providers have already installed metrics server in their platform. In this case, please manually create a local cluster-configuration.yaml file (copy the [content](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) to it). In this file, disable `metrics_server` by changing `true` to `false` for `enabled`, and use `kubectl apply -f cluster-configuration.yaml` to execute it. +In some environments, you may find the installation process stopped by issues related to `metrics_server`, as some cloud providers have already installed metrics server in their platform. In this case, please manually create a local cluster-configuration.yaml file (copy the [content](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) to it).
In this file, disable `metrics_server` by changing `true` to `false` for `enabled`, and use `kubectl apply -f cluster-configuration.yaml` to execute it. {{}} diff --git a/content/en/docs/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md b/content/en/docs/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md index 26b3e4f04..550766807 100644 --- a/content/en/docs/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md +++ b/content/en/docs/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md @@ -7,218 +7,4 @@ description: 'How to install KubeSphere on air-gapped Linux machines' weight: 2240 --- -The air-gapped installation is almost the same as the online installation except it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes on air-gapped environment. - -> Note: The dependencies in different operating systems may cause upexpected problems. If you encounter any installation problems on air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues). - -## Prerequisites - -- If your machine is behind a firewall, you need to open the ports by following the document [Ports Requirements](../port-firewall) for more information. -> - Installer will use `/var/lib/docker` as the default directory where all Docker related files, including the images, are stored. We recommend you to add additional storage to a disk with at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively, use the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference. -- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure supported persistent storage service](../storage-configuration) and prepare [high availability configuration](../master-ha) before installation. -- Since the air-gapped machines cannot connect to apt or yum source, please use clean Linux machine to avoid this problem. - -## Step 1: Prepare Linux Hosts - -The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements. - -- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit) -- Time synchronization is required across all nodes, otherwise the installation may not succeed; -- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`; -- If you are using `Ubuntu 18.04`, you need to use the user `root`. -- Ensure your disk of each node is at least 100G. -- CPU and memory in total of all machines: 2 cores and 4 GB for minimal installation; 8 cores and 16 GB for complete installation. - - -The following section describes an example to introduce multi-node installation. This example shows three hosts installation by taking the `master` serving as the taskbox to execute the installation. The following cluster consists of one Master and two Nodes. - -> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guide. 
- -| Host IP | Host Name | Role | -| --- | --- | --- | -|192.168.0.1|master|master, etcd| -|192.168.0.2|node1|node| -|192.168.0.3|node2|node| - -### Cluster Architecture - -#### Single Master, Single Etcd, Two Nodes - -![Architecture](/cluster-architecture.svg) - -## Step 2: Download Installer Package - -Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`. - -```bash -curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \ -&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf -``` - -## Step 3: Configure Host Template - -> This step is only for multi-node installation, you can skip this step if you choose all-in-one installation. - -Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using root user. The following is an example configuration for `CentOS 7.5` using root user. Note do not manually wrap any line in the file. - -> Note: -> -> - If you use non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`. -> - If the `root` user of that taskbox machine cannot establish SSH connection with the rest of machines, you need to refer to the `non-root` user example at the top of the `conf/hosts.ini`, but it is recommended to switch `root` user when executing `install.sh`. -> - master, node1 and node2 are the host names of each node and all host names should be in lowercase. - -### hosts.ini - -```ini -[all] -master ansible_connection=local ip=192.168.0.1 -node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD -node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD - -[local-registry] -master - -[kube-master] -master - -[kube-node] -node1 -node2 - -[etcd] -master - -[k8s-cluster:children] -kube-node -kube-master -``` - -> Note: -> -> - You need to replace each node information such as IP, password with real values in the group `[all]`. The master node is the taskbox so you do not need to add password field here. -> - Installer will use a node as the local registry for docker images, defaults to "master" in the group `[local-registry]`. -> - The "master" node also takes the role of master and etcd, so "master" is filled under the group`[kube-master]` and the group `[etcd]` respectively. -> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`. -> -> Parameters Specification: -> -> - `ansible_connection`: Connection type to the host, "local" in the example above means local connection. -> - `ansible_host`: The name of the host to be connected. -> - `ip`: The ip of the host to be connected. -> - `ansible_user`: The default ssh user name to use. -> - `ansible_become_pass`: Allows you to set the privilege escalation password. -> - `ansible_ssh_pass`: The password of the host to be connected using root. - -## Step 4: Enable All Components - -> This is step is complete installation. You can skip this step if you choose a minimal installation. - -Edit `conf/common.yaml`, reference the following changes with values being `true` which are `false` by default. - -```yaml -# LOGGING CONFIGURATION -# logging is an optional component when installing KubeSphere, and -# Kubernetes builtin logging APIs will be used if logging_enabled is set to false. -# Builtin logging only provides limited functions, so recommend to enable logging. 
-logging_enabled: true # Whether to install logging system -elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number -elasticsearch_data_replica: 2 # total number of data nodes -elasticsearch_volume_size: 20Gi # Elasticsearch volume size -log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default. -elk_prefix: logstash # the string making up index names. The index name will be formatted as ks--log -kibana_enabled: false # Kibana Whether to install built-in Grafana -#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address, KubeSphere supports integrate with Elasticsearch outside the cluster, which can reduce the resource consumption. -#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port - -#DevOps Configuration -devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image) -jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default -jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default -jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default -jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters -jenkinsJavaOpts_Xmx: 6g -jenkinsJavaOpts_MaxRAM: 8g -sonarqube_enabled: true # Whether to install built-in SonarQube -#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address, KubeSphere supports integrate with SonarQube outside the cluster, which can reduce the resource consumption. -#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token - -# Following components are all optional for KubeSphere, -# Which could be turned on to install it before installation or later by updating its value to true -openpitrix_enabled: true # KubeSphere application store -metrics_server_enabled: true # For KubeSphere HPA to use -servicemesh_enabled: true # KubeSphere service mesh system(Istio-based) -notification_enabled: true # KubeSphere notification system -alerting_enabled: true # KubeSphere alerting system -``` - -## Step 5: Install KubeSphere to Linux Machines - -> Note: -> -> - Generally, you can install KubeSphere without any modification, it will start with minimal installation by default. -> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions. -> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environment, please [configure supported persistent storage service](../storage-configuration) before installation. -> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not use the two IP range. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts. - -**1.** Enter `scripts` folder, and execute `install.sh` using `root` user: - -```bash -cd ../cripts -./install.sh -``` - -**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume. 
-```bash -################################################ - KubeSphere Installer Menu -################################################ -* 1) All-in-one -* 2) Multi-node -* 3) Quit -################################################ -https://kubesphere.io/ 2020-02-24 -################################################ -Please input an option: 2 - -``` - -**3.** Verify the multi-node installation: - -**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go. - -```bash -successsful! -##################################################### -### Welcome to KubeSphere! ### -##################################################### - -Console: http://192.168.0.1:30880 -Account: admin -Password: P@88w0rd - -NOTE:Please modify the default password after login. -##################################################### -``` - -> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components). - -**(2).** You will be able to use default account and password `admin / P@88w0rd` to log in the console `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in. - -![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png) - -Note: After log in console, please verify the monitoring status of service components in the "Cluster Status". If any service is not ready, please wait patiently untill all components get running up. - -![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png) - -## Enable Pluggable Components - -If you already have set up minimal installation, you still can edit the ConfigMap of ks-installer using the following command. Please make sure there is enough resource in your machines, see [Pluggable Components Overview](/en/installation/pluggable-components/). - -```bash -kubectl edit cm -n kubesphere-system ks-installer -``` - -## FAQ - -If you have further questions please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues). +TBD diff --git a/content/en/docs/installing-on-linux/introduction/multioverview.md b/content/en/docs/installing-on-linux/introduction/multioverview.md index f387455c9..7e2f8f9a9 100644 --- a/content/en/docs/installing-on-linux/introduction/multioverview.md +++ b/content/en/docs/installing-on-linux/introduction/multioverview.md @@ -49,7 +49,7 @@ Please see the requirements for hardware and operating system shown below. To ge The path `/var/lib/docker` is mainly used to store the container data, and will gradually increase in size during use and operation. In the case of a production environment, it is recommended that `/var/lib/docker` should mount a drive separately. -{{}} +{{}} ### Node Requirements @@ -81,49 +81,44 @@ This example includes three hosts as below with the master node serving as the t ## Step 2: Download KubeKey -As below, you can either download the binary file or build the binary package from source code. +You can download the binary file as below. + +Download the Installer for KubeSphere v3.0.0. {{< tabs >}} -{{< tab "Download Binary" >}} -Execute the following command: +{{< tab "For users with poor network to GitHub" >}} +For users in China, you can download the installer using this link.
```bash -curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk +wget https://kubesphere.io/kubekey/releases/v1.0.0 ``` +{{}} + +{{< tab "For users with good network to GitHub" >}} + +For users with good network to GitHub, you can download it from [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly. + +```bash +wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz +``` +{{}} + +{{}} + +Unzip it (if you downloaded the GitHub tarball, replace `v1.0.0` below with `kubekey-v1.0.0-linux-amd64.tar.gz`). + +```bash +tar -zxvf v1.0.0 +``` + +Grant the execution right to `kk`: ```bash chmod +x kk ``` -{{}} - -{{< tab "Build Binary from Source Code" >}} - -Execute the following command one by one: - -```bash -git clone https://github.com/kubesphere/kubekey.git -``` - -```bash -cd kubekey -``` - -```bash -./build.sh -``` - -Note: - -- Docker needs to be installed before the building. -- If you have problems accessing `https://proxy.golang.org/`, execute `build.sh -p` instead. - -{{}} - -{{}} - ## Step 3: Create a Cluster For multi-node installation, you need to create a cluster by specifying a configuration file. @@ -133,7 +128,7 @@ For multi-node installation, you need to create a cluster by specifying a config Command: ```bash -./kk create config [--with-kubernetes version] [--with-storage plugins] [--with-kubesphere version] [(-f | --file) path] +./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path] ``` {{< notice info >}} @@ -150,7 +145,7 @@ Here are some examples for your reference: ./kk create config [-f ~/myfolder/abc.yaml] ``` -- You can customize the storage plugins (supported: LocalPV, NFS Client, Ceph RBD, and GlusterFS). You can also specify multiple plugins separated by comma. Please note the first one you add will be the default storage class. +- You can customize the persistent storage plugins (e.g. NFS Client, Ceph RBD, and GlusterFS) in `sample-config.yaml`. ```bash ./kk create config --with-storage localVolume ``` {{< notice note >}} -KubeKey will install [OpenEBS](https://openebs.io/) to provision LocalPV for development and testing environment by default, which is convenient for new users. For production, please use NFS/Ceph/GlusterFS or commercial products as persistent storage solutions, and install [relevant clients](https://github.com/kubesphere/kubekey/blob/master/docs/storage-client.md) in all nodes. For this example of multi-cluster installation, we will use the default storage class (local volume). For more information, see HA Cluster Configuration and Storage Class Configuration. +KubeKey will install [OpenEBS](https://openebs.io/) to provision [LocalPV](https://kubernetes.io/docs/concepts/storage/volumes/#local) for development and testing environments by default, which is convenient for new users. For this example of multi-node installation, we will use the default storage class (local volume). For production, please use NFS/Ceph/GlusterFS/CSI or commercial products as persistent storage solutions; you need to specify them in the `addons` section of `sample-config.yaml`. See [Persistent Storage Configuration](../storage-configuration). -{{}} +{{}} - You can specify a KubeSphere version that you want to install (e.g. `--with-kubesphere v3.0.0`). @@ -223,7 +218,7 @@ hosts: #### controlPlaneEndpoint (for HA installation only) -`controlPlaneEndpoint` allows you to define an external load balancer for an HA cluster.
You need to prepare and configure an external load balancer if and only if you need to install more than 3 master nodes. Please note that the address and port should be indented by two spaces in `config-sample.yaml`, and the `address` should be VIP. See KubeSphere on QingCloud Instance for more information. +`controlPlaneEndpoint` allows you to define an external load balancer for an HA cluster. You need to prepare and configure an external load balancer if and only if you need to install more than one master node. Please note that the address and port should be indented by two spaces in `config-sample.yaml`, and the `address` should be VIP. See HA Configuration for details. {{< notice tip >}} @@ -244,7 +239,7 @@ When you finish editing, save the file. You need to change `config-sample.yaml` above to your own file if you use a different name. -{{}} +{{}} The whole installation process may take 10-20 minutes, depending on your machine and network. @@ -265,7 +260,7 @@ NOTES: 1. After logging into the console, please check the monitoring status of service components in the "Cluster Management". If any service is not - ready, please wait patiently until all components + ready, please wait patiently until all components are ready. 2. Please modify the default password after login. @@ -280,7 +275,7 @@ Now, you will be able to access the web console of KubeSphere at `http://{IP}:30 To access the console, you may need to forward the source port to the intranet port of the intranet IP depending on the platform of your cloud providers. Please also make sure port 30880 is opened in the security group. -{{}} +{{}} ![kubesphere-login](https://ap3.qingstor.com/kubesphere-website/docs/login.png) @@ -301,4 +296,4 @@ echo 'source <(kubectl completion bash)' >>~/.bashrc kubectl completion bash >/etc/bash_completion.d/kubectl ``` -Detailed information can be found [here](https://kubernetes.io/docs/tasks/tools/install-kubectl/#enabling-shell-autocompletion). \ No newline at end of file +Detailed information can be found [here](https://kubernetes.io/docs/tasks/tools/install-kubectl/#enabling-shell-autocompletion). diff --git a/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md b/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md index 12faff122..7befd94ac 100644 --- a/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md +++ b/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md @@ -80,7 +80,10 @@ In the Ready to complete page, you review the configuration selections that you ![kubesphereOnVsphere-en-0-1-8](/images/docs/vsphere/kubesphereOnVsphere-en-0-1-8.png) -## Keepalived+Haproxy +## Install a Load Balancer using Keepalived and Haproxy (Optional) + +For a production environment, you need to prepare an external load balancer. If you do not have one, you can set one up using Keepalived and Haproxy. If you are provisioning a development or testing environment, you can skip this section.
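The pair works as follows: Haproxy on both lb hosts forwards TCP 6443 to the api-server of every master, while Keepalived floats a single virtual IP (10.10.71.67 in this example) between lb-0 and lb-1. A minimal Haproxy sketch of that forwarding rule is shown below; the three master IPs are placeholders for illustration, not addresses taken from this guide:

```bash
# /etc/haproxy/haproxy.cfg (sketch): TCP pass-through from the lb hosts to the api-servers.
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    server master1 10.10.71.201:6443 check   # placeholder master IPs
    server master2 10.10.71.202:6443 check
    server master3 10.10.71.203:6443 check
```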
+ ### Yum Install host lb-0(10.10.71.77) and host lb-1(10.10.71.66) @@ -159,7 +162,7 @@ global_defs { notification_email { } smtp_connect_timeout 30 - router_id LVS_DEVEL01 + router_id LVS_DEVEL01 vrrp_skip_check_adv_addr vrrp_garp_interval 0 vrrp_gna_interval 0 @@ -173,10 +176,10 @@ vrrp_instance haproxy-vip { state MASTER priority 100 interface ens192 - virtual_router_id 60 - advert_int 1 + virtual_router_id 60 + advert_int 1 authentication { - auth_type PASS + auth_type PASS auth_pass 1111 } unicast_src_ip 10.10.71.77 @@ -185,7 +188,7 @@ vrrp_instance haproxy-vip { } virtual_ipaddress { #vip - 10.10.71.67/24 + 10.10.71.67/24 } track_script { chk_haproxy @@ -198,7 +201,7 @@ remarks haproxy 66 lb-1-10.10.71.66 (/etc/keepalived/keepalived.conf) global_defs { notification_email { } - router_id LVS_DEVEL02 + router_id LVS_DEVEL02 vrrp_skip_check_adv_addr vrrp_garp_interval 0 vrrp_gna_interval 0 @@ -209,7 +212,7 @@ vrrp_script chk_haproxy { weight 2 } vrrp_instance haproxy-vip { - state BACKUP + state BACKUP priority 90 interface ens192 virtual_router_id 60 @@ -223,7 +226,7 @@ vrrp_instance haproxy-vip { 10.10.71.77 } virtual_ipaddress { - 10.10.71.67/24 + 10.10.71.67/24 } track_script { chk_haproxy @@ -243,7 +246,7 @@ systemctl start keepalived Use `ip a s` to view the vip binding status of each lb node ```bash -ip a s +ip a s ``` Pause VIP node haproxy:`systemctl stop haproxy` @@ -255,7 +258,7 @@ systemctl stop haproxy Use `ip a s` again to check the vip binding of each lb node, and check whether vip drifts ```bash -ip a s +ip a s ``` Or use `systemctl status -l keepalived` command to view @@ -264,31 +267,67 @@ Or use `systemctl status -l keepalived` command to view systemctl status -l keepalived ``` - - ## Get the Installer Executable File -Download Binary +Download the Installer for KubeSphere v3.0.0. + +{{< tabs >}} + +{{< tab "For users with poor network to GitHub" >}} + +For users in China, you can download the installer using this link. + +```bash +wget https://kubesphere.io/kubekey/releases/v1.0.0 +``` +{{}} + +{{< tab "For users with good network to GitHub" >}} + +For users with good network to GitHub, you can download it from [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly. + +```bash +wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz +``` +{{}} + +{{}} + +Unzip it (if you downloaded the GitHub tarball, replace `v1.0.0` below with `kubekey-v1.0.0-linux-amd64.tar.gz`). + +```bash +tar -zxvf v1.0.0 +``` + +Grant the execution right to `kk`: ```bash -curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk chmod +x kk ``` -## Create a Multi-Node Cluster +## Create a Multi-node Cluster You have more control to customize parameters or create a multi-node cluster using the advanced installation. Specifically, create a cluster by specifying a configuration file. -### With KubeKey, you can install Kubernetes and KubeSphere +With KubeKey, you can install Kubernetes and KubeSphere. Create a Kubernetes cluster with KubeSphere installed (e.g.
--with-kubesphere v3.0.0) ```bash -./kk create config --with-kubesphere v3.0.0 -f ~/config-sample.yaml +./kk create config --with-kubernetes v1.17.9 --with-kubesphere v3.0.0 -f ~/config-sample.yaml ``` -#### Modify the file config-sample.yaml according to your environment -vi ~/config-sample.yaml +> The following Kubernetes versions have been fully tested with KubeSphere: +> - v1.15:   v1.15.12 +> - v1.16:   v1.16.13 +> - v1.17:   v1.17.9 (default) +> - v1.18:   v1.18.6 + +Modify the file config-sample.yaml according to your environment. + +```bash +vi config-sample.yaml +``` ```yaml apiVersion: kubekey.kubesphere.io/v1alpha1 @@ -308,7 +347,7 @@ spec: - master1 - master2 - master3 - master: + master: - master1 - master2 - master3 @@ -446,7 +485,7 @@ NOTES: 1. After logging into the console, please check the monitoring status of service components in the "Cluster Management". If any service is not - ready, please wait patiently until all components + ready, please wait patiently until all components are ready. 2. Please modify the default password after login. ##################################################### @@ -462,4 +501,3 @@ You will be able to use default account and password `admin / P@88w0rd` to log i #### Enable Pluggable Components (Optional) The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see [Enable Pluggable Components for more details](https://github.com/kubesphere/ks-installer#enable-pluggable-components). - diff --git a/content/en/docs/installing-on-linux/public-cloud/install-ks-on-azure-vms.md b/content/en/docs/installing-on-linux/public-cloud/install-ks-on-azure-vms.md index 1385a8fa6..8d925a9a5 100644 --- a/content/en/docs/installing-on-linux/public-cloud/install-ks-on-azure-vms.md +++ b/content/en/docs/installing-on-linux/public-cloud/install-ks-on-azure-vms.md @@ -10,7 +10,7 @@ Technically, you can either install, administer, and manage Kubernetes yourself ## Introduction -In this tutorial, we will use two key features of Azure virtual machines (VMs): +In this tutorial, we will use two key features of Azure virtual machines (VMs): - Virtual Machine Scale Sets: Azure VMSS let you create and manage a group of load balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule (Kubernetes Autoscaler is available, but not covered in this tutorial, see [autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/azure) for more details), which perfectly fits the Worker Nodes. - Availability sets: An availability set is a logical grouping of VMs within a datacenter that are automatically distributed across fault domains. This approach limits the impact of potential physical hardware failures, network outages, or power interruptions. All the Master and ETCD VMs will be placed in an Availability set to meet our High Availability goals. @@ -88,8 +88,38 @@ ssh -i .ssh/id_rsa2 -p50200 kubesphere@40.81.5.xx 1. First, download it and generate a configuration file to customize the installation as follows. + +{{< tabs >}} + +{{< tab "For users with poor network to GitHub" >}} + +For users in China, you can download the installer using this link.
+ +```bash +wget https://kubesphere.io/kubekey/releases/v1.0.0 ``` +{{}} + +{{< tab "For users with good network to GitHub" >}} + +For users with good network to GitHub, you can download it from [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly. + +```bash +wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz +``` +{{}} + +{{}} + +Unzip it (if you downloaded the GitHub tarball, replace `v1.0.0` below with `kubekey-v1.0.0-linux-amd64.tar.gz`). + +```bash +tar -zxvf v1.0.0 +``` + +Grant the execution right to `kk`: + +```bash chmod +x kk ``` @@ -98,7 +128,7 @@ chmod +x kk ``` ./kk create config --with-kubesphere v3.0.0 --with-kubernetes v1.17.9 ``` -> Kubernetes Versions +> The following Kubernetes versions have been fully tested with KubeSphere: > - v1.15:   v1.15.12 > - v1.16:   v1.16.13 > - v1.17:   v1.17.9 (default) > - v1.18:   v1.18.6 @@ -208,4 +238,3 @@ Since we are using self-hosted Kubernetes solutions on Azure, So the Load Balanc ![Load Balancer](/images/docs/aks/azure-vm-loadbalancer-rule.png) 2. Create an Inbound Security rule to allow Internet access in the Network Security Group. ![Firewall](/images/docs/aks/azure-vm-firewall.png) - diff --git a/content/en/docs/installing-on-linux/public-cloud/kubesphere-on-qingcloud-instance.md b/content/en/docs/installing-on-linux/public-cloud/kubesphere-on-qingcloud-instance.md index 07febb5de..bfabcc660 100644 --- a/content/en/docs/installing-on-linux/public-cloud/kubesphere-on-qingcloud-instance.md +++ b/content/en/docs/installing-on-linux/public-cloud/kubesphere-on-qingcloud-instance.md @@ -28,7 +28,7 @@ This example prepares six machines of **Ubuntu 16.04.6**. We will create two loa The Kubernetes document [Options for Highly Available topology](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/) demonstrates that there are two options for configuring the topology of a highly available (HA) Kubernetes cluster, i.e. stacked etcd topology and external etcd topology. You should carefully consider the advantages and disadvantages of each topology before setting up an HA cluster according to [this document](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/). In this guide, we adopt stacked etcd topology to bootstrap an HA cluster for convenient demonstration. -{{}} +{{}} ## Install HA Cluster @@ -61,7 +61,7 @@ Click Submit to continue. After you create the listener, please check the firewall rules of the load balancer. Make sure that the port `6443` has been added to the firewall rules and the external traffic can pass through `6443`. Otherwise, the installation will fail. If you are using QingCloud platform, you can find the information in **Security Groups** under **Security**. -{{}} +{{}} 4. Click **Add Backend**, and choose the VxNet you just selected (in this example, it is `pn`). Click the button **Advanced Search**, choose the three master nodes, and set the port to `6443` which is the default secure port of api-server. @@ -75,7 +75,7 @@ Click **Submit** when you finish. The status of all masters might show `Not Available` after you added them as backends. This is normal since the port `6443` of api-server is not active on master nodes yet. The status will change to `Active` and the port of api-server will be exposed after the installation finishes, which means the internal load balancer you configured works as expected.
-{{}} +{{}} ![apply-changes](https://ap3.qingstor.com/kubesphere-website/docs/apply-change.png) @@ -89,7 +89,7 @@ You need to create an EIP in advance. To create an EIP, go to **Elastic IPs** un Two elastic IPs are needed for this whole tutorial, one for the VPC network and the other for the external load balancer created in this step. You cannot associate the same EIP to the VPC network and the load balancer at the same time. -{{}} +{{}} 6. Similarly, create an external load balancer, but do not select VxNet for the Network field. Bind the EIP that you created to this load balancer by clicking **Add IPv4**. @@ -101,7 +101,7 @@ Two elastic IPs are needed for this whole tutorial, one for the VPC network and After you create the listener, please check the firewall rules of the load balancer. Make sure that the port `30880` has been added to the firewall rules and the external traffic can pass through `30880`. Otherwise, the installation will fail. If you are using QingCloud platform, you can find the information in **Security Groups** under **Security**. -{{}} +{{}} ![listener2](https://ap3.qingstor.com/kubesphere-website/docs/listener2.png) @@ -117,22 +117,47 @@ Click **Submit** when you finish. [KubeKey](https://github.com/kubesphere/kubekey) is the next-gen installer, which installs Kubernetes and KubeSphere v3.0.0 quickly, flexibly, and easily. -1. Download KubeKey and generate a configuration file to customize the installation as follows. +{{< tabs >}} + +{{< tab "For users with poor network to GitHub" >}} + +For users in China, you can download the installer using this link. ```bash -curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk +wget https://kubesphere.io/kubekey/releases/v1.0.0 ``` +{{}} + +{{< tab "For users with good network to GitHub" >}} + +For users with good network to GitHub, you can download it from [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly. + +```bash +wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz +``` +{{}} + +{{}} + +Unzip it (if you downloaded the GitHub tarball, replace `v1.0.0` below with `kubekey-v1.0.0-linux-amd64.tar.gz`). + +```bash +tar -zxvf v1.0.0 +``` + +Grant the execution right to `kk`: ```bash chmod +x kk ``` -2. Then create an example configuration file with default configurations. Here we use Kubernetes v1.17.9 as an example. +Then create an example configuration file with default configurations. Here we use Kubernetes v1.17.9 as an example. ```bash ./kk create config --with-kubesphere v3.0.0 --with-kubernetes v1.17.9 ``` +> Tip: These Kubernetes versions have been fully tested with KubeSphere: *v1.15.12*, *v1.16.13*, *v1.17.9* (default), *v1.18.6*. ### Cluster Node Planning @@ -195,7 +220,7 @@ In addition to the node information, you need to provide the load balancer infor - The address and port should be indented by two spaces in `config-sample.yaml`, and the address should be VIP. - The domain name of the load balancer is `lb.kubesphere.local` by default for internal access. If you need to change the domain name, please uncomment and modify it. -{{}} +{{}} After that, you can enable any components you need by following **Enable Pluggable Components** and start your HA cluster installation. @@ -211,7 +236,7 @@ As we mentioned in the prerequisites, considering data persistence in a producti For testing or development, you can skip this part. KubeKey will use the integrated OpenEBS to provision LocalPV as the storage service directly.
-{{</ notice >}} +{{</ notice >}} **Available Storage Plugins & Clients** diff --git a/content/en/docs/multicluster-management/enable-multicluster/agent-connection.md index 9fea17bbd..69c78318f 100644 --- a/content/en/docs/multicluster-management/enable-multicluster/agent-connection.md +++ b/content/en/docs/multicluster-management/enable-multicluster/agent-connection.md @@ -12,7 +12,7 @@ weight: 2343 You have already installed at least two KubeSphere clusters, please refer to [Installing on Linux](../../../installing-on-linux) or [Installing on Kubernetes](../../../installing-on-kubernetes) if not yet. {{< notice note >}} -Multi-cluster management requires Kubesphere to be installed on the target clusters. If you have an existing cluster, please install a minimal KubeSphere on it as an agent, see [Installing Minimal KubeSphere on Kubernetes](../../../installing-on-kubernetes/minimal-kubesphere-on-k8s) for details. +Multi-cluster management requires KubeSphere to be installed on the target clusters. If you have an existing cluster, please install a minimal KubeSphere on it as an agent; see [Installing Minimal KubeSphere on Kubernetes](../../installing-on-kubernetes/minimal-kubesphere-on-k8s) for details. {{</ notice >}} ## Agent Connection diff --git a/content/en/docs/multicluster-management/enable-multicluster/direct-connection.md index ac9a7a534..9f953eab9 100644 --- a/content/en/docs/multicluster-management/enable-multicluster/direct-connection.md +++ b/content/en/docs/multicluster-management/enable-multicluster/direct-connection.md @@ -12,7 +12,7 @@ weight: 2340 You have already installed at least two KubeSphere clusters, please refer to [Installing on Linux](../../../installing-on-linux) or [Installing on Kubernetes](../../../installing-on-kubernetes) if not yet. {{< notice note >}} -Multi-cluster management requires Kubesphere to be installed on the target clusters. If you have an existing cluster, please install a minimal KubeSphere on it as an agent, see [Installing Minimal KubeSphere on Kubernetes](../../../installing-on-kubernetes/minimal-kubesphere-on-k8s) for details. +Multi-cluster management requires KubeSphere to be installed on the target clusters. If you have an existing cluster, please install a minimal KubeSphere on it as an agent; see [Installing Minimal KubeSphere on Kubernetes](../../installing-on-kubernetes/minimal-kubesphere-on-k8s) for details. {{</ notice >}} ## Direct Connection diff --git a/content/en/docs/pluggable-components/app-store.md index 8e09e3d44..4045d6207 100644 --- a/content/en/docs/pluggable-components/app-store.md +++ b/content/en/docs/pluggable-components/app-store.md @@ -50,15 +50,15 @@ openpitrix: ### **Installing on Kubernetes** -When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) for cluster setting. If you want to install App Store, do not use `kubectl apply -f` directly for this file. +When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for cluster setting. If you want to install App Store, do not use `kubectl apply -f` directly for this file. -1. 
In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml). After that, to enable App Store, create a local file cluster-configuration.yaml. +1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable App Store, create a local file cluster-configuration.yaml. ```bash vi cluster-configuration.yaml ``` -2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to the local file just created. +2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it to the local file just created. 3. In this local cluster-configuration.yaml file, navigate to `openpitrix` and enable App Store by changing `false` to `true` for `enabled`. Save the file after you finish. ```bash diff --git a/content/en/docs/pluggable-components/auditing-logs.md b/content/en/docs/pluggable-components/auditing-logs.md index fa0c0ceaf..ce801d30e 100644 --- a/content/en/docs/pluggable-components/auditing-logs.md +++ b/content/en/docs/pluggable-components/auditing-logs.md @@ -64,15 +64,15 @@ es: # Storage backend for logging, tracing, events and auditing. ### **Installing on Kubernetes** -When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) for cluster setting. If you want to install Auditing, do not use `kubectl apply -f` directly for this file. +When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for cluster setting. If you want to install Auditing, do not use `kubectl apply -f` directly for this file. -1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml). After that, to enable Auditing, create a local file cluster-configuration.yaml. +1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable Auditing, create a local file cluster-configuration.yaml. ```bash vi cluster-configuration.yaml ``` -2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to the local file just created. +2. 
Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it to the local file just created. 3. In this local cluster-configuration.yaml file, navigate to `auditing` and enable Auditing by changing `false` to `true` for `enabled`. Save the file after you finish. ```bash diff --git a/content/en/docs/pluggable-components/devops.md b/content/en/docs/pluggable-components/devops.md index 1c710882d..3622f299a 100644 --- a/content/en/docs/pluggable-components/devops.md +++ b/content/en/docs/pluggable-components/devops.md @@ -48,15 +48,15 @@ devops: ### **Installing on Kubernetes** -When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) for cluster setting. If you want to install DevOps, do not use `kubectl apply -f` directly for this file. +When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for cluster setting. If you want to install DevOps, do not use `kubectl apply -f` directly for this file. -1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml). After that, to enable DevOps, create a local file cluster-configuration.yaml. +1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable DevOps, create a local file cluster-configuration.yaml. ```bash vi cluster-configuration.yaml ``` -2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to the local file just created. +2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it to the local file just created. 3. In this local cluster-configuration.yaml file, navigate to `devops` and enable DevOps by changing `false` to `true` for `enabled`. Save the file after you finish. ```bash diff --git a/content/en/docs/pluggable-components/logging.md b/content/en/docs/pluggable-components/logging.md index 43e21637a..18451e2d6 100644 --- a/content/en/docs/pluggable-components/logging.md +++ b/content/en/docs/pluggable-components/logging.md @@ -63,15 +63,15 @@ es: # Storage backend for logging, tracing, events and auditing. ### **Installing on Kubernetes** -When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) for cluster setting. If you want to install Logging, do not use `kubectl apply -f` directly for this file. 
+When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for cluster setting. If you want to install Logging, do not use `kubectl apply -f` directly for this file. -1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml). After that, to enable Logging, create a local file cluster-configuration.yaml. +1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable Logging, create a local file cluster-configuration.yaml. ```bash vi cluster-configuration.yaml ``` -2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to the local file just created. +2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it to the local file just created. 3. In this local cluster-configuration.yaml file, navigate to `logging` and enable Logging by changing `false` to `true` for `enabled`. Save the file after you finish. ```bash diff --git a/content/en/docs/pluggable-components/service-mesh.md b/content/en/docs/pluggable-components/service-mesh.md index 61666df97..2035f722a 100644 --- a/content/en/docs/pluggable-components/service-mesh.md +++ b/content/en/docs/pluggable-components/service-mesh.md @@ -46,15 +46,15 @@ servicemesh: ### **Installing on Kubernetes** -When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) for cluster setting. If you want to install Service Mesh, do not use `kubectl apply -f` directly for this file. +When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for cluster setting. If you want to install Service Mesh, do not use `kubectl apply -f` directly for this file. -1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml). After that, to enable Service Mesh, create a local file cluster-configuration.yaml. +1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). 
After that, to enable Service Mesh, create a local file cluster-configuration.yaml. ```bash vi cluster-configuration.yaml ``` -2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to the local file just created. +2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it to the local file just created. 3. In this local cluster-configuration.yaml file, navigate to `servicemesh` and enable Service Mesh by changing `false` to `true` for `enabled`. Save the file after you finish. ```bash diff --git a/content/en/docs/quick-start/all-in-one-on-linux.md index db83fc881..44b48bfa7 100644 --- a/content/en/docs/quick-start/all-in-one-on-linux.md +++ b/content/en/docs/quick-start/all-in-one-on-linux.md @@ -27,11 +27,11 @@ See the requirements for hardware and operating system shown below. To get start | **Red Hat Enterprise Linux 7** | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G | | **SUSE Linux Enterprise Server 15/openSUSE Leap 15.2** | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G | -{{< notice note >}} +{{< notice note >}} The system requirements above and the instructions below are for the default minimal installation without any optional components enabled. If your machine has at least 8 cores and 16G memory, it is recommended that you enable all components. For more information, see Enable Pluggable Components. -{{</ notice >}} +{{</ notice >}} ### Node Requirements @@ -54,49 +54,40 @@ The system requirements above and the instructions below are for the default min ## Step 2: Download KubeKey -As below, you can either download the binary file or build the binary package from source code. - {{< tabs >}} -{{< tab "Download Binary" >}} +{{< tab "For users with poor network to GitHub" >}} -Execute the following command: +For users in China, you can download the installer using this link. ```bash -curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk +wget https://kubesphere.io/kubekey/releases/v1.0.0 ``` +{{</ tab >}} + +{{< tab "For users with good network to GitHub" >}} + +For users with good network to GitHub, you can download it from the [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly. + +```bash +wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz +``` +{{</ tab >}} + +{{</ tabs >}} + +Unzip it. + +```bash +tar -zxvf v1.0.0 +``` + +Grant the execution permission to `kk`: ```bash chmod +x kk ``` -{{</ tab >}} - -{{< tab "Build Binary from Source Code" >}} - -Execute the following command one by one: - -```bash -git clone https://github.com/kubesphere/kubekey.git -``` - -```bash -cd kubekey -``` - -```bash -./build.sh -``` - -Note: - -- Docker needs to be installed before the building. -- If you have problems accessing `https://proxy.golang.org/`, execute `build.sh -p` instead. - -{{</ tab >}} - -{{</ tabs >}} - {{< notice info >}} Developed in Go language, KubeKey represents a brand-new installation tool as a replacement for the ansible-based installer used before. KubeKey provides users with flexible installation choices, as they can install KubeSphere and Kubernetes separately or install them at one time, which is convenient and efficient.
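Before creating a cluster, it can help to sanity-check the downloaded binary. A small example, assuming the `version` subcommand is available in your KubeKey build:

```bash
# Print the KubeKey version to confirm the binary runs on this host.
./kk version
```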
@@ -111,24 +102,11 @@ In this QuickStart tutorial, you only need to execute one command for installati ./kk create cluster [--with-kubernetes version] [--with-kubesphere version] ``` -Here are some examples for your reference: +Create a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v3.0.0`). Here is an example for your reference: -- Create a Kubernetes cluster with the default version. ```bash -./kk create cluster -``` - -- Create a Kubernetes cluster with a specified version. - -```bash -./kk create cluster --with-kubernetes v1.18.6 -``` - -- Create a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v3.0.0`). - -```bash -./kk create cluster --with-kubesphere [version] +./kk create cluster --with-kubernetes v1.17.9 --with-kubesphere [version] ``` {{< notice note >}} @@ -137,7 +115,7 @@ Here are some examples for your reference: - For all-in-one installation, generally speaking, you do not need to change any configuration. - KubeKey will install [OpenEBS](https://openebs.io/) to provision LocalPV for development and testing environments by default, which is convenient for new users. For other storage classes, see Storage Class Configuration. -{{</ notice >}} +{{</ notice >}} After you execute the command, you will see a table as below for environment check. @@ -145,11 +123,11 @@ After you execute the command, you will see a table as below for environment che Make sure the above components marked with `y` are installed and input `yes` to continue. -{{< notice note >}} +{{< notice note >}} If you download the binary file directly in Step 2, you do not need to install `docker` as KubeKey will install it automatically. -{{</ notice >}} +{{</ notice >}} ## Step 4: Verify the Installation @@ -178,7 +156,7 @@ NOTES: 1. After logging into the console, please check the monitoring status of service components in the "Cluster Management". If any service is not - ready, please wait patiently until all components + ready, please wait patiently until all components are ready. 2. Please modify the default password after login. @@ -191,9 +169,9 @@ https://kubesphere.io 20xx-xx-xx xx:xx:xx You may need to bind EIP and configure port forwarding in your environment for external users to access the console. Besides, make sure the port 30880 is opened in your security groups. -{{</ notice >}} +{{</ notice >}} -After logging in the console, you can check the status of different components in **Components**. You may need to wait for some components to be up and running if you want to use related services. +After logging in to the console, you can check the status of different components in **Components**. You may need to wait for some components to be up and running if you want to use related services. You can also use `kubectl get pod --all-namespaces` to inspect the running status of KubeSphere workloads. ![components](https://ap3.qingstor.com/kubesphere-website/docs/components.png) diff --git a/content/en/docs/quick-start/enable-pluggable-components.md index ef9fb50cd..e0caa45a9 100644 --- a/content/en/docs/quick-start/enable-pluggable-components.md +++ b/content/en/docs/quick-start/enable-pluggable-components.md @@ -59,15 +59,15 @@ If you adopt [All-in-one Installation](https://kubesphere-v3.netlify.app/docs/qu ### Installing on Kubernetes -When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) for cluster setting. 
If you want to install pluggable components, do not use `kubectl apply -f` directly for this file. +When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for cluster setting. If you want to install pluggable components, do not use `kubectl apply -f` directly for this file. -1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml). After that, to enable pluggable components, create a local file cluster-configuration.yaml. +1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable pluggable components, create a local file cluster-configuration.yaml. ```bash vi cluster-configuration.yaml ``` -2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to the local file just created. +2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it to the local file just created. 3. In this local cluster-configuration.yaml file, enable the pluggable components you want to install by changing `false` to `true` for `enabled`. Here is [an example file](https://github.com/kubesphere/ks-installer/blob/master/deploy/cluster-configuration.yaml) for your reference. Save the file after you finish. 4. Execute the following command to start installation: diff --git a/content/en/docs/quick-start/minimal-kubesphere-on-k8s.md b/content/en/docs/quick-start/minimal-kubesphere-on-k8s.md index 63e50e7c4..666e90c89 100644 --- a/content/en/docs/quick-start/minimal-kubesphere-on-k8s.md +++ b/content/en/docs/quick-start/minimal-kubesphere-on-k8s.md @@ -17,7 +17,7 @@ In addition to installing KubeSphere on a Linux machine, you can also deploy it - The CSR signing feature is activated in kube-apiserver when it is started with the `--cluster-signing-cert-file` and `--cluster-signing-key-file` parameters. See [RKE installation issue](https://github.com/kubesphere/kubesphere/issues/1925#issuecomment-591698309). - For more information about the prerequisites of installing KubeSphere on Kubernetes, see [Prerequisites](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/prerequisites/). 
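To check whether the CSR signing flags mentioned above are set on a kubeadm-based cluster, you can grep the api-server Pod spec. A quick sketch; the label selector assumes kubeadm's standard `component=kube-apiserver` label:

```bash
# Look for the CSR signing flags in the running api-server Pod.
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep cluster-signing
```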
-{{</ notice >}} +{{</ notice >}} ## Deploy KubeSphere @@ -25,19 +25,19 @@ After you make sure your machine meets the prerequisites, you can follow the ste - Please read the note below before you execute the commands to start installation: -{{< notice note >}} +{{< notice note >}} -- If your server has trouble accessing GitHub, you can copy the content in [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml) and [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) respectively and past it to local files. You then can use `kubectl apply -f` for the local files to install KubeSphere. +- If your server has trouble accessing GitHub, you can copy the content in [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml) and [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) respectively and paste them to local files. You can then use `kubectl apply -f` for the local files to install KubeSphere. - In cluster-configuration.yaml, you need to disable `metrics_server` manually by changing `true` to `false` if the component has already been installed in your environment, especially for cloud-hosted Kubernetes clusters. -{{</ notice >}} +{{</ notice >}} ```bash -kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml +kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml ``` ```bash -kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml +kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml ``` - Inspect the logs of installation: @@ -59,4 +59,4 @@ kubectl get svc/ks-console -n kubesphere-system ## Enable Pluggable Components (Optional) -The guide above is used only for minimal installation by default. To enable other components in KubeSphere, see Enable Pluggable Components for more details. \ No newline at end of file +The guide above is used only for minimal installation by default. To enable other components in KubeSphere, see Enable Pluggable Components for more details. diff --git a/content/zh/docs/_index.md index f914c6238..97501485f 100644 --- a/content/zh/docs/_index.md +++ b/content/zh/docs/_index.md @@ -1,10 +1,30 @@ --- -title: "文档" +title: "Documentation" css: "scss/docs.scss" +LinkTitle: "Documentation" + section1: title: KubeSphere Documentation content: Learn how to build and manage cloud native applications using KubeSphere Container Platform. Get documentation, example code, tutorials, and more. image: /images/docs/banner.png ---- \ No newline at end of file + +section3: + title: Run KubeSphere and Kubernetes Stack from the Cloud Service + description: Cloud providers are offering KubeSphere as a cloud-hosted service, helping you create a highly available cluster within minutes in just a few clicks. These services will be available in September 2020. + list: + - image: /images/docs/aws.jpg + content: AWS Quickstart + link: + - image: /images/docs/qingcloud.svg + content: QingCloud QKE + link: + - image: /images/docs/radore.jpg + content: Radore RCD + link: + + titleRight: Want to host KubeSphere on your cloud? 
+ btnContent: Partner with us + btnLink: /partner/ +--- diff --git a/content/zh/docs/installing-on-kubernetes/_index.md b/content/zh/docs/installing-on-kubernetes/_index.md index 51adfedde..6747b7cf4 100644 --- a/content/zh/docs/installing-on-kubernetes/_index.md +++ b/content/zh/docs/installing-on-kubernetes/_index.md @@ -1,9 +1,9 @@ --- -title: "Installing on Kubernetes" +title: "Installing KubeSphere on Kubernetes" description: "Help you to better understand KubeSphere with detailed graphics and contents" layout: "single" -linkTitle: "Installing on Kubernetes" +linkTitle: "Installing KubeSphere on Kubernetes" weight: 2500 icon: "/images/docs/docs.svg" diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/_index.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/_index.md index cd927f966..a3e5e8745 100644 --- a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/_index.md +++ b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/_index.md @@ -1,5 +1,5 @@ --- -linkTitle: "Install on Linux" +linkTitle: "Installing on Hosted Kubernetes" weight: 2200 _build: diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/all-in-one.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/all-in-one.md deleted file mode 100644 index 8214171ef..000000000 --- a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/all-in-one.md +++ /dev/null @@ -1,116 +0,0 @@ ---- -title: "All-in-One Installation" -keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus' -description: 'The guide for installing all-in-one KubeSphere for developing or testing' - -linkTitle: "All-in-One" -weight: 2210 ---- - -For those who are new to KubeSphere and looking for a quick way to discover the platform, the all-in-one mode is your best choice to install it since it is one-click and hassle-free configuration installation with provisioning KubeSphere and Kubernetes on your machine. - -- The following instructions are for the default installation without enabling any optional components as we have made them pluggable since v2.1.0. If you want to enable any one, please see the section [Enable Pluggable Components](../all-in-one#enable-pluggable-components) below. -- If your machine has >= 8 cores and >= 16G memory, we recommend you to install the full package of KubeSphere by [enabling optional components](../complete-installation). - -## Prerequisites - -If your machine is behind a firewall, you need to open the ports by following the document [Ports Requirement](../port-firewall) for more information. - -## Step 1: Prepare Linux Machine - -The following describes the requirements of hardware and operating system. - -- For `Ubuntu 16.04` OS, it is recommended to select the latest `16.04.5`. -- If you are using Ubuntu 18.04, you need to use the root user to install. -- If the Debian system does not have the sudo command installed, you need to execute the `apt update && apt install sudo` command using root before installation. - -### Hardware Recommendation - -| System | Minimum Requirements | -| ------- | ----------- | -| CentOS 7.4 ~ 7.7 (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:100 G | -| Ubuntu 16.04/18.04 LTS (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:100 G | -| Red Hat Enterprise Linux Server 7.4 (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:100 G | -| Debian Stretch 9.5 (64 bit)| CPU:2 Core, Memory:4 G, Disk Space:100 G | - -## Step 2: Download Installer Package - -Execute the following commands to download Installer 2.1.1 and unpack it. 
- -```bash -curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \ -&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/scripts -``` - -## Step 3: Get Started with Installation - -You should not do anything except executing one command as follows. The installer will complete all things for you automatically including installing/updating dependency packages, installing Kubernetes with default version 1.16.7, storage service and so on. - -> Note: -> -> - Generally speaking, do not modify any configuration. -> - KubeSphere installs `calico` by default. If you would like to use a different network plugin, you are allowed to change the configuration in `conf/common.yaml`. You are also allowed to modify other configurations such as storage class, pluggable components, etc. -> - The default storage class is [OpenEBS](https://openebs.io/) which is a kind of [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) to provision persistence storage service. OpenEBS supports [dynamic provisioning PV](https://docs.openebs.io/docs/next/uglocalpv.html#Provision-OpenEBS-Local-PV-based-on-hostpath). It will be installed automatically for your testing purpose. -> - Please refer [storage configurations](../storage-configuration) for supported storage class. -> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not use the two IP range. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts. - -**1.** Execute the following command: - -```bash -./install.sh -``` - -**2.** Enter `1` to select `All-in-one` mode and type `yes` if your machine satisfies the requirements to start: - -```bash -################################################ - KubeSphere Installer Menu -################################################ -* 1) All-in-one -* 2) Multi-node -* 3) Quit -################################################ -https://kubesphere.io/ 2020-02-24 -################################################ -Please input an option: 1 -``` - -**3.** Verify if KubeSphere is installed successfully or not: - -**(1).** If you see "Successful" returned after completed, it means the installation is successful. The console service is exposed through nodeport 30880 by default. You may need to bind EIP and configure port forwarding in your environment for outside users to access. Make sure you disable the related firewall. - -```bash -successsful! -##################################################### -### Welcome to KubeSphere! ### -##################################################### - -Console: http://192.168.0.8:30880 -Account: admin -Password: P@88w0rd - -NOTE:Please modify the default password after login. -##################################################### -``` - -> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components). - -**(2).** You will be able to use default account and password to log in the console to take a tour of KubeSphere. - -Note: After log in console, please verify the monitoring status of service components in the "Cluster Status". If any service is not ready, please wait patiently untill all components get running up. - -![Dashboard](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png) - -## Enable Pluggable Components - -The guide above is only used for minimal installation by default. 
You can execute the following command to open the configure map and enable pluggable components. Make sure your cluster has enough CPU and memory in advance, see [Enable Pluggable Components](../pluggable-components). - -```bash -kubectl edit cm -n kubesphere-system ks-installer -``` - -## FAQ - -The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also please read the [FAQ of installation](../../faq/faq-install). - -If you have any further questions please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues). diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/complete-installation.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/complete-installation.md deleted file mode 100644 index e0ab92099..000000000 --- a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/complete-installation.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -title: "Install All Optional Components" -keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix' -description: 'Install KubeSphere with all optional components enabled on Linux machine' - - -weight: 2260 ---- - -The installer only installs required components (i.e. minimal installation) by default since v2.1.0. Other components are designed to be pluggable, which means you can enable any of them before or after installation. If your machine meets the following minimum requirements, we recommend you to **enable all components before installation**. A complete installation gives you an opportunity to comprehensively discover the container platform. - - -Minimum Requirements - -- CPU: 8 cores in total of all machines -- Memory: 16 GB in total of all machines - - - -> Note: -> -> - If your machines do not meet the minimum requirements of a complete installation, you can enable any of components at your will. Please refer to [Enable Pluggable Components Installation](../pluggable-components). -> - It works for [All-in-One](../all-in-one) and [Multi-Node](../multi-node). - -This tutorial will walk you through how to enable all components of KubeSphere. - -## Download Installer Package - -If you do not have the package yet, please run the following commands to download Installer 2.1.1 and unpack it, then enter `conf` folder. - -```bash -$ curl -L https://kubesphere.io/download/stable/v2.1.1 > installer.tar.gz \ -&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf -``` - -## Enable All Components - -Edit `conf/common.yaml`, reference the following changes with values being `true` which are `false` by default. - -```yaml -# LOGGING CONFIGURATION -# logging is an optional component when installing KubeSphere, and -# Kubernetes builtin logging APIs will be used if logging_enabled is set to false. -# Builtin logging only provides limited functions, so recommend to enable logging. -logging_enabled: true # Whether to install logging system -elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number -elasticsearch_data_replica: 2 # total number of data nodes -elasticsearch_volume_size: 20Gi # Elasticsearch volume size -log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default. -elk_prefix: logstash # the string making up index names. 
The index name will be formatted as ks--log -kibana_enabled: false # Kibana Whether to install built-in Grafana -#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address, KubeSphere supports integrate with Elasticsearch outside the cluster, which can reduce the resource consumption. -#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port - -#DevOps Configuration -devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image) -jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default -jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default -jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default -jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters -jenkinsJavaOpts_Xmx: 6g -jenkinsJavaOpts_MaxRAM: 8g -sonarqube_enabled: true # Whether to install built-in SonarQube -#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address, KubeSphere supports integrate with SonarQube outside the cluster, which can reduce the resource consumption. -#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token - -# Following components are all optional for KubeSphere, -# Which could be turned on to install it before installation or later by updating its value to true -openpitrix_enabled: true # KubeSphere application store -metrics_server_enabled: true # For KubeSphere HPA to use -servicemesh_enabled: true # KubeSphere service mesh system(Istio-based) -notification_enabled: true # KubeSphere notification system -alerting_enabled: true # KubeSphere alerting system -``` - -Save it, then you can continue the installation process. diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-huawei-cce.md index 4961e5c54..4c38da865 100644 --- a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-huawei-cce.md +++ b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-huawei-cce.md @@ -1,29 +1,29 @@ --- -title: "在华为云 CCE 安装 KubeSphere" +title: "Install KubeSphere on Huawei CCE" keywords: "kubesphere, kubernetes, docker, huawei, cce" -description: "介绍如何在华为云 CCE 容器引擎上部署 KubeSphere 3.0" +description: "This article introduces how to install KubeSphere 3.0 on Huawei CCE." --- -本指南将介绍如果在[华为云 CCE 容器引擎](https://support.huaweicloud.com/cce/)上部署并使用 KubeSphere 3.0.0 平台。 +This guide describes how to install KubeSphere 3.0.0 on [Huawei CCE](https://support.huaweicloud.com/en-us/qs-cce/cce_qs_0001.html). -## 华为云 CCE 环境准备 +## Preparation for Huawei CCE -### 创建 Kubernetes 集群 +### Create a Kubernetes Cluster -首先按使用环境的资源需求创建 Kubernetes 集群,满足以下一些条件即可(如已有环境并满足条件可跳过本节内容): +First, create a Kubernetes cluster that meets the requirements below (skip this section if you already have a qualified cluster). -- KubeSphere 3.0.0 默认支持的 Kubernetes 版本为 `1.15.x`, `1.16.x`, `1.17.x`, `1.18.x`,需要选择其中支持的版本进行集群创建(如 `v1.15.11`, `v1.17.9`); -- 需要确保 Kubernetes 集群所使用的云主机的网络可以,可以通过在创建集群的同时 "自动创建" 或 "使用已有" 弹性 IP;或者在集群创建后自行配置网络(如配置 [NAT 网关](https://support.huaweicloud.com/natgateway/)); -- 工作节点规格方面建议选择 `s3.xlarge.2` 的 `4核|8GB` 配置,并按需扩展工作节点数量(通常生产环境需要 3 个及以上工作节点)。 +- KubeSphere 3.0.0 supports Kubernetes `1.15.x`, `1.16.x`, `1.17.x`, and `1.18.x` by default. Select a supported version when creating the cluster, e.g. `v1.15.11` or `v1.17.9`. 
+- Ensure that the network of the cloud hosts used by the Kubernetes cluster works. You can bind an elastic IP via “Auto Create” or “Select Existing” when creating the cluster, or configure the network after the cluster is created (e.g. configure a [NAT Gateway](https://support.huaweicloud.com/en-us/productdesc-natgateway/en-us_topic_0086739762.html)). +- It is recommended to select the `s3.xlarge.2` (`4-core|8GB`) flavor for worker nodes and add more as needed (a production environment usually requires 3 or more worker nodes). -### 创建公网 kubectl 证书 +### Create a kubectl certificate for public network access -- 创建完集群后,进入 `资源管理` > `集群管理` 界面,在 `基本信息` > `网络` 面板中,绑定 `公网apiserver地址`; -- 而后在右侧面板中,选择 `kubectl` 标签页,并在 `下载kubectl配置文件` 列表项中 `点击此处下载`,即可获取公用可用的 kubectl 证书。 +- Go to `Resource Management` > `Cluster Management` > `Basic Information` > `Network`, and bind a `Public apiserver address`. +- Then select the `kubectl` tab on the right panel, and under `Download kubectl configuration file`, click `Click here to download` to get a publicly usable kubectl certificate. -![生成 Kubectl 配置文件](/images/docs/huawei-cce/zh/generate-kubeconfig.png) +![Generate Kubectl config file](/images/docs/huawei-cce/en/generate-kubeconfig.png) -获取 kubectl 配置文件后,可通过 kubectl 命令行工具来验证集群连接: +After you get the configuration file for kubectl, use the kubectl command-line tool to verify the connection to the cluster: ```bash $ kubectl version Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.9-r0-CCE2 ``` -## KubeSphere 平台部署 +## KubeSphere Deployment -### 创建自定义 StorageClass +### Create a custom StorageClass -> 由于华为 CCE 自带的 Everest CSI 组件所提供的 StorageClass `csi-disk` 默认指定的是 SATA 磁盘(即普通 I/O 磁盘),但实际创建的 Kubernetes 集群所配置的磁盘基本只有 SAS(高 I/O)和 SSD (超高 I/O),因此建议额外创建对应的 StorageClass(并设定为默认)以方便后续部署使用。参见官方文档 - [使用 kubectl 创建云硬盘](https://support.huaweicloud.com/usermanual-cce/cce_01_0044.html#section7)。 +> The StorageClass `csi-disk` provided by Huawei CCE's built-in Everest CSI component uses SATA (normal I/O) disks by default, while the disks actually configured for Kubernetes clusters are usually SAS (high I/O) or SSD (extremely high I/O). It is therefore suggested that you create an extra corresponding StorageClass (and set it as the default) for later deployments. Refer to the official document - [Use kubectl to create a cloud disk](https://support.huaweicloud.com/en-us/usermanual-cce/cce_01_0044.html). -以下示例展示如何创建一个 SAS(高 I/O)磁盘对应的 StorageClass: +Below is an example of creating a StorageClass for SAS (high I/O) disks. ```yaml # csi-disk-sas.yaml @@ -54,7 +54,7 @@ metadata: parameters: csi.storage.k8s.io/csi-driver-name: disk.csi.everest.io csi.storage.k8s.io/fstype: ext4 - # 绑定华为 “高I/O” 磁盘,如需 “超高I/O“ 则此值改为 SSD + # Bind Huawei “high I/O” disks. For “extremely high I/O”, change this value to SSD. everest.io/disk-volume-type: SAS everest.io/passthrough: "true" provisioner: everest-csi-provisioner @@ -64,48 +64,48 @@ volumeBindingMode: Immediate ``` -关于如何设定/取消默认 StorageClass,可参考 Kubernetes 官方文档 - [改变默认 StorageClass](https://kubernetes.io/zh/docs/tasks/administer-cluster/change-default-storage-class/)。 +For how to set up or cancel a default StorageClass, refer to the Kubernetes official document - [Change Default StorageClass](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/). -### 通过 ks-installer 执行最小化部署 +### Use ks-installer for a minimal deployment -接下来就可以使用 [ks-installer](https://github.com/kubesphere/ks-installer) 在已有的 Kubernetes 集群上来执行 KubeSphere 部署,建议首先还是以最小功能集进行安装,可执行以下命令: +Use [ks-installer](https://github.com/kubesphere/ks-installer) to deploy KubeSphere on an existing Kubernetes cluster. It is suggested that you start with a minimal installation by running the following commands: 
```bash -$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml -$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml +$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml +$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml ``` -执行部署命令后,可以通过进入 `工作负载` > `容器组 Pod` 界面,在右侧面板中查询 `kubesphere-system` 命名空间下的 Pod 运行状态了解 KubeSphere 平台最小功能集的部署状态;通过该命名空间下 `ks-console-xxxx` 容器的状态来了解 KubeSphere 控制台应用的可用状态。 +After running the deployment commands, go to `Workload` > `Pod` and check the running status of the Pods in the `kubesphere-system` namespace to follow the deployment of KubeSphere's minimal feature set; the status of the `ks-console-xxxx` Pod in that namespace tells you whether the KubeSphere console is available. -![部署 KubeSphere 最小功能集](/images/docs/huawei-cce/zh/deploy-ks-minimal.png) +![Deploy KubeSphere in Minimal](/images/docs/huawei-cce/en/deploy-ks-minimal.png) -### 开启 KubeSphere 外网访问 +### Expose KubeSphere Console -通过 `kubesphere-system` 命名空间下的 Pod 运行状态确认 KubeSphere 基础组件都已进入运行状态后,我们需要为 KubeSphere 控制台开启外网访问。 +Check the running status of the Pods in the `kubesphere-system` namespace and make sure the basic components of KubeSphere are running. Then expose the KubeSphere console for external access. -进入 `资源管理` > `网络管理`,在右侧面板中选择 `ks-console` 更改网络访问方式,建议选用 `负载均衡(``LoadBalancer)` 访问方式(需绑定弹性公网 IP),配置完成后如下图: +Go to `Resource Management` > `Network`, select the `ks-console` service on the right panel and change its access method. It is suggested that you choose `LoadBalancer` (an elastic public IP is required). The configuration is shown below. -![开启 KubeSphere 外网访问](/images/docs/huawei-cce/zh/expose-ks-console.png) +![Expose KubeSphere Console](/images/docs/huawei-cce/en/expose-ks-console.png) -服务细节配置基本上选用默认选项即可,当然也可以按需进行调整: +The default settings are fine for the remaining service options; you can also adjust them as needed. -![为 KubeSphere 控制台配置负载均衡访问](/images/docs/huawei-cce/zh/edit-ks-console-svc.png) +![Edit KubeSphere Console SVC](/images/docs/huawei-cce/en/edit-ks-console-svc.png) -通过负载均衡绑定公网访问后,即可使用给定的访问地址进行访问,进入到 KubeSphere 的登陆界面并使用默认账号(用户名 `admin`,密码 `P@88w0rd`)即可登陆平台: +After you configure the LoadBalancer for the KubeSphere console, you can visit it via the given address. Go to the KubeSphere login page and use the default account (username `admin`, password `P@88w0rd`) to log in. -![登录 KubeSphere 平台](/images/docs/huawei-cce/zh/login-ks-console.png) +![Log in KubeSphere Console](/images/docs/huawei-cce/en/login-ks-console.png) -### 通过 KubeSphere 开启附加组件 +### Start add-ons via KubeSphere -KubeSphere 平台外网可访问后,接下来的操作即可都在平台内完成。开启附加组件的操作可以参考社区文档 - `KubeSphere 3.0 界面开启可插拔组件安装`。 +Once the KubeSphere console is accessible from the Internet, the remaining operations can all be completed on the console. For how to enable add-ons, refer to the document - `Start add-ons in KubeSphere 3.0`. -💡 需要留意:在开启 Istio 组件之前,由于自定义资源定义(CRD)冲突的问题,需要先删除华为 CCE 自带的 `applications.app.k8s.io` ,最直接的方式是通过 kubectl 工具来完成: +💡 Note: Before you enable Istio, you have to delete the `applications.app.k8s.io` CRD built into Huawei CCE because of a CRD conflict. The most direct way is to use kubectl: ```bash $ kubectl delete crd applications.app.k8s.io ``` -全部附加组件开启并安装成功后,进入集群管理界面,可以得到如下界面呈现效果,特别是在 `服务组件` 部分可以看到已经开启的各个基础和附加组件: +After all add-ons are enabled and installed, go to the Cluster Management page, and you will see the interface below. In particular, you can see all the enabled add-ons in `Add-Ons`. 
-![KubeSphere 全功能集管理界面](/images/docs/huawei-cce/zh/view-ks-console-full.png) +![Full View of KubeSphere Console](/images/docs/huawei-cce/en/view-ks-console-full.png) diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-tencent-tke.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-tencent-tke.md deleted file mode 100644 index 07c95ed23..000000000 --- a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-tencent-tke.md +++ /dev/null @@ -1,103 +0,0 @@ ---- -title: "在腾讯云 TKE 安装 KubeSphere" -keywords: "kubesphere, kubernetes, docker, tencent, tke" -description: "介绍如何在腾讯云 TKE 上部署 KubeSphere 3.0" ---- - -本指南将介绍如何在[腾讯云 TKE](https://cloud.tencent.com/document/product/457/6759) 上部署并使用 KubeSphere 3.0.0 平台。 - -## 腾讯云 TKE 环境准备 - -### 创建 Kubernetes 集群 -首先按使用环境的资源需求[创建 Kubernetes 集群](https://cloud.tencent.com/document/product/457/32189),满足以下一些条件即可(如已有环境并满足条件可跳过本节内容): - -- KubeSphere 3.0.0 默认支持的 Kubernetes 版本为 `1.15.x`, `1.16.x`, `1.17.x`, `1.18.x`,需要选择其中支持的版本进行集群创建(如 `1.16.3`, `1.18.4`); -- 工作节点机型配置规格方面选择 `SA2.LARGE8` 的 `4核|8GB` 配置即可,并按需扩展工作节点数量(通常生产环境需要 3 个及以上工作节点)。 - - - -### 创建公网 kubectl 证书 - -- 创建完集群后,进入 `容器服务` > `集群` 界面,选择刚创建的集群,在 `基本信息` 面板中, `集群APIServer信息` 中开启 `外网访问` 。 -- 然后在下方 `kubeconfig` 列表项中点击 `下载`,即可获取公用可用的 kubectl 证书。 - -![generate-kubeconfig.png](/static/images/docs/tencent-tke/generate-kubeconfig.png) - -- 获取 kubectl 配置文件后,可通过 kubectl 命令行工具来验证集群连接: - - - -```bash -$ kubectl version -Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-17T11:41:22Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"} -Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.4-tke.2", GitCommit:"f6b0517bc6bc426715a9ff86bd6aef39c81fd64a", GitTreeState:"clean", BuildDate:"2020-08-12T02:18:32Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"} -``` - - -## KubeSphere 平台部署 - -### 通过 ks-installer 执行最小化部署 -接下来就可以使用 [ks-installer](https://github.com/kubesphere/ks-installer) 在已有的 Kubernetes 集群上来执行 KubeSphere 部署,建议首先还是以最小功能集进行安装。 - -- 使用 kubectl 执行以下命令安装 KubeSphere: -```bash -$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml -``` - -- 本地创建名为 `cluster-configuration.yaml` 的文件: -```bash -$ vim cluster-configuration.yaml -``` - -- 复制此[文件](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml)中的内容到 `cluster-configuration.yaml` 中,并将 `metrics_server.enabled` 字段设为 `false`,修改完成后执行以下命令: - -![edit-cluster-configuration.png](/static/images/docs/tencent-tke/edit-cluster-configuration.png) -```bash -$ kubectl apply -f cluster-configuration.yaml -``` - -Note: -腾讯云 TKE 托管集群已默认部署 `hpa-metrics-server`,若 `cluster-configuration.yaml` 文件中未禁用,则会导致 KubeSphere 部署失败。 - - -- 执行以下命令查看部署日志,当日志输出如以下图片内容时则表示部署完成: -```bash -$ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f -``` -![ks-install-log.png](/static/images/docs/tencent-tke/ks-install-log.png) - -### 访问 KubeSphere 控制台 -部署完成后,您可以通过以下步骤访问 KubeSphere 控制台。 - -#### NodePort 方式访问 - -- 在 `容器服务` > `集群` 界面中,选择创建好的集群,在 `节点管理` > `节点` 面板中,查看任意一个节点的 `公网 IP`(集群安装时默认会免费为每个节点绑定公网 IP)。 - -![nodeport.png](/static/images/docs/tencent-tke/nodeport.png) - -- 由于服务安装时默认开启 NodePort 且端口为 30880,浏览器输入 `<公网 IP>:30880` ,并以默认账号(用户名 `admin`,密码 `P@88w0rd`)即可登录控制台。 - 
-![console.png](/static/images/docs/tencent-tke/console.png) -#### LoadBalancer 方式访问 - -- 在 `容器服务` > `集群` 界面中,选择创建好的集群,在 `服务与路由` > `service` 面板中,点击 `ks-console` 一行中 `更新访问方式`。 - -![loadbalancer1.png](/static/images/docs/tencent-tke/loadbalancer1.png) - -- `服务访问方式` 选择 `提供公网访问`,`端口映射` 中 `服务端口` 填写您希望的端口号,点击 `更新访问方式`。 - -![loadbalancer2.png](/static/images/docs/tencent-tke/loadbalancer2.png) - -- 此时界面您将会看到 LoadBalancer 公网 IP: - -![loadbalancer3.png](/static/images/docs/tencent-tke/loadbalancer3.png) - -- 浏览器输入 `<公网 IP>:<映射端口>`,并以默认账号(用户名 `admin`,密码 `P@88w0rd`)即可登录控制台。 - -![console.png](/static/images/docs/tencent-tke/console.png) - -### 通过 KubeSphere 开启附加组件 -KubeSphere 平台外网可访问后,接下来的操作即可都在平台内完成。开启附加组件的操作可以参考社区文档 - `KubeSphere 3.0 界面开启可插拔组件安装`。 -全部附加组件开启并安装成功后,进入集群管理界面,可以得到如下界面呈现效果,特别是在 `服务组件` 部分可以看到已经开启的各个基础和附加组件: -![console-full.png](/static/images/docs/tencent-tke/console-full.png) diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md new file mode 100644 index 000000000..e2bb0b57e --- /dev/null +++ b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md @@ -0,0 +1,131 @@ +--- +title: "Deploy KubeSphere on AKS" +keywords: "KubeSphere, Kubernetes, Installation, Azure, AKS" +description: "How to deploy KubeSphere on AKS" + +weight: 2270 +--- + +This guide walks you through the steps of deploying KubeSphere on [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/). + +## Prepare an AKS cluster + +Azure can help you implement infrastructure as code by providing resource deployment automation options. Commonly adopted tools include [ARM templates](https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/overview) and [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/what-is-azure-cli?view=azure-cli-latest). In this guide, we will use Azure CLI to create all the resources that are needed for the installation of KubeSphere. + +### Use Azure Cloud Shell +You don't have to install Azure CLI on your machine as Azure provides a web-based terminal. Click the Cloud Shell button on the menu bar at the upper right corner in Azure portal. + +![Cloud Shell](/images/docs/aks/aks-launch-icon.png) + +Select **Bash** Shell. + +![Bash Shell](/images/docs/aks/aks-choices-bash.png) +### Create a Resource Group + +An Azure resource group is a logical group in which Azure resources are deployed and managed. The following example creates a resource group named `KubeSphereRG` in the location `westus`. + +```bash +az group create --name KubeSphereRG --location westus +``` + +### Create an AKS Cluster +Use the command `az aks create` to create an AKS cluster. The following example creates a cluster named `KuberSphereCluster` with three nodes. This will take several minutes to complete. + +```bash +az aks create --resource-group KubeSphereRG --name KuberSphereCluster --node-count 3 --enable-addons monitoring --generate-ssh-keys +``` +{{< notice note >}} + +You can use the `--node-vm-size` or `-s` option to change the size of Kubernetes nodes. Default: Standard_DS2_v2 (2vCPU, 7GB memory). For more options, see [az aks create](https://docs.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-create). + +{{</ notice >}} + +### Connect to the Cluster + +To configure kubectl to connect to the Kubernetes cluster, use the command `az aks get-credentials`. This command downloads credentials and configures the Kubernetes CLI to use them.
+ +```bash +az aks get-credentials --resource-group KubeSphereRG --name KuberSphereCluster +``` + +```bash +kebesphere@Azure:~$ kubectl get nodes +NAME STATUS ROLES AGE VERSION +aks-nodepool1-23754246-vmss000000 Ready agent 38m v1.16.13 +``` +### Check Azure Resources in the Portal +After you execute all the commands above, you can see there are 2 Resource Groups created in Azure Portal. + +![Resource groups](/images/docs/aks/aks-create-command.png) + +Azure Kubernetes Services itself will be placed in KubeSphereRG. + +![Azure Kubernetes Services](/images/docs/aks/aks-dashboard.png) + +All the other Resources will be placed in MC_KubeSphereRG_KuberSphereCluster_westus, such as VMs, Load Balancer and Virtual Network. + +![Azure Kubernetes Services](/images/docs/aks/aks-all-resources.png) + +## Deploy KubeSphere on AKS +To start deploying KubeSphere, use the following command. +```bash +kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml +``` +Download the cluster-configuration.yaml as below and you can customize the configuration. You can also enable pluggable components by setting the `enabled` property to `true` in this file. +```bash +wget https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml +``` +As `metrics-server` is already installed on AKS, you need to disable the component in the cluster-configuration.yaml file by changing `true` to `false` for `enabled`. +```bash +kebesphere@Azure:~$ vim ./cluster-configuration.yaml +--- + metrics_server: # (CPU: 56 m, Memory: 44.35 MiB) Whether to install metrics-server. It enables HPA (Horizontal Pod Autoscaler). + enabled: false +--- +``` +The installation process will start after the cluster configuration is applied through the following command: +```bash +kubectl apply -f ./cluster-configuration.yaml +``` + +You can inspect the logs of installation through the following command: + +```bash +kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f +``` + +## Access KubeSphere Console + +To access the KubeSphere console from a public IP address, you need to change the service type to `LoadBalancer`. +```bash +kubectl edit service ks-console -n kubesphere-system +``` +Find the following section and change the type to `LoadBalancer`. +```bash +spec: + clusterIP: 10.0.78.113 + externalTrafficPolicy: Cluster + ports: + - name: nginx + nodePort: 30880 + port: 80 + protocol: TCP + targetPort: 8000 + selector: + app: ks-console + tier: frontend + version: v3.0.0 + sessionAffinity: None + type: LoadBalancer # Change NodePort to LoadBalancer +status: + loadBalancer: {} +``` +After saving the configuration of the ks-console service, you can use the following command to get the public IP address (under `EXTERNAL-IP`). Use the IP address to access the console with the default account and password (`admin/P@88w0rd`). +```bash +kebesphere@Azure:~$ kubectl get svc/ks-console -n kubesphere-system +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +ks-console LoadBalancer 10.0.181.93 13.86.xxx.xxx 80:30194/TCP 13m +``` +## Enable Pluggable Components (Optional) + +The example above demonstrates the process of a default minimal installation. For pluggable components, you can enable them either before or after the installation. See [Enable Pluggable Components](https://github.com/kubesphere/ks-installer#enable-pluggable-components) for details.
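If KubeSphere is already running and you want to turn a component on later, one common approach is to edit the installer's ClusterConfiguration in place. A sketch, assuming the v3.0.0 installer's CRD (with shortname `cc`) is present on the cluster:

```bash
# Open the ClusterConfiguration used by ks-installer and change the
# desired component's "enabled" field from false to true; ks-installer
# will reconcile the change automatically.
kubectl edit cc ks-installer -n kubesphere-system
```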
\ No newline at end of file
diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md
new file mode 100644
index 000000000..704665fdc
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md
@@ -0,0 +1,126 @@
+---
+title: "Deploy KubeSphere on DigitalOcean"
+keywords: 'Kubernetes, KubeSphere, DigitalOcean, Installation'
+description: 'How to install KubeSphere on DigitalOcean'
+
+weight: 2265
+---
+
+![KubeSphere+DOKS](/images/docs/do/KubeSphere-DOKS.png)
+
+This guide walks you through the steps of deploying KubeSphere on [DigitalOcean Kubernetes](https://www.digitalocean.com/products/kubernetes/).
+
+## Prepare a DOKS Cluster
+
+A Kubernetes cluster in DO is a prerequisite for installing KubeSphere. Go to your [DO account](https://cloud.digitalocean.com/) and, in the navigation menu, refer to the image below to create a cluster.
+
+![create-cluster-do](/images/docs/do/create-cluster-do.png)
+
+You need to select:
+1. Kubernetes version (e.g. *1.18.6-do.0*)
+2. Datacenter region (e.g. *Frankfurt*)
+3. VPC network (e.g. *default-fra1*)
+4. Cluster capacity (e.g. 2 standard nodes with 2 vCPUs and 4 GB of RAM each)
+5. A name for the cluster (e.g. *kubesphere-3*)
+
+![config-cluster-do](/images/docs/do/config-cluster-do.png)
+
+{{< notice note >}}
+
+- Supported Kubernetes versions for KubeSphere 3.0.0: 1.15.x, 1.16.x, 1.17.x, 1.18.x.
+- 2 nodes are included in this example. You can add more nodes based on your own needs, especially in a production environment.
+- The machine type Standard / 4 GB / 2 vCPUs is for minimal installation. If you plan to enable several pluggable components or use the cluster for production, we recommend upgrading your nodes to a more powerful type (such as CPU-Optimized / 8 GB / 4 vCPUs). It seems that DigitalOcean provisions the master nodes based on the type of the worker nodes, and for Standard ones the API server can become unresponsive quite fast.
+
+{{</ notice >}}
+
+When the cluster is ready, you can download the config file for kubectl.
+
+![download-config-file](/images/docs/do/download-config-file.png)
+
+## Install KubeSphere on DOKS
+
+Now that the cluster is ready, you can install KubeSphere following these steps:
+
+- Install KubeSphere using kubectl. The following command is only for the default minimal installation.
+
+  ```bash
+  kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
+  ```
+
+- Create a local cluster-configuration.yaml.
+
+  ```bash
+  vi cluster-configuration.yaml
+  ```
+
+- Copy all the content in this [file](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it to your local cluster-configuration.yaml.
+
+- Save the file when you finish. Execute the following command to start the installation:
+
+  ```bash
+  kubectl apply -f cluster-configuration.yaml
+  ```
+
+- Inspect the logs of installation:
+
+  ```bash
+  kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+  ```
+
+When the installation finishes, you can see the following message:
+
+```bash
+#####################################################
+### Welcome to KubeSphere! ###
+#####################################################
+Console: http://10.XXX.XXX.XXX:30880
+Account: admin
+Password: P@88w0rd
+NOTES:
+  1. After logging into the console, please check the
+     monitoring status of service components in
+     the "Cluster Management". If any service is not
+     ready, please wait patiently until all components
+     are ready.
+  2. Please modify the default password after login.
+#####################################################
+https://kubesphere.io           2020-xx-xx xx:xx:xx
+```
+
+## Access KubeSphere Console
+
+Now that KubeSphere is installed, you can access the web console of KubeSphere by following the steps below.
+
+- Go to the Kubernetes Dashboard provided by DigitalOcean.
+
+  ![kubernetes-dashboard-access](/images/docs/do/kubernetes-dashboard-access.png)
+
+- Select the **kubesphere-system** namespace.
+
+  ![kubernetes-dashboard-namespace](/images/docs/do/kubernetes-dashboard-namespace.png)
+
+- In **Service -> Services**, edit the service **ks-console**.
+
+  ![kubernetes-dashboard-edit](/images/docs/do/kubernetes-dashboard-edit.png)
+
+- Change the type from `NodePort` to `LoadBalancer`. Save the file when you finish.
+
+  ![lb-change](/images/docs/do/lb-change.png)
+
+- Access the KubeSphere web console using the endpoint generated by DO.
+
+  ![access-console](/images/docs/do/access-console.png)
+
+  {{< notice tip >}}
+
+  Instead of changing the service type to `LoadBalancer`, you can also access the KubeSphere console via `NodeIP:NodePort` (service type set to `NodePort`). You need to get the public IP of any one of your nodes.
+
+  {{</ notice >}}
+
+- Log in to the console with the default account and password (`admin/P@88w0rd`). On the cluster overview page, you can see the dashboard as shown in the following image.
+
+  ![doks-cluster](/images/docs/do/doks-cluster.png)
+
+## Enable Pluggable Components (Optional)
+
+The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see Enable Pluggable Components for more details.
diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md
new file mode 100644
index 000000000..9113157e3
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md
@@ -0,0 +1,172 @@
+---
+title: "Deploy KubeSphere on AWS EKS"
+keywords: 'Kubernetes, KubeSphere, EKS, Installation'
+description: 'How to install KubeSphere on EKS'
+
+weight: 2265
+---
+
+This guide walks you through the steps of deploying KubeSphere on [AWS EKS](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html).
+## Install the AWS CLI
+Unlike GKE, AWS EKS does not provide a web terminal, so we must install the AWS CLI first. macOS is used as an example below; for other operating systems, refer to [Getting Started with EKS](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html).
+```shell
+pip3 install awscli --upgrade --user
+```
+Check it with `aws --version`.
+![check-aws-cli](/images/docs/eks/check-aws-cli.png)
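+
+If `aws` is not found after the pip installation, the user-level install directory is probably not on your `PATH`. A minimal sketch for a typical macOS/Linux shell (the exact directory depends on your Python installation, so check it first):
+
+```shell
+# Show where pip placed user-level executables, then add that directory to PATH.
+python3 -m site --user-base     # e.g. prints /Users/you/Library/Python/3.8
+export PATH="$(python3 -m site --user-base)/bin:$PATH"
+aws --version                   # should now resolve
+```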
+
+## Prepare an EKS Cluster
+
+- A standard Kubernetes cluster in AWS is a prerequisite of installing KubeSphere. Go to the navigation menu and refer to the image below to create a cluster.
+
+![create-cluster-eks](/images/docs/eks/eks-launch-icon.png)
+
+- On the Configure cluster page, fill in the following fields:
+![config-cluster-page](/images/docs/eks/config-cluster-page.png)
+
+  - Name – A unique name for your cluster.
+
+  - Kubernetes version – The version of Kubernetes to use for your cluster.
+
+  - Cluster service role – Select the IAM role that you created with [Create your Amazon EKS cluster IAM role](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#role-create).
+
+  - Secrets encryption – (Optional) Choose to enable envelope encryption of Kubernetes secrets using the AWS Key Management Service (AWS KMS). If you enable envelope encryption, the Kubernetes secrets are encrypted using the customer master key (CMK) that you select. The CMK must be symmetric and created in the same region as the cluster, and if the CMK was created in a different account, the user must have access to the CMK. For more information, see [Allowing users in other accounts to use a CMK](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html) in the AWS Key Management Service Developer Guide.
+
+  - Kubernetes secrets encryption with an AWS KMS CMK requires Kubernetes version 1.13 or later. If no keys are listed, you must create one first. For more information, see [Creating keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html).
+
+  - Tags – (Optional) Add any tags to your cluster. For more information, see [Tagging your Amazon EKS resources](https://docs.aws.amazon.com/eks/latest/userguide/eks-using-tags.html).
+
+- Select Next.
+
+  - On the Specify networking page, select values for the following fields:
+  ![network](/images/docs/eks/networking.png)
+
+    - VPC – The VPC that you created previously in [Create your Amazon EKS cluster VPC](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#vpc-create). You can find the name of your VPC in the drop-down list.
+
+    - Subnets – By default, the available subnets in the VPC specified in the previous field are preselected. Deselect any subnet that you don't want to host cluster resources, such as worker nodes or load balancers.
+
+    - Security groups – The SecurityGroups value from the AWS CloudFormation output that you generated with [Create your Amazon EKS cluster VPC](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#vpc-create). This security group has ControlPlaneSecurityGroup in its name in the drop-down list.
+- For Cluster endpoint access – Choose one of the following options:
+![endpoints](/images/docs/eks/endpoints.png)
+  - Public – Enables only public access to your cluster's Kubernetes API server endpoint. Kubernetes API requests that originate from outside of your cluster's VPC use the public endpoint. By default, access is allowed from any source IP address. You can optionally restrict access to one or more CIDR ranges such as 192.168.0.0/16, for example, by selecting Advanced settings and then selecting Add source.
+
+  - Private – Enables only private access to your cluster's Kubernetes API server endpoint. Kubernetes API requests that originate from within your cluster's VPC use the private VPC endpoint.
+
+    > Important
+    If you created a VPC without outbound internet access, then you must enable private access.
+
+  - Public and private – Enables public and private access.
+- Select Next.
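+
+If you later want to review or confirm these endpoint settings from the CLI, a quick sketch (the cluster name `KubeSphereEKS` and region `us-west-2` are placeholders; substitute your own values):
+
+```shell
+# Show the networking and endpoint-access configuration of the cluster.
+aws eks describe-cluster \
+  --region us-west-2 \
+  --name KubeSphereEKS \
+  --query "cluster.resourcesVpcConfig"
+```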
+![logging](/images/docs/eks/logging.png)
+  - On the Configure logging page, you can optionally choose which log types you want to enable. By default, each log type is Disabled. For more information, see [Amazon EKS control plane logging](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html).
+
+- Select Next.
+![review](/images/docs/eks/review.png)
+  - On the Review and create page, review the information that you entered or selected on the previous pages. Select Edit if you need to make changes to any of your selections. Once you're satisfied with your settings, select Create. The Status field shows CREATING until the cluster provisioning process completes.
+For more information about the previous options, see Modifying cluster endpoint access.
+When your cluster provisioning is complete (usually between 10 and 15 minutes), note the API server endpoint and Certificate authority values. These are used in your kubectl configuration.
+![creating](/images/docs/eks/creating.png)
+- Create a **Node Group** and define 2 nodes in this cluster.
+  ![node-group](/images/docs/eks/node-group.png)
+- Configure the node group.
+  ![config-node-group](/images/docs/eks/config-node-grop.png)
+
+{{< notice note >}}
+- Supported Kubernetes versions for KubeSphere 3.0.0: 1.15.x, 1.16.x, 1.17.x, 1.18.x.
+- Ubuntu is used for the operating system here as an example. For more information on supported systems, see Overview.
+- 2 nodes are included in this example. You can add more nodes based on your own needs, especially in a production environment.
+- The machine type t3.medium (2 vCPUs, 4 GB memory) is for minimal installation. If you want to enable pluggable components or use the cluster for production, please select a machine type with more resources.
+- For other settings, you can change them as well based on your own needs or use the default values.
+
+{{</ notice >}}
+
+- When the EKS cluster is ready, you can connect to the cluster with kubectl.
+## Configure kubectl
+We will use the kubectl command-line utility to communicate with the cluster API server. First, we need to get the kubeconfig of the EKS cluster that was just created.
+- Configure your AWS CLI credentials.
+```shell
+$ aws configure
+AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
+AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
+Default region name [None]: region-code
+Default output format [None]: json
+```
+- To create your kubeconfig file with the AWS CLI:
+
+```shell
+aws eks --region us-west-2 update-kubeconfig --name cluster_name
+```
+  - By default, the resulting configuration file is created at the default kubeconfig path (.kube/config) in your home directory or merged with an existing kubeconfig at that location. You can specify another path with the --kubeconfig option.
+
+  - You can specify an IAM role ARN with the --role-arn option to use for authentication when you issue kubectl commands. Otherwise, the IAM entity in your default AWS CLI or SDK credential chain is used. You can view your default AWS CLI or SDK identity by running the aws sts get-caller-identity command.
+
+For more information, see the help page with the aws eks update-kubeconfig help command or see update-kubeconfig in the [AWS CLI Command Reference](https://docs.aws.amazon.com/eks/latest/userguide/security_iam_id-based-policy-examples.html).
+- Test your configuration.
+  ```shell
+  kubectl get svc
+  ```
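+
+If `kubectl get svc` fails with an authorization error, it can help to confirm which IAM identity your CLI is actually using (the output below is illustrative only):
+
+```shell
+aws sts get-caller-identity
+# {
+#     "UserId": "AIDA...",
+#     "Account": "111122223333",
+#     "Arn": "arn:aws:iam::111122223333:user/your-user"
+# }
+```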
+
+## Install KubeSphere on EKS
+
+- Install KubeSphere using kubectl. The following command is only for the default minimal installation.
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
+```
+![minimal-install](/images/docs/eks/minimal-install.png)
+
+- Apply the default cluster configuration. If you want to customize it (for example, to enable pluggable components), download the file, edit it locally, and apply your local copy instead.
+```shell
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
+```
+![config-install](/images/docs/eks/config-install.png)
+
+- Inspect the logs of installation:
+
+```bash
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+```
+
+- When the installation finishes, you can see the following message:
+
+```bash
+#####################################################
+### Welcome to KubeSphere! ###
+#####################################################
+Account: admin
+Password: P@88w0rd
+NOTES:
+  1. After logging into the console, please check the
+     monitoring status of service components in
+     the "Cluster Management". If any service is not
+     ready, please wait patiently until all components
+     are ready.
+  2. Please modify the default password after login.
+#####################################################
+https://kubesphere.io           2020-xx-xx xx:xx:xx
+```
+
+## Access KubeSphere Console
+
+Now that KubeSphere is installed, you can access the web console of KubeSphere by following the steps below.
+
+- Find the service **ks-console**.
+```shell
+kubectl get svc -n kubesphere-system
+```
+
+- Run `kubectl edit svc ks-console -n kubesphere-system` and change the type from `NodePort` to `LoadBalancer`. Save the file when you finish.
+![loadbalancer](/images/docs/eks/loadbalancer.png)
+
+- Run `kubectl get svc -n kubesphere-system` again and get your external IP.
+  ![external-ip](/images/docs/eks/external-ip.png)
+
+- Access the web console of KubeSphere using the external IP generated by EKS.
+
+- Log in to the console with the default account and password (`admin/P@88w0rd`). On the cluster overview page, you can see the dashboard as shown in the following image.
+
+![gke-cluster](https://ap3.qingstor.com/kubesphere-website/docs/gke-cluster.png)
+
+## Enable Pluggable Components (Optional)
+
+The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see Enable Pluggable Components for more details.
diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke.md
new file mode 100644
index 000000000..82191080d
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke.md
@@ -0,0 +1,132 @@
+---
+title: "Deploy KubeSphere on GKE"
+keywords: 'Kubernetes, KubeSphere, GKE, Installation'
+description: 'How to install KubeSphere on GKE'
+
+weight: 2265
+---
+
+![KubeSphere+GKE](https://pek3b.qingstor.com/kubesphere-docs/png/20191123145223.png)
+
+This guide walks you through the steps of deploying KubeSphere on [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/).
+
+## Prepare a GKE Cluster
+
+- A standard Kubernetes cluster in GKE is a prerequisite of installing KubeSphere. Go to the navigation menu and refer to the image below to create a cluster.
+
+![create-cluster-gke](https://ap3.qingstor.com/kubesphere-website/docs/create-cluster-gke.jpg)
+
+- In **Cluster basics**, select a Master version. The static version `1.15.12-gke.2` is used here as an example.
+
+![](https://ap3.qingstor.com/kubesphere-website/docs/master-version.png)
+
+- In **default-pool** under **Node Pools**, define 3 nodes in this cluster.
+
+![node-number](https://ap3.qingstor.com/kubesphere-website/docs/node-number.png)
+
+- Go to **Nodes**, select the image type and set the Machine Configuration as below. When you finish, click **Create**.
+
+![machine-config](https://ap3.qingstor.com/kubesphere-website/docs/machine-configuration.jpg)
+
+{{< notice note >}}
+
+- Supported Kubernetes versions for KubeSphere 3.0.0: 1.15.x, 1.16.x, 1.17.x, 1.18.x.
+- Ubuntu is used for the operating system here as an example. For more information on supported systems, see Overview.
+- 3 nodes are included in this example. You can add more nodes based on your own needs, especially in a production environment.
+- The machine type e2-medium (2 vCPUs, 4 GB memory) is for minimal installation. If you want to enable pluggable components or use the cluster for production, please select a machine type with more resources.
+- For other settings, you can change them as well based on your own needs or use the default values.
+
+{{</ notice >}}
+
+- When the GKE cluster is ready, you can connect to the cluster with Cloud Shell.
+
+![cloud-shell-gke](https://ap3.qingstor.com/kubesphere-website/docs/cloud-shell.png)
+
+## Install KubeSphere on GKE
+
+- Install KubeSphere using kubectl. The following command is only for the default minimal installation.
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
+```
+
+- Create a local cluster-configuration.yaml.
+
+```bash
+vi cluster-configuration.yaml
+```
+
+- Copy all the content in this [file](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it to your local cluster-configuration.yaml. Navigate to `metrics_server`, and change `true` to `false` for `enabled`.
+
+![change-metrics-server](https://ap3.qingstor.com/kubesphere-website/docs/true-false.png)
+
+{{< notice warning >}}
+
+Metrics Server is already installed on GKE. If you do not disable `metrics_server` in the cluster-configuration.yaml file, it will cause an issue and stop the installation process.
+
+{{</ notice >}}
+
+- Save the file when you finish. Execute the following command to start the installation:
+
+```bash
+kubectl apply -f cluster-configuration.yaml
+```
+
+- Inspect the logs of installation:
+
+```bash
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+```
+
+- When the installation finishes, you can see the following message:
+
+```bash
+#####################################################
+### Welcome to KubeSphere! ###
+#####################################################
+Console: http://10.128.0.44:30880
+Account: admin
+Password: P@88w0rd
+NOTES:
+  1. After logging into the console, please check the
+     monitoring status of service components in
+     the "Cluster Management". If any service is not
+     ready, please wait patiently until all components
+     are ready.
+  2. Please modify the default password after login.
+#####################################################
+https://kubesphere.io           2020-xx-xx xx:xx:xx
+```
+
+## Access KubeSphere Console
+
+Now that KubeSphere is installed, you can access the web console of KubeSphere by following the steps below.
+
+- In **Services & Ingress**, select the service **ks-console**.
+
+![ks-console](https://ap3.qingstor.com/kubesphere-website/docs/console-service.jpg)
+
+- In **Service details**, click **Edit** and change the type from `NodePort` to `LoadBalancer`. Save the file when you finish.
+
+![lb-change](https://ap3.qingstor.com/kubesphere-website/docs/lb-change.jpg)
+
+- Access the web console of KubeSphere using the endpoint generated by GKE.
+
+![access-console](https://ap3.qingstor.com/kubesphere-website/docs/access-console.png)
+
+{{< notice tip >}}
+
+Instead of changing the service type to `LoadBalancer`, you can also access the KubeSphere console via `NodeIP:NodePort` (service type set to `NodePort`). You may need to open port `30880` in firewall rules.
+
+{{</ notice >}}
+
+- Log in to the console with the default account and password (`admin/P@88w0rd`). On the cluster overview page, you can see the dashboard as shown in the following image.
+
+![gke-cluster](https://ap3.qingstor.com/kubesphere-website/docs/gke-cluster.png)
+
+## Enable Pluggable Components (Optional)
+
+The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see Enable Pluggable Components for more details.
+
diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md
new file mode 100644
index 000000000..dfc8c7211
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md
@@ -0,0 +1,9 @@
+---
+title: "Install KubeSphere on Huawei Cloud CCE"
+keywords: 'Kubernetes, KubeSphere, CCE, Installation, Huaweicloud'
+description: 'How to install KubeSphere on Huawei Cloud CCE'
+
+weight: 2268
+---
+
+TBD
diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke.md
new file mode 100644
index 000000000..b9acfbddf
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke.md
@@ -0,0 +1,152 @@
+---
+title: "Deploy KubeSphere on Oracle OKE"
+keywords: 'Kubernetes, KubeSphere, OKE, Installation, Oracle-cloud'
+description: 'How to install KubeSphere on Oracle OKE'
+
+weight: 2247
+---
+
+This guide walks you through the steps of deploying KubeSphere on [Oracle Kubernetes Engine](https://www.oracle.com/cloud/compute/container-engine-kubernetes.html).
+
+## Create a Kubernetes Cluster
+
+- A standard Kubernetes cluster in OKE is a prerequisite of installing KubeSphere. Go to the navigation menu and refer to the image below to create a cluster.
+
+![oke-cluster](https://ap3.qingstor.com/kubesphere-website/docs/oke-cluster.jpg)
+
+- In the pop-up window, select **Quick Create** and click **Launch Workflow**.
+
+![oke-quickcreate](https://ap3.qingstor.com/kubesphere-website/docs/oke-quickcreate.jpg)
+
+{{< notice note >}}
+
+In this example, **Quick Create** is used for demonstration, which automatically creates all the resources necessary for a cluster in Oracle Cloud. If you select **Custom Create**, you need to create all the resources (such as VCN and LB Subnets) yourself.
+
+{{</ notice >}}
+
+- Next, you need to set up the cluster with basic information. Here is an example for your reference. When you finish, click **Next**.
+
+![](https://ap3.qingstor.com/kubesphere-website/docs/cluster-setting.jpg)
+
+{{< notice note >}}
+
+- Supported Kubernetes versions for KubeSphere 3.0.0: 1.15.x, 1.16.x, 1.17.x, 1.18.x.
+- It is recommended that you select **Public** for **Visibility Type**, which will assign a public IP address to every node. The IP address can be used later to access the web console of KubeSphere.
+- In Oracle Cloud, a Shape is a template that determines the number of CPUs, amount of memory, and other resources that are allocated to an instance. `VM.Standard.E2.2 (2 CPUs and 16G Memory)` is used in this example. For more information, see [Standard Shapes](https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/References/computeshapes.htm#vmshapes__vm-standard).
+- 3 nodes are included in this example. You can add more nodes based on your own needs, especially in a production environment.
+
+{{</ notice >}}
+
+- Review the cluster information and click **Create Cluster** if no adjustment is needed.
+
+![](https://ap3.qingstor.com/kubesphere-website/docs/create-cluster.jpg)
+
+- After the cluster is created, click **Close**.
+
+![cluster-ready](https://ap3.qingstor.com/kubesphere-website/docs/cluster-ready.jpg)
+
+- Make sure the Cluster Status is **Active** and click **Access Cluster**.
+
+![access-cluster](https://ap3.qingstor.com/kubesphere-website/docs/access-cluster.jpg)
+
+- In the pop-up window, select **Cloud Shell Access** to access the cluster. Click **Launch Cloud Shell** and copy the code provided by Oracle Cloud.
+
+![cloud-shell-access](https://ap3.qingstor.com/kubesphere-website/docs/cloudshell-access.png)
+
+- In Cloud Shell, paste and run the copied command so that you can execute the installation commands later.
+
+![cloud-shell-oke](https://ap3.qingstor.com/kubesphere-website/docs/oke-cloud-shell.png)
+
+{{< notice warning >}}
+
+If you do not copy and execute the command above, you cannot proceed with the steps below.
+
+{{</ notice >}}
+
+## Install KubeSphere on OKE
+
+- Install KubeSphere using kubectl. The following commands are only for the default minimal installation.
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
+```
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
+```
+
+- Inspect the logs of installation:
+
+```bash
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+```
+
+- When the installation finishes, you can see the following message:
+
+```bash
+#####################################################
+### Welcome to KubeSphere! ###
+#####################################################
+
+Console: http://10.0.10.2:30880
+Account: admin
+Password: P@88w0rd
+
+NOTES:
+  1. After logging into the console, please check the
+     monitoring status of service components in
+     the "Cluster Management". If any service is not
+     ready, please wait patiently until all components
+     are ready.
+  2. Please modify the default password after login.
+
+#####################################################
+https://kubesphere.io           20xx-xx-xx xx:xx:xx
+```
+
+## Access KubeSphere Console
+
+Now that KubeSphere is installed, you can access the web console of KubeSphere either through `NodePort` or `LoadBalancer`.
+
+- Check the service of the KubeSphere console through the following command:
+
+```bash
+kubectl get svc -n kubesphere-system
+```
+
+- The output may look as below. You can change the type to `LoadBalancer` so that the external IP address can be exposed.
+
+![console-nodeport](https://ap3.qingstor.com/kubesphere-website/docs/nodeport-console.jpg)
+
+{{< notice tip >}}
+
+It can be seen above that the service `ks-console` is being exposed through NodePort, which means you can access the console directly via `NodeIP:NodePort` (the public IP address of any node is applicable). You may need to open port `30880` in firewall rules.
+
+{{</ notice >}}
+
+- Execute the following command to edit the service configuration.
+
+```bash
+kubectl edit svc ks-console -o yaml -n kubesphere-system
+```
+
+- Navigate to `type` and change `NodePort` to `LoadBalancer`. Save the configuration after you finish.
+
+![](https://ap3.qingstor.com/kubesphere-website/docs/change-service-type.png)
+
+- Execute the following command again and you can see the IP address displayed as below.
+
+```bash
+kubectl get svc -n kubesphere-system
+```
+
+![console-service](https://ap3.qingstor.com/kubesphere-website/docs/console-service.png)
+
+- Log in to the console through the external IP address with the default account and password (`admin/P@88w0rd`). On the cluster overview page, you can see the dashboard shown below:
+
+![kubesphere-oke-dashboard](https://ap3.qingstor.com/kubesphere-website/docs/kubesphere-oke-dashboard.png)
+
+## Enable Pluggable Components (Optional)
+
+The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see Enable Pluggable Components for more details.
+
diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/master-ha.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/master-ha.md
deleted file mode 100644
index ee8f26203..000000000
--- a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/master-ha.md
+++ /dev/null
@@ -1,152 +0,0 @@
----
-title: "High Availability Configuration"
-keywords: "kubesphere, kubernetes, docker, installation, HA, high availability"
-description: "The guide for installing a highly available KubeSphere cluster"
-
-weight: 2230
----
-
-## Introduction
-
-[Multi-node installation](../multi-node) can help you to quickly set up a single-master cluster on multiple machines for development and testing. However, we need to consider the high availability of the cluster for production. Since the key components on the master node, i.e. kube-apiserver, kube-scheduler, and kube-controller-manager, run on a single master node, Kubernetes and KubeSphere will be unavailable if that master goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers and multiple masters. You can use any cloud load balancer or any hardware load balancer (e.g. F5). In addition, Keepalived and HAProxy are also an alternative for creating such a high-availability cluster.
-
-This document walks you through an example of how to create two [QingCloud Load Balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal and external load balancer respectively, and how to configure the high availability of masters and etcd using the load balancers.
-
-## Prerequisites
-
-- Please make sure that you have already read [Multi-Node installation](../multi-node). This document only demonstrates how to configure load balancers.
-- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or follow the guide of any other cloud provider to create load balancers.
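-
-If you take the Keepalived and HAProxy route instead, the critical piece is a TCP frontend for kube-apiserver. The sketch below is illustrative only: the master IPs match the example hosts used later in this document, and the health-check details and the Keepalived VIP setup must be adapted to your environment.
-
-```bash
-# Illustrative only: append an apiserver frontend/backend to haproxy.cfg on the LB nodes.
-cat <<'EOF' | sudo tee -a /etc/haproxy/haproxy.cfg
-frontend kube-apiserver
-    bind *:6443
-    mode tcp
-    default_backend kube-masters
-
-backend kube-masters
-    mode tcp
-    balance roundrobin
-    server master1 192.168.0.1:6443 check
-    server master2 192.168.0.2:6443 check
-    server master3 192.168.0.3:6443 check
-EOF
-sudo systemctl restart haproxy
-```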
-
-## Architecture
-
-This example prepares six machines running CentOS 7.5. We will create two load balancers, and deploy three masters and etcd nodes on three of the machines. You can configure these masters and etcd nodes in `conf/hosts.ini`.
-
-![Master and etcd node high availability architecture](https://pek3b.qingstor.com/kubesphere-docs/png/20200307215924.png)
-
-## Install HA Cluster
-
-### Step 1: Create Load Balancers
-
-This step briefly shows an example of creating a load balancer on QingCloud platform.
-
-#### Create an Internal Load Balancer
-
-1.1. Log in to the [QingCloud Console](https://console.qingcloud.com/login) and select **Network & CDN → Load Balancers**, then click on the create button and fill in the basic information.
-
-1.2. From the **Network** dropdown list, choose the VxNet that your machines were created in; in this example it is `kube`. Other settings can keep the default values as follows. Click **Submit** to complete the creation.
-
-![Create Internal LB on QingCloud](https://pek3b.qingstor.com/kubesphere-docs/png/20200215224125.png)
-
-1.3. Drill into the detail page of the load balancer, then create a listener that listens on port `6443` over the `TCP` protocol.
-
-- Name: Define a name for this listener
-- Listener Protocol: Select the `TCP` protocol
-- Port: `6443`
-- Load mode: `Poll`
-
-> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that the port `6443` has been added to the firewall rules and that external traffic can pass through `6443`. Otherwise, the installation will fail.
-
-![Add Listener to LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225205.png)
-
-1.4. Click **Add Backend** and choose the VxNet `kube` that we chose before. Then click the **Advanced Search** button, choose the three master nodes under the VxNet, and set the port to `6443`, which is the default secure port of the api-server.
-
-Click **Submit** when you are done.
-
-![Choose Backends](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225550.png)
-
-1.5. Click the **Apply Changes** button to activate the configurations. At this point, you can find that the three masters have been added as the backend servers of the listener behind the internal load balancer.
-
-> Please note: The status of all masters might show `Not available` after you add them as backends. This is normal, since port `6443` of the api-server is not active on the masters yet. The status will change to `Active` and the api-server port will be exposed after the installation completes, which means the internal load balancer you configured works as expected.
-
-![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215230107.png)
-
-#### Create an External Load Balancer
-
-You need to create an EIP in advance.
-
-1.6. Similarly, create an external load balancer without joining any network, but associate the EIP that you created with this load balancer.
-
-1.7. Enter the load balancer detail page and create a listener that listens on port `30880` over the `HTTP` protocol, which is the NodePort of the KubeSphere console.
-
-> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that the port `30880` has been added to the firewall rules and that external traffic can pass through `30880`. Otherwise, the installation will fail.
-
-![Create external LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232114.png)
-1.8. Click **Add Backend**, then choose the six machines on which we are going to install KubeSphere within the VxNet `kube`, and set the port to `30880`.
-
-Click **Submit** when you are done.
-
-1.9. Click the **Apply Changes** button to activate the configurations. At this point, you can find that the six machines have been added as the backend servers of the listener behind the external load balancer.
-
-![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232445.png)
-
-### Step 2: Modify the host.ini
-
-Go to the taskbox where you downloaded the installer by following the [Multi-node Installation](../multi-node) and complete the following configurations.
-
-| **Parameter** | **Description** |
-|--------------------------|------------------|
-| `[all]` | Node information. Use the following syntax if you run the installation as the `root` user:<br> - `<node_name> ansible_connection=local ip=<node_ip>`<br> - `<node_name> ansible_host=<node_ip> ip=<node_ip> ansible_ssh_pass=<password>`<br> If you log in as a non-root user, use the syntax:<br> - `<node_name> ansible_connection=<connection_type> ip=<node_ip> ansible_user=<user_name> ansible_become_pass=<password>` |
-| `[kube-master]` | master node names |
-| `[kube-node]` | worker node names |
-| `[etcd]` | etcd node names. The number of `etcd` nodes needs to be odd. |
-| `[k8s-cluster:children]` | group names of `[kube-master]` and `[kube-node]` |
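-
-Before running the installer, it can save time to verify that the taskbox can actually reach every host in this inventory. A quick sketch using the Ansible that ships with the installer (the inventory path assumes you run it from the unpacked installer directory; adjust as needed):
-
-```bash
-# Ping every host declared in the inventory; each node should answer "pong".
-ansible -i conf/hosts.ini all -m ping
-```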
-
-We use **CentOS 7.5** with the `root` user to install an HA cluster. Please see the following configuration as an example:
-
-> Note:
->
-> If the _taskbox_ cannot establish an `ssh` connection with the rest of the nodes, try to use the non-root user configuration.
-
-#### host.ini example
-
-```ini
-[all]
-master1 ansible_connection=local ip=192.168.0.1
-master2 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
-master3 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
-node1 ansible_host=192.168.0.4 ip=192.168.0.4 ansible_ssh_pass=PASSWORD
-node2 ansible_host=192.168.0.5 ip=192.168.0.5 ansible_ssh_pass=PASSWORD
-node3 ansible_host=192.168.0.6 ip=192.168.0.6 ansible_ssh_pass=PASSWORD
-
-[kube-master]
-master1
-master2
-master3
-
-[kube-node]
-node1
-node2
-node3
-
-[etcd]
-master1
-master2
-master3
-
-[k8s-cluster:children]
-kube-node
-kube-master
-```
-
-### Step 3: Configure the Load Balancer Parameters
-
-Besides configuring `common.yaml` by following the [Multi-node Installation](../multi-node), you need to modify the load balancer information in `common.yaml`. Assuming the **VIP** address and listening port of the **internal load balancer** are `192.168.0.253` and `6443`, you can refer to the following example.
-
-> - Note that address and port should be indented by two spaces in `common.yaml`, and the address should be the VIP.
-> - The domain name of the load balancer is "lb.kubesphere.local" by default for internal access. If you need to change the domain name, please uncomment and modify it.
-
-#### The configuration sample in common.yaml
-
-```yaml
-## External LB example config
-## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
-loadbalancer_apiserver:
-  address: 192.168.0.253
-  port: 6443
-```
-
-Finally, please refer to the [guide](../storage-configuration) to configure the persistent storage service in `common.yaml` and start your HA cluster installation.
-
-Then you are ready to install the high-availability KubeSphere cluster.
diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/multi-node.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/multi-node.md
deleted file mode 100644
index d1cd790ea..000000000
--- a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/multi-node.md
+++ /dev/null
@@ -1,176 +0,0 @@
----
-title: "Multi-node Installation"
-keywords: 'kubesphere, kubernetes, docker, kubesphere installer'
-description: 'The guide for installing KubeSphere on Multi-Node in development or testing environment'
-
-weight: 2220
----
-
-`Multi-Node` installation enables installing KubeSphere on multiple nodes. Typically, any one node is used as the _taskbox_ to run the installation task. Note that `ssh` communication must be established between the taskbox and the other nodes.
-
-- The following instructions are for the default installation without enabling any optional components, as we have made them pluggable since v2.1.0. If you want to enable any of them, please read [Enable Pluggable Components](../pluggable-components).
-- If your machines in total have >= 8 cores and >= 16 G memory, we recommend installing the full package of KubeSphere by [Enabling Optional Components](../complete-installation).
-- The installation time depends on your network bandwidth, your computer configuration, the number of nodes, etc.
-
-## Prerequisites
-
-If your machine is behind a firewall, you need to open the required ports; see the document [Ports Requirements](../port-firewall) for more information.
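-
-For example, on a CentOS node protected by firewalld, opening the two ports that appear throughout this guide might look like the sketch below (illustrative only; open the full list from the ports document for a real cluster):
-
-```bash
-# Open the KubeSphere console port and the apiserver port, then reload the rules.
-sudo firewall-cmd --permanent --add-port=30880/tcp
-sudo firewall-cmd --permanent --add-port=6443/tcp
-sudo firewall-cmd --reload
-```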
-## Step 1: Prepare Linux Hosts
-
-The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
-
-- Time synchronization is required across all nodes, otherwise the installation may not succeed;
-- For the `Ubuntu 16.04` OS, it is recommended to select `16.04.5`;
-- If you are using `Ubuntu 18.04`, you need to use the user `root`;
-- If the Debian system does not have the sudo command installed, you need to execute the `apt update && apt install sudo` command as root before installation.
-
-### Hardware Recommendation
-
-- KubeSphere can be installed on any cloud platform.
-- The installation speed can be accelerated by increasing network bandwidth.
-- If you choose air-gapped installation, ensure the disk of each node is at least 100 G.
-
-| System | Minimum Requirements (Each node) |
-| --- | --- |
-| CentOS 7.4 ~ 7.7 (64 bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
-| Ubuntu 16.04/18.04 LTS (64 bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
-| Red Hat Enterprise Linux Server 7.4 (64 bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
-| Debian Stretch 9.5 (64 bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
-
-The following section describes an example to introduce multi-node installation. This example shows a three-host installation, with the `master` node serving as the taskbox to execute the installation. The following cluster consists of one Master and two Nodes.
-
-> Note: KubeSphere supports the high-availability configuration of the Masters and etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
-
-| Host IP | Host Name | Role |
-| --- | --- | --- |
-| 192.168.0.1 | master | master, etcd |
-| 192.168.0.2 | node1 | node |
-| 192.168.0.3 | node2 | node |
-
-### Cluster Architecture
-
-#### Single Master, Single Etcd, Two Nodes
-
-![Architecture](/cluster-architecture.svg)
-
-## Step 2: Download Installer Package
-
-**1.** Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
-
-```bash
-curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
-&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
-```
-
-**2.** Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
-
-> Note:
->
-> - If you use a non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
-> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, you need to refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
-> - master, node1 and node2 are the host names of the nodes, and all host names should be in lowercase.
-
-### hosts.ini
-
-```ini
-[all]
-master ansible_connection=local ip=192.168.0.1
-node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
-node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
-
-[kube-master]
-master
-
-[kube-node]
-node1
-node2
-
-[etcd]
-master
-
-[k8s-cluster:children]
-kube-node
-kube-master
-```
-
-> Note:
->
-> - You need to replace each node's information, such as IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
-> - The "master" node also takes the roles of master and etcd, so "master" is listed under both the group `[kube-master]` and the group `[etcd]`.
-> - "node1" and "node2" both serve the role of `Node`, so they are listed under the group `[kube-node]`.
->
-> Parameters Specification:
->
-> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
-> - `ansible_host`: The name of the host to be connected.
-> - `ip`: The IP of the host to be connected.
-> - `ansible_user`: The default ssh user name to use.
-> - `ansible_become_pass`: Allows you to set the privilege escalation password.
-> - `ansible_ssh_pass`: The password of the host to be connected as root.
-
-## Step 3: Install KubeSphere to Linux Machines
-
-> Note:
->
-> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
-> - If you want to enable pluggable feature components, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
-> - The Installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environments, please [configure a supported persistent storage service](../storage-configuration) before installation.
-> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not use these two IP ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
-
-**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
-
-```bash
-cd ../scripts
-./install.sh
-```
-
-**2.** Type `2` to select the multi-node mode to start the installation. The installer will ask you whether you have set up a persistent storage service. Just type `yes`, since we are going to use a local volume.
-
-```bash
-################################################
-         KubeSphere Installer Menu
-################################################
-* 1) All-in-one
-* 2) Multi-node
-* 3) Quit
-################################################
-https://kubesphere.io/       2020-02-24
-################################################
-Please input an option: 2

-```
-
-**3.** Verify the multi-node installation:
-
-**(1).** If "successful" is returned after the `install.sh` process completes, then congratulations! You are ready to go.
-
-```bash
-successful!
-#####################################################
-### Welcome to KubeSphere! ###
-#####################################################
-
-Console: http://192.168.0.1:30880
-Account: admin
-Password: P@88w0rd
-
-NOTE: Please modify the default password after login.
-#####################################################
-```
-
-> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
-
-**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in.
-
-![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
-
-Note: After logging in to the console, please verify the monitoring status of service components in the "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.
-
-![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
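-
-If you would rather check from the taskbox than from the web page, a rough equivalent (assuming kubectl was set up on this node by the installer) is to list any pods that are not yet running:
-
-```bash
-# Any pod not in the Running or Completed state still needs time (or attention).
-kubectl get pods --all-namespaces | grep -vE 'Running|Completed'
-```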
-
-## FAQ
-
-The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud, and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also, please read the [FAQ of installation](../../faq/faq-install).
-
-If you have any further questions, please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/storage-configuration.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/storage-configuration.md
deleted file mode 100644
index a3d8d5156..000000000
--- a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/storage-configuration.md
+++ /dev/null
@@ -1,157 +0,0 @@
----
-title: "StorageClass Configuration"
-keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
-description: 'Instructions for Setting up StorageClass for KubeSphere'
-
-weight: 2250
----
-
-Currently, the Installer supports the following [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/), providing persistent storage service for KubeSphere (more storage classes will be supported soon).
-
-- NFS
-- Ceph RBD
-- GlusterFS
-- QingCloud Block Storage
-- QingStor NeonSAN
-- Local Volume (for development and testing only)
-
-The versions of storage systems and corresponding CSI plugins in the table listed below have been well tested.
-
-| **Name** | **Version** | **Reference** |
-| ----------- | --- | --- |
-| Ceph RBD Server | v0.94.10 | For development and testing, refer to [Install Ceph Storage Server](/zh-CN/appendix/ceph-ks-install/) for details. Please refer to the [Ceph Documentation](http://docs.ceph.com/docs/master/) for production. |
-| Ceph RBD Client | v12.2.5 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [Ceph RBD](../storage-configuration/#ceph-rbd) |
-| GlusterFS Server | v3.7.6 | For development and testing, refer to [Deploying GlusterFS Storage Server](/zh-CN/appendix/glusterfs-ks-install/) for details. Please refer to the [Gluster Documentation](https://www.gluster.org/install/) or the [Gluster Documentation](http://gluster.readthedocs.io/en/latest/Install-Guide/Install/) for production. Note you need to install [Heketi Manager (V3.0.0)](https://github.com/heketi/heketi/tree/master/docs/admin). |
-| GlusterFS Client | v3.12.10 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [GlusterFS](../storage-configuration/#glusterfs) |
-| NFS Client | v3.1.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared the NFS storage server. Please see [NFS Client](../storage-configuration/#nfs) |
-| QingCloud-CSI | v0.2.0.1 | You need to configure the corresponding parameters in `common.yaml` before installing KubeSphere. Please refer to [QingCloud CSI](../storage-configuration/#qingcloud-csi) for details |
-| NeonSAN-CSI | v0.3.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared the QingStor NeonSAN storage server. Please see [NeonSAN-CSI](../storage-configuration/#neonsan-csi) |
-
-> Note: You are only allowed to set ONE default storage class in the cluster. To specify a default storage class, make sure there is no default storage class already existing in the cluster.
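-
-A quick way to check whether a default storage class is already set, and to clear the flag if necessary, is the standard Kubernetes annotation shown below (replace `old-default` with the class name reported by the first command):
-
-```bash
-# The current default class is marked "(default)" in this listing.
-kubectl get storageclass
-# Clear the default flag on the existing default class before designating a new one.
-kubectl patch storageclass old-default -p \
-  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
-```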
-
-## Storage Configuration
-
-After preparing the storage server, you need to refer to the parameter descriptions in the following tables, then modify the corresponding configurations in `conf/common.yaml` accordingly.
-
-The following describes the storage configuration in `common.yaml`.
-
-> Note: Local Volume is configured as the default storage class in `common.yaml` by default. If you are going to set another storage class as the default, disable the Local Volume and modify the configuration for the other storage class.
-
-### Local Volume (For development or testing only)
-
-A [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) represents a mounted local storage device such as a disk, partition or directory. Local volumes can only be used as a statically created PersistentVolume. We recommend using Local volumes for testing or development only, since they make it quick and easy to install KubeSphere without the struggle of setting up a persistent storage server. Refer to the following table for the definitions in `conf/common.yaml`.
-
-| **Local volume** | **Description** |
-| --- | --- |
-| local\_volume\_provisioner\_enabled | Whether to use Local as the persistent storage, defaults to true |
-| local\_volume\_provisioner\_storage\_class | Storage class name, default value: local |
-| local\_volume\_is\_default\_class | Whether to set Local as the default storage class, defaults to true. |
-
-### NFS
-
-An NFS volume allows an existing NFS (Network File System) share to be mounted into your Pod. NFS can be configured in `conf/common.yaml`. Note that you need to prepare the NFS server in advance.
-
-| **NFS** | **Description** |
-| --- | --- |
-| nfs\_client\_enable | Whether to use NFS as the persistent storage, defaults to false |
-| nfs\_client\_is\_default\_class | Whether to set NFS as the default storage class, defaults to false. |
-| nfs\_server | The NFS server address, either IP or hostname |
-| nfs\_path | NFS shared directory, which is the file directory shared on the server, see the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/volumes/#nfs) |
-| nfs\_vers3\_enabled | Specifies which version of the NFS protocol to use; defaults to false, which means v4. True means v3 |
-| nfs\_archiveOnDelete | Archive the PVC when deleting. It will automatically remove data from `oldPath` when it is set to false |
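-
-As an illustration, a minimal NFS section in `conf/common.yaml` built from the parameters above might look as follows (the server address and export path are placeholders for your own NFS server):
-
-```bash
-vim conf/common.yaml
----
-nfs_client_enable: true
-nfs_client_is_default_class: true   # set to true only after disabling the Local Volume default
-nfs_server: 192.168.0.100           # placeholder: your NFS server IP or hostname
-nfs_path: /mnt/kubesphere           # placeholder: the exported directory
----
-```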
-
-### Ceph RBD
-
-The open source [Ceph RBD](https://ceph.com/) distributed storage system can be configured for use in `conf/common.yaml`. You need to prepare the Ceph storage server in advance. Please refer to the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) for more details.
-
-| **Ceph\_RBD** | **Description** |
-| --- | --- |
-| ceph\_rbd\_enabled | Whether to use Ceph RBD as the persistent storage, defaults to false |
-| ceph\_rbd\_storage\_class | Storage class name |
-| ceph\_rbd\_is\_default\_class | Whether to set Ceph RBD as the default storage class, defaults to false |
-| ceph\_rbd\_monitors | Ceph monitors, comma delimited. This parameter is required and depends on the Ceph RBD server parameters |
-| ceph\_rbd\_admin\_id | Ceph client ID that is capable of creating images in the pool. Defaults to "admin" |
-| ceph\_rbd\_admin\_secret | Secret for adminId, the secret name for "adminId". This parameter is required. The provided secret must have type "kubernetes.io/rbd" |
-| ceph\_rbd\_pool | Ceph RBD pool. Default is "rbd" |
-| ceph\_rbd\_user\_id | Ceph client ID that is used to map the RBD image. Default is the same as adminId |
-| ceph\_rbd\_user\_secret | Secret for userId. This secret is required to be created in the namespace that uses the RBD image |
-| ceph\_rbd\_fsType | fsType that is supported by Kubernetes. Default: "ext4" |
-| ceph\_rbd\_imageFormat | Ceph RBD image format, "1" or "2". Default is "1" |
-| ceph\_rbd\_imageFeatures | This parameter is optional and should only be used if you set imageFormat to "2". Currently supported features are layering only. Default is "", and no features are turned on |
-
-> Note:
->
-> The Ceph secret used in the storage class, like "ceph_rbd_admin_secret" and "ceph_rbd_user_secret", is retrieved using the following command on the Ceph storage server.
-
-```bash
-ceph auth get-key client.admin
-```
-
-### GlusterFS
-
-[GlusterFS](https://docs.gluster.org/en/latest/) is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. You need to prepare the GlusterFS storage server in advance. Please refer to the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs) for further information.
-
-| **GlusterFS (requires a GlusterFS cluster managed by Heketi)** | **Description** |
-| --- | --- |
-| glusterfs\_provisioner\_enabled | Whether to use GlusterFS as the persistent storage, defaults to false |
-| glusterfs\_provisioner\_storage\_class | Storage class name |
-| glusterfs\_is\_default\_class | Whether to set GlusterFS as the default storage class, defaults to false |
-| glusterfs\_provisioner\_restauthenabled | Gluster REST service authentication boolean that enables authentication to the REST server |
-| glusterfs\_provisioner\_resturl | Gluster REST service/Heketi service URL that provisions gluster volumes on demand. The general format should be "IP address:Port", and this is a mandatory parameter for the GlusterFS dynamic provisioner |
-| glusterfs\_provisioner\_clusterid | Optional. For example, 630372ccdc720a92c681fb928f27b53f is the ID of the cluster that will be used by Heketi when provisioning the volume. It can also be a list of cluster IDs |
-| glusterfs\_provisioner\_restuser | Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool |
-| glusterfs\_provisioner\_secretName | Optional. Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. The Installer will automatically create this secret in kube-system |
-| glusterfs\_provisioner\_gidMin | The minimum value of the GID range for the storage class |
-| glusterfs\_provisioner\_gidMax | The maximum value of the GID range for the storage class |
-| glusterfs\_provisioner\_volumetype | The volume type and its parameters can be configured with this optional value, for example: 'Replica volume': volumetype: replicate:3 |
-| jwt\_admin\_key | The "jwt.admin.key" field from "/etc/heketi/heketi.json" on the Heketi server |
-
-**Attention:**
-
- > Please note: `"glusterfs_provisioner_clusterid"` can be retrieved from the GlusterFS server by running the following command:
-
- ```bash
- export HEKETI_CLI_SERVER=http://localhost:8080
- heketi-cli cluster list
- ```
-
-### QingCloud Block Storage
-
-[QingCloud Block Storage](https://docs.qingcloud.com/product/Storage/volume/) is supported in KubeSphere as the persistent storage service. If you would like to experience dynamic provisioning when creating volumes, we recommend using it as your persistent storage solution. KubeSphere integrates [QingCloud-CSI](https://github.com/yunify/qingcloud-csi/blob/master/README_zh.md), which allows you to use the various block storage services of QingCloud. With simple configuration, you can quickly expand and clone PVCs, view the topology of PVCs, create and delete snapshots, and restore volumes from snapshots.
KubeSphere integrates [QingCloud-CSI](https://github.com/yunify/qingcloud-csi/blob/master/README_zh.md), and allows you to use various block storage services of QingCloud. With simple configuration, you can quickly expand and clone PVCs, view the topology of PVCs, create and delete snapshots, and restore volumes from snapshots. - -The QingCloud-CSI plugin implements the standard CSI. You can easily create and manage different types of volumes in KubeSphere, which are provided by QingCloud. The corresponding PVCs will be created with the ReadWriteOnce access mode and mounted to running Pods. - -QingCloud-CSI supports creating the following five types of volumes in QingCloud: - -- High capacity -- Standard -- SSD Enterprise -- Super high performance -- High performance - -|**QingCloud-CSI** | **Description**| -| --- | ---| -| qingcloud\_csi\_enabled|Whether to use QingCloud-CSI as the persistent storage volume, defaults to false | -| qingcloud\_csi\_is\_default\_class| Whether to set QingCloud-CSI as the default storage class, defaults to false | -| qingcloud\_access\_key\_id ,
qingcloud\_secret\_access\_key | Please obtain them from [QingCloud Console](https://console.qingcloud.com/login) | -|qingcloud\_zone| Zone should be the same as the zone where the Kubernetes cluster is installed, and the CSI plugin will operate on the storage volumes for this zone. For example, zone can be set to values such as sh1a (Shanghai 1-A), sh1b (Shanghai 1-B), pek2 (Beijing 2), pek3a (Beijing 3-A), pek3b (Beijing 3-B), pek3c (Beijing 3-C), gd1 (Guangdong 1), gd2a (Guangdong 2-A), ap1 (Asia Pacific 1), ap2a (Asia Pacific 2-A) | -| type | The type of volume on the QingCloud platform, where 0 represents high performance volume, 3 represents super high performance volume, and 1 or 2 represents high capacity volume depending on the cluster's zone, see [QingCloud Documentation](https://docs.qingcloud.com/product/api/action/volume/create_volumes.html)| -| maxSize, minSize | Limit the range of the volume size in GiB| -| stepSize | Set the increment of the volume size in GiB| -| fsType | The file system of the storage volume, which supports ext3, ext4 and xfs. The default is ext4| - -### QingStor NeonSAN - -The NeonSAN-CSI plugin supports the enterprise-level distributed storage [QingStor NeonSAN](https://www.qingcloud.com/products/qingstor-neonsan/) as the persistent storage solution. You need to prepare the NeonSAN server, and then configure the NeonSAN-CSI plugin to connect to its storage server in `conf/common.yaml`. Please refer to [NeonSAN-CSI Reference](https://github.com/wnxn/qingstor-csi/blob/master/docs/reference_zh.md#storageclass-%E5%8F%82%E6%95%B0) for further information. - -| **NeonSAN** | **Description** | -| --- | --- | -| neonsan\_csi\_enabled | Whether to use NeonSAN as the persistent storage, defaults to false | -| neonsan\_csi\_is\_default\_class | Whether to set NeonSAN-CSI as the default storage class, defaults to false| -| neonsan\_csi\_protocol | Transport protocol, such as TCP or RDMA. The user must set this option| -| neonsan\_server\_address | NeonSAN server address | -| neonsan\_cluster\_name| NeonSAN server cluster name| -| neonsan\_server\_pool|A comma-separated list of pools that the plugin manages. The user must set this option; the default value is kube| -| neonsan\_server\_replicas|NeonSAN image replica count. Default: 1| -| neonsan\_server\_stepSize|Set the increment of the volume size in GiB. Default: 1| -| neonsan\_server\_fsType|The file system to use for the volume. Default: ext4| diff --git a/content/zh/docs/installing-on-kubernetes/introduction/_index.md b/content/zh/docs/installing-on-kubernetes/introduction/_index.md index 2cf101ca5..5ad62e229 100644 --- a/content/zh/docs/installing-on-kubernetes/introduction/_index.md +++ b/content/zh/docs/installing-on-kubernetes/introduction/_index.md @@ -1,5 +1,5 @@ --- -linkTitle: "Installation" +linkTitle: "Introduction" weight: 2100 _build: render: false diff --git a/content/zh/docs/installing-on-kubernetes/introduction/intro.md b/content/zh/docs/installing-on-kubernetes/introduction/intro.md deleted file mode 100644 index a176c3255..000000000 --- a/content/zh/docs/installing-on-kubernetes/introduction/intro.md +++ /dev/null @@ -1,93 +0,0 @@ ---- -title: "Introduction" -keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus' -description: 'KubeSphere Installation Overview' - -linkTitle: "Introduction" -weight: 2110 ---- - -[KubeSphere](https://kubesphere.io/) is an enterprise-grade multi-tenant container platform built on [Kubernetes](https://kubernetes.io).
It provides an easy-to-use UI for users to manage application workloads and computing resources with a few clicks, which greatly reduces the learning curve and the complexity of daily work such as development, testing, operation and maintenance. KubeSphere aims to alleviate the pain points of Kubernetes, including storage, network, security and ease of use. - -KubeSphere supports installing on cloud-hosted and on-premises Kubernetes clusters, e.g. native K8s, GKE, EKS, RKE, etc. It also supports installing on Linux hosts, including virtual machines and bare metal, provisioning a fresh Kubernetes cluster. Both methods make it easy to install KubeSphere. Meanwhile, KubeSphere offers not only an online installer, but also an air-gapped installer for environments with no access to the internet. - -KubeSphere is an open source project on [GitHub](https://github.com/kubesphere). Thousands of users are using KubeSphere, and many of them are running KubeSphere for their production workloads. - -In summary, there are several installation options you can choose from. Please note that not all options are mutually exclusive. For instance, you can deploy KubeSphere with minimal packages on an existing K8s cluster on multiple nodes in an air-gapped environment. The decision tree shown in the following graph may help you find the option for your own situation. - -- [All-in-One](../all-in-one): Install KubeSphere on a single node. It is only for users to quickly get familiar with KubeSphere. -- [Multi-Node](../multi-node): Install KubeSphere on multiple nodes. It is for testing or development. -- [Install KubeSphere on Air Gapped Linux](../install-ks-on-linux-airgapped): All images of KubeSphere have been encapsulated into a package, which is convenient for air-gapped installation on Linux machines. -- [High Availability Multi-Node](../master-ha): Install high availability KubeSphere on multiple nodes, which is used for production environments. -- [KubeSphere on Existing K8s](../install-on-k8s): Deploy KubeSphere on your Kubernetes cluster, including cloud-hosted services such as GKE, EKS, etc. -- [KubeSphere on Air-Gapped K8s](../install-on-k8s-airgapped): Install KubeSphere on a disconnected Kubernetes cluster. -- Minimal Packages: Only install the minimal required system components of KubeSphere. The minimum resource requirement is as low as 1 core and 2G memory. -- [Full Packages](../complete-installation): Install all available system components of KubeSphere, including DevOps, service mesh, application store, etc. - -![Installer Options](https://pek3b.qingstor.com/kubesphere-docs/png/20200305093158.png) - -## Before Installation - -- As the installation will pull images and update the operating system from the internet, your environment must have internet access. If not, you need to use the air-gapped installer instead. - -For all-in-one installation, the only node is both the master and the worker. - -For multi-node installation, you are asked to specify the node roles in the configuration file before installation. - Your Linux host must have OpenSSH Server installed. - Please check the [ports requirements](../port-firewall) before installation. - -## Quick Install For Development and Testing - -KubeSphere has decoupled some components since v2.1.0. The installer only installs required components by default, which brings the benefits of fast installation and minimal resource consumption.
If you want to install any optional component, please check the following section [Pluggable Components Overview](../intro#pluggable-components-overview) for details. - -The quick install of KubeSphere is only for development or testing since it uses local volume for storage by default. If you want a production installation, please refer to the section [High Availability Installation for Production Environment](../intro#high-availability-installation-for-production-environment). - -### 1. Install KubeSphere on Linux - -- [All-in-One](../all-in-one): A hassle-free single-node installation with one click. -- [Multi-Node](../multi-node): It allows you to install KubeSphere on multiple instances using local volume, which means it is not required to install a storage server such as Ceph or GlusterFS. - -> Note: With regard to air-gapped installation, please refer to [Install KubeSphere on Air Gapped Linux Machines](../install-ks-on-linux-airgapped). - -### 2. Install KubeSphere on Existing Kubernetes - -You can install KubeSphere on your existing Kubernetes cluster. Please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions. - -## High Availability Installation for Production Environment - -### 1. Install HA KubeSphere on Linux - -The KubeSphere installer supports installing a highly available cluster for production, with the prerequisites being a load balancer and persistent storage service set up in advance. - -- [Persistent Service Configuration](../storage-configuration): By default, KubeSphere Installer uses [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning in the Kubernetes cluster. It is convenient for the quick install of a testing environment. A production environment must have a storage server set up. Please refer to [Persistent Service Configuration](../storage-configuration) for details. -- [Load Balancer Configuration for HA install](../master-ha): Before you get started with multi-node installation in a production environment, you need to configure a load balancer. Either a cloud LB or `HAproxy + keepalived` works for the installation. - -### 2. Install HA KubeSphere on Existing Kubernetes - -Before you install KubeSphere on existing Kubernetes, please check the prerequisites of the installation on Linux described above, and verify whether the existing Kubernetes satisfies these prerequisites, i.e., a load balancer and persistent storage service. - -If your Kubernetes is ready, please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions. - -> You can install KubeSphere on a cloud Kubernetes service, see for example [Installing KubeSphere on GKE cluster](../install-on-gke). - -## Pluggable Components Overview - -KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable any of them before or after installation. The installer by default does not install the pluggable components. Please check the guide [Enable Pluggable Components Installation](../pluggable-components) for your requirements. - -![Pluggable Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191207140846.png) - -## Storage Configuration Instruction - -The following links explain how to configure different types of persistent storage services.
Please refer to [Storage Configuration Instruction](../storage-configuration) for detailed instructions regarding how to configure the storage class in KubeSphere. - -- [NFS](https://kubernetes.io/docs/concepts/storage/volumes/#nfs) -- [GlusterFS](https://www.gluster.org/) -- [Ceph RBD](https://ceph.com/) -- [QingCloud Block Storage](https://docs.qingcloud.com/product/storage/volume/) -- [QingStor NeonSAN](https://docs.qingcloud.com/product/storage/volume/super_high_performance_shared_volume/) - -## Add New Nodes - -KubeSphere Installer allows you to scale the number of nodes, see [Add New Nodes](../add-nodes). - -## Uninstall - -Uninstall will remove KubeSphere from the machines. This operation is irreversible and dangerous. Please check [Uninstall](../uninstall). diff --git a/content/zh/docs/installing-on-kubernetes/introduction/overview.md b/content/zh/docs/installing-on-kubernetes/introduction/overview.md new file mode 100644 index 000000000..2352c730f --- /dev/null +++ b/content/zh/docs/installing-on-kubernetes/introduction/overview.md @@ -0,0 +1,76 @@ +--- +title: "Overview" +keywords: "KubeSphere, Kubernetes, Installation" +description: "Overview of KubeSphere Installation on Kubernetes" + +linkTitle: "Overview" +weight: 2105 +--- + +![KubeSphere+K8s](https://pek3b.qingstor.com/kubesphere-docs/png/20191123144507.png) + +As part of KubeSphere's commitment to providing a plug-and-play architecture for users, it can be easily installed on existing Kubernetes clusters. More specifically, KubeSphere can be deployed on Kubernetes either hosted on clouds (e.g. AWS EKS, QingCloud QKE and Google GKE) or on-premises. This is because KubeSphere does not hack Kubernetes itself. It only interacts with the Kubernetes API to manage Kubernetes cluster resources. In other words, KubeSphere can be installed on any native Kubernetes cluster and Kubernetes distribution. + +This section gives you an overview of the general steps of installing KubeSphere on Kubernetes. For more information about the specific way of installation in different environments, see Installing on Hosted Kubernetes and Installing on On-premises Kubernetes. + +{{< notice note >}} + +Please read the prerequisites before you install KubeSphere on existing Kubernetes clusters. + +{{</ notice >}} + +## Deploy KubeSphere + +After you make sure your existing Kubernetes cluster meets all the requirements, you can use kubectl to trigger the default minimal installation of KubeSphere. + +- Execute the following commands to start installation: + +```bash +kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml +``` + +```bash +kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml +``` + +{{< notice note >}} + +If your server has trouble accessing GitHub, you can copy the content in [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml) and [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) respectively and paste it into local files. You can then use `kubectl apply -f` with the local files to install KubeSphere.
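+For example, a minimal sketch of that workaround, assuming you have already saved the content of the two manifests into local files with the same names:
+
+```bash
+# Apply the locally saved manifests instead of the GitHub URLs
+kubectl apply -f kubesphere-installer.yaml
+kubectl apply -f cluster-configuration.yaml
+```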
+ +{{</ notice >}} + +- Inspect the logs of installation: + +```bash +kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f +``` + +{{< notice tip >}} + +In some environments, you may find the installation process stopped by issues related to `metrics_server`, as some cloud providers have already installed the metrics server on their platforms. In this case, please manually create a local cluster-configuration.yaml file (copy the [content](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) to it). In this file, disable `metrics_server` by changing `true` to `false` for `enabled`, and use `kubectl apply -f cluster-configuration.yaml` to apply it. + +{{</ notice >}} + +- Use `kubectl get pod --all-namespaces` to see whether all pods are running normally in relevant namespaces of KubeSphere. If they are, check the port (30880 by default) of the console through the following command: + +```bash +kubectl get svc/ks-console -n kubesphere-system +``` + +- Make sure port 30880 is open in your security groups and access the web console through the NodePort (`IP:30880`) with the default account and password (`admin/P@88w0rd`). + +![kubesphere-console](https://ap3.qingstor.com/kubesphere-website/docs/login.png) + +## Enable Pluggable Components (Optional) + +If you start with a default minimal installation, refer to Enable Pluggable Components to install other components. + +{{< notice tip >}} + +- Pluggable components can be enabled either before or after the installation. Please refer to the example file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/blob/master/deploy/cluster-configuration.yaml) for more details. +- Make sure there is enough CPU and memory available in your cluster. +- It is highly recommended that you install these pluggable components to discover the full-stack features and capabilities provided by KubeSphere. + +{{</ notice >}} + + diff --git a/content/zh/docs/installing-on-kubernetes/introduction/port-firewall.md b/content/zh/docs/installing-on-kubernetes/introduction/port-firewall.md deleted file mode 100644 index 875c2e9b0..000000000 --- a/content/zh/docs/installing-on-kubernetes/introduction/port-firewall.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: "Port Requirements" -keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus' -description: '' - -linkTitle: "Requirements" -weight: 2120 ---- - - -KubeSphere requires certain ports to communicate among services, so you need to make sure the following ports are open for use. - -| Service | Protocol | Action | Start Port | End Port | Notes | -|---|---|---|---|---|---| -| ssh | TCP | allow | 22 | | | -| etcd | TCP | allow | 2379 | 2380 | | -| apiserver | TCP | allow | 6443 | | | -| calico | TCP | allow | 9099 | 9100 | | -| bgp | TCP | allow | 179 | | | -| nodeport | TCP | allow | 30000 | 32767 | | -| master | TCP | allow | 10250 | 10258 | | -| dns | TCP | allow | 53 | | | -| dns | UDP | allow | 53 | | | -| local-registry | TCP | allow | 5000 | | Required for air-gapped environments| -| local-apt | TCP | allow | 5080 | | Required for air-gapped environments| -| rpcbind | TCP | allow | 111 | | When using NFS as storage server | -| ipip | IPIP | allow | | | Calico network requires ipip protocol | - -**Note** - -Please note that when you use the Calico network plugin and run your cluster in a classic network in a cloud environment, you need to open the IPIP protocol for the source IP.
For instance, the following is a sample on QingCloud showing how to open the IPIP protocol. - -![](https://pek3b.qingstor.com/kubesphere-docs/png/20200304200605.png) diff --git a/content/zh/docs/installing-on-kubernetes/introduction/prerequisites.md b/content/zh/docs/installing-on-kubernetes/introduction/prerequisites.md new file mode 100644 index 000000000..7dcebc354 --- /dev/null +++ b/content/zh/docs/installing-on-kubernetes/introduction/prerequisites.md @@ -0,0 +1,54 @@ +--- +title: "Prerequisites" +keywords: "KubeSphere, Kubernetes, Installation, Prerequisites" +description: "The prerequisites of installing KubeSphere on existing Kubernetes" + +linkTitle: "Prerequisites" +weight: 2125 +--- + + + +Not only can KubeSphere be installed on virtual machines and bare metal with provisioned Kubernetes, but it also supports installing on cloud-hosted and on-premises existing Kubernetes clusters, as long as your Kubernetes cluster meets the prerequisites below. + +- Kubernetes version: `1.15.x, 1.16.x, 1.17.x, 1.18.x`; +- CPU > 1 Core; Memory > 2 G; +- A default Storage Class is configured in your Kubernetes cluster; use `kubectl get sc` to verify it. +- The CSR signing feature is activated in kube-apiserver when it is started with the `--cluster-signing-cert-file` and `--cluster-signing-key-file` parameters. See [RKE installation issue](https://github.com/kubesphere/kubesphere/issues/1925#issuecomment-591698309). + +## Pre-checks + +1. Make sure your Kubernetes version is compatible by running `kubectl version` on your cluster node. The output may look like the following: + +```bash +$ kubectl version +Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} +Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} +``` + +{{< notice note >}} + +Pay attention to the `Server Version` line. If `GitVersion` shows an older version, you need to upgrade Kubernetes first. Please refer to [Upgrading kubeadm clusters from v1.14 to v1.15](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/). + +{{</ notice >}} + +2. Check if the available resources in your cluster meet the minimum requirements. + +```bash +$ free -g + total used free shared buff/cache available +Mem: 16 4 10 0 3 2 +Swap: 0 0 0 +``` + +3. Check if there is a default Storage Class in your cluster. An existing Storage Class is a prerequisite for KubeSphere installation. + +```bash +$ kubectl get sc +NAME PROVISIONER AGE +glusterfs (default) kubernetes.io/glusterfs 3d4h +``` + +If your Kubernetes cluster environment meets all the requirements above, then you are ready to deploy KubeSphere on your existing Kubernetes cluster. + +For more information, see Overview of Installing on Kubernetes.
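+For reference, if `kubectl get sc` lists a storage class but none is marked `(default)`, the following is a hedged sketch of marking one as the default (assuming a class named `glusterfs`, as in the output above):
+
+```bash
+# Annotate the storage class so Kubernetes treats it as the default
+kubectl patch storageclass glusterfs -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
+```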
\ No newline at end of file diff --git a/content/zh/docs/installing-on-kubernetes/introduction/vars.md b/content/zh/docs/installing-on-kubernetes/introduction/vars.md deleted file mode 100644 index cda3aa5db..000000000 --- a/content/zh/docs/installing-on-kubernetes/introduction/vars.md +++ /dev/null @@ -1,107 +0,0 @@ ---- -title: "Common Configurations" -keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus' -description: 'Configure cluster parameters before installing' - -linkTitle: "Kubernetes Cluster Configuration" -weight: 2130 ---- - -This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter. - -```yaml -######################### Kubernetes ######################### -# The default k8s version will be installed -kube_version: v1.16.7 - -# The default etcd version will be installed -etcd_version: v3.2.18 - -# Configure a cron job to backup etcd data, which is running on etcd machines. -# Period of running backup etcd job, the unit is minutes. -# The default value 30 means backup etcd every 30 minutes. -etcd_backup_period: 30 - -# How many backup replicas to keep. -# The default value5 means to keep latest 5 backups, older ones will be deleted by order. -keep_backup_number: 5 - -# The location to store etcd backups files on etcd machines. -etcd_backup_dir: "/var/backups/kube_etcd" - -# Add other registry. (For users who need to accelerate image download) -docker_registry_mirrors: - - https://docker.mirrors.ustc.edu.cn - - https://registry.docker-cn.com - - https://mirror.aliyuncs.com - -# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere. -kube_network_plugin: calico - -# A valid CIDR range for Kubernetes services, -# 1. should not overlap with node subnet -# 2. should not overlap with Kubernetes pod subnet -kube_service_addresses: 10.233.0.0/18 - -# A valid CIDR range for Kubernetes pod subnet, -# 1. should not overlap with node subnet -# 2. should not overlap with Kubernetes services subnet -kube_pods_subnet: 10.233.64.0/18 - -# Kube-proxy proxyMode configuration, either ipvs, or iptables -kube_proxy_mode: ipvs - -# Maximum pods allowed to run on every node. 
-kubelet_max_pods: 110 - -# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information -enable_nodelocaldns: true - -# Highly Available loadbalancer example config -# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name -# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install -# address: 192.168.0.10 # Loadbalancer apiserver IP address -# port: 6443 # apiserver port - -######################### KubeSphere ######################### - -# Version of KubeSphere -ks_version: v2.1.0 - -# KubeSphere console port, range 30000-32767, -# but 30180/30280/30380 are reserved for internal service -console_port: 30880 # KubeSphere console nodeport - -#CommonComponent -mysql_volume_size: 20Gi # MySQL PVC size -minio_volume_size: 20Gi # Minio PVC size -etcd_volume_size: 20Gi # etcd PVC size -openldap_volume_size: 2Gi # openldap PVC size -redis_volume_size: 2Gi # Redis PVC size - - -# Monitoring -prometheus_replica: 2 # Prometheus replicas, 2 by default, which are responsible for monitoring different segments of the data source and also provide high availability. -prometheus_memory_request: 400Mi # Prometheus request memory -prometheus_volume_size: 20Gi # Prometheus PVC size -grafana_enabled: true # Enable Grafana or not - - -## Container Engine Acceleration -## Use nvidia gpu acceleration in containers -# nvidia_accelerator_enabled: true # Enable the Nvidia GPU accelerator or not. It supports hybrid nodes with GPU and CPU installed. -# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04 -# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini -``` - -## How to Configure a GPU Node - -You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`; then in the file `common.yaml`, specify the following configuration. Please be aware that `- node2` has a two-space indent. - -```yaml - nvidia_accelerator_enabled: true - nvidia_gpu_nodes: - - node2 -``` - -> Note: The GPU node now only supports Ubuntu 16.04. \ No newline at end of file diff --git a/content/zh/docs/installing-on-kubernetes/on-prem-kubernetes/_index.md b/content/zh/docs/installing-on-kubernetes/on-prem-kubernetes/_index.md index cd927f966..e81dbde7e 100644 --- a/content/zh/docs/installing-on-kubernetes/on-prem-kubernetes/_index.md +++ b/content/zh/docs/installing-on-kubernetes/on-prem-kubernetes/_index.md @@ -1,7 +1,7 @@ --- -linkTitle: "Install on Linux" -weight: 2200 +linkTitle: "Installing on On-premises Kubernetes" +weight: 2300 _build: render: false ---- \ No newline at end of file +--- diff --git a/content/zh/docs/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md b/content/zh/docs/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md index 26b3e4f04..550766807 100644 --- a/content/zh/docs/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md +++ b/content/zh/docs/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md @@ -7,218 +7,4 @@ description: 'How to install KubeSphere on air-gapped Linux machines' weight: 2240 --- -The air-gapped installation is almost the same as the online installation except that it creates a local registry to host the Docker images.
We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment. - -> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues). - -## Prerequisites - -- If your machine is behind a firewall, you need to open the ports by following the document [Ports Requirements](../port-firewall). -> - Installer will use `/var/lib/docker` as the default directory where all Docker related files, including the images, are stored. We recommend adding additional storage with disks of at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively; use the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference. -- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare the [high availability configuration](../master-ha) before installation. - -Since the air-gapped machines cannot connect to an apt or yum source, please use clean Linux machines to avoid this problem. - -## Step 1: Prepare Linux Hosts - -The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements. - -- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit) -- Time synchronization is required across all nodes, otherwise the installation may not succeed; -- For the `Ubuntu 16.04` OS, it is recommended to select `16.04.5`; -- If you are using `Ubuntu 18.04`, you need to use the user `root`. -- Ensure the disk of each node is at least 100G. -- CPU and memory in total of all machines: 2 cores and 4 GB for minimal installation; 8 cores and 16 GB for complete installation. - - -The following section describes an example to introduce multi-node installation. This example shows a three-host installation with the `master` node serving as the taskbox to execute the installation. The following cluster consists of one Master and two Nodes. - -> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance. - -| Host IP | Host Name | Role | -| --- | --- | --- | -|192.168.0.1|master|master, etcd| -|192.168.0.2|node1|node| -|192.168.0.3|node2|node| - -### Cluster Architecture - -#### Single Master, Single Etcd, Two Nodes - -![Architecture](/cluster-architecture.svg) - -## Step 2: Download Installer Package - -Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`. - -```bash -curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \ -&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf -``` - -## Step 3: Configure Host Template - -> This step is only for multi-node installation; you can skip this step if you choose the all-in-one installation. - -Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using the root user.
The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file. - -> Note: -> -> - If you use a non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`. -> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, you need to refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`. -> - master, node1 and node2 are the host names of each node, and all host names should be in lowercase. - -### hosts.ini - -```ini -[all] -master ansible_connection=local ip=192.168.0.1 -node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD -node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD - -[local-registry] -master - -[kube-master] -master - -[kube-node] -node1 -node2 - -[etcd] -master - -[k8s-cluster:children] -kube-node -kube-master -``` - -> Note: -> -> - You need to replace each node's information, such as IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it. -> - Installer will use a node as the local registry for docker images, which defaults to "master" in the group `[local-registry]`. -> - The "master" node also takes the roles of master and etcd, so "master" is filled under the group `[kube-master]` and the group `[etcd]` respectively. -> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`. -> -> Parameters Specification: -> -> - `ansible_connection`: Connection type to the host, "local" in the example above means local connection. -> - `ansible_host`: The name of the host to be connected. -> - `ip`: The IP of the host to be connected. -> - `ansible_user`: The default ssh user name to use. -> - `ansible_become_pass`: Allows you to set the privilege escalation password. -> - `ansible_ssh_pass`: The password of the host to be connected using root. - -## Step 4: Enable All Components - -> This step is for the complete installation. You can skip this step if you choose a minimal installation. - -Edit `conf/common.yaml` and reference the following changes, setting to `true` the values that are `false` by default. - -```yaml -# LOGGING CONFIGURATION -# logging is an optional component when installing KubeSphere, and -# Kubernetes builtin logging APIs will be used if logging_enabled is set to false. -# Builtin logging only provides limited functions, so it is recommended to enable logging. -logging_enabled: true # Whether to install the logging system -elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use an even number -elasticsearch_data_replica: 2 # total number of data nodes -elasticsearch_volume_size: 20Gi # Elasticsearch volume size -log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default. -elk_prefix: logstash # the string making up index names. The index name will be formatted as ks--log -kibana_enabled: false # Whether to install built-in Kibana -#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address, KubeSphere supports integrating with Elasticsearch outside the cluster, which can reduce the resource consumption.
-#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port - -#DevOps Configuration -devops_enabled: true # Whether to install the built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image) -jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default -jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default -jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default -jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters -jenkinsJavaOpts_Xmx: 6g -jenkinsJavaOpts_MaxRAM: 8g -sonarqube_enabled: true # Whether to install built-in SonarQube -#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address, KubeSphere supports integrating with SonarQube outside the cluster, which can reduce the resource consumption. -#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token - -# Following components are all optional for KubeSphere, -# which can be turned on before installation or later by updating their values to true -openpitrix_enabled: true # KubeSphere application store -metrics_server_enabled: true # For KubeSphere HPA to use -servicemesh_enabled: true # KubeSphere service mesh system (Istio-based) -notification_enabled: true # KubeSphere notification system -alerting_enabled: true # KubeSphere alerting system -``` - -## Step 5: Install KubeSphere to Linux Machines - -> Note: -> -> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default. -> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions. -> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation. -> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two IP ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts. - -**1.** Enter the `scripts` folder, and execute `install.sh` using the `root` user: - -```bash -cd ../scripts -./install.sh -``` - -**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you whether you have set up a persistent storage service. Just type `yes` since we are going to use local volume. - -```bash -################################################ - KubeSphere Installer Menu -################################################ -* 1) All-in-one -* 2) Multi-node -* 3) Quit -################################################ -https://kubesphere.io/ 2020-02-24 -################################################ -Please input an option: 2 - -``` - -**3.** Verify the multi-node installation: - -**(1).** If "Successful" is returned after the `install.sh` process completes, then congratulations! You are ready to go. - -```bash -successsful! -##################################################### -### Welcome to KubeSphere! ### -##################################################### - -Console: http://192.168.0.1:30880 -Account: admin -Password: P@88w0rd - -NOTE:Please modify the default password after login.
-##################################################### -``` - -> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components). - -**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in. - -![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png) - -Note: After logging in to the console, please verify the monitoring status of service components in the "Cluster Status". If any service is not ready, please wait patiently until all components are up and running. - -![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png) - -## Enable Pluggable Components - -If you have already set up a minimal installation, you can still edit the ConfigMap of ks-installer using the following command. Please make sure there are enough resources on your machines, see [Pluggable Components Overview](/en/installation/pluggable-components/). - -```bash -kubectl edit cm -n kubesphere-system ks-installer -``` - -## FAQ - -If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues). +TBD diff --git a/content/zh/docs/installing-on-kubernetes/uninstalling/_index.md b/content/zh/docs/installing-on-kubernetes/uninstalling/_index.md new file mode 100644 index 000000000..55d950cfd --- /dev/null +++ b/content/zh/docs/installing-on-kubernetes/uninstalling/_index.md @@ -0,0 +1,7 @@ +--- +linkTitle: "Uninstalling" +weight: 2300 + +_build: + render: false +--- diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-linux-airgapped.md b/content/zh/docs/installing-on-kubernetes/uninstalling/uninstalling-kubesphere-from-k8s.md similarity index 99% rename from content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-linux-airgapped.md rename to content/zh/docs/installing-on-kubernetes/uninstalling/uninstalling-kubesphere-from-k8s.md index 26b3e4f04..6f4531add 100644 --- a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-linux-airgapped.md +++ b/content/zh/docs/installing-on-kubernetes/uninstalling/uninstalling-kubesphere-from-k8s.md @@ -1,7 +1,7 @@ --- -title: "Air-Gapped Installation" +title: "Uninstalling KubeSphere from Kubernetes" keywords: 'kubernetes, kubesphere, air gapped, installation' -description: 'How to install KubeSphere on air-gapped Linux machines' +description: 'How to uninstall KubeSphere from Kubernetes' weight: 2240 diff --git a/content/zh/docs/installing-on-linux/_index.md b/content/zh/docs/installing-on-linux/_index.md index 2442646b9..08045fdd9 100644 --- a/content/zh/docs/installing-on-linux/_index.md +++ b/content/zh/docs/installing-on-linux/_index.md @@ -18,6 +18,6 @@ In this chapter, we will demonstrate how to use KubeKey to provision a new Kuber Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
-{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} +{{< popularPage icon="/images/docs/qingcloud-2.svg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} {{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} diff --git a/content/zh/docs/installing-on-linux/cluster-operation/_index.md b/content/zh/docs/installing-on-linux/cluster-operation/_index.md new file mode 100644 index 000000000..f57fde055 --- /dev/null +++ b/content/zh/docs/installing-on-linux/cluster-operation/_index.md @@ -0,0 +1,7 @@ +--- +linkTitle: "Cluster Operation" +weight: 2445 + +_build: + render: false +--- diff --git a/content/zh/docs/installing-on-linux/cluster-operation/add-new-nodes.md b/content/zh/docs/installing-on-linux/cluster-operation/add-new-nodes.md new file mode 100644 index 000000000..0f002c77a --- /dev/null +++ b/content/zh/docs/installing-on-linux/cluster-operation/add-new-nodes.md @@ -0,0 +1,66 @@ +--- +title: "Add New Nodes" +keywords: 'kubernetes, kubesphere, scale, add-nodes' +description: 'How to add new nodes in an existing cluster' + + +weight: 2340 +--- + +After you use KubeSphere for a certain time, you will most likely need to scale out your cluster as workloads increase. In this scenario, KubeSphere provides a script to add new nodes to the cluster. Fundamentally, the operation is based on the kubelet's registration mechanism, i.e., the new nodes will automatically join the existing Kubernetes cluster. + +{{< notice tip >}} +From v3.0.0, the brand-new installer [KubeKey](https://github.com/kubesphere/kubekey) supports scaling master and worker nodes from a single-node (all-in-one) cluster. +{{</ notice >}} + +### Step 1: Modify the Host Configuration + +KubeSphere supports hybrid environments, that is, the newly added host OS can be CentOS or Ubuntu. When the new machines are ready, add the new machine information to the `hosts` and `roleGroups` sections of the file `config-sample.yaml`. + +{{< notice warning >}} +Do not modify the host name of the original nodes (e.g. master1) when adding new nodes. +{{</ notice >}} + +For example, if you started the installation with [all-in-one](../../quick-start/all-in-one-on-linux) and you want to add new nodes to the single-node cluster, you can create a configuration file using KubeKey. + +``` +# Assume your original Kubernetes cluster is v1.17.9 +./kk create config --with-kubesphere --with-kubernetes v1.17.9 +``` + +The following section demonstrates how to add two nodes (i.e. `node1` and `node2`) using the `root` user as an example; it assumes the host name of your first machine is `master1` (replace the following host names with yours).
+ +```yaml +spec: + hosts: + - {name: master1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: root, password: Qcloud@123} + - {name: node1, address: 192.168.0.4, internalAddress: 192.168.0.4, user: root, password: Qcloud@123} + - {name: node2, address: 192.168.0.5, internalAddress: 192.168.0.5, user: root, password: Qcloud@123} + roleGroups: + etcd: + - master1 + master: + - master1 + worker: + - node1 + - node2 +... +``` + +### Step 2: Execute the Add-node Command + +Execute the following command to apply the changes: + +```bash +./kk add nodes -f config-sample.yaml +``` + +Finally, you will be able to see the new nodes and their information on the KubeSphere console after a successful return. Select **Nodes → Cluster Nodes** from the left menu, or use the `kubectl get node` command to see the changes. + +``` +kubectl get node +NAME STATUS ROLES AGE VERSION +master1 Ready master,worker 20d v1.17.9 +node1 Ready worker 31h v1.17.9 +node2 Ready worker 31h v1.17.9 +``` diff --git a/content/zh/docs/installing-on-linux/cluster-operation/remove-nodes.md b/content/zh/docs/installing-on-linux/cluster-operation/remove-nodes.md new file mode 100644 index 000000000..6ccfe68af --- /dev/null +++ b/content/zh/docs/installing-on-linux/cluster-operation/remove-nodes.md @@ -0,0 +1,28 @@ +--- +title: "Remove Nodes" +keywords: 'kubernetes, kubesphere, scale, remove-nodes' +description: 'How to remove nodes from an existing cluster' + + +weight: 2345 +--- + +## Cordon a Node + +Marking a node as unschedulable prevents the scheduler from placing new pods onto that Node, but does not affect existing Pods on the Node. This is useful as a preparatory step before a node reboot or other maintenance. + +To mark a Node unschedulable, choose **Nodes → Cluster Nodes** from the menu, then find the node you want to remove from the cluster and click the **Cordon** button. It has the same effect as the command `kubectl cordon $NODENAME`; see [Kubernetes Nodes](https://kubernetes.io/docs/concepts/architecture/nodes/) for more details. + +![Cordon a Node](https://ap3.qingstor.com/kubesphere-website/docs/20200828232951.png) + +{{< notice note >}} +Pods that are part of a DaemonSet tolerate being run on an unschedulable Node. DaemonSets typically provide node-local services that should run on the Node even if it is being drained of workload applications.
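+Before deleting a node, you may also want to drain it so that its regular workloads are rescheduled onto other nodes. The following is a hedged sketch, assuming the node to be removed is named `node2` (flag availability may vary slightly across kubectl versions):
+
+```bash
+# Evict regular Pods from the node while leaving DaemonSet-managed Pods in place
+kubectl drain node2 --ignore-daemonsets
+```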
+{{}} + +## Delete a Node + +You can delete the node by the following command: + +``` +./kk delete node -f config-sample.yaml +``` diff --git a/content/zh/docs/installing-on-linux/introduction/_index.md b/content/zh/docs/installing-on-linux/introduction/_index.md index 2cf101ca5..341d72ff1 100644 --- a/content/zh/docs/installing-on-linux/introduction/_index.md +++ b/content/zh/docs/installing-on-linux/introduction/_index.md @@ -1,7 +1,7 @@ --- -linkTitle: "Installation" +linkTitle: "Introduction" weight: 2100 _build: render: false ---- \ No newline at end of file +--- diff --git a/content/zh/docs/installing-on-linux/introduction/intro.md b/content/zh/docs/installing-on-linux/introduction/intro.md index a176c3255..18b7733a4 100644 --- a/content/zh/docs/installing-on-linux/introduction/intro.md +++ b/content/zh/docs/installing-on-linux/introduction/intro.md @@ -1,76 +1,81 @@ --- -title: "Introduction" -keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus' -description: 'KubeSphere Installation Overview' +title: "Overview" +keywords: 'Kubernetes, KubeSphere, Linux, Installation' +description: 'Overview of Installing KubeSphere on Linux' -linkTitle: "Introduction" +linkTitle: "Overview" weight: 2110 --- -[KubeSphere](https://kubesphere.io/) is an enterprise-grade multi-tenant container platform built on [Kubernetes](https://kubernetes.io). It provides an easy-to-use UI for users to manage application workloads and computing resources with a few clicks, which greatly reduces the learning curve and the complexity of daily work such as development, testing, operation and maintenance. KubeSphere aims to alleviate the pain points of Kubernetes including storage, network, security and ease of use, etc. +For the installation on Linux, KubeSphere can be installed both in clouds and in on-premises environments, such as AWS EC2, Azure VM and bare metal. Users can install KubeSphere on Linux hosts as they provision fresh Kubernetes clusters. The installation process is easy and friendly. Meanwhile, KubeSphere offers not only the online installer, or [KubeKey](https://github.com/kubesphere/kubekey), but also an air-gapped installation solution for the environment with no Internet access. -KubeSphere supports installing on cloud-hosted and on-premises Kubernetes cluster, e.g. native K8s, GKE, EKS, RKE, etc. It also supports installing on Linux host including virtual machine and bare metal with provisioning fresh Kubernetes cluster. Both of the two methods are easy and friendly to install KubeSphere. Meanwhile, KubeSphere offers not only online installer, but air-gapped installer for such environment with no access to the internet. +As an open-source project on [GitHub](https://github.com/kubesphere), KubeSphere is home to a community with thousands of users. Many of them are running KubeSphere for their production workloads. -KubeSphere is open source project on [GitHub](https://github.com/kubesphere). There are thousands of users are using KunbeSphere, and many of them are running KubeSphere for their production workloads. +Users are provided with multiple installation options. Please note not all options are mutually exclusive. For instance, you can deploy KubeSphere with minimal packages on multiple nodes in an air-gapped environment. -In summary, there are several installation options you can choose. Please note not all options are mutually exclusive. For instance, you can deploy KubeSphere with minimal packages on existing K8s cluster on multiple nodes in air-gapped environment. 
Here is the decision tree shown in the following graph you may reference for your own situation. - -- [All-in-One](../all-in-one): Intall KubeSphere on a singe node. It is only for users to quickly get familar with KubeSphere. +- [All-in-One](../all-in-one): Install KubeSphere on a single node. It is only for users to quickly get familiar with KubeSphere. - [Multi-Node](../multi-node): Install KubeSphere on multiple nodes. It is for testing or development. -- [Install KubeSphere on Air Gapped Linux](../install-ks-on-linux-airgapped): All images of KubeSphere have been encapsulated into a package, it is convenient for air gapped installation on Linux machines. -- [High Availability Multi-Node](../master-ha): Install high availability KubeSphere on multiple nodes which is used for production environment. -- [KubeSphere on Existing K8s](../install-on-k8s): Deploy KubeSphere on your Kubernetes cluster including cloud-hosted services such as GKE, EKS, etc. -- [KubeSphere on Air-Gapped K8s](../install-on-k8s-airgapped): Install KubeSphere on a disconnected Kubernetes cluster. -- Minimal Packages: Only install minimal required system components of KubeSphere. The minimum of resource requirement is down to 1 core and 2G memory. -- [Full Packages](../complete-installation): Install all available system components of KubeSphere including DevOps, service mesh, application store, etc. +- [Install KubeSphere on Air-gapped Linux](../install-ks-on-linux-airgapped): All images of KubeSphere have been encapsulated into a package. It is convenient for air-gapped installation on Linux machines. +- [High Availability Installation](../master-ha): Install high availability KubeSphere on multiple nodes which is used for the production environment. +- Minimal Packages: Only install the minimum required system components of KubeSphere. Here is the minimum resource requirement: + - 2vCPUs + - 4GB RAM + - 40GB Storage +- [Full Packages](../complete-installation): Install all available system components of KubeSphere such as DevOps, service mesh, and alerting. -![Installer Options](https://pek3b.qingstor.com/kubesphere-docs/png/20200305093158.png) +For the installation on Kubernetes, see Overview of Installing on Kubernetes. ## Before Installation -- As the installation will pull images and update operating system from the internet, your environment must have the internet access. If not, then you need to use the air-gapped installer instead. +- As images will be pulled and operating systems will be downloaded from the Internet, your environment must have Internet access. Otherwise, you need to use the air-gapped installer instead. - For all-in-one installation, the only one node is both the master and the worker. -- For multi-node installation, you are asked to specify the node roles in the configuration file before installation. +- For multi-node installation, you need to specify the node roles in the configuration file before installation. - Your linux host must have OpenSSH Server installed. - Please check the [ports requirements](../port-firewall) before installation. -## Quick Install For Development and Testing +## KubeKey -KubeSphere has decoupled some components since v2.1.0. The installer only installs required components by default which brings the benefits of fast installation and minimal resource consumption. If you want to install any optional component, please check the following section [Pluggable Components Overview](../intro#pluggable-components-overview) for details. 
+Developed in Go language, KubeKey represents a brand-new installation tool as a replacement for the ansible-based installer used before. KubeKey provides users with flexible installation choices, as they can install KubeSphere and Kubernetes separately or install them at one time, which is convenient and efficient. -The quick install of KubeSphere is only for development or testing since it uses local volume for storage by default. If you want a production install please refer to the section [High Availability Installation for Production Environment](../intro#high-availability-installation-for-production-environment). +Three scenarios to use KubeKey: -### 1. Install KubeSphere on Linux +- Install Kubernetes only; +- Install Kubernetes and KubeSphere together in one command; +- Install Kubernetes first, and deploy KubeSphere on it using [ks-installer](https://github.com/kubesphere/ks-installer). -- [All-in-One](../all-in-one): It means a single-node hassle-free configuration installation with one-click. -- [Multi-Node](../multi-node): It allows you to install KubeSphere on multiple instances using local volume, which means it is not required to install storage server such as Ceph, GlusterFS. +{{< notice note >}} -> Note:With regard to air-gapped installation please refer to [Install KubeSphere on Air Gapped Linux Machines](../install-ks-on-linux-airgapped). +If you have existing Kubernetes clusters, please refer to [Installing on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/). -### 2. Install KubeSphere on Existing Kubernetes +{{}} -You can install KubeSphere on your existing Kubernetes cluster. Please refer [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions. +## Quick Installation for Development and Testing -## High Availability Installation for Production Environment +KubeSphere has decoupled some components since v2.1.0. KubeKey only installs necessary components by default as this way features fast installation and minimal resource consumption. If you want to enable enhanced pluggable functionalities, see [Overview of Pluggable Components](../intro#pluggable-components-overview) for details. -### 1. Install HA KubeSphere on Linux +The quick installation of KubeSphere is only for development or testing since it uses local volume for storage by default. If you want a production installation, see HA Cluster Configuration. -KubeSphere installer supports installing a highly available cluster for production with the prerequisites being a load balancer and persistent storage service set up in advance. +- **All-in-one**. It means a single-node hassle-free installation with just one command. +- **Multi-node**. It allows you to install KubeSphere on multiple instances using the default storage class (local volume), which means it is not required to install storage server such as Ceph and GlusterFS. -- [Persistent Service Configuration](../storage-configuration): By default, KubeSphere Installer uses [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning in Kubernetes cluster. It is convenient for quick install of testing environment. In production environment, it must have a storage server set up. Please refer [Persistent Service Configuration](../storage-configuration) for details. 
-- [Load Balancer Configuration for HA install](../master-ha): Before you get started with multi-node installation in production environment, you need to configure a load balancer. Either cloud LB or `HAproxy + keepalived` works for the installation. +{{< notice note >}} -### 2. Install HA KubeSphere on Existing Kubernetes +For air-gapped installation, please refer to [Install KubeSphere on Air Gapped Linux Machines](../install-ks-on-linux-airgapped). -Before you install KubeSphere on existing Kubernetes, please check the prerequisites of the installation on Linux described above, and verify the existing Kubernetes to see if it satisfies these prerequisites or not, i.e., a load balancer and persistent storage service. +{{}} -If your Kubernetes is ready, please refer [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions. +## Install HA KubeSphere on Linux -> You can install KubeSphere on cloud Kubernetes service such as [Installing KubeSphere on GKE cluster](../install-on-gke) +KubeKey allows users to install a highly available cluster for production. Users need to configure load balancers and persistent storage services in advance. -## Pluggable Components Overview +- [Persistent Storage Configuration](../storage-configuration): By default, KubeKey uses [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage services with dynamic provisioning in Kubernetes clusters. It is convenient for the quick installation of a testing environment. In a production environment, it must have a storage server set up. Please refer to [Persistent Storage Configuration](../storage-configuration) for details. +- [Load Balancer Configuration for HA installation](../master-ha): Before you get started with multi-node installation in a production environment, you need to configure load balancers. Cloud load balancers, Nginx and `HAproxy + Keepalived` all work for the installation. -KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable any of them before or after installation. The installer by default does not install the pluggable components. Please check the guide [Enable Pluggable Components Installation](../pluggable-components) for your requirement. +For more information, see HA Cluster Configuration. You can also see the specific step of HA installations across major cloud providers in Installing on Public Cloud. + +## Overview of Pluggable Components + +KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable any of them both before and after the installation. By default, KubeKey does not install these pluggable components. For more information, see Enable Pluggable Components. ![Pluggable Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191207140846.png) @@ -84,10 +89,24 @@ The following links explain how to configure different types of persistent stora - [QingCloud Block Storage](https://docs.qingcloud.com/product/storage/volume/) - [QingStor NeonSAN](https://docs.qingcloud.com/product/storage/volume/super_high_performance_shared_volume/) -## Add New Nodes +## Cluster Operation and Maintenance -KubeSphere Installer allows you to scale the number of nodes, see [Add New Nodes](../add-nodes). 
+### Add New Nodes
+
+With KubeKey, you can scale the number of nodes to meet higher resource needs after the installation, especially in a production environment. For more information, see [Add New Nodes](../add-nodes).
+
+### Remove Nodes
+
+You need to drain a node before you remove it. For more information, see Remove Nodes.
+
+### Add New Storage Classes
+
+KubeKey allows you to set a new storage class after the installation. You can set different storage classes for KubeSphere itself and your workloads.
+
+For more information, see Add New Storage Classes.
 
 ## Uninstall
 
-Uninstall will remove KubeSphere from the machines. This operation is irreversible and dangerous. Please check [Uninstall](../uninstall).
+Uninstalling KubeSphere means it will be removed from the machines, which is irreversible. Please be cautious with this operation.
+
+For more information, see [Uninstall](../uninstall).
\ No newline at end of file
diff --git a/content/zh/docs/installing-on-linux/introduction/multioverview.md b/content/zh/docs/installing-on-linux/introduction/multioverview.md
new file mode 100644
index 000000000..7e2f8f9a9
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/introduction/multioverview.md
@@ -0,0 +1,299 @@
+---
+title: "Multi-node Installation"
+keywords: 'Multi-node, Installation, KubeSphere'
+description: 'Multi-node Installation Overview'
+
+linkTitle: "Multi-node Installation"
+weight: 2112
+---
+
+In a production environment, a single-node cluster cannot satisfy most needs, as it has limited resources and insufficient compute capability. Thus, single-node clusters are not recommended for large-scale data processing. Besides, a cluster of this kind offers no high availability as it only has one node. On the other hand, a multi-node architecture is the most common and preferred choice for application deployment and distribution.
+
+This section gives you an overview of multi-node installation, including the concept, KubeKey, and the steps. For information about HA installation, refer to Installing on Public Cloud and Installing in On-premises Environment.
+
+## Concept
+
+A multi-node cluster is composed of at least one master node and one worker node. You can use any node as the **taskbox** to carry out the installation task. You can add additional nodes based on your needs (e.g. for high availability) both before and after the installation.
+
+- **Master**. A master node generally hosts the control plane that controls and manages the whole system.
+- **Worker**. Worker nodes run the actual applications deployed on them.
+
+## Why KubeKey
+
+If you are not familiar with Kubernetes components, you may find it difficult to set up a fully functional multi-node Kubernetes cluster. Starting from version 3.0.0, KubeSphere uses a brand-new installer called KubeKey to replace the old Ansible-based installer. Developed in Go, KubeKey allows users to quickly deploy a multi-node architecture.
+
+Users who do not have an existing Kubernetes cluster only need to download KubeKey, create a configuration file with a few commands, and add the node information (e.g. IP addresses and node roles) to it. A single command then starts the installation, and no additional operation is needed.
+
+### Motivation
+
+- The previous Ansible-based installer had a number of software dependencies, such as Python. KubeKey is developed in Go to get rid of this problem in a variety of environments, ensuring a successful installation.
+- KubeKey uses kubeadm to install Kubernetes clusters on nodes in parallel as much as possible in order to reduce installation complexity and improve efficiency. This greatly reduces installation time compared with the older installer.
+- With KubeKey, users can scale a cluster from an all-in-one cluster to a multi-node cluster, or even an HA cluster.
+- KubeKey aims to install clusters as an object, i.e., Cluster as an Object (CaaO).
+
+## Step 1: Prepare Linux Hosts
+
+Please see the requirements for hardware and operating systems shown below. To get started with multi-node installation, you need to prepare at least three hosts according to the following requirements.
+
+### System Requirements
+
+| Systems | Minimum Requirements (Each node) |
+| ------------------------------------------------------ | ------------------------------------------- |
+| **Ubuntu** *16.04, 18.04* | CPU: 2 Cores, Memory: 4 GB, Disk Space: 40 GB |
+| **Debian** *Buster, Stretch* | CPU: 2 Cores, Memory: 4 GB, Disk Space: 40 GB |
+| **CentOS** *7*.x | CPU: 2 Cores, Memory: 4 GB, Disk Space: 40 GB |
+| **Red Hat Enterprise Linux 7** | CPU: 2 Cores, Memory: 4 GB, Disk Space: 40 GB |
+| **SUSE Linux Enterprise Server 15/openSUSE Leap 15.2** | CPU: 2 Cores, Memory: 4 GB, Disk Space: 40 GB |
+
+{{< notice note >}}
+
+The path `/var/lib/docker` is mainly used to store container data, and its size will gradually increase during use and operation. For a production environment, it is recommended to mount `/var/lib/docker` on a separate drive.
+
+{{</ notice >}}
+
+### Node Requirements
+
+- All nodes must be accessible through `SSH`.
+- The time of all nodes must be synchronized.
+- `sudo`/`curl`/`openssl` should be available on all nodes.
+- `ebtables`/`socat`/`ipset`/`conntrack` should be installed on all nodes.
+- `docker` can be installed by yourself or by KubeKey.
+
+### Network and DNS Requirements
+
+- Make sure the DNS address in `/etc/resolv.conf` is available. Otherwise, it may cause DNS issues in the cluster.
+- If your network configuration uses a firewall or security group, you must ensure infrastructure components can communicate with each other through specific ports. It is recommended that you turn off the firewall or follow the guide [Network Access](https://github.com/kubesphere/kubekey/blob/master/docs/network-access.md).
+
+{{< notice tip >}}
+
+- It is recommended that your OS be clean (without any other software installed). Otherwise, there may be conflicts.
+- It is recommended to prepare a container image mirror (accelerator) if you have trouble downloading images from Docker Hub. See [Configure registry mirrors for the Docker daemon](https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon).
+
+{{</ notice >}}
+
+This example includes three hosts, as shown below, with the master node serving as the taskbox.
+
+| Host IP | Host Name | Role |
+| ----------- | --------- | ------------ |
+| 192.168.0.2 | master | master, etcd |
+| 192.168.0.3 | node1 | worker |
+| 192.168.0.4 | node2 | worker |
+
+## Step 2: Download KubeKey
+
+You can download the KubeKey binary file as described below.
+
+Download the installer for KubeSphere v3.0.0.
+
+{{< tabs >}}
+
+{{< tab "For users with poor network to GitHub" >}}
+
+For users in China, you can download the installer using this link.
+
+```bash
+wget https://kubesphere.io/kubekey/releases/v1.0.0
+```
+{{</ tab >}}
+
+{{< tab "For users with good network to GitHub" >}}
+
+For users with a good network connection to GitHub, you can download it from the [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly.
+
+```bash
+wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
+```
+{{</ tab >}}
+
+{{</ tabs >}}
+
+Unzip it.
+
+```bash
+tar -zxvf v1.0.0
+```
+
+Grant the execution right to `kk`:
+
+```bash
+chmod +x kk
+```
+
+## Step 3: Create a Cluster
+
+For multi-node installation, you need to create a cluster by specifying a configuration file.
+
+### 1. Create an example configuration file
+
+Command:
+
+```bash
+./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]
+```
+
+{{< notice info >}}
+
+Supported Kubernetes versions: *v1.15.12*, *v1.16.13*, *v1.17.9* (default), *v1.18.6*.
+
+{{</ notice >}}
+
+Here are some examples for your reference:
+
+- You can create an example configuration file with default configurations. You can also specify the file with a different filename, or in a different folder.
+
+```bash
+./kk create config [-f ~/myfolder/abc.yaml]
+```
+
+- You can customize the persistent storage plugins (e.g. NFS Client, Ceph RBD, and GlusterFS) in `config-sample.yaml`.
+
+```bash
+./kk create config --with-storage localVolume
+```
+
+{{< notice note >}}
+
+By default, KubeKey will install [OpenEBS](https://openebs.io/) to provision [LocalPV](https://kubernetes.io/docs/concepts/storage/volumes/#local) for development and testing environments, which is convenient for new users. For this example of multi-node installation, we will use the default storage class (local volume). For production, please use NFS/Ceph/GlusterFS/CSI or commercial products as the persistent storage solution; you need to specify it in `addons` of `config-sample.yaml`. See [Persistent Storage Configuration](../storage-configuration).
+
+{{</ notice >}}
+
+- You can specify the KubeSphere version that you want to install (e.g. `--with-kubesphere v3.0.0`).
+
+```bash
+./kk create config --with-kubesphere [version]
+```
+
+### 2. Edit the configuration file
+
+A default file **config-sample.yaml** will be created if you do not change the name. Edit the file; the following is an example of the configuration file of a multi-node cluster with one master node.
+
+```yaml
+spec:
+  hosts:
+  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
+  - {name: node1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
+  - {name: node2, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
+  roleGroups:
+    etcd:
+    - master
+    master:
+    - master
+    worker:
+    - node1
+    - node2
+  controlPlaneEndpoint:
+    domain: lb.kubesphere.local
+    address: ""
+    port: "6443"
+```
+
+#### Hosts
+
+- List all your machines under `hosts` and add their detailed information as above. In this case, port 22 is the default SSH port. Otherwise, you need to add the port number after the IP address.
For example:
+
+```yaml
+hosts:
+  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, port: 8022, user: ubuntu, password: Testing123}
+```
+
+- For the default root user:
+
+```yaml
+hosts:
+  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, password: Testing123}
+```
+
+- For passwordless login with SSH keys:
+
+```yaml
+hosts:
+  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, privateKeyPath: "~/.ssh/id_rsa"}
+```
+
+#### roleGroups
+
+- `etcd`: etcd node names
+- `master`: Master node names
+- `worker`: Worker node names
+
+#### controlPlaneEndpoint (for HA installation only)
+
+`controlPlaneEndpoint` allows you to define an external load balancer for an HA cluster. You need to prepare and configure the external load balancer if and only if you need to install more than one master node. Please note that `address` and `port` should be indented by two spaces in `config-sample.yaml`, and `address` should be the VIP. See HA Configuration for details.
+
+{{< notice tip >}}
+
+- You can enable the multi-cluster feature by editing the configuration file. For more information, see Multi-cluster Management.
+- You can also select the components you want to install. For more information, see Enable Pluggable Components. For an example of a complete config-sample.yaml file, see [this file](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md).
+
+{{</ notice >}}
+
+When you finish editing, save the file.
+
+### 3. Create a cluster using the configuration file
+
+```bash
+./kk create cluster -f config-sample.yaml
+```
+
+{{< notice note >}}
+
+You need to change `config-sample.yaml` above to your own file name if you use a different one.
+
+{{</ notice >}}
+
+The whole installation process may take 10-20 minutes, depending on your machine and network.
+
+### 4. Verify the installation
+
+When the installation finishes, you will see output like the following:
+
+```bash
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.2:30880
+Account: admin
+Password: P@88w0rd
+
+NOTES:
+  1. After logging into the console, please check the
+     monitoring status of service components in
+     the "Cluster Management". If any service is not
+     ready, please wait patiently until all components
+     are ready.
+  2. Please modify the default password after login.
+
+#####################################################
+https://kubesphere.io             20xx-xx-xx xx:xx:xx
+#####################################################
+```
+
+Now, you will be able to access the web console of KubeSphere at `http://{IP}:30880` (e.g. using the EIP) with the account and password `admin/P@88w0rd`.
+
+{{< notice note >}}
+
+Depending on your cloud provider, you may need to forward the source port to the intranet port of the intranet IP to access the console. Please also make sure port 30880 is opened in the security group.
+
+{{</ notice >}}
+
+![kubesphere-login](https://ap3.qingstor.com/kubesphere-website/docs/login.png)
+
+## Enable kubectl Autocompletion
+
+KubeKey does not enable kubectl autocompletion. To turn it on, see the content below.
+
+**Prerequisite**: make sure bash-completion is installed and works.
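+If you are not sure whether the prerequisite is met, a quick check like the one below can help (an illustrative sketch; `_init_completion` is a shell function that bash-completion 2.x defines when it is loaded):
+
+```bash
+# Succeeds and prints the function definition only if bash-completion is loaded
+type _init_completion
+```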
+
+```bash
+# Install bash-completion
+apt-get install bash-completion
+
+# Source the completion script in your ~/.bashrc file
+echo 'source <(kubectl completion bash)' >>~/.bashrc
+
+# Add the completion script to the /etc/bash_completion.d directory
+kubectl completion bash >/etc/bash_completion.d/kubectl
+```
+
+Detailed information can be found [here](https://kubernetes.io/docs/tasks/tools/install-kubectl/#enabling-shell-autocompletion).
diff --git a/content/zh/docs/installing-on-linux/introduction/port-firewall.md b/content/zh/docs/installing-on-linux/introduction/port-firewall.md
index 875c2e9b0..e2721d668 100644
--- a/content/zh/docs/installing-on-linux/introduction/port-firewall.md
+++ b/content/zh/docs/installing-on-linux/introduction/port-firewall.md
@@ -1,33 +1,32 @@
 ---
 title: "Port Requirements"
 keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
-description: ''
+description: 'How to set the ports in firewall rules'
-linkTitle: "Requirements"
+linkTitle: "Port Requirements"
 weight: 2120
 ---
-KubeSphere requires certain ports to communicate among services, so you need to make sure the following ports open for use.
+KubeSphere requires certain ports for its services to communicate. If your network configuration uses a firewall, you need to ensure infrastructure components can communicate with each other through specific ports that act as communication endpoints for certain processes or services.
-| Service | Protocol | Action | Start Port | End Port | Notes |
+| Services | Protocol | Action | Start Port | End Port | Comment |
 |---|---|---|---|---|---|
-| ssh | TCP | allow | 22 | | |
-| etcd | TCP | allow | 2379 | 2380 | |
-| apiserver | TCP | allow | 6443 | | |
-| calico | TCP | allow | 9099 | 9100 | |
-| bgp | TCP | allow | 179 | | |
-| nodeport | TCP | allow | 30000 | 32767 | |
-| master | TCP | allow | 10250 | 10258 | |
-| dns | TCP | allow | 53 | | |
-| dns | UDP | allow | 53 | | |
-| local-registry | TCP | allow | 5000 | | Required for air gapped environment|
-| local-apt | TCP | allow | 5080 | | Required for air gapped environment|
-| rpcbind | TCP | allow | 111 | | When using NFS as storage server |
-| ipip | IPIP | allow | | | Calico network requires ipip protocol |
+| ssh | TCP | allow | 22 | | |
+| etcd | TCP | allow | 2379 | 2380 | |
+| apiserver | TCP | allow | 6443 | | |
+| calico | TCP | allow | 9099 | 9100 | |
+| bgp | TCP | allow | 179 | | |
+| nodeport | TCP | allow | 30000 | 32767 | |
+| master | TCP | allow | 10250 | 10258 | |
+| dns | TCP | allow | 53 | | |
+| dns | UDP | allow | 53 | | |
+| local-registry | TCP | allow | 5000 | | Required for an offline environment |
+| local-apt | TCP | allow | 5080 | | Required for an offline environment |
+| rpcbind | TCP | allow | 111 | | Required when using NFS |
+| ipip | IPENCAP / IPIP | allow | | | Calico needs to allow the IPIP protocol |
-**Note**
-Please note when you use Calico network plugin and run your cluster in classic network in cloud environment, you need to open IPIP protocol for souce IP. For instance, the following is the sample on QingCloud showing how to open IPIP protocol.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20200304200605.png)
+{{< notice note >}}
+Please note that when you use the Calico network plugin and run your cluster on a classic network in a cloud environment, you need to open both the IPENCAP and IPIP protocols for the source IP.
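+For example, on nodes managed by firewalld, rules along the following lines would admit both protocols (a sketch only; the exact commands depend on your firewall or your cloud provider's security-group tooling):
+
+```bash
+# Allow the IP-in-IP encapsulation protocols used by Calico
+# (ipencap is protocol 4 and ipip is protocol 94 in /etc/protocols)
+firewall-cmd --permanent --add-rich-rule='rule protocol value="ipencap" accept'
+firewall-cmd --permanent --add-rich-rule='rule protocol value="ipip" accept'
+firewall-cmd --reload
+```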
+{{</ notice >}}
diff --git a/content/zh/docs/installing-on-linux/introduction/storage-configuration.md b/content/zh/docs/installing-on-linux/introduction/storage-configuration.md
new file mode 100644
index 000000000..a88aea31c
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/introduction/storage-configuration.md
@@ -0,0 +1,127 @@
+---
+title: "Persistent Storage Configuration"
+keywords: 'kubernetes, docker, kubesphere, storage, volume, PVC'
+description: 'Persistent Storage Configuration'
+
+linkTitle: "Persistent Storage Configuration"
+weight: 2140
+---
+# Overview
+A persistent volume is a **must** for KubeSphere. Therefore, before installing KubeSphere, a **default** [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) and the corresponding storage plugin should be installed on the Kubernetes cluster.
+As different users may choose different storage plugins, [KubeKey](https://github.com/kubesphere/kubekey) supports installing storage plugins as [add-ons](https://github.com/kubesphere/kubekey/blob/v1.0.0/docs/addons.md). This section introduces the add-on configuration for some commonly used storage plugins.
+
+# QingCloud-CSI
+The [QingCloud-CSI](https://github.com/yunify/qingcloud-csi) plugin implements an interface between a CSI-enabled Container Orchestrator (CO) and the disks of QingCloud.
+Here is a Helm chart example of installing it as a KubeKey add-on.
+```yaml
+addons:
+- name: csi-qingcloud
+  namespace: kube-system
+  sources:
+    chart:
+      name: csi-qingcloud
+      repo: https://charts.kubesphere.io/test
+      values:
+      - config.qy_access_key_id=SHOULD_BE_REPLACED
+      - config.qy_secret_access_key=SHOULD_BE_REPLACED
+      - config.zone=SHOULD_BE_REPLACED
+      - sc.isDefaultClass=true
+```
+For more information about QingCloud, see [QingCloud](https://www.qingcloud.com/).
+For more chart values, see [configuration](https://github.com/kubesphere/helm-charts/tree/master/src/test/csi-qingcloud#configuration).
+
+# NFS-client
+The [nfs-client-provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client) is an automatic provisioner for Kubernetes that uses your *already configured* NFS server to dynamically create Persistent Volumes.
+Here is a Helm chart example of installing it as a KubeKey add-on.
+```yaml
+addons:
+- name: nfs-client
+  namespace: kube-system
+  sources:
+    chart:
+      name: nfs-client-provisioner
+      repo: https://charts.kubesphere.io/main
+      values:
+      - nfs.server=SHOULD_BE_REPLACED
+      - nfs.path=SHOULD_BE_REPLACED
+      - storageClass.defaultClass=true
+```
+For more chart values, see [configuration](https://github.com/kubesphere/helm-charts/tree/master/src/main/csi-nfs-provisioner#configuration).
+
+# Ceph RBD
+Ceph RBD is an in-tree storage plugin on Kubernetes. As **hyperkube** images have been [deprecated since 1.17](https://github.com/kubernetes/kubernetes/pull/85094) and **KubeKey** never uses **hyperkube** images, the in-tree Ceph RBD plugin may not work on a Kubernetes cluster installed by **KubeKey**.
+We can use the [rbd provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/rbd) as a substitute, which uses the same format as the in-tree Ceph RBD plugin.
+Here is an example of the rbd-provisioner add-on.
+```yaml
+addons:
+- name: rbd-provisioner
+  namespace: kube-system
+  sources:
+    chart:
+      name: rbd-provisioner
+      repo: https://charts.kubesphere.io/test
+      values:
+      - ceph.mon=SHOULD_BE_REPLACED    # like 192.168.0.10:6789
+      - ceph.adminKey=SHOULD_BE_REPLACED
+      - ceph.userKey=SHOULD_BE_REPLACED
+      - sc.isDefault=true
+```
+For more values, see [configuration](https://github.com/kubesphere/helm-charts/tree/master/src/test/rbd-provisioner#configuration).
+
+# Glusterfs
+Glusterfs is an in-tree storage plugin on Kubernetes; only a StorageClass needs to be installed.
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: heketi-secret
+  namespace: kube-system
+type: kubernetes.io/glusterfs
+data:
+  key: SHOULD_BE_REPLACED
+---
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  annotations:
+    storageclass.beta.kubernetes.io/is-default-class: "true"
+    storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]'
+  name: glusterfs
+parameters:
+  clusterid: SHOULD_BE_REPLACED
+  gidMax: "50000"
+  gidMin: "40000"
+  restauthenabled: "true"
+  resturl: SHOULD_BE_REPLACED # like "http://192.168.0.14:8080"
+  restuser: admin
+  secretName: heketi-secret
+  secretNamespace: kube-system
+  volumetype: SHOULD_BE_REPLACED # like replicate:2
+provisioner: kubernetes.io/glusterfs
+reclaimPolicy: Delete
+volumeBindingMode: Immediate
+allowVolumeExpansion: true
+```
+For detailed information, see [configuration](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs).
+
+Save the StorageClass YAML file locally, for example as **/root/glusterfs-sc.yaml**. The add-on configuration can then be set as follows:
+```yaml
+addons:
+- name: glusterfs
+  sources:
+    yaml:
+      path:
+      - /root/glusterfs-sc.yaml
+```
+
+# OpenEBS/LocalVolumes
+The [OpenEBS](https://github.com/openebs/openebs) Dynamic Local PV provisioner can create Kubernetes Local Persistent Volumes using a unique HostPath (directory) on the node to persist data. It is very convenient for trying out KubeSphere when you have no special storage system.
+If no default StorageClass is configured in the **KubeKey** add-ons, OpenEBS/LocalVolumes will be installed.
+
+# Multi-Storage
+If you intend to install more than one storage plugin, remember to set only one of them as the default. Otherwise, [ks-installer](https://github.com/kubesphere/ks-installer) will be confused about which StorageClass to use.
diff --git a/content/zh/docs/installing-on-linux/introduction/vars.md b/content/zh/docs/installing-on-linux/introduction/vars.md
index cda3aa5db..d7b7a2685 100644
--- a/content/zh/docs/installing-on-linux/introduction/vars.md
+++ b/content/zh/docs/installing-on-linux/introduction/vars.md
@@ -1,107 +1,36 @@
 ---
-title: "Common Configurations"
-keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
+title: "Kubernetes Cluster Configuration"
+keywords: 'KubeSphere, kubernetes, docker, cluster, jenkins, prometheus'
 description: 'Configure cluster parameters before installing'
 
 linkTitle: "Kubernetes Cluster Configuration"
 weight: 2130
 ---
 
-This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
+This tutorial explains how to customize the Kubernetes cluster configuration in `config-example.yaml` when you start to use [KubeKey](https://github.com/kubesphere/kubekey) to provision a cluster. You can refer to the following section to understand each parameter.
```yaml ######################### Kubernetes ######################### -# The default k8s version will be installed -kube_version: v1.16.7 -# The default etcd version will be installed -etcd_version: v3.2.18 - -# Configure a cron job to backup etcd data, which is running on etcd machines. -# Period of running backup etcd job, the unit is minutes. -# The default value 30 means backup etcd every 30 minutes. -etcd_backup_period: 30 - -# How many backup replicas to keep. -# The default value5 means to keep latest 5 backups, older ones will be deleted by order. -keep_backup_number: 5 - -# The location to store etcd backups files on etcd machines. -etcd_backup_dir: "/var/backups/kube_etcd" - -# Add other registry. (For users who need to accelerate image download) -docker_registry_mirrors: - - https://docker.mirrors.ustc.edu.cn - - https://registry.docker-cn.com - - https://mirror.aliyuncs.com - -# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere. -kube_network_plugin: calico - -# A valid CIDR range for Kubernetes services, -# 1. should not overlap with node subnet -# 2. should not overlap with Kubernetes pod subnet -kube_service_addresses: 10.233.0.0/18 - -# A valid CIDR range for Kubernetes pod subnet, -# 1. should not overlap with node subnet -# 2. should not overlap with Kubernetes services subnet -kube_pods_subnet: 10.233.64.0/18 - -# Kube-proxy proxyMode configuration, either ipvs, or iptables -kube_proxy_mode: ipvs - -# Maximum pods allowed to run on every node. -kubelet_max_pods: 110 - -# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information -enable_nodelocaldns: true - -# Highly Available loadbalancer example config -# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name -# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install -# address: 192.168.0.10 # Loadbalancer apiserver IP address -# port: 6443 # apiserver port - -######################### KubeSphere ######################### - -# Version of KubeSphere -ks_version: v2.1.0 - -# KubeSphere console port, range 30000-32767, -# but 30180/30280/30380 are reserved for internal service -console_port: 30880 # KubeSphere console nodeport - -#CommonComponent -mysql_volume_size: 20Gi # MySQL PVC size -minio_volume_size: 20Gi # Minio PVC size -etcd_volume_size: 20Gi # etcd PVC size -openldap_volume_size: 2Gi # openldap PVC size -redis_volume_size: 2Gi # Redis PVC size - - -# Monitoring -prometheus_replica: 2 # Prometheus replicas with 2 by default which are responsible for monitoring different segments of data source and provide high availability as well. -prometheus_memory_request: 400Mi # Prometheus request memory -prometheus_volume_size: 20Gi # Prometheus PVC size -grafana_enabled: true # enable grafana or not - - -## Container Engine Acceleration -## Use nvidia gpu acceleration in containers -# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed. -# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. 
FOr now we only support Ubuntu 16.04
-#  - kube-gpu-001                  # The host name of the GPU node specified in hosts.ini
+kubernetes:
+  version: v1.17.9         # The default Kubernetes version is v1.17.9; you can also specify v1.15.12, v1.16.13 or v1.18.6 as you want
+  imageRepo: kubesphere    # The Docker Hub repository to pull images from
+  clusterName: cluster.local    # The Kubernetes cluster name
+  masqueradeAll: false     # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false]
+  maxPods: 110             # maxPods is the number of pods that can run on this kubelet. [Default: 110]
+  nodeCidrMaskSize: 24     # The internal network node size allocation. This is the size allocated to each node on your network. [Default: 24]
+  proxyMode: ipvs          # The proxy mode kube-proxy should use. [Default: ipvs]
+network:
+  plugin: calico           # Calico by default; KubeSphere Network Policy is based on Calico. You can also specify Flannel as you want
+  calico:
+    ipipMode: Always       # The IPIP mode to use for the IPv4 pool created at startup. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always]
+    vxlanMode: Never       # The VXLAN mode to use for the IPv4 pool created at startup. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never]
+    vethMTU: 1440          # The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. [Default: 1440]
+  kubePodsCIDR: 10.233.64.0/18    # A valid CIDR range for the Kubernetes pod subnet; it should overlap with neither the node subnet nor the Kubernetes services subnet.
+  kubeServiceCIDR: 10.233.0.0/18  # A valid CIDR range for Kubernetes services; it should overlap with neither the node subnet nor the Kubernetes pod subnet.
+registry:
+  registryMirrors: []      # For users who need to accelerate the image download speed
+  insecureRegistries: []   # The addresses of insecure image registries, see https://docs.docker.com/registry/insecure/
+  privateRegistry: ""      # Configure a private image registry for air-gapped installation (e.g. a local Docker registry or Harbor)
+addons: []                 # You can specify any add-ons with one or more Helm charts or YAML files in this field, e.g. CSI plugins or cloud provider plugins.
 ```
-
-## How to Configure a GPU Node
-
-You may want to use GPU nodes for special purpose such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`, then in the file `common.yaml` specify the following configuration. Please be aware the `- node2` has two spaces indent.
-
-```yaml
-  nvidia_accelerator_enabled: true
-  nvidia_gpu_nodes:
-    - node2
-```
-
-> Note: The GPU node now only supports Ubuntu 16.04.
\ No newline at end of file diff --git a/content/zh/docs/installing-on-linux/on-premise/_index.md b/content/zh/docs/installing-on-linux/on-premise/_index.md deleted file mode 100644 index cd927f966..000000000 --- a/content/zh/docs/installing-on-linux/on-premise/_index.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -linkTitle: "Install on Linux" -weight: 2200 - -_build: - render: false ---- \ No newline at end of file diff --git a/content/zh/docs/installing-on-linux/on-premise/install-ks-on-linux-airgapped.md b/content/zh/docs/installing-on-linux/on-premise/install-ks-on-linux-airgapped.md deleted file mode 100644 index 26b3e4f04..000000000 --- a/content/zh/docs/installing-on-linux/on-premise/install-ks-on-linux-airgapped.md +++ /dev/null @@ -1,224 +0,0 @@ ---- -title: "Air-Gapped Installation" -keywords: 'kubernetes, kubesphere, air gapped, installation' -description: 'How to install KubeSphere on air-gapped Linux machines' - - -weight: 2240 ---- - -The air-gapped installation is almost the same as the online installation except it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes on air-gapped environment. - -> Note: The dependencies in different operating systems may cause upexpected problems. If you encounter any installation problems on air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues). - -## Prerequisites - -- If your machine is behind a firewall, you need to open the ports by following the document [Ports Requirements](../port-firewall) for more information. -> - Installer will use `/var/lib/docker` as the default directory where all Docker related files, including the images, are stored. We recommend you to add additional storage to a disk with at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively, use the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference. -- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure supported persistent storage service](../storage-configuration) and prepare [high availability configuration](../master-ha) before installation. -- Since the air-gapped machines cannot connect to apt or yum source, please use clean Linux machine to avoid this problem. - -## Step 1: Prepare Linux Hosts - -The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements. - -- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit) -- Time synchronization is required across all nodes, otherwise the installation may not succeed; -- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`; -- If you are using `Ubuntu 18.04`, you need to use the user `root`. -- Ensure your disk of each node is at least 100G. -- CPU and memory in total of all machines: 2 cores and 4 GB for minimal installation; 8 cores and 16 GB for complete installation. - - -The following section describes an example to introduce multi-node installation. This example shows three hosts installation by taking the `master` serving as the taskbox to execute the installation. The following cluster consists of one Master and two Nodes. 
- -> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guide. - -| Host IP | Host Name | Role | -| --- | --- | --- | -|192.168.0.1|master|master, etcd| -|192.168.0.2|node1|node| -|192.168.0.3|node2|node| - -### Cluster Architecture - -#### Single Master, Single Etcd, Two Nodes - -![Architecture](/cluster-architecture.svg) - -## Step 2: Download Installer Package - -Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`. - -```bash -curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \ -&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf -``` - -## Step 3: Configure Host Template - -> This step is only for multi-node installation, you can skip this step if you choose all-in-one installation. - -Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using root user. The following is an example configuration for `CentOS 7.5` using root user. Note do not manually wrap any line in the file. - -> Note: -> -> - If you use non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`. -> - If the `root` user of that taskbox machine cannot establish SSH connection with the rest of machines, you need to refer to the `non-root` user example at the top of the `conf/hosts.ini`, but it is recommended to switch `root` user when executing `install.sh`. -> - master, node1 and node2 are the host names of each node and all host names should be in lowercase. - -### hosts.ini - -```ini -[all] -master ansible_connection=local ip=192.168.0.1 -node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD -node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD - -[local-registry] -master - -[kube-master] -master - -[kube-node] -node1 -node2 - -[etcd] -master - -[k8s-cluster:children] -kube-node -kube-master -``` - -> Note: -> -> - You need to replace each node information such as IP, password with real values in the group `[all]`. The master node is the taskbox so you do not need to add password field here. -> - Installer will use a node as the local registry for docker images, defaults to "master" in the group `[local-registry]`. -> - The "master" node also takes the role of master and etcd, so "master" is filled under the group`[kube-master]` and the group `[etcd]` respectively. -> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`. -> -> Parameters Specification: -> -> - `ansible_connection`: Connection type to the host, "local" in the example above means local connection. -> - `ansible_host`: The name of the host to be connected. -> - `ip`: The ip of the host to be connected. -> - `ansible_user`: The default ssh user name to use. -> - `ansible_become_pass`: Allows you to set the privilege escalation password. -> - `ansible_ssh_pass`: The password of the host to be connected using root. - -## Step 4: Enable All Components - -> This is step is complete installation. You can skip this step if you choose a minimal installation. - -Edit `conf/common.yaml`, reference the following changes with values being `true` which are `false` by default. 
- -```yaml -# LOGGING CONFIGURATION -# logging is an optional component when installing KubeSphere, and -# Kubernetes builtin logging APIs will be used if logging_enabled is set to false. -# Builtin logging only provides limited functions, so recommend to enable logging. -logging_enabled: true # Whether to install logging system -elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number -elasticsearch_data_replica: 2 # total number of data nodes -elasticsearch_volume_size: 20Gi # Elasticsearch volume size -log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default. -elk_prefix: logstash # the string making up index names. The index name will be formatted as ks--log -kibana_enabled: false # Kibana Whether to install built-in Grafana -#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address, KubeSphere supports integrate with Elasticsearch outside the cluster, which can reduce the resource consumption. -#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port - -#DevOps Configuration -devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image) -jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default -jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default -jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default -jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters -jenkinsJavaOpts_Xmx: 6g -jenkinsJavaOpts_MaxRAM: 8g -sonarqube_enabled: true # Whether to install built-in SonarQube -#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address, KubeSphere supports integrate with SonarQube outside the cluster, which can reduce the resource consumption. -#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token - -# Following components are all optional for KubeSphere, -# Which could be turned on to install it before installation or later by updating its value to true -openpitrix_enabled: true # KubeSphere application store -metrics_server_enabled: true # For KubeSphere HPA to use -servicemesh_enabled: true # KubeSphere service mesh system(Istio-based) -notification_enabled: true # KubeSphere notification system -alerting_enabled: true # KubeSphere alerting system -``` - -## Step 5: Install KubeSphere to Linux Machines - -> Note: -> -> - Generally, you can install KubeSphere without any modification, it will start with minimal installation by default. -> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions. -> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environment, please [configure supported persistent storage service](../storage-configuration) before installation. -> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not use the two IP range. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts. - -**1.** Enter `scripts` folder, and execute `install.sh` using `root` user: - -```bash -cd ../cripts -./install.sh -``` - -**2.** Type `2` to select multi-node mode to start the installation. 
The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume.
-
-```bash
-################################################
-         KubeSphere Installer Menu
-################################################
-* 1) All-in-one
-* 2) Multi-node
-* 3) Quit
-################################################
-https://kubesphere.io/               2020-02-24
-################################################
-Please input an option: 2
-
-```
-
-**3.** Verify the multi-node installation:
-
-**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go.
-
-```bash
-successsful!
-#####################################################
-###              Welcome to KubeSphere!           ###
-#####################################################
-
-Console: http://192.168.0.1:30880
-Account: admin
-Password: P@88w0rd
-
-NOTE:Please modify the default password after login.
-#####################################################
-```
-
-> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
-
-**(2).** You will be able to use default account and password `admin / P@88w0rd` to log in the console `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in.
-
-![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
-
-Note: After log in console, please verify the monitoring status of service components in the "Cluster Status". If any service is not ready, please wait patiently untill all components get running up.
-
-![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
-
-## Enable Pluggable Components
-
-If you already have set up minimal installation, you still can edit the ConfigMap of ks-installer using the following command. Please make sure there is enough resource in your machines, see [Pluggable Components Overview](/en/installation/pluggable-components/).
-
-```bash
-kubectl edit cm -n kubesphere-system ks-installer
-```
-
-## FAQ
-
-If you have further questions please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/zh/docs/installing-on-linux/on-premises/_index.md b/content/zh/docs/installing-on-linux/on-premises/_index.md
new file mode 100644
index 000000000..29b1044f0
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/on-premises/_index.md
@@ -0,0 +1,9 @@
+---
+linkTitle: "Installing in On-premises Environment"
+weight: 2200
+
+_build:
+  render: false
+---
+
+In this chapter, we will demonstrate how to use KubeKey or kubeadm to provision a new Kubernetes and KubeSphere cluster in on-premises environments, such as VMware vSphere, OpenStack, and bare metal. You just need to prepare machines with a supported operating system before you start the installation. The air-gapped installation guide is also included in this chapter.
diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-linux-airgapped.md b/content/zh/docs/installing-on-linux/on-premises/install-ks-on-linux-airgapped.md
similarity index 100%
rename from content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-linux-airgapped.md
rename to content/zh/docs/installing-on-linux/on-premises/install-ks-on-linux-airgapped.md
diff --git a/content/zh/docs/installing-on-linux/on-premise/install-kubesphere-on-vmware-vsphere.md b/content/zh/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md
similarity index 54%
rename from content/zh/docs/installing-on-linux/on-premise/install-kubesphere-on-vmware-vsphere.md
rename to content/zh/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md
index 82e976860..7befd94ac 100644
--- a/content/zh/docs/installing-on-linux/on-premise/install-kubesphere-on-vmware-vsphere.md
+++ b/content/zh/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md
@@ -2,28 +2,33 @@
 title: "VMware vSphere Installation"
 keywords: 'kubernetes, kubesphere, VMware vSphere, installation'
 description: 'How to install KubeSphere on VMware vSphere Linux machines'
+
+
+weight: 2260
 ---
-# 在 vSphere 部署高可用的 KubeSphere
+# Introduction
-对于生产环境,我们需要考虑集群的高可用性。如果关键组件(例如 kube-apiserver,kube-scheduler 和 kube-controller-manager)都在同一主节点上运行,则一旦主节点出现故障,Kubernetes 和 KubeSphere 将不可用。因此,我们需要通过为负载均衡器配置多个主节点来设置高可用性集群。您可以使用任何云负载平衡器或任何硬件负载平衡器(例如F5)。另外,Keepalived 和HAproxy 或 Nginx 也是创建高可用性集群的替代方法。
+For a production environment, we need to consider the high availability of the cluster. If the key components (e.g. kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx, is also an alternative for creating high-availability clusters.
-本教程为您提供了一个示例,说明如何使用 [keepalived + haproxy](https://kubesphere.com.cn/forum/d/1566-kubernetes-keepalived-haproxy) 对 kube-apiserver 进行负载均衡,实现高可用 kubernetes 集群。
+This tutorial walks you through an example of how to set up Keepalived and HAproxy, and implement high availability for the master and etcd nodes using load balancers.
-## 前提条件
+## Prerequisites
-- 请遵循该[指南](https://github.com/kubesphere/kubekey),确保您已经知道如何将 KubeSphere 与多节点集群一起安装。有关用于安装的 config yaml 文件的详细信息,请参阅多节点安装。本教程重点介绍如何配置负载均衡器。
-- 您需要一个 VMware vSphere 帐户来创建VM资源。
-- 考虑到数据的持久性,对于生产环境,我们建议您准备持久性存储并预先创建 StorageClass 。为了进行开发和测试,您可以使用集成的 OpenEBS 直接将 LocalPV设置为存储服务。
+- Please make sure that you already know how to install KubeSphere with a multi-node cluster by following the [guide](https://github.com/kubesphere/kubekey). For detailed information about the config YAML file used for installation, see Multi-node Installation. This tutorial focuses more on how to configure load balancers.
+- You need a VMware vSphere account to create VMs.
+- Considering data persistence, for a production environment we recommend you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
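+As a quick sanity check for the storage prerequisite, once your cluster is up you can confirm that exactly one StorageClass is marked as the default (an illustrative command; it is not specific to vSphere):
+
+```bash
+# The StorageClass annotated as "(default)" is the one KubeSphere will use
+kubectl get storageclass
+```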
-## 部署架构
-![部署架构](/images/docs/vsphere/kubesphereOnVsphere-zh-architecture.png)
+## Architecture
-## 创建主机
+![Architecture](/images/docs/vsphere/kubesphereOnVsphere-zh-architecture.png)
-本示例创建 9 台 **CentOS Linux release 7.6.1810(Core)** 的虚拟机,默认的最小化安装,每台配置为 2 Core 4 GB 40 G 即可。
+## Prepare Linux Hosts
-| 主机 IP | 主机名称 | 角色 |
+This tutorial creates 9 virtual machines with **CentOS Linux release 7.6.1810 (Core)**, using the default minimal installation; each is configured with 2 cores, 4 GB memory, and a 40 GB disk.
+
+
+| Host IP | Host Name | Role |
 | --- | --- | --- |
 |10.10.71.214|master1|master1, etcd|
 |10.10.71.73|master2|master2, etcd|
@@ -35,58 +40,63 @@ description: 'How to install KubeSphere on VMware vSphere Linux machines'
 |10.10.71.77|lb-0|lb(keepalived + haproxy)|
 |10.10.71.66|lb-1|lb(keepalived + haproxy)|
-选择可创建的资源池,点击右键-新建虚拟机(创建虚拟机入口请好几个,自己选择)
+Start the virtual machine creation process in the VMware Host Client.
+You use the New Virtual Machine wizard to create a virtual machine to place in the VMware Host Client inventory.
-![0-1-新创](/images/docs/vsphere/kubesphereOnVsphere-zh-0-1-1-create-type.png)
+![create](/images/docs/vsphere/kubesphereOnVsphere-en-0-1-create.png)
-选择创建类型,创建新虚拟机。
+On the Select creation type page of the New Virtual Machine wizard, you can create a new virtual machine, deploy a virtual machine from an OVF or OVA file, or register an existing virtual machine.
-![0-1-1创建类型](/images/docs/vsphere/kubesphereOnVsphere-zh-0-1-create.png)
+![kubesphereOnVsphere-en-0-1-1-create-type](/images/docs/vsphere/kubesphereOnVsphere-en-0-1-1-create-type.png)
-填写虚拟机名称和存放文件夹。
+When you create a new virtual machine, provide a unique name for it to distinguish it from the existing virtual machines on the host you are managing.
-![0-1-2-name](/images/docs/vsphere/kubesphereOnVsphere-zh-0-1-2-name.png)
+![kubesphereOnVsphere-en-0-1-2-name](/images/docs/vsphere/kubesphereOnVsphere-en-0-1-2-name.png)
-选择计算资源。
+Select the datastore or datastore cluster in which to store the virtual machine configuration files and all of the virtual disks. You can select the datastore that has the most suitable properties, such as size, speed, and availability, for your virtual machine storage.
-![0-1-3-资源](/images/docs/vsphere/kubesphereOnVsphere-zh-0-1-3-resource.png)
+![kubesphereOnVsphere-en-0-1-3-resource](/images/docs/vsphere/kubesphereOnVsphere-en-0-1-3-resource.png)
-选择存储。
+![kubesphereOnVsphere-en-0-1-4-storage](/images/docs/vsphere/kubesphereOnVsphere-en-0-1-4-storage.png)
-![0-1-4-存储](/images/docs/vsphere/kubesphereOnVsphere-zh-0-1-4-storage.png)
+![kubesphereOnVsphere-en-0-1-5-compatibility](/images/docs/vsphere/kubesphereOnVsphere-en-0-1-5-compatibility.png)
-选择兼容性,这里是 ESXi 7.0 及更高版本。
+When you select a guest operating system, the wizard provides the appropriate defaults for the operating system installation.
-![0-1-5-兼容性](/images/docs/vsphere/kubesphereOnVsphere-zh-0-1-5-compatibility.png)
+![kubesphereOnVsphere-en-0-1-6-system](/images/docs/vsphere/kubesphereOnVsphere-en-0-1-6-system.png)
-选择客户机操作系统,Linux CentOS 7 (64 位)。
+Before you deploy a new virtual machine, you have the option to configure the virtual machine hardware and the virtual machine options.
-![0-1-6-系统](/images/docs/vsphere/kubesphereOnVsphere-zh-0-1-6-system.png)
+![kubesphereOnVsphere-en-0-1-7-hardware-1](/images/docs/vsphere/kubesphereOnVsphere-en-0-1-7-hardware-1.png)
-自定义硬件,这里操作系统是挂载的 ISO 文件(打开电源时连接),网络是 VLAN71(勾选)。
+![kubesphereOnVsphere-en-0-1-7-hardware-2](/images/docs/vsphere/kubesphereOnVsphere-en-0-1-7-hardware-2.png)
-![0-1-7-硬件](/images/docs/vsphere/kubesphereOnVsphere-zh-0-1-7-hardware.png)
+![kubesphereOnVsphere-en-0-1-7-hardware-3](/images/docs/vsphere/kubesphereOnVsphere-en-0-1-7-hardware-3.png)
-在`即将完成`页面上可查看为虚拟机选择的配置。
+![kubesphereOnVsphere-en-0-1-7-hardware-4](/images/docs/vsphere/kubesphereOnVsphere-en-0-1-7-hardware-4.png)
-![0-1-8](/images/docs/vsphere/kubesphereOnVsphere-zh-0-1-8.png)
+In the Ready to complete page, you review the configuration selections that you made for the virtual machine.
-## 部署 keepalived+haproxy
-### yum 安装
+![kubesphereOnVsphere-en-0-1-8](/images/docs/vsphere/kubesphereOnVsphere-en-0-1-8.png)
-在主机为lb-0和lb-1中部署keepalived+haproxy 即IP为10.10.71.77与10.10.71.66的服务器上安装部署haproxy、keepalived、psmisc
+
+## Install a Load Balancer using Keepalived and HAproxy (Optional)
+
+For a production environment, you have to prepare an external load balancer. If you do not have one, you can install a load balancer using Keepalived and HAproxy. If you are provisioning a development or testing environment, please skip this section.
+
+### Yum Install
+
+On host lb-0 (10.10.71.77) and host lb-1 (10.10.71.66), run:
 
 ```bash
 yum install keepalived haproxy psmisc -y
 ```
-### 配置 haproxy
-
-在IP为 10.10.71.77 与 10.10.71.66 的服务器 ,配置 haproxy (两台 lb 机器配置一致即可,注意后端服务地址)。
-
-Haproxy 配置 /etc/haproxy/haproxy.cfg
+### Configure HAproxy
+On the servers with IPs 10.10.71.77 and 10.10.71.66, configure HAproxy (the configuration of the two lb machines is the same; pay attention to the back-end service addresses).
```bash
+# HAproxy configuration: /etc/haproxy/haproxy.cfg
 global
     log 127.0.0.1 local2
     chroot /var/lib/haproxy
@@ -128,169 +138,198 @@ backend kube-apiserver
     server kube-apiserver-2 10.10.71.73:6443 check
     server kube-apiserver-3 10.10.71.62:6443 check
 ```
-
-启动之前检查语法是否有问题
+Check the configuration syntax before starting:
 
 ```bash
 haproxy -f /etc/haproxy/haproxy.cfg -c
 ```
-
-启动 Haproxy,并设置开机自启动
+Start HAproxy and enable it to start at boot:
 
 ```bash
 systemctl restart haproxy && systemctl enable haproxy
 ```
-
-停止 Haproxy
+Stop HAproxy:
 
 ```bash
 systemctl stop haproxy
 ```
+### Configure Keepalived
 
-### 配置 keepalived
-
-主 haproxy 77 lb-0-10.10.71.77 (/etc/keepalived/keepalived.conf)
+Primary HAproxy node lb-0 (10.10.71.77): /etc/keepalived/keepalived.conf
 
 ```bash
 global_defs {
-notification_email {
-}
-smtp_connect_timeout 30 #连接超时时间
-router_id LVS_DEVEL01 ##相当于给这个服务器起个昵称
-vrrp_skip_check_adv_addr
-vrrp_garp_interval 0
-vrrp_gna_interval 0
+  notification_email {
+  }
+  smtp_connect_timeout 30
+  router_id LVS_DEVEL01
+  vrrp_skip_check_adv_addr
+  vrrp_garp_interval 0
+  vrrp_gna_interval 0
 }
 vrrp_script chk_haproxy {
-script "killall -0 haproxy"
-interval 2
-weight 2
+  script "killall -0 haproxy"
+  interval 2
+  weight 2
 }
 vrrp_instance haproxy-vip {
-state MASTER #主服务器 是MASTER
-priority 100 #主服务器优先级要比备服务器高
-interface ens192 #实例绑定的网卡
-virtual_router_id 60 #定义一个热备组,可以认为这是60号热备组
-advert_int 1 #1秒互相通告一次,检查对方死了没。
-authentication {
-  auth_type PASS #认证类型
-  auth_pass 1111 #认证密码 这些相当于暗号
-}
-unicast_src_ip 10.10.71.77 #当前机器地址
-unicast_peer {
-  10.10.71.66 #peer中其它机器地址
-}
-virtual_ipaddress {
-  #vip地址
-  10.10.71.67/24
-}
-track_script {
-  chk_haproxy
-}
+  state MASTER
+  priority 100
+  interface ens192
+  virtual_router_id 60
+  advert_int 1
+  authentication {
+    auth_type PASS
+    auth_pass 1111
+  }
+  unicast_src_ip 10.10.71.77
+  unicast_peer {
+    10.10.71.66
+  }
+  virtual_ipaddress {
+    #vip
+    10.10.71.67/24
+  }
+  track_script {
+    chk_haproxy
+  }
 }
 ```
-
-备 haproxy 66 lb-1-10.10.71.66 (/etc/keepalived/keepalived.conf)
+Backup HAproxy node lb-1 (10.10.71.66): /etc/keepalived/keepalived.conf
 
 ```bash
 global_defs {
-notification_email {
-}
-router_id LVS_DEVEL02 ##相当于给这个服务器起个昵称
-vrrp_skip_check_adv_addr
-vrrp_garp_interval 0
-vrrp_gna_interval 0
+  notification_email {
+  }
+  router_id LVS_DEVEL02
+  vrrp_skip_check_adv_addr
+  vrrp_garp_interval 0
+  vrrp_gna_interval 0
 }
 vrrp_script chk_haproxy {
-script "killall -0 haproxy"
-interval 2
-weight 2
+  script "killall -0 haproxy"
+  interval 2
+  weight 2
 }
 vrrp_instance haproxy-vip {
-state BACKUP #备份服务器 是 backup
-priority 90 #优先级要低(把备份的90修改为100)
-interface ens192 #实例绑定的网卡
-virtual_router_id 60
-advert_int 1
-authentication {
-  auth_type PASS
-  auth_pass 1111
-}
-unicast_src_ip 10.10.71.66 #当前机器地址
-unicast_peer {
-  10.10.71.77 #peer 中其它机器地址
-}
-virtual_ipaddress {
-  #加/24
-  10.10.71.67/24
-}
-track_script {
-  chk_haproxy
-}
+  state BACKUP
+  priority 90
+  interface ens192
+  virtual_router_id 60
+  advert_int 1
+  authentication {
+    auth_type PASS
+    auth_pass 1111
+  }
+  unicast_src_ip 10.10.71.66
+  unicast_peer {
+    10.10.71.77
+  }
+  virtual_ipaddress {
+    10.10.71.67/24
+  }
+  track_script {
+    chk_haproxy
+  }
 }
 ```
+Start Keepalived and enable it to start at boot:
 
-启动 keepalived,设置开机自启动
 ```bash
 systemctl restart keepalived && systemctl enable keepalived
 systemctl stop keepalived
+systemctl start keepalived
 ```
+
+### Verify Availability
+
+Use `ip a s` to view the VIP binding status on each lb node:
 
 ```bash
 ip a s
 ```
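+To make the check scriptable, you can also filter the output for the VIP directly (an illustrative sketch that reuses this tutorial's VIP 10.10.71.67 and the interface ens192 from the Keepalived configuration):
+
+```bash
+# Prints a matching line only on the node that currently holds the VIP
+ip a s ens192 | grep 10.10.71.67
+```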
-暂停vip所在节点 haproxy:`systemctl stop haproxy` -```bash + +Pause VIP node haproxy:`systemctl stop haproxy` + +``` systemctl stop haproxy ``` -再次使用 `ip a s ` 查看各 lb 节点 vip 绑定情况,查看 vip 是否发生漂移 + +Use `ip a s` again to check the vip binding of each lb node, and check whether vip drifts + ```bash -ip a s +ip a s ``` -或者使用 `systemctl status -l keepalived` 命令查看 + +Or use `systemctl status -l keepalived` command to view + ```bash systemctl status -l keepalived ``` +## Get the Installer Excutable File -## 获取安装程序可执行文件 +Download the Installer for KubeSphere v3.0.0. -下载 installer 至一台目标机器 +{{< tabs >}} + +{{< tab "For users with poor network to GitHub" >}} + +For users in China, you can download the installer using this link. + +```bash +wget https://kubesphere.io/kubekey/releases/v1.0.0 +``` +{{}} + +{{< tab "For users with good network to GitHub" >}} + +For users with good network to GitHub, you can download it from [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly. + +```bash +wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz +``` +{{}} + +{{}} + +Unzip it. + +```bash +tar -zxvf v1.0.0 +``` + +Grant the execution right to `kk`: ```bash -curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk chmod +x kk ``` -## 创建多节点群集 +## Create a Multi-node Cluster -您可以使用高级安装来控制自定义参数或创建多节点群集。具体来说,通过指定配置文件来创建集群。 +You have more control to customize parameters or create a multi-node cluster using the advanced installation. Specifically, create a cluster by specifying a configuration file.。 -### kubekey 部署 k8s 集群 +With KubeKey, you can install Kubernetes and KubeSphere -创建配置文件(一个示例配置文件)|包含 kubesphere 的配置文件 +Create a Kubernetes cluster with KubeSphere installed (e.g. --with-kubesphere v3.0.0) ```bash -./kk create config --with-kubesphere v3.0.0 -f ~/config-sample.yaml +./kk create config --with-kubernetes v1.17.9 --with-kubesphere v3.0.0 -f ~/config-sample.yaml ``` -#### 集群节点配置 +> The following Kubernetes versions has been fully tested with KubeSphere: +> - v1.15:   v1.15.12 +> - v1.16:   v1.16.13 +> - v1.17:   v1.17.9 (default) +> - v1.18:   v1.18.6 -vi ~/config-sample.yaml +Modify the file config-sample.yaml according to your environment + +```bash +vi config-sample.yaml +``` ```yaml -#vi ~/config-sample.yaml apiVersion: kubekey.kubesphere.io/v1alpha1 kind: Cluster metadata: @@ -308,7 +347,7 @@ spec: - master1 - master2 - master3 - master: + master: - master1 - master2 - master3 @@ -418,22 +457,21 @@ spec: servicemesh: # Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offer visualization for traffic topology enabled: false ``` +Create a cluster using the configuration file you customized above: -使用您在上面自定义的配置文件创建集群: - -``` +```bash ./kk create cluster -f config-sample.yaml ``` -#### 验证安装结果 +#### Verify the multi-node installation -检查安装日志,然后等待一段时间 +Inspect the logs of installation, and wait a while: ```bash kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f ``` -如果在创建集群,最后返回 `Welcome to KubeSphere` ,则表示已安装成功。 +If you can see the welcome log return, it means the installation is successful. You are ready to go. ```bash ************************************************** @@ -447,7 +485,7 @@ NOTES: 1. After logging into the console, please check the monitoring status of service components in the "Cluster Management". 
-     ready, please wait patiently until all components
+     ready, please wait patiently until all components are ready.
  2. Please modify the default password after login.
#####################################################
@@ -455,15 +493,11 @@ https://kubesphere.io 2020-08-15 23:32:12
#####################################################
```

-#### 登录 console 界面
+#### Log in to the console

-使用给定的访问地址进行访问,进入到 KubeSphere 的登陆界面并使用默认账号(用户名 `admin`,密码 `P@88w0rd`)即可登陆平台。
+You can use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.

-![登录](/images/docs/vsphere/login.png)
-
-![默认界面](/images/docs/vsphere/default.png)
-
-#### 开启可插拔功能组件(可选)
-
-上面的示例演示了默认最小安装的过程。若要在 KubeSphere 中启用其他组件,请参阅[启用可插拔组件](https://github.com/kubesphere/ks-installer/blob/master/README_zh.md#安装功能组件)了解更多详细信息。
+![](/images/docs/vsphere/login.png)
+#### Enable Pluggable Components (Optional)
+The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see [Enable Pluggable Components](https://github.com/kubesphere/ks-installer#enable-pluggable-components) for more details.
diff --git a/content/zh/docs/installing-on-linux/public-cloud/_index.md b/content/zh/docs/installing-on-linux/public-cloud/_index.md
index cd927f966..4acea12f4 100644
--- a/content/zh/docs/installing-on-linux/public-cloud/_index.md
+++ b/content/zh/docs/installing-on-linux/public-cloud/_index.md
@@ -1,7 +1,7 @@
---
-linkTitle: "Install on Linux"
+linkTitle: "Installing on Public Cloud"
weight: 2200
_build:
render: false
----
\ No newline at end of file
+---
diff --git a/content/zh/docs/installing-on-linux/public-cloud/all-in-one.md b/content/zh/docs/installing-on-linux/public-cloud/all-in-one.md
deleted file mode 100644
index 8214171ef..000000000
--- a/content/zh/docs/installing-on-linux/public-cloud/all-in-one.md
+++ /dev/null
@@ -1,116 +0,0 @@
----
-title: "All-in-One Installation"
-keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
-description: 'The guide for installing all-in-one KubeSphere for developing or testing'
-
-linkTitle: "All-in-One"
-weight: 2210
----
-
-For those who are new to KubeSphere and looking for a quick way to discover the platform, the all-in-one mode is your best choice to install it since it is one-click and hassle-free configuration installation with provisioning KubeSphere and Kubernetes on your machine.
-
-- The following instructions are for the default installation without enabling any optional components as we have made them pluggable since v2.1.0. If you want to enable any one, please see the section [Enable Pluggable Components](../all-in-one#enable-pluggable-components) below.
-- If your machine has >= 8 cores and >= 16G memory, we recommend you to install the full package of KubeSphere by [enabling optional components](../complete-installation).
-
-## Prerequisites
-
-If your machine is behind a firewall, you need to open the ports by following the document [Ports Requirement](../port-firewall) for more information.
-
-## Step 1: Prepare Linux Machine
-
-The following describes the requirements of hardware and operating system.
-
-- For `Ubuntu 16.04` OS, it is recommended to select the latest `16.04.5`.
-- If you are using Ubuntu 18.04, you need to use the root user to install.
-- If the Debian system does not have the sudo command installed, you need to execute the `apt update && apt install sudo` command using root before installation. - -### Hardware Recommendation - -| System | Minimum Requirements | -| ------- | ----------- | -| CentOS 7.4 ~ 7.7 (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:100 G | -| Ubuntu 16.04/18.04 LTS (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:100 G | -| Red Hat Enterprise Linux Server 7.4 (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:100 G | -| Debian Stretch 9.5 (64 bit)| CPU:2 Core, Memory:4 G, Disk Space:100 G | - -## Step 2: Download Installer Package - -Execute the following commands to download Installer 2.1.1 and unpack it. - -```bash -curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \ -&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/scripts -``` - -## Step 3: Get Started with Installation - -You should not do anything except executing one command as follows. The installer will complete all things for you automatically including installing/updating dependency packages, installing Kubernetes with default version 1.16.7, storage service and so on. - -> Note: -> -> - Generally speaking, do not modify any configuration. -> - KubeSphere installs `calico` by default. If you would like to use a different network plugin, you are allowed to change the configuration in `conf/common.yaml`. You are also allowed to modify other configurations such as storage class, pluggable components, etc. -> - The default storage class is [OpenEBS](https://openebs.io/) which is a kind of [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) to provision persistence storage service. OpenEBS supports [dynamic provisioning PV](https://docs.openebs.io/docs/next/uglocalpv.html#Provision-OpenEBS-Local-PV-based-on-hostpath). It will be installed automatically for your testing purpose. -> - Please refer [storage configurations](../storage-configuration) for supported storage class. -> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not use the two IP range. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts. - -**1.** Execute the following command: - -```bash -./install.sh -``` - -**2.** Enter `1` to select `All-in-one` mode and type `yes` if your machine satisfies the requirements to start: - -```bash -################################################ - KubeSphere Installer Menu -################################################ -* 1) All-in-one -* 2) Multi-node -* 3) Quit -################################################ -https://kubesphere.io/ 2020-02-24 -################################################ -Please input an option: 1 -``` - -**3.** Verify if KubeSphere is installed successfully or not: - -**(1).** If you see "Successful" returned after completed, it means the installation is successful. The console service is exposed through nodeport 30880 by default. You may need to bind EIP and configure port forwarding in your environment for outside users to access. Make sure you disable the related firewall. - -```bash -successsful! -##################################################### -### Welcome to KubeSphere! ### -##################################################### - -Console: http://192.168.0.8:30880 -Account: admin -Password: P@88w0rd - -NOTE:Please modify the default password after login. 
-##################################################### -``` - -> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components). - -**(2).** You will be able to use default account and password to log in the console to take a tour of KubeSphere. - -Note: After log in console, please verify the monitoring status of service components in the "Cluster Status". If any service is not ready, please wait patiently untill all components get running up. - -![Dashboard](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png) - -## Enable Pluggable Components - -The guide above is only used for minimal installation by default. You can execute the following command to open the configure map and enable pluggable components. Make sure your cluster has enough CPU and memory in advance, see [Enable Pluggable Components](../pluggable-components). - -```bash -kubectl edit cm -n kubesphere-system ks-installer -``` - -## FAQ - -The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also please read the [FAQ of installation](../../faq/faq-install). - -If you have any further questions please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues). diff --git a/content/zh/docs/installing-on-linux/public-cloud/complete-installation.md b/content/zh/docs/installing-on-linux/public-cloud/complete-installation.md deleted file mode 100644 index e0ab92099..000000000 --- a/content/zh/docs/installing-on-linux/public-cloud/complete-installation.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -title: "Install All Optional Components" -keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix' -description: 'Install KubeSphere with all optional components enabled on Linux machine' - - -weight: 2260 ---- - -The installer only installs required components (i.e. minimal installation) by default since v2.1.0. Other components are designed to be pluggable, which means you can enable any of them before or after installation. If your machine meets the following minimum requirements, we recommend you to **enable all components before installation**. A complete installation gives you an opportunity to comprehensively discover the container platform. - - -Minimum Requirements - -- CPU: 8 cores in total of all machines -- Memory: 16 GB in total of all machines - - - -> Note: -> -> - If your machines do not meet the minimum requirements of a complete installation, you can enable any of components at your will. Please refer to [Enable Pluggable Components Installation](../pluggable-components). -> - It works for [All-in-One](../all-in-one) and [Multi-Node](../multi-node). - -This tutorial will walk you through how to enable all components of KubeSphere. - -## Download Installer Package - -If you do not have the package yet, please run the following commands to download Installer 2.1.1 and unpack it, then enter `conf` folder. - -```bash -$ curl -L https://kubesphere.io/download/stable/v2.1.1 > installer.tar.gz \ -&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf -``` - -## Enable All Components - -Edit `conf/common.yaml`, reference the following changes with values being `true` which are `false` by default. - -```yaml -# LOGGING CONFIGURATION -# logging is an optional component when installing KubeSphere, and -# Kubernetes builtin logging APIs will be used if logging_enabled is set to false. 
-# Builtin logging only provides limited functions, so recommend to enable logging.
-logging_enabled: true # Whether to install logging system
-elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number
-elasticsearch_data_replica: 2 # total number of data nodes
-elasticsearch_volume_size: 20Gi # Elasticsearch volume size
-log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
-elk_prefix: logstash # the string making up index names. The index name will be formatted as ks--log
-kibana_enabled: false # Kibana Whether to install built-in Grafana
-#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address, KubeSphere supports integrate with Elasticsearch outside the cluster, which can reduce the resource consumption.
-#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
-
-#DevOps Configuration
-devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
-jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
-jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
-jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
-jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
-jenkinsJavaOpts_Xmx: 6g
-jenkinsJavaOpts_MaxRAM: 8g
-sonarqube_enabled: true # Whether to install built-in SonarQube
-#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address, KubeSphere supports integrate with SonarQube outside the cluster, which can reduce the resource consumption.
-#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
-
-# Following components are all optional for KubeSphere,
-# Which could be turned on to install it before installation or later by updating its value to true
-openpitrix_enabled: true # KubeSphere application store
-metrics_server_enabled: true # For KubeSphere HPA to use
-servicemesh_enabled: true # KubeSphere service mesh system(Istio-based)
-notification_enabled: true # KubeSphere notification system
-alerting_enabled: true # KubeSphere alerting system
-```
-
-Save it, then you can continue the installation process.
diff --git a/content/zh/docs/installing-on-linux/public-cloud/install-ks-on-azure-vms.md b/content/zh/docs/installing-on-linux/public-cloud/install-ks-on-azure-vms.md
new file mode 100644
index 000000000..8d925a9a5
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/public-cloud/install-ks-on-azure-vms.md
@@ -0,0 +1,240 @@
+---
+title: "Deploy KubeSphere on Azure VM Instance"
+keywords: "Kubesphere, Installation, HA, high availability, load balancer, Azure"
+description: "The tutorial is for installing a high-availability cluster on Azure."
+---
+
+## Before you begin
+
+Technically, you can either install, administer, and manage Kubernetes yourself, or go for a managed Kubernetes solution. If you are looking for a way to take advantage of Kubernetes with a hands-off approach, a fully managed platform is what you need; see [Deploy KubeSphere on AKS](../../../installing-on-kubernetes/hosted-kubernetes/install-ks-on-aks) for more details. But if you want more control over your configuration and want to set up a highly available cluster on Azure, this tutorial will help you set up a production-ready Kubernetes cluster with KubeSphere.
+
+## Introduction
+
+In this tutorial, we will use two key features of Azure virtual machines (VMs):
+
+- Virtual Machine Scale Sets: An Azure VMSS lets you create and manage a group of load-balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule (the Kubernetes Autoscaler is available, but not covered in this tutorial; see [autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/azure) for more details), which perfectly fits the worker nodes.
+- Availability sets: An availability set is a logical grouping of VMs within a datacenter that are automatically distributed across fault domains. This approach limits the impact of potential physical hardware failures, network outages, or power interruptions. All the master and etcd VMs will be placed in one availability set to meet our high-availability goals.
+
+Besides those VMs, other resources like a load balancer, a virtual network and a network security group will be involved.
+
+## Prerequisites
+
+- You need an [Azure](https://portal.azure.com) account to create all the resources.
+- Basic knowledge of [Azure Resource Manager](https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/) (ARM) templates, which are files that define the Azure infrastructure and configuration.
+- Considering data persistence, for a production environment, we recommend you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
+
+## Architecture
+
+Six machines running **Ubuntu 18.04** will be deployed in an Azure resource group. Three of them are grouped into an availability set, playing the role of both master and etcd nodes of the Kubernetes control plane. The other three VMs will be defined as a VMSS, on which the worker nodes will run.
+
+![Architecture](/images/docs/aks/Azure-architecture.png)
+
+Those VMs will be attached to a load balancer that has two kinds of predefined rules:
+
+- **Inbound NAT**: The SSH port will be mapped for each machine, so we can easily manage the VMs.
+- **Load Balancing**: The HTTP and HTTPS ports will be mapped to the node pools by default; we can add other ports on demand.
+
+| Service | Protocol | Rule | Backend Port | Frontend Port/Ports | Pools |
+|---|---|---|---|---|---|
+| ssh | TCP | Inbound NAT | 22 | 50200, 50201, 50202, 50100~50199 | Master, Node |
+| apiserver | TCP | Load Balancing | 6443 | 6443 | Master |
+| ks-console | TCP | Load Balancing | 30880 | 30880 | Master |
+| http | TCP | Load Balancing | 80 | 80 | Node |
+| https | TCP | Load Balancing | 443 | 443 | Node |
+
+## Deploy HA Cluster Infrastructure
+
+You don't have to create those resources one by one with wizards. Following the best practice of **infrastructure as code** on Azure, all resources in the architecture are already defined as ARM templates.
+
+### Start to deploy with one click
+
+Click the *Deploy* button below; you will be redirected to Azure and asked to fill in the deployment parameters.
+
+[![Deploy to Azure](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FRolandMa1986%2Fazurek8s%2Fmaster%2Fazuredeploy.json) [![Visualize](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/visualizebutton.svg?sanitize=true)](http://armviz.io/#/?load=https%3A%2F%2Fraw.githubusercontent.com%2FRolandMa1986%2Fazurek8s%2Fmaster%2Fazuredeploy.json)
+
+### Change template parameters
+
+Only a few parameters need to be changed.
+
+- Choose the *Create new* link under the Resources group and fill in a name such as "KubeSphereVMRG".
+- Fill in the admin's Username.
+- Copy your public SSH key and fill in the Admin Key, or create a new one with *ssh-keygen*.
+
+> Password authentication is restricted in the Linux configuration; only SSH key authentication is accepted.
+
+Click the *Purchase* button at the bottom when you are ready to continue.
+
+### Review Azure Resources in the Portal
+
+Once the deployment succeeds, you can find all the resources you need in the KubeSphereVMRG resource group. Take your time and check them one by one if you are new to Azure. Then find the public IP of the load balancer and the private IP addresses of the VMs. You will need them in the next step.
+
+![New Created Resources](/images/docs/aks/azure-vm-all-resources.png)
+
+## Deploy Kubernetes and KubeSphere
+
+You can execute the following commands on your laptop or SSH to one of the master VMs. Files will be downloaded locally and distributed to each VM during the installation, so the installation will be much faster when you run **kk** on the intranet rather than over the Internet.
+
+```bash
+# copy your private ssh key to master-0
+scp -P 50200 ~/.ssh/id_rsa kubesphere@40.81.5.xx:/home/kubesphere/.ssh/
+
+# ssh to master-0
+ssh -i .ssh/id_rsa -p50200 kubesphere@40.81.5.xx
+```
+
+### Download KubeKey
+
+[Kubekey](https://github.com/kubesphere/kubekey) is the next-gen installer which is used for installing Kubernetes and KubeSphere v3.0.0 quickly, flexibly and easily.
+
+1. First, download it and generate a configuration file to customize the installation as follows.
+
+
{{< tabs >}}

{{< tab "For users with poor network to GitHub" >}}

For users in China, you can download the installer using this link.

```bash
wget https://kubesphere.io/kubekey/releases/v1.0.0
```
{{</ tab >}}

{{< tab "For users with good network to GitHub" >}}

For users with good network to GitHub, you can download it from [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly.

```bash
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
```
{{</ tab >}}

{{</ tabs >}}

Unzip it.

```bash
tar -zxvf v1.0.0
```

Grant the execution right to `kk`:

```bash
chmod +x kk
```

2. Then create an example configuration file with default configurations. Here we use Kubernetes v1.17.9 as an example.
+
+```bash
+./kk create config --with-kubesphere v3.0.0 --with-kubernetes v1.17.9
+```
+> The following Kubernetes versions have been fully tested with KubeSphere:
+> - v1.15:   v1.15.12
+> - v1.16:   v1.16.13
+> - v1.17:   v1.17.9 (default)
+> - v1.18:   v1.18.6
+
+### config-sample.yaml Example
+
+```yaml
+spec:
+  hosts:
+  - {name: master-0, address: 40.81.5.xx, port: 50200, internalAddress: 10.0.1.4, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
+  - {name: master-1, address: 40.81.5.xx, port: 50201, internalAddress: 10.0.1.5, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
+  - {name: master-2, address: 40.81.5.xx, port: 50202, internalAddress: 10.0.1.6, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
+  - {name: node000000, address: 40.81.5.xx, port: 50100, internalAddress: 10.0.0.4, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
+  - {name: node000001, address: 40.81.5.xx, port: 50101, internalAddress: 10.0.0.5, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
+  - {name: node000002, address: 40.81.5.xx, port: 50102, internalAddress: 10.0.0.6, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
+  roleGroups:
+    etcd:
+    - master-0
+    - master-1
+    - master-2
+    master:
+    - master-0
+    - master-1
+    - master-2
+    worker:
+    - node000000
+    - node000001
+    - node000002
+```
+For a complete configuration sample explanation, please see [this file](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md).
+
+### Configure the Load Balancer
+
+In addition to the node information, you need to provide the load balancer information in the same yaml file. For the IP address, you can find it in *Azure -> KubeSphereVMRG -> PublicLB*. Assume the IP address and listening port of the **load balancer** are `40.81.5.xx` and `6443` respectively; then you can refer to the following example.
+
+#### The configuration example in config-sample.yaml
+
+```yaml
+## Public LB config example
+## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
+  controlPlaneEndpoint:
+    domain: lb.kubesphere.local
+    address: "40.81.5.xx"
+    port: "6443"
+```
+
+> - Note that we are using the public load balancer directly instead of an internal load balancer due to the Azure [Load Balancer limits](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-troubleshoot#cause-4-accessing-the-internal-load-balancer-frontend-from-the-participating-load-balancer-backend-pool-vm).
+
+### Persistent Storage Plugin Configuration
+
+See [Storage Configuration](../storage-configuration) for details.
+
+### Configure the Network Plugin
+
+Azure Virtual Network doesn't support the IPIP mode used by [Calico](https://docs.projectcalico.org/reference/public-cloud/azure#about-calico-on-azure), so we change the network plugin to Flannel.
+
+```yaml
+  network:
+    plugin: flannel
+    kubePodsCIDR: 10.233.64.0/18
+    kubeServiceCIDR: 10.233.0.0/18
+```
+
+### Start to Bootstrap a Cluster
+
+After you complete the configuration, you can execute the following command to start the installation:
+
+```bash
+./kk create cluster -f config-sample.yaml
+```
+
+Inspect the logs of installation:
+
+```bash
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+```
+When the installation finishes, you can see the following message:
+
+```bash
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+Console: http://10.128.0.44:30880
+Account: admin
+Password: P@88w0rd
+NOTES:
+  1. After logging into the console, please check the
+     monitoring status of service components in
+     the "Cluster Management". If any service is not
+     ready, please wait patiently until all components
+     are ready.
+  2. Please modify the default password after login.
+#####################################################
+https://kubesphere.io 2020-xx-xx xx:xx:xx
+```
+
+### Access KubeSphere Console
+
+Congratulations! Now you can access the KubeSphere console using http://10.128.0.44:30880 (replace the IP with yours).
+
+## Add Additional Ports
+
+Since we are using a self-hosted Kubernetes solution on Azure, the load balancer is not integrated with the [Kubernetes Service](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer). However, you can still manually map a NodePort to the PublicLB. Two steps are required.
+
+1. Create a new Load Balance Rule in the Load Balancer.
+   ![Load Balancer](/images/docs/aks/azure-vm-loadbalancer-rule.png)
+2. Create an Inbound Security rule to allow Internet access in the Network Security Group.
+   ![Firewall](/images/docs/aks/azure-vm-firewall.png)
diff --git a/content/zh/docs/installing-on-linux/public-cloud/install-ks-on-huaweicloud-ecs.md b/content/zh/docs/installing-on-linux/public-cloud/install-ks-on-huaweicloud-ecs.md
deleted file mode 100644
index 0f01cd76b..000000000
--- a/content/zh/docs/installing-on-linux/public-cloud/install-ks-on-huaweicloud-ecs.md
+++ /dev/null
@@ -1,263 +0,0 @@
----
-title: "KubeSphere 在华为云 ECS 高可用实例"
-keywords: "Kubesphere 安装, 华为云, ECS, 高可用性, 高可用性, 负载均衡器"
-description: "本教程用于安装高可用性集群"
-
-Weight: 2230
----
-
-由于对于生产环境,我们需要考虑集群的高可用性。教你部署如何在华为云 ECS 实例服务快速部署一套高可用的生产环境
-Kubernetes 服务需要做到高可用,需要保证 kube-apiserver 的 HA ,推荐华为云负载均衡器服务.
- - ## 前提条件 - - - 请遵循该[指南](https://github.com/kubesphere/kubekey),确保您已经知道如何将 KubeSphere 与多节点集群一起安装。有关用于安装的 config.yaml 文件的详细信息。本教程重点介绍配置华为云负载均衡器服务高可用安装。 - - 考虑到数据的持久性,对于生产环境,我们不建议您使用存储OpenEBS,建议 NFS , GlusterFS 等存储(需要提前安装)。文章为了进行开发和测试,集成的 OpenEBS 直接将 LocalPV 设置为存储服务。 - - SSH 可以访问所有节点。 - - 所有节点的时间同步。 - - Red Hat 在其 Linux 发行版本中包括了SELinux,建议关闭 SELinux 或者将 SELinux 的模式切换为 Permissive [宽容]工作模式。 - - ## 创建主机 - - 本示例创建 6 台 Ubuntu 18.04 server 64bit 的云服务器,每台配置为 4 核 8 GB - - | 主机IP | 主机名称 | 角色 | - | --- | --- | --- | - |192.168.1.10|master1|master1, etcd| - |192.168.1.11|master2|master2, etcd| - |192.168.1.12|master3|master3, etcd| - |192.168.1.13|node1|node| - |192.168.1.14|node2|node| - |192.168.1.15|node3|node| - - > 注意:机器有限,所以把 etcd 放入 master,在生产环境建议单独部署 etcd,提高稳定性 - - ## 华为云负载均衡器部署 - ### 创建 VPC - - 进入到华为云控制, 在左侧列表选择'虚拟私有云', 选择'创建虚拟私有云' 创建VPC,配置如下图 - - ![1-1-创建VPC](/images/docs/huawei-ecs/huawei-VPC-create.png) - - ### 创建安全组 - -在 `访问控制→ 安全组`下,创建一个安全组,设置入方向的规则参考如下: - -![2-1-创建安全组](/images/docs/huawei-ecs/huawei-rules-create.png) - > 提示:后端服务器的安全组规则必须放行 100.125.0.0/16 网段,否则会导致健康检查异常,详见 后端服务器配置安全组 。此外,还应放行 192.168.1.0/24 (主机之间的网络需全放行)。 - - ### 创建主机 -![3-1-选择主机配置](/images/docs/huawei-ecs/huawei-ECS-basic-settings.png) -在网络配置中,网络选择第一步创建的 VPC 和子网。在安全组中,选择上一步创建的安全组。 -![3-2-选择网络配置](/images/docs/huawei-ecs/huawei-ECS-network-settings.png) - -### 创建负载均衡器 -在左侧栏选择 '弹性负载均衡器',进入后选择 购买弹性负载均衡器 -> 以下健康检查结果在部署后才会显示正常,目前状态为异常 -#### 内网LB 配置 -为所有master 节点 添加后端监听器 ,监听端口为 6443 - -![4-1-配置内网LB](/images/docs/huawei-ecs/huawei-master-lb-basic-config.png) - -![4-2-配置内网LB](/images/docs/huawei-ecs/huawei-master-lb-listeners-config.png) -#### 外网LB 配置 -若集群需要配置公网访问,则需要为外网负载均衡器配置一个公网 IP为 所有节点 添加后端监听器,监听端口为 80(测试使用 30880 端口,此处 80 端口也需要在安全组中开放)。 - -![4-3-配置外网LB](/images/docs/huawei-ecs/huawei-public-lb-basic-config.png) - -![4-4-配置外网LB](/images/docs/huawei-ecs/huawei-public-lb-listeners-config.png) - -后面配置文件 config.yaml 需要配置 slb 分配的地址 - ```yaml - controlPlaneEndpoint: - domain: lb.kubesphere.local - address: "192.168.1.8" - port: "6443" -``` - ### 获取安装程序可执行文件 - - ```bash - #下载 kk installer 至任意一台机器 - curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk - chmod +x kk - ``` - -{{< notice tip >}} - - 您可以使用高级安装来控制自定义参数或创建多节点群集。具体来说,通过指定配置文件来创建集群。 - -{{}} - - ### 使用 kubekey 部署k8s集群和 KubeSphere 控制台 - - ```bash - # 在当前位置创建配置文件 master-HA.yaml |包含 KubeSphere 的配置文件 - ./kk create config --with-kubesphere v3.0.0 -f master-HA.yaml ---- -# 同时安装存储插件 (支持:localVolume、nfsClient、rbd、glusterfs)。您可以指定多个插件并用逗号分隔。请注意,您添加的第一个将是默认存储类。 -./kk create config --with-storage localVolume --with-kubesphere v3.0.0 -f master-HA.yaml - ``` - - ### 集群配置调整 -目前当前集群开启了全量的组件,文末也提供了自定义的方法.可默认为 false -```yaml -apiVersion: kubekey.kubesphere.io/v1alpha1 -kind: Cluster -metadata: - name: master-HA -spec: - hosts: - - {name: master1, address: 192.168.1.10, internalAddress: 192.168.1.10, password: yourpassword} # Assume that the default port for SSH is 22, otherwise add the port number after the IP address as above - - {name: master2, address: 192.168.1.11, internalAddress: 192.168.1.11, password: yourpassword} # Assume that the default port for SSH is 22, otherwise add the port number after the IP address as above - - {name: master3, address: 192.168.1.12, internalAddress: 192.168.1.12, password: yourpassword} # Assume that the default port for SSH is 22, otherwise add the port number after the IP address as above - - {name: node1, address: 192.168.1.13, internalAddress: 192.168.1.13, password: yourpassword} # Assume that the default port for SSH is 22, otherwise 
add the port number after the IP address as above - - {name: node2, address: 192.168.1.14, internalAddress: 192.168.1.14, password: yourpassword} # Assume that the default port for SSH is 22SSH is 22, otherwise add the port number after the IP address as above - - {name: node3, address: 192.168.1.15, internalAddress: 192.168.1.15, password: yourpassword} # Assume that the default port for SSH is 22, otherwise add the port number after the IP address as above - roleGroups: - etcd: - - master[1:3] - master: - - master[1:3] - worker: - - node[1:3] - controlPlaneEndpoint: - domain: lb.kubesphere.local - address: "192.168.1.8" - port: "6443" - kubernetes: - version: v1.17.9 - imageRepo: kubesphere - clusterName: cluster.local - masqueradeAll: false # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false] - maxPods: 110 # maxPods is the number of pods that can run on this Kubelet. [Default: 110] - nodeCidrMaskSize: 24 # internal network node size allocation. This is the size allocated to each node on your network. [Default: 24] - proxyMode: ipvs # mode specifies which proxy mode to use. [Default: ipvs] - network: - plugin: calico - calico: - ipipMode: Always # IPIP Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always] - vxlanMode: Never # VXLAN Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never] - vethMTU: 1440 # The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. [Default: 1440] - kubePodsCIDR: 10.233.64.0/18 - kubeServiceCIDR: 10.233.0.0/18 - registry: - registryMirrors: ["https://*.mirror.aliyuncs.com"] # # input your registryMirrors - insecureRegistries: [] - privateRegistry: "" - storage: - defaultStorageClass: localVolume - localVolume: - storageClassName: local - ---- -apiVersion: installer.kubesphere.io/v1alpha1 -kind: ClusterConfiguration -metadata: - name: ks-installer - namespace: kubesphere-system - labels: - version: v3.0.0 -spec: - local_registry: "" - persistence: - storageClass: "" - authentication: - jwtSecret: "" - etcd: - monitoring: true # Whether to install etcd monitoring dashboard - endpointIps: 192.168.1.10,192.168.1.11,192.168.1.12 # etcd cluster endpointIps - port: 2379 # etcd port - tlsEnable: true - common: - mysqlVolumeSize: 20Gi # MySQL PVC size - minioVolumeSize: 20Gi # Minio PVC size - etcdVolumeSize: 20Gi # etcd PVC size - openldapVolumeSize: 2Gi # openldap PVC size - redisVolumSize: 2Gi # Redis PVC size - es: # Storage backend for logging, tracing, events and auditing. - elasticsearchMasterReplicas: 1 # total number of master nodes, it's not allowed to use even number - elasticsearchDataReplicas: 1 # total number of data nodes - elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes - elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes - logMaxAge: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default. - elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log - # externalElasticsearchUrl: - # externalElasticsearchPort: - console: - enableMultiLogin: false # enable/disable multiple sing on, it allows an account can be used by different users at the same time. 
- port: 30880 - alerting: # Whether to install KubeSphere alerting system. It enables Users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from. - enabled: true - auditing: # Whether to install KubeSphere audit log system. It provides a security-relevant chronological set of records,recording the sequence of activities happened in platform, initiated by different tenants. - enabled: true - devops: # Whether to install KubeSphere DevOps System. It provides out-of-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image - enabled: true - jenkinsMemoryLim: 2Gi # Jenkins memory limit - jenkinsMemoryReq: 1500Mi # Jenkins memory request - jenkinsVolumeSize: 8Gi # Jenkins volume size - jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters - jenkinsJavaOpts_Xmx: 512m - jenkinsJavaOpts_MaxRAM: 2g - events: # Whether to install KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters. - enabled: true - logging: # Whether to install KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd. - enabled: true - logsidecarReplicas: 2 - metrics_server: # Whether to install metrics-server. IT enables HPA (Horizontal Pod Autoscaler). - enabled: true - monitoring: # - prometheusReplicas: 1 # Prometheus replicas are responsible for monitoring different segments of data source and provide high availability as well. - prometheusMemoryRequest: 400Mi # Prometheus request memory - prometheusVolumeSize: 20Gi # Prometheus PVC size - alertmanagerReplicas: 1 # AlertManager Replicas - multicluster: - clusterRole: none # host | member | none # You can install a solo cluster, or specify it as the role of host or member cluster - networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods). - enabled: true - notification: # It supports notification management in multi-tenant Kubernetes clusters. It allows you to set AlertManager as its sender, and receivers include Email, Wechat Work, and Slack. - enabled: true - openpitrix: # Whether to install KubeSphere Application Store. It provides an application store for Helm-based applications, and offer application lifecycle management - enabled: true - servicemesh: # Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offer visualization for traffic topology - enabled: true - ``` - - ### 执行命令创建集群 - ```bash - # 指定配置文件创建集群 - ./kk create cluster --with-kubesphere v3.0.0 -f master-HA.yaml - - # 查看 KubeSphere 安装日志 -- 直到出现控制台的访问地址和登陆账号 -kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f -``` - -```bash -##################################################### -### Welcome to KubeSphere! ### -##################################################### - -Console: http://192.168.1.10:30880 -Account: admin -Password: P@88w0rd - -NOTES: - 1. After logging into the console, please check the - monitoring status of service components in - the "Cluster Management". If any service is not - ready, please wait patiently until all components - are ready. 
- 2. Please modify the default password after login. - -##################################################### -https://kubesphere.io 2020-08-28 01:25:54 -##################################################### -``` -访问公网 IP + Port 为部署后的使用情况,使用默认账号密码 (`admin/P@88w0rd`),文章组件安装为最大化,登陆点击`平台管理>集群管理` 可看到下图安装组件列表和机器情况。 - - -## 如何自定义开启可插拔组件 - -点击 `集群管理` - `自定义资源CRD` ,在过滤条件框输入 `ClusterConfiguration` ,如图下 -![5-1-自定义组件](/images/docs/huawei-ecs/huawei-crds-config.png) -点击 `ClusterConfiguration` 详情,对 `ks-installer` 编辑保存退出即可,组件描述介绍:[文档说明](https://github.com/kubesphere/ks-installer/blob/master/deploy/cluster-configuration.yaml) -![5-2-自定义组件](/images/docs/huawei-ecs/huawei-crds-edit-yaml.png) diff --git a/content/zh/docs/installing-on-linux/public-cloud/install-ks-on-linux-airgapped.md b/content/zh/docs/installing-on-linux/public-cloud/install-ks-on-linux-airgapped.md deleted file mode 100644 index 26b3e4f04..000000000 --- a/content/zh/docs/installing-on-linux/public-cloud/install-ks-on-linux-airgapped.md +++ /dev/null @@ -1,224 +0,0 @@ ---- -title: "Air-Gapped Installation" -keywords: 'kubernetes, kubesphere, air gapped, installation' -description: 'How to install KubeSphere on air-gapped Linux machines' - - -weight: 2240 ---- - -The air-gapped installation is almost the same as the online installation except it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes on air-gapped environment. - -> Note: The dependencies in different operating systems may cause upexpected problems. If you encounter any installation problems on air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues). - -## Prerequisites - -- If your machine is behind a firewall, you need to open the ports by following the document [Ports Requirements](../port-firewall) for more information. -> - Installer will use `/var/lib/docker` as the default directory where all Docker related files, including the images, are stored. We recommend you to add additional storage to a disk with at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively, use the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference. -- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure supported persistent storage service](../storage-configuration) and prepare [high availability configuration](../master-ha) before installation. -- Since the air-gapped machines cannot connect to apt or yum source, please use clean Linux machine to avoid this problem. - -## Step 1: Prepare Linux Hosts - -The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements. - -- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit) -- Time synchronization is required across all nodes, otherwise the installation may not succeed; -- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`; -- If you are using `Ubuntu 18.04`, you need to use the user `root`. -- Ensure your disk of each node is at least 100G. 
-- CPU and memory in total of all machines: 2 cores and 4 GB for minimal installation; 8 cores and 16 GB for complete installation. - - -The following section describes an example to introduce multi-node installation. This example shows three hosts installation by taking the `master` serving as the taskbox to execute the installation. The following cluster consists of one Master and two Nodes. - -> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guide. - -| Host IP | Host Name | Role | -| --- | --- | --- | -|192.168.0.1|master|master, etcd| -|192.168.0.2|node1|node| -|192.168.0.3|node2|node| - -### Cluster Architecture - -#### Single Master, Single Etcd, Two Nodes - -![Architecture](/cluster-architecture.svg) - -## Step 2: Download Installer Package - -Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`. - -```bash -curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \ -&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf -``` - -## Step 3: Configure Host Template - -> This step is only for multi-node installation, you can skip this step if you choose all-in-one installation. - -Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using root user. The following is an example configuration for `CentOS 7.5` using root user. Note do not manually wrap any line in the file. - -> Note: -> -> - If you use non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`. -> - If the `root` user of that taskbox machine cannot establish SSH connection with the rest of machines, you need to refer to the `non-root` user example at the top of the `conf/hosts.ini`, but it is recommended to switch `root` user when executing `install.sh`. -> - master, node1 and node2 are the host names of each node and all host names should be in lowercase. - -### hosts.ini - -```ini -[all] -master ansible_connection=local ip=192.168.0.1 -node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD -node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD - -[local-registry] -master - -[kube-master] -master - -[kube-node] -node1 -node2 - -[etcd] -master - -[k8s-cluster:children] -kube-node -kube-master -``` - -> Note: -> -> - You need to replace each node information such as IP, password with real values in the group `[all]`. The master node is the taskbox so you do not need to add password field here. -> - Installer will use a node as the local registry for docker images, defaults to "master" in the group `[local-registry]`. -> - The "master" node also takes the role of master and etcd, so "master" is filled under the group`[kube-master]` and the group `[etcd]` respectively. -> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`. -> -> Parameters Specification: -> -> - `ansible_connection`: Connection type to the host, "local" in the example above means local connection. -> - `ansible_host`: The name of the host to be connected. -> - `ip`: The ip of the host to be connected. -> - `ansible_user`: The default ssh user name to use. -> - `ansible_become_pass`: Allows you to set the privilege escalation password. 
-> - `ansible_ssh_pass`: The password of the host to be connected using root. - -## Step 4: Enable All Components - -> This is step is complete installation. You can skip this step if you choose a minimal installation. - -Edit `conf/common.yaml`, reference the following changes with values being `true` which are `false` by default. - -```yaml -# LOGGING CONFIGURATION -# logging is an optional component when installing KubeSphere, and -# Kubernetes builtin logging APIs will be used if logging_enabled is set to false. -# Builtin logging only provides limited functions, so recommend to enable logging. -logging_enabled: true # Whether to install logging system -elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number -elasticsearch_data_replica: 2 # total number of data nodes -elasticsearch_volume_size: 20Gi # Elasticsearch volume size -log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default. -elk_prefix: logstash # the string making up index names. The index name will be formatted as ks--log -kibana_enabled: false # Kibana Whether to install built-in Grafana -#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address, KubeSphere supports integrate with Elasticsearch outside the cluster, which can reduce the resource consumption. -#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port - -#DevOps Configuration -devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image) -jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default -jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default -jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default -jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters -jenkinsJavaOpts_Xmx: 6g -jenkinsJavaOpts_MaxRAM: 8g -sonarqube_enabled: true # Whether to install built-in SonarQube -#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address, KubeSphere supports integrate with SonarQube outside the cluster, which can reduce the resource consumption. -#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token - -# Following components are all optional for KubeSphere, -# Which could be turned on to install it before installation or later by updating its value to true -openpitrix_enabled: true # KubeSphere application store -metrics_server_enabled: true # For KubeSphere HPA to use -servicemesh_enabled: true # KubeSphere service mesh system(Istio-based) -notification_enabled: true # KubeSphere notification system -alerting_enabled: true # KubeSphere alerting system -``` - -## Step 5: Install KubeSphere to Linux Machines - -> Note: -> -> - Generally, you can install KubeSphere without any modification, it will start with minimal installation by default. -> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions. -> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environment, please [configure supported persistent storage service](../storage-configuration) before installation. -> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not use the two IP range. 
You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts. - -**1.** Enter `scripts` folder, and execute `install.sh` using `root` user: - -```bash -cd ../cripts -./install.sh -``` - -**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume. - -```bash -################################################ - KubeSphere Installer Menu -################################################ -* 1) All-in-one -* 2) Multi-node -* 3) Quit -################################################ -https://kubesphere.io/ 2020-02-24 -################################################ -Please input an option: 2 - -``` - -**3.** Verify the multi-node installation: - -**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go. - -```bash -successsful! -##################################################### -### Welcome to KubeSphere! ### -##################################################### - -Console: http://192.168.0.1:30880 -Account: admin -Password: P@88w0rd - -NOTE:Please modify the default password after login. -##################################################### -``` - -> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components). - -**(2).** You will be able to use default account and password `admin / P@88w0rd` to log in the console `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in. - -![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png) - -Note: After log in console, please verify the monitoring status of service components in the "Cluster Status". If any service is not ready, please wait patiently untill all components get running up. - -![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png) - -## Enable Pluggable Components - -If you already have set up minimal installation, you still can edit the ConfigMap of ks-installer using the following command. Please make sure there is enough resource in your machines, see [Pluggable Components Overview](/en/installation/pluggable-components/). - -```bash -kubectl edit cm -n kubesphere-system ks-installer -``` - -## FAQ - -If you have further questions please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues). diff --git a/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-ali-ecs.md b/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-ali-ecs.md deleted file mode 100644 index 27cf8e2c7..000000000 --- a/content/zh/docs/installing-on-linux/public-cloud/install-kubesphere-on-ali-ecs.md +++ /dev/null @@ -1,276 +0,0 @@ ---- -title: "KubeSphere 在阿里云 ECS 高可用实例" -keywords: "Kubesphere 安装, 阿里云, ECS, 高可用性, 高可用性, 负载均衡器" -description: "本教程用于安装高可用性集群" - -Weight: 2230 ---- - -由于对于生产环境,我们需要考虑集群的高可用性。教你部署如何在阿里 ECS 实例服务快速部署一套高可用的生产环境 -Kubernetes 服务需要做到高可用,需要保证 kube-apiserver 的 HA ,推荐下列两种方式 - 1. 阿里云 SLB - 2. 
keepalived + haproxy [keepalived + haproxy](https://kubesphere.com.cn/forum/d/1566-kubernetes-keepalived-haproxy)对 kube-apiserver 进行负载均衡,实现高可用 kubernetes 集群。 - - ## 前提条件 - - - 请遵循该[指南](https://github.com/kubesphere/kubekey),确保您已经知道如何将 KubeSphere 与多节点集群一起安装。有关用于安装的 config.yaml 文件的详细信息。本教程重点介绍配置阿里负载均衡器服务高可用安装。 - - 考虑到数据的持久性,对于生产环境,我们不建议您使用存储OpenEBS,建议 NFS , GlusterFS 等存储(需要提前安装)。文章为了进行开发和测试,集成的 OpenEBS 直接将 LocalPV 设置为存储服务。 - - SSH 可以访问所有节点。 - - 所有节点的时间同步。 - - Red Hat 在其 Linux 发行版本中包括了SELinux,建议关闭 SELinux 或者将 SELinux 的模式切换为 Permissive [宽容]工作模式。 - - ## 部署架构 - - ![部署架构](/images/docs/ali-ecs/ali.png) - - ## 创建主机 - - 本示例创建 SLB + 6 台 **CentOS Linux release 7.6.1810 (Core)** 的虚拟机,每台配置为 2Core4GB40G - - | 主机IP | 主机名称 | 角色 | - | --- | --- | --- | - |39.104.82.170|Eip|slb| - |172.24.107.72|master1|master1, etcd| - |172.24.107.73|master2|master2, etcd| - |172.24.107.74|master3|master3, etcd| - |172.24.107.75|node1|node| - |172.24.107.76|node2|node| - |172.24.107.77|node3|node| - - > 注意:机器有限,所以把 etcd 放入 master,在生产环境建议单独部署 etcd,提高稳定性 - - ## 使用阿里 SLB 部署 - ### 创建 SLB - - 进入到阿里云控制, 在左侧列表选择'负载均衡', 选择'实例管理' 进入下图, 选择'创建负载均衡' - - ![1-1-创建slb](/images/docs/ali-ecs/ali-slb-create.png) - - ### 配置 SLB - - 配置规格根据自身流量规模创建 - - ![2-1-创建slb](/images/docs/ali-ecs/ali-slb-config.png) - -后面的 config.yaml 需要配置 slb 分配的地址 - ```yaml - controlPlaneEndpoint: - domain: lb.kubesphere.local - address: "39.104.82.170" - port: "6443" -``` - ### 配置SLB 主机实例 - - 需要在服务器组添加需要负载的3台 master 主机后按下图顺序配置监听 TCP 6443 端口( api-server ) - -![3-1-添加主机](/images/docs/ali-ecs/ali-slb-add.png) - -![3-2-配置监听端口](/images/docs/ali-ecs/ali-slb-listen-conf1.png) - -![3-3-配置监听端口](/images/docs/ali-ecs/ali-slb-listen-conf2.png) - -![3-4-配置监听端口](/images/docs/ali-ecs/ali-slb-listen-conf3.png) - -再按上述操作配置监听 HTTP 30880 端口( ks-console ),主机添加选择全部主机节点。 - -![3-5-配置监听端口](/images/docs/ali-ecs/ali-slb-listen-conf4.png) - -- 现在的健康检查暂时是失败的,因为还没部署 master 的服务,所以端口 telnet 不通的。 -- 然后提交审核即可 - - ### 获取安装程序可执行文件 - - ```bash - #下载 kk installer 至任意一台机器 - curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk - chmod +x kk - ``` - -{{< notice tip >}} - - 您可以使用高级安装来控制自定义参数或创建多节点群集。具体来说,通过指定配置文件来创建集群。 - -{{}} - - ### 使用 kubekey 部署k8s集群和 KubeSphere 控制台 - - ```bash - # 在当前位置创建配置文件 config-sample.yaml |包含 KubeSphere 的配置文件 - ./kk create config --with-kubesphere v3.0.0 -f config-sample.yaml ---- -# 同时安装存储插件 (支持:localVolume、nfsClient、rbd、glusterfs)。您可以指定多个插件并用逗号分隔。请注意,您添加的第一个将是默认存储类。 -./kk create config --with-storage localVolume --with-kubesphere v3.0.0 -f config-sample.yaml - ``` - ### 集群配置调整 - - ```yaml - #vi ~/config-sample.yaml - apiVersion: kubekey.kubesphere.io/v1alpha1 - kind: Cluster - metadata: - name: config-sample - spec: - hosts: - - {name: master1, address: 172.24.107.72, internalAddress: 172.24.107.72, user: root, password: QWEqwe123} - - {name: master2, address: 172.24.107.73, internalAddress: 172.24.107.73, user: root, password: QWEqwe123} - - {name: master3, address: 172.24.107.74, internalAddress: 172.24.107.74, user: root, password: QWEqwe123} - - {name: node1, address: 172.24.107.75, internalAddress: 172.24.107.75, user: root, password: QWEqwe123} - - {name: node2, address: 172.24.107.76, internalAddress: 172.24.107.76, user: root, password: QWEqwe123} - - {name: node3, address: 172.24.107.77, internalAddress: 172.24.107.77, user: root, password: QWEqwe123} - - roleGroups: - etcd: - - master1 - - master2 - - master3 - master: - - master1 - - master2 - - master3 - worker: - - node1 - - node2 - - node3 - controlPlaneEndpoint: - domain: lb.kubesphere.local - address: 
"39.104.82.170" - port: "6443" - kubernetes: - version: v1.17.9 - imageRepo: kubesphere - clusterName: cluster.local - network: - plugin: calico - kubePodsCIDR: 10.233.64.0/18 - kubeServiceCIDR: 10.233.0.0/18 - registry: - registryMirrors: ["https://*.mirror.aliyuncs.com"] # # input your registryMirrors - insecureRegistries: [] - storage: - defaultStorageClass: localVolume - localVolume: - storageClassName: local - - --- - apiVersion: installer.kubesphere.io/v1alpha1 - kind: ClusterConfiguration - metadata: - name: ks-installer - namespace: kubesphere-system - labels: - version: v3.0.0 - spec: - local_registry: "" - persistence: - storageClass: "" - authentication: - jwtSecret: "" - etcd: - monitoring: true - endpointIps: 172.24.107.72,172.24.107.73,172.24.107.74 - port: 2379 - tlsEnable: true - common: - es: - elasticsearchDataVolumeSize: 20Gi - elasticsearchMasterVolumeSize: 4Gi - elkPrefix: logstash - logMaxAge: 7 - mysqlVolumeSize: 20Gi - minioVolumeSize: 20Gi - etcdVolumeSize: 20Gi - openldapVolumeSize: 2Gi - redisVolumSize: 2Gi - console: - enableMultiLogin: false # enable/disable multi login - port: 30880 - alerting: - enabled: false - auditing: - enabled: false - devops: - enabled: false - jenkinsMemoryLim: 2Gi - jenkinsMemoryReq: 1500Mi - jenkinsVolumeSize: 8Gi - jenkinsJavaOpts_Xms: 512m - jenkinsJavaOpts_Xmx: 512m - jenkinsJavaOpts_MaxRAM: 2g - events: - enabled: false - ruler: - enabled: true - replicas: 2 - logging: - enabled: false - logsidecarReplicas: 2 - metrics_server: - enabled: true - monitoring: - prometheusMemoryRequest: 400Mi - prometheusVolumeSize: 20Gi - multicluster: - clusterRole: none # host | member | none - networkpolicy: - enabled: false - notification: - enabled: false - openpitrix: - enabled: false - servicemesh: - enabled: false - ``` - - ### 执行命令创建集群 - ```bash - # 指定配置文件创建集群 -./kk create cluster -f config-sample.yaml - - # 查看 KubeSphere 安装日志 -- 直到出现控制台的访问地址和登陆账号 -kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f -``` - -```bash -************************************************** -##################################################### -### Welcome to KubeSphere! ### -##################################################### - -Console: http://172.24.107.72:30880 -Account: admin -Password: P@88w0rd - -NOTES: - 1. After logging into the console, please check the - monitoring status of service components in - the "Cluster Management". If any service is not - ready, please wait patiently until all components - are ready. - 2. Please modify the default password after login. - -##################################################### -https://kubesphere.io 2020-08-24 23:30:06 -##################################################### -``` - - - 访问公网 IP + Port 为部署后的使用情况,使用默认账号密码 (`admin/P@88w0rd`),文章安装为最小化,登陆点击`工作台` 可看到下图安装组件列表和机器情况。 - - ![面板图](/images/docs/ali-ecs/succes.png) - -## 如何自定义开启可插拔组件 - - + 点击 `集群管理` - `自定义资源CRD` ,在过滤条件框输入 `ClusterConfiguration` ,如图下 - - ![修改KsInstaller](/images/docs/ali-ecs/update_crd.png) - - + 点击 `ClusterConfiguration` 详情,对 `ks-installer` 编辑保存退出即可,组件描述介绍:[文档说明](https://github.com/kubesphere/ks-installer/blob/master/deploy/cluster-configuration.yaml) - - ![修改KsInstaller](/images/docs/ali-ecs/ks-install-source.png) - -## 安装问题 - -> 提示: 如果安装过程中碰到 `Failed to add worker to cluster: Failed to exec command...` ->
-
-``` bash 处理方式
- kubeadm reset
-```
diff --git a/content/zh/docs/installing-on-linux/public-cloud/kubesphere-on-qingcloud-instance.md b/content/zh/docs/installing-on-linux/public-cloud/kubesphere-on-qingcloud-instance.md
new file mode 100644
index 000000000..bfabcc660
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/public-cloud/kubesphere-on-qingcloud-instance.md
@@ -0,0 +1,310 @@
+---
+title: "KubeSphere on QingCloud Instance"
+keywords: "KubeSphere, Installation, HA, High-availability, LoadBalancer"
+description: "The tutorial is for installing a high-availability cluster."
+
+weight: 2229
+---
+
+## Introduction
+
+For a production environment, we need to consider the high availability of the cluster. If the key components (e.g. kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). In addition, Keepalived with [HAproxy](https://www.haproxy.com/), or Nginx, is also an alternative for creating high-availability clusters.
+
+This tutorial walks you through an example of how to create two [QingCloud Load Balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal and the external load balancer respectively, and how to implement high availability of the master and etcd nodes using these load balancers.
+
+## Prerequisites
+
+- Please make sure that you already know how to install KubeSphere with a multi-node cluster by following the [guide](https://github.com/kubesphere/kubekey). For detailed information about the config YAML file used for installation, see Multi-node Installation. This tutorial focuses more on how to configure load balancers.
+- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or follow the guide of any other cloud provider to create load balancers.
+- Considering data persistence, for a production environment, we recommend you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
+
+## Architecture
+
+This example prepares six machines of **Ubuntu 16.04.6**. We will create two load balancers, and deploy three master and etcd nodes on three of the machines. You can configure these master and etcd nodes in `config-sample.yaml` of KubeKey (note that this is the default file name, which you can change).
+
+![kubesphere-ha-architecture](https://ap3.qingstor.com/kubesphere-website/docs/ha-architecture.png)
+
+{{< notice note >}}
+
+The Kubernetes document [Options for Highly Available topology](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/) demonstrates that there are two options for configuring the topology of a highly available (HA) Kubernetes cluster, i.e. stacked etcd topology and external etcd topology. You should carefully consider the advantages and disadvantages of each topology before setting up an HA cluster according to [this document](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/). In this guide, we adopt the stacked etcd topology to bootstrap an HA cluster for convenient demonstration.
+
+{{}}
+
+## Install HA Cluster
+
+### Create Load Balancers
+
+This step demonstrates how to create load balancers on the QingCloud platform.
+
+#### Create an Internal Load Balancer
+
+1. Log in to [QingCloud Console](https://console.qingcloud.com/login). In the menu on the left, under **Network & CDN**, select **Load Balancers**. Click **Create** to create a load balancer.
+
+![create-lb](https://ap3.qingstor.com/kubesphere-website/docs/create-lb.png)
+
+2. In the pop-up window, set a name for the load balancer. From the **Network** drop-down list, choose the VxNet where your machines are created (in this example, it is `pn`). Other fields can keep the default values as shown below. Click **Submit** to finish.
+
+![qingcloud-lb](https://ap3.qingstor.com/kubesphere-website/docs/qingcloud-lb.png)
+
+3. Click the load balancer. On its details page, create a listener that listens on port `6443` with the Listener Protocol set to `TCP`.
+
+![Listener](https://ap3.qingstor.com/kubesphere-website/docs/listener.png)
+
+- Name: Define a name for this listener
+- Listener Protocol: Select the `TCP` protocol
+- Port: `6443`
+- Load mode: `Poll`
+
+Click **Submit** to continue.
+
+{{< notice note >}}
+
+After you create the listener, please check the firewall rules of the load balancer. Make sure that port `6443` has been added to the firewall rules and that external traffic can pass through `6443`. Otherwise, the installation will fail. If you are using the QingCloud platform, you can find this information in **Security Groups** under **Security**.
+
+{{}}
+
+4. Click **Add Backend**, and choose the VxNet you just selected (in this example, it is `pn`). Click the button **Advanced Search**, choose the three master nodes, and set the port to `6443`, which is the default secure port of the api-server.
+
+![add-backend](https://ap3.qingstor.com/kubesphere-website/docs/3-master.png)
+
+Click **Submit** when you finish.
+
+5. Click the button **Apply Changes** to activate the configurations. At this point, you can find that the three masters have been added as the backend servers of the listener behind the internal load balancer.
+
+{{< notice note >}}
+
+The status of all masters might show `Not Available` after you add them as backends. This is normal since port `6443` of the api-server is not active on the master nodes yet. The status will change to `Active` and the api-server port will be exposed after the installation finishes, which means the internal load balancer you configured works as expected.
+
+{{}}
+
+![apply-changes](https://ap3.qingstor.com/kubesphere-website/docs/apply-change.png)
+
+Record the Intranet VIP shown under **Networks**. The IP address will be added to the config yaml file later.
+
+#### Create an External Load Balancer
+
+You need to create an EIP in advance. To create an EIP, go to **Elastic IPs** under **Network & CDN**.
+
+{{< notice note >}}
+
+Two elastic IPs are needed for this whole tutorial, one for the VPC network and the other for the external load balancer created in this step. You cannot associate the same EIP to the VPC network and the load balancer at the same time.
+
+{{}}
+
+6. Similarly, create an external load balancer, but do not select a VxNet for the **Network** field. Bind the EIP that you created to this load balancer by clicking **Add IPv4**.
+
+![bind-eip](https://ap3.qingstor.com/kubesphere-website/docs/bind-eip.png)
+
+7. On the details page of the external load balancer, create a listener that listens on port `30880` (the NodePort of the KubeSphere console) with the Listener Protocol set to `HTTP`.
+
+{{< notice note >}}
+
+After you create the listener, please check the firewall rules of the load balancer. Make sure that port `30880` has been added to the firewall rules and that external traffic can pass through `30880`. Otherwise, the installation will fail. If you are using the QingCloud platform, you can find this information in **Security Groups** under **Security**.
+
+{{}}
+
+![listener2](https://ap3.qingstor.com/kubesphere-website/docs/listener2.png)
+
+8. Click **Add Backend**. In **Advanced Search**, choose the six machines on which we are going to install KubeSphere within the VxNet `pn`, and set the port to `30880`.
+
+![six-instances](https://ap3.qingstor.com/kubesphere-website/docs/six-instances.png)
+
+Click **Submit** when you finish.
+
+9. Click **Apply Changes** to activate the configurations. At this point, you can find that the six machines have been added as the backend servers of the listener behind the external load balancer.
+
+### Download KubeKey
+
+[KubeKey](https://github.com/kubesphere/kubekey) is the next-gen installer which is used for installing Kubernetes and KubeSphere v3.0.0 quickly, flexibly and easily.
+
+{{< tabs >}}
+
+{{< tab "For users with poor network to GitHub" >}}
+
+For users in China, you can download the installer using this link.
+
+```bash
+wget https://kubesphere.io/kubekey/releases/v1.0.0
+```
+{{}}
+
+{{< tab "For users with good network to GitHub" >}}
+
+For users with good network to GitHub, you can download it from the [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly.
+
+```bash
+wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
+```
+{{}}

+{{}}
+
+Extract it.
+
+```bash
+tar -zxvf v1.0.0
+```
+
+Grant the execution permission to `kk`:
+
+```bash
+chmod +x kk
+```
+
+Then create an example configuration file with default configurations. Here we use Kubernetes v1.17.9 as an example.
+
+```bash
+./kk create config --with-kubesphere v3.0.0 --with-kubernetes v1.17.9
+```
+
+> Tip: These Kubernetes versions have been fully tested with KubeSphere: *v1.15.12*, *v1.16.13*, *v1.17.9* (default), *v1.18.6*.
+
+### Cluster Node Planning
+
+As we adopt the HA topology with stacked control plane nodes, where the etcd members are colocated with the masters, we will place the master and etcd nodes on the same three machines.
+
+| **Property** | **Description** |
+| :----------- | :-------------------------------- |
+| `hosts` | Detailed information of all nodes |
+| `etcd` | etcd node names |
+| `master` | Master node names |
+| `worker` | Worker node names |
+
+- Put the master node names (master1, master2 and master3) under `etcd` and `master` respectively as below, which means these three machines will be assigned both the master and the etcd role. Please note that the number of etcd nodes needs to be odd. Meanwhile, we do not recommend installing etcd on worker nodes since the memory consumption of etcd is very high. Edit the configuration file; we use **Ubuntu 16.04.6** in this example.
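+
+Before editing, it is worth confirming that the taskbox can log in to every host over SSH and that the node clocks are in sync, since time drift between nodes is a common cause of installation failures. A minimal pre-flight sketch (the IP addresses and the `ubuntu` user are assumptions that match the sample below):
+
+```bash
+# Check SSH reachability and compare clocks from the taskbox (hypothetical host list)
+for ip in 192.168.0.2 192.168.0.3 192.168.0.4 192.168.0.5 192.168.0.6 192.168.0.7; do
+  ssh -o ConnectTimeout=5 ubuntu@"$ip" 'hostname; date'
+done
+```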
+
+#### config-sample.yaml Example
+
+```yaml
+spec:
+  hosts:
+  - {name: master1, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
+  - {name: master2, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
+  - {name: master3, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
+  - {name: node1, address: 192.168.0.5, internalAddress: 192.168.0.5, user: ubuntu, password: Testing123}
+  - {name: node2, address: 192.168.0.6, internalAddress: 192.168.0.6, user: ubuntu, password: Testing123}
+  - {name: node3, address: 192.168.0.7, internalAddress: 192.168.0.7, user: ubuntu, password: Testing123}
+  roleGroups:
+    etcd:
+    - master1
+    - master2
+    - master3
+    master:
+    - master1
+    - master2
+    - master3
+    worker:
+    - node1
+    - node2
+    - node3
+```
+
+For a complete explanation of the configuration sample, please see [this file](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md).
+
+### Configure the Load Balancer
+
+In addition to the node information, you need to provide the load balancer information in the same yaml file. For the Intranet VIP address, you can find it in step 5 mentioned above. Assume the VIP address and listening port of the **internal load balancer** are `192.168.0.253` and `6443` respectively; then you can refer to the following example.
+
+#### The configuration example in config-sample.yaml
+
+```yaml
+## Internal LB config example
+## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
+  controlPlaneEndpoint:
+    domain: lb.kubesphere.local
+    address: "192.168.0.253"
+    port: "6443"
+```
+
+{{< notice note >}}
+
+- The address and port should be indented by two spaces in `config-sample.yaml`, and the address should be the VIP.
+- The domain name of the load balancer is `lb.kubesphere.local` by default for internal access. If you need to change the domain name, please uncomment and modify it.
+
+{{}}
+
+After that, you can enable any components you need by following **Enable Pluggable Components** and start your HA cluster installation.
+
+### Kubernetes Cluster Configuration (Optional)
+
+KubeKey provides some fields and parameters that allow the cluster administrator to customize the Kubernetes installation, including the Kubernetes version, network plugins and image registry. There are some default values provided in `config-sample.yaml`. Optionally, you can modify the Kubernetes-related configuration in `config-sample.yaml` according to your needs. See [config-example.md](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md) for a detailed explanation.
+
+### Persistent Storage Plugin Configuration
+
+As we mentioned in the prerequisites, considering data persistence in a production environment, you need to prepare persistent storage and configure the storage plugin (e.g. CSI) in `config-sample.yaml` to define which storage service you want.
+
+{{< notice note >}}
+
+For testing or development, you can skip this part. KubeKey will use the integrated OpenEBS to provision LocalPV as the storage service directly.
+
+{{}}
+
+**Available Storage Plugins & Clients**
+
+- Ceph RBD & CephFS
+- GlusterFS
+- NFS
+- QingCloud CSI
+- QingStor CSI
+- More plugins are WIP, which will be added soon
+
+For each storage plugin configuration, you can refer to [config-example.md](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md) for a detailed explanation. Make sure you have configured the storage plugin before you get started. KubeKey will create a StorageClass and persistent volumes for related workloads during the installation.
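+
+Once the installation finishes, you can double-check which StorageClass actually backs the cluster before putting production data on it. A quick sketch using standard kubectl commands (nothing KubeKey-specific is assumed):
+
+```bash
+# List StorageClasses; the default one is marked "(default)"
+kubectl get storageclass
+# Inspect the persistent volume claims created for KubeSphere system components
+kubectl get pvc --all-namespaces
+```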
+
+### Enable Pluggable Components (Optional)
+
+KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable them either before or after installation. By default, KubeSphere starts with a minimal installation if you do not enable them.
+
+You can enable any of them according to your demands. It is highly recommended that you install these pluggable components to discover the full-stack features and capabilities provided by KubeSphere. Please ensure your machines have sufficient CPU and memory before enabling them. See [Enable Pluggable Components](https://github.com/kubesphere/ks-installer#enable-pluggable-components) for details.
+
+### Start to Bootstrap a Cluster
+
+After you complete the configuration, you can execute the following command to start the installation:
+
+```bash
+./kk create cluster -f config-sample.yaml
+```
+
+### Verify the Installation
+
+Inspect the installation logs. When you see the following success message, congratulations, and enjoy it!
+
+```bash
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+```
+
+```bash
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.3:30880
+Account: admin
+Password: P@88w0rd
+
+NOTES:
+  1. After logging into the console, please check the
+     monitoring status of service components in
+     the "Cluster Management". If any service is not
+     ready, please wait patiently until all components
+     are ready.
+  2. Please modify the default password after login.
+
+#####################################################
+https://kubesphere.io             2020-08-13 10:50:24
+#####################################################
+```
+
+### Verify the HA Cluster
+
+Now that you have finished the installation, you can go back to the details pages of both the internal and external load balancers to check their status.
+
+![LB active](https://ap3.qingstor.com/kubesphere-website/docs/active.png)
+
+Both listeners show that the status is `Active`, meaning the backend nodes are up and running.
+
+![active-listener](https://ap3.qingstor.com/kubesphere-website/docs/active-listener.png)
+
+In the web console of KubeSphere, you can also see that all the nodes are functioning well.
+
+![cluster-node](https://ap3.qingstor.com/kubesphere-website/docs/cluster-node.png)
+
+To verify whether the cluster is highly available, you can turn off an instance on purpose. For example, the above dashboard is accessed through the address `EIP:30880` (the EIP here is the one bound to the external load balancer). If the cluster is highly available, the dashboard will still work well even if you shut down a master node.
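+
+A rough way to exercise the failover path is to keep polling the cluster through the load-balanced endpoints while you power off one master. A minimal sketch (assuming the kubeconfig generated during installation, whose server address already points at `lb.kubesphere.local:6443`, and with `<EIP>` standing in for the address bound to the external load balancer):
+
+```bash
+# Poll the API server through the internal load balancer every 5 seconds
+watch -n 5 kubectl get nodes
+
+# In another terminal, confirm the console stays reachable through the external load balancer
+curl -sI http://<EIP>:30880 | head -n 1
+```
+
+If both commands keep succeeding after a master node is shut down, the load balancers are doing their job.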
diff --git a/content/zh/docs/installing-on-linux/public-cloud/master-ha.md b/content/zh/docs/installing-on-linux/public-cloud/master-ha.md deleted file mode 100644 index ee8f26203..000000000 --- a/content/zh/docs/installing-on-linux/public-cloud/master-ha.md +++ /dev/null @@ -1,152 +0,0 @@ ---- -title: "High Availability Configuration" -keywords: "kubesphere, kubernetes, docker,installation, HA, high availability" -description: "The guide for installing a high availability of KubeSphere cluster" - -weight: 2230 ---- - -## Introduction - -[Multi-node installation](../multi-node) can help you to quickly set up a single-master cluster on multiple machines for development and testing. However, we need to consider the high availability of the cluster for production. Since the key components on the master node, i.e. kube-apiserver, kube-scheduler, and kube-controller-manager are running on a single master node, Kubernetes and KubeSphere will be unavailable during the master being down. Therefore we need to set up a high availability cluster by provisioning load balancers and multiple masters. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). In addition, keepalved and Haproxy is also an alternative for creating such high-availability cluster. - -This document walks you through an example how to create two [QingCloud Load Balancer](https://docs.qingcloud.com/product/network/loadbalancer), serving as internal load balancer and external load balancer respectively, and how to configure the high availability of masters and Etcd using the load balancers. - -## Prerequisites - -- Please make sure that you already read [Multi-Node installation](../multi-node). This document only demonstrates how to configure load balancers. -- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or follow the guide of any other cloud provider to create load balancers. - -## Architecture - -This example prepares six machines of CentOS 7.5. We will create two load balancers, and deploy three masters and Etcd nodes on three of the machines. You can configure these masters and Etcd nodes in `conf/hosts.ini`. - -![Master and etcd node high availability architecture](https://pek3b.qingstor.com/kubesphere-docs/png/20200307215924.png) - -## Install HA Cluster - -### Step 1: Create Load Balancers - -This step briefly shows an example of creating a load balancer on QingCloud platform. - -#### Create an Internal Load Balancer - -1.1. Log in [QingCloud Console](https://console.qingcloud.com/login) and select **Network & CDN → Load Balancers**, then click on the create button and fill in the basic information. - -1.2. Choose the VxNet that your machines are created within from the **Network** dropdown list. Here is `kube`. Other settings can be default values as follows. Click **Submit** to complete the creation. - -![Create Internal LB on QingCloud](https://pek3b.qingstor.com/kubesphere-docs/png/20200215224125.png) - -1.3. Drill into the detail page of the load balancer, then create a listener that listens to the port `6443` of the `TCP` protocol. - -- Name: Define a name for this Listener -- Listener Protocol: Select `TCP` protocol -- Port: `6443` -- Load mode: `Poll` - -> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that the port `6443` has been added to the firewall rules and the external traffic can pass through `6443`. Otherwise, the installation will fail. 
- -![Add Listener to LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225205.png) - -1.4. Click **Add Backend**, choose the VxNet `kube` that we chose. Then click on the button **Advanced Search** and choose the three master nodes under the VxNet and set the port to `6443` which is the default secure port of api-server. - -Click **Submit** when you are done. - -![Choose Backends](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225550.png) - -1.5. Click on the button **Apply Changes** to activate the configurations. At this point, you can find the three masters have been added as the backend servers of the listener that is behind the internal load balancer. - -> Please note: The status of all masters might shows `Not available` after you added them as backends. This is normal since the port `6443` of api-server are not active in masters yet. The status will change to `Active` and the port of api-server will be exposed after installation complete, which means the internal load balancer you configured works as expected. - -![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215230107.png) - -#### Create an External Load Balancer - -You need to create an EIP in advance. - -1.6. Similarly, create an external load balancer without joining any network, but associate the EIP that you created to this load balancer. - -1.7. Enter the load balancer detail page, create a listener that listens to the port `30880` of the `HTTP` protocol which is the nodeport of KubeSphere console.. - -> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that the port `30880` has been added to the firewall rules and the external traffic can pass through `6443`. Otherwise, the installation will fail. - -![Create external LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232114.png) - -1.8. Click **Add Backend**, then choose the `six` machines that we are going to install KubeSphere within the VxNet `Kube`, and set the port to `30880`. - -Click **Submit** when you are done. - -1.9. Click on the button **Apply Changes** to activate the configurations. At this point, you can find the six machines have been added as the backend servers of the listener that is behind the external load balancer. - -![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232445.png) - -### Step 2: Modify the host.ini - -Go to the taskbox where you downloaded the installer by following the [Multi-node Installation](../multi-node) and complete the following configurations. - -| **Parameter** | **Description** | -|--------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `[all]` | node information. Use the following syntax if you run installation as `root` user:
- ` ansible_connection= ip=`
- ` ansible_host= ip= ansible_ssh_pass=`
If you log in as a non-root user, use the syntax:
- ` ansible_connection= ip= ansible_user= ansible_become_pass=` | -| `[kube-master]` | master node names | -| `[kube-node]` | worker node names | -| `[etcd]` | etcd node names. The number of `etcd` nodes needs to be odd. | -| `[k8s-cluster:children]` | group names of `[kube-master]` and `[kube-node]` | - - -We use **CentOS 7.5** with `root` user to install an HA cluster. Please see the following configuration as an example: - -> Note: ->
-> If the _taskbox_ cannot establish `ssh` connection with the rest nodes, try to use the non-root user configuration. - -#### host.ini example - -```ini -[all] -master1 ansible_connection=local ip=192.168.0.1 -master2 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD -master3 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD -node1 ansible_host=192.168.0.4 ip=192.168.0.4 ansible_ssh_pass=PASSWORD -node2 ansible_host=192.168.0.5 ip=192.168.0.5 ansible_ssh_pass=PASSWORD -node3 ansible_host=192.168.0.6 ip=192.168.0.6 ansible_ssh_pass=PASSWORD - -[kube-master] -master1 -master2 -master3 - -[kube-node] -node1 -node2 -node3 - -[etcd] -master1 -master2 -master3 - -[k8s-cluster:children] -kube-node -kube-master -``` - -### Step 3: Configure the Load Balancer Parameters - -Besides configuring the `common.yaml` by following the [Multi-node Installation](../multi-node), you need to modify the load balancer information in the `common.yaml`. Assume the **VIP** address and listening port of the **internal load balancer** are `192.168.0.253` and `6443`, then you can refer to the following example. - -> - Note that address and port should be indented by two spaces in `common.yaml`, and the address should be VIP. -> - The domain name of the load balancer is "lb.kubesphere.local" by default for internal access. If you need to change the domain name, please uncomment and modify it. - -#### The configuration sample in common.yaml - -```yaml -## External LB example config -## apiserver_loadbalancer_domain_name: "lb.kubesphere.local" -loadbalancer_apiserver: - address: 192.168.0.253 - port: 6443 -``` - -Finally, please refer to the [guide](../storage-configuration) to configure the persistent storage service in `common.yaml` and start your HA cluster installation. - -Then it is ready to install the high availability KubeSphere cluster. diff --git a/content/zh/docs/installing-on-linux/public-cloud/multi-node.md b/content/zh/docs/installing-on-linux/public-cloud/multi-node.md deleted file mode 100644 index d1cd790ea..000000000 --- a/content/zh/docs/installing-on-linux/public-cloud/multi-node.md +++ /dev/null @@ -1,176 +0,0 @@ ---- -title: "Multi-node Installation" -keywords: 'kubesphere, kubernetes, docker, kubesphere installer' -description: 'The guide for installing KubeSphere on Multi-Node in development or testing environment' - -weight: 2220 ---- - -`Multi-Node` installation enables installing KubeSphere on multiple nodes. Typically, use any one node as _taskbox_ to run the installation task. Please note `ssh` communication is required to be established between taskbox and other nodes. - -- The following instructions are for the default installation without enabling any optional components as we have made them pluggable since v2.1.0. If you want to enable any one, please read [Enable Pluggable Components](../pluggable-components). -- If your machines in total have >= 8 cores and >= 16G memory, we recommend you to install the full package of KubeSphere by [Enabling Optional Components](../complete-installation). -- The installation time depends on your network bandwidth, your computer configuration, the number of nodes, etc. - -## Prerequisites - -If your machine is behind a firewall, you need to open the ports by following the document [Ports Requirements](../port-firewall) for more information. - -## Step 1: Prepare Linux Hosts - -The following describes the requirements of hardware and operating system. 
To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements. - -- Time synchronization is required across all nodes, otherwise the installation may not succeed; -- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`; -- If you are using `Ubuntu 18.04`, you need to use the user `root`; -- If the Debian system does not have the sudo command installed, you need to execute `apt update && apt install sudo` command using root before installation. - -### Hardware Recommendation - -- KubeSphere can be installed on any cloud platform. -- The installation speed can be accelerated by increasing network bandwidth. -- If you choose air-gapped installation, ensure your disk of each node is at least 100G. - -| System | Minimum Requirements (Each node) | -| --- | --- | -| CentOS 7.4 ~ 7.7 (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:40 G | -| Ubuntu 16.04/18.04 LTS (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:40 G | -| Red Hat Enterprise Linux Server 7.4 (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:40 G | -| Debian Stretch 9.5 (64 bit)| CPU:2 Core, Memory:4 G, Disk Space:40 G | - -The following section describes an example to introduce multi-node installation. This example shows three hosts installation by taking the `master` serving as the taskbox to execute the installation. The following cluster consists of one Master and two Nodes. - -> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guide. - -| Host IP | Host Name | Role | -| --- | --- | --- | -|192.168.0.1|master|master, etcd| -|192.168.0.2|node1|node| -|192.168.0.3|node2|node| - -### Cluster Architecture - -#### Single Master, Single Etcd, Two Nodes - -![Architecture](/cluster-architecture.svg) - -## Step 2: Download Installer Package - -**1.** Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`. - -```bash -curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \ -&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf -``` - -**2.** Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using root user. The following is an example configuration for `CentOS 7.5` using root user. Note do not manually wrap any line in the file. - -> Note: -> -> - If you use non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`. -> - If the `root` user of that taskbox machine cannot establish SSH connection with the rest of machines, you need to refer to the `non-root` user example at the top of the `conf/hosts.ini`, but it is recommended to switch `root` user when executing `install.sh`. -> - master, node1 and node2 are the host names of each node and all host names should be in lowercase. - -### hosts.ini - -```ini -[all] -master ansible_connection=local ip=192.168.0.1 -node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD -node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD - -[kube-master] -master - -[kube-node] -node1 -node2 - -[etcd] -master - -[k8s-cluster:children] -kube-node -kube-master -``` - -> Note: -> -> - You need to replace each node information such as IP, password with real values in the group `[all]`. The master node is the taskbox so you do not need to add password field here. 
-> - The "master" node also takes the role of master and etcd, so "master" is filled under the group`[kube-master]` and the group `[etcd]` respectively. -> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`. -> -> Parameters Specification: -> -> - `ansible_connection`: Connection type to the host, "local" in the example above means local connection. -> - `ansible_host`: The name of the host to be connected. -> - `ip`: The ip of the host to be connected. -> - `ansible_user`: The default ssh user name to use. -> - `ansible_become_pass`: Allows you to set the privilege escalation password. -> - `ansible_ssh_pass`: The password of the host to be connected using root. - -## Step 3: Install KubeSphere to Linux Machines - -> Note: -> -> - Generally, you can install KubeSphere without any modification, it will start with minimal installation by default. -> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions. -> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environment, please [configure supported persistent storage service](../storage-configuration) before installation. -> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not use the two IP range. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts. - -**1.** Enter `scripts` folder, and execute `install.sh` using `root` user: - -```bash -cd ../cripts -./install.sh -``` - -**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume. - -```bash -################################################ - KubeSphere Installer Menu -################################################ -* 1) All-in-one -* 2) Multi-node -* 3) Quit -################################################ -https://kubesphere.io/ 2020-02-24 -################################################ -Please input an option: 2 - -``` - -**3.** Verify the multi-node installation: - -**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go. - -```bash -successsful! -##################################################### -### Welcome to KubeSphere! ### -##################################################### - -Console: http://192.168.0.1:30880 -Account: admin -Password: P@88w0rd - -NOTE:Please modify the default password after login. -##################################################### -``` - -> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components). - -**(2).** You will be able to use default account and password `admin / P@88w0rd` to log in the console `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in. - -![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png) - -Note: After log in console, please verify the monitoring status of service components in the "Cluster Status". If any service is not ready, please wait patiently untill all components get running up. 
- -![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png) - -## FAQ - -The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud, Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also please read the [FAQ of installation](../../faq/faq-install). - -If you have any further questions please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues). diff --git a/content/zh/docs/installing-on-linux/public-cloud/storage-configuration.md b/content/zh/docs/installing-on-linux/public-cloud/storage-configuration.md deleted file mode 100644 index a3d8d5156..000000000 --- a/content/zh/docs/installing-on-linux/public-cloud/storage-configuration.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -title: "StorageClass Configuration" -keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus' -description: 'Instructions for Setting up StorageClass for KubeSphere' - -weight: 2250 ---- - -Currently, Installer supports the following [Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/), providing persistent storage service for KubeSphere (more storage classes will be supported soon). - -- NFS -- Ceph RBD -- GlusterFS -- QingCloud Block Storage -- QingStor NeonSAN -- Local Volume (for development and test only) - -The versions of storage systems and corresponding CSI plugins in the table listed below have been well tested. - -| **Name** | **Version** | **Reference** | -| ----------- | --- |---| -Ceph RBD Server | v0.94.10 | For development and testing, refer to [Install Ceph Storage Server](/zh-CN/appendix/ceph-ks-install/) for details. Please refer to [Ceph Documentation](http://docs.ceph.com/docs/master/) for production. | -Ceph RBD Client | v12.2.5 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [Ceph RBD](../storage-configuration/#ceph-rbd) | -GlusterFS Server | v3.7.6 | For development and testing, refer to [Deploying GlusterFS Storage Server](/zh-CN/appendix/glusterfs-ks-install/) for details. Please refer to [Gluster Documentation](https://www.gluster.org/install/) or [Gluster Documentation](http://gluster.readthedocs.io/en/latest/Install-Guide/Install/) for production. Note you need to install [Heketi Manager (V3.0.0)](https://github.com/heketi/heketi/tree/master/docs/admin). | -|GlusterFS Client |v3.12.10|Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [GlusterFS](../storage-configuration/#glusterfs)| -|NFS Client | v3.1.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared NFS storage server. Please see [NFS Client](../storage-configuration/#nfs) | -QingCloud-CSI|v0.2.0.1|You need to configure the corresponding parameters in `common.yaml` before installing KubeSphere. Please refer to [QingCloud CSI](../storage-configuration/#qingcloud-csi) for details| -NeonSAN-CSI|v0.3.0| Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared QingStor NeonSAN storage server. Please see [Neonsan-CSI](../storage-configuration/#neonsan-csi) | - -> Note: You are only allowed to set ONE default storage classes in the cluster. To specify a default storage class, make sure there is no default storage class already exited in the cluster. 
- -## Storage Configuration - -After preparing the storage server, you need to refer to the parameters description in the following table. Then modify the corresponding configurations in `conf/common.yaml` accordingly. - -The following describes the storage configuration in `common.yaml`. - -> Note: Local Volume is configured as the default storage class in `common.yaml` by default. If you are going to set other storage class as the default, disable the Local Volume and modify the configuration for other storage class. - -### Local Volume (For developing or testing only) - -A [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) represents a mounted local storage device such as a disk, partition or directory. Local volumes can only be used as a statically created PersistentVolume. We recommend you to use Local volume in testing or development only since it is quick and easy to install KubeSphere without the struggle to set up persistent storage server. Refer to following table for the definition in `conf/common.yaml`. - -| **Local volume** | **Description** | -| --- | --- | -| local\_volume\_provisioner\_enabled | Whether to use Local as the persistent storage, defaults to true | -| local\_volume\_provisioner\_storage\_class | Storage class name, default value:local | -| local\_volume\_is\_default\_class | Whether to set Local as the default storage class, defaults to true.| - -### NFS - -An NFS volume allows an existing NFS (Network File System) share to be mounted into your Pod. NFS can be configured in `conf/common.yaml`. Note you need to prepare NFS server in advance. - -| **NFS** | **Description** | -| --- | --- | -| nfs\_client\_enable | Whether to use NFS as the persistent storage, defaults to false | -| nfs\_client\_is\_default\_class | Whether to set NFS as default storage class, defaults to false. | -| nfs\_server | The NFS server address, either IP or Hostname | -| nfs\_path | NFS shared directory, which is the file directory shared on the server, see [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/volumes/#nfs) | -|nfs\_vers3\_enabled | Specifies which version of the NFS protocol to use, defaults to false which means v4. True means v4 | -|nfs_archiveOnDelete | Archive PVC when deleting. It will automatically remove data from `oldPath` when it sets to false | - -### Ceph RBD - -The open source [Ceph RBD](https://ceph.com/) distributed storage system can be configured to use in `conf/common.yaml`. You need to prepare Ceph storage server in advance. Please refer to [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) for more details. - -| **Ceph\_RBD** | **Description** | -| --- | --- | -| ceph\_rbd\_enabled | Whether to use Ceph RBD as the persistent storage, defaults to false | -| ceph\_rbd\_storage\_class | Storage class name | -| ceph\_rbd\_is\_default\_class | Whether to set Ceph RBD as default storage class, defaults to false | -| ceph\_rbd\_monitors | Ceph monitors, comma delimited. This parameter is required, which depends on Ceph RBD server parameters | -| ceph\_rbd\_admin\_id | Ceph client ID that is capable of creating images in the pool. Defaults to “admin” | -| ceph\_rbd\_admin\_secret | Admin_id's secret, secret name for "adminId". This parameter is required. The provided secret must have type “kubernetes.io/rbd” | -| ceph\_rbd\_pool | Ceph RBD pool. Default is “rbd” | -| ceph\_rbd\_user\_id | Ceph client ID that is used to map the RBD image. 
Default is the same as adminId | -| ceph\_rbd\_user\_secret | Secret for User_id, it is required to create this secret in namespace which used rbd image | -| ceph\_rbd\_fsType | fsType that is supported by Kubernetes. Default: "ext4"| -| ceph\_rbd\_imageFormat | Ceph RBD image format, “1” or “2”. Default is “1” | -|ceph\_rbd\_imageFeatures| This parameter is optional and should only be used if you set imageFormat to “2”. Currently supported features are layering only. Default is “”, and no features are turned on| - -> Note: -> -> The ceph secret, which is created in storage class, like "ceph_rbd_admin_secret" and "ceph_rbd_user_secret", is retrieved using following command in Ceph storage server. - -```bash -ceph auth get-key client.admin -``` - -### GlusterFS - -[GlusterFS](https://docs.gluster.org/en/latest/) is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. You need to prepare GlusterFS storage server in advance. Please refer to [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs) for further information. - -| **GlusterFS(It requires glusterfs cluster which is managed by heketi)**|**Description** | -| --- | --- | -| glusterfs\_provisioner\_enabled | Whether to use GlusterFS as the persistent storage, defaults to false | -| glusterfs\_provisioner\_storage\_class | Storage class name | -| glusterfs\_is\_default\_class | Whether to set GlusterFS as default storage class, defaults to false | -| glusterfs\_provisioner\_restauthenabled | Gluster REST service authentication boolean that enables authentication to the REST server | -| glusterfs\_provisioner\_resturl | Gluster REST service/Heketi service url which provision gluster volumes on demand. The general format should be "IP address:Port" and this is a mandatory parameter for GlusterFS dynamic provisioner| -| glusterfs\_provisioner\_clusterid | Optional, for example, 630372ccdc720a92c681fb928f27b53f is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of clusterids | -| glusterfs\_provisioner\_restuser | Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool | -| glusterfs\_provisioner\_secretName | Optional, identification of Secret instance that contains user password to use when talking to Gluster REST service, Installer will automatically create this secret in kube-system | -| glusterfs\_provisioner\_gidMin | The minimum value of GID range for the storage class | -| glusterfs\_provisioner\_gidMax |The maximum value of GID range for the storage class | -| glusterfs\_provisioner\_volumetype | The volume type and its parameters can be configured with this optional value, for example: ‘Replica volume’: volumetype: replicate:3 | -| jwt\_admin\_key | "jwt.admin.key" field is from "/etc/heketi/heketi.json" in Heketi server | - -**Attention:** - - > Please note: `"glusterfs_provisioner_clusterid"` could be returned from glusterfs server by running the following command: - - ```bash - export HEKETI_CLI_SERVER=http://localhost:8080 - heketi-cli cluster list - ``` - -### QingCloud Block Storage - -[QingCloud Block Storage](https://docs.qingcloud.com/product/Storage/volume/) is supported in KubeSphere as the persistent storage service. If you would like to experience dynamic provisioning when creating volume, we recommend you to use it as your persistent storage solution. 
KubeSphere integrates [QingCloud-CSI](https://github.com/yunify/qingcloud-csi/blob/master/README_zh.md), and allows you to use various block storage services of QingCloud. With simple configuration, you can quickly expand, clone PVCs and view the topology of PVCs, create/delete snapshot, as well as restore volume from snapshot. - -QingCloud-CSI plugin has implemented the standard CSI. You can easily create and manage different types of volumes in KubeSphere, which are provided by QingCloud. The corresponding PVCs will created with ReadWriteOnce access mode and mounted to running Pods. - -QingCloud-CSI supports create the following five types of volume in QingCloud: - -- High capacity -- Standard -- SSD Enterprise -- Super high performance -- High performance - -|**QingCloud-CSI** | **Description**| -| --- | ---| -| qingcloud\_csi\_enabled|Whether to use QingCloud-CSI as the persistent storage volume, defaults to false | -| qingcloud\_csi\_is\_default\_class| Whether to set QingCloud-CSI as default storage class, defaults to false | -qingcloud\_access\_key\_id ,
qingcloud\_secret\_access\_key| Please obtain it from [QingCloud Console](https://console.qingcloud.com/login) | -|qingcloud\_zone| Zone should be the same as the zone where the Kubernetes cluster is installed, and the CSI plugin will operate on the storage volumes for this zone. For example: zone can be set to these values, such as sh1a (Shanghai 1-A), sh1b (Shanghai 1-B), pek2 (Beijing 2), pek3a (Beijing 3-A), pek3b (Beijing 3-B), pek3c (Beijing 3-C), gd1 (Guangdong 1), gd2a (Guangdong 2-A), ap1 (Asia Pacific 1), ap2a (Asia Pacific 2-A) | -| type | The type of volume in QingCloud platform. In QingCloud platform, 0 represents high performance volume. 3 represents super high performance volume. 1 or 2 represents high capacity volume depending on cluster‘s zone, see [QingCloud Documentation](https://docs.qingcloud.com/product/api/action/volume/create_volumes.html)| -| maxSize, minSize | Limit the range of volume size in GiB| -| stepSize | Set the increment of volumes size in GiB| -| fsType | The file system of the storage volume, which supports ext3, ext4, xfs. The default is ext4| - -### QingStor NeonSAN - -The NeonSAN-CSI plugin supports the enterprise-level distributed storage [QingStor NeonSAN](https://www.qingcloud.com/products/qingstor-neonsan/) as the persistent storage solution. You need prepare the NeonSAN server, then configure the NeonSAN-CSI plugin to connect to its storage server in `conf/common.yaml`. Please refer to [NeonSAN-CSI Reference](https://github.com/wnxn/qingstor-csi/blob/master/docs/reference_zh.md#storageclass-%E5%8F%82%E6%95%B0) for further information. - -| **NeonSAN** | **Description** | -| --- | --- | -| neonsan\_csi\_enabled | Whether to use NeonSAN as the persistent storage, defaults to false | -| neonsan\_csi\_is\_default\_class | Whether to set NeonSAN-CSI as the default storage class, defaults to false| -Neonsan\_csi\_protocol | transportation protocol, user must set the option, such as TCP or RDMA| -| neonsan\_server\_address | NeonSAN server address | -| neonsan\_cluster\_name| NeonSAN server cluster name| -| neonsan\_server\_pool|A comma separated list of pools. Tell plugin to manager these pools. User must set the option, the default value is kube| -| neonsan\_server\_replicas|NeonSAN image replica count. Default: 1| -| neonsan\_server\_stepSize|set the increment of volumes size in GiB. Default: 1| -| neonsan\_server\_fsType|The file system to use for the volume. Default: ext4| diff --git a/content/zh/docs/installing-on-linux/uninstalling/_index.md b/content/zh/docs/installing-on-linux/uninstalling/_index.md new file mode 100644 index 000000000..a5cec1c3a --- /dev/null +++ b/content/zh/docs/installing-on-linux/uninstalling/_index.md @@ -0,0 +1,10 @@ +--- +title: "Uninstalling" +keywords: 'kubernetes, kubesphere, uninstalling, remove-cluster' +description: 'How to uninstall KubeSphere' + + +weight: 2450 +--- + +Uninstall will remove KubeSphere and Kubernetes from the machines. This operation is irreversible and does not have any backup. Please be caution with operation. You can see [Uninstalling KubeSphere and Kubernetes](../uninstalling-kubesphere-and-kubernetes) for details. 
diff --git a/content/zh/docs/installing-on-linux/uninstalling/uninstalling-kubesphere-and-Kubernetes.md b/content/zh/docs/installing-on-linux/uninstalling/uninstalling-kubesphere-and-Kubernetes.md
new file mode 100644
index 000000000..b74466aaa
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/uninstalling/uninstalling-kubesphere-and-Kubernetes.md
@@ -0,0 +1,26 @@
+---
+title: "Uninstalling KubeSphere and Kubernetes"
+keywords: 'kubernetes, kubesphere, uninstalling, remove-cluster'
+description: 'How to uninstall KubeSphere and Kubernetes'
+
+
+weight: 2451
+---
+
+You can delete the cluster with the following commands.
+
+{{< notice tip >}}
+Uninstalling will remove KubeSphere and Kubernetes from your machines. This operation is irreversible and creates no backup, so please proceed with caution.
+{{}}
+
+- If you started with the quick start (all-in-one):
+
+```
+./kk delete cluster
+```
+
+- If you started with the advanced mode (created with a configuration file):
+
+```
+./kk delete cluster [-f config-sample.yaml]
+```
diff --git a/content/zh/docs/introduction/_index.md b/content/zh/docs/introduction/_index.md
index 25a021201..29c100754 100644
--- a/content/zh/docs/introduction/_index.md
+++ b/content/zh/docs/introduction/_index.md
@@ -11,12 +11,29 @@ icon: "/images/docs/docs.svg"
 ---
 
-## Installing KubeSphere and Kubernetes on Linux
+This chapter gives you an overview of the basic concepts of KubeSphere, its features, advantages, use cases and more.
 
-In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. Kubekey can help you to quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you to easily scale the cluster and install pluggable components on existing architecture.
 
-## Most Popular Pages
+## [What is KubeSphere](https://kubesphere-v3.netlify.app/docs/introduction/what-is-kubesphere/)
 
-Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first.
+Develop a basic understanding of KubeSphere and the highlighted features of its latest version.
+
+## [Features](https://kubesphere-v3.netlify.app/docs/introduction/features/)
+
+Get started with KubeSphere by understanding what KubeSphere is capable of and how you can make full use of it.
+
+## [Architecture](https://kubesphere-v3.netlify.app/docs/introduction/architecture/)
+
+Explore the structure of KubeSphere to get a clear view of the components at both the front end and the back end.
+
+## [Advantages](https://kubesphere-v3.netlify.app/docs/introduction/advantages/)
+
+Understand the reasons why KubeSphere is beneficial to your work.
+
+## [Use Cases](https://kubesphere-v3.netlify.app/docs/introduction/scenarios/)
+
+See how KubeSphere can be used in different scenarios, such as multi-cluster deployment, DevOps and service mesh.
+
+## [Glossary](https://kubesphere-v3.netlify.app/docs/introduction/glossary/)
+
+Learn terms and phrases that are used in KubeSphere.
-{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
diff --git a/content/zh/docs/introduction/advantages.md b/content/zh/docs/introduction/advantages.md
index 64c1f2e89..3ac3f4ccf 100644
--- a/content/zh/docs/introduction/advantages.md
+++ b/content/zh/docs/introduction/advantages.md
@@ -1,97 +1,92 @@
 ---
 title: "Advantages"
-keywords: "kubesphere, kubernetes, docker, helm, jenkins, istio, prometheus, service mesh, advantages"
-description: "KubeSphere advantages"
+keywords: "KubeSphere, Kubernetes, Advantages"
+description: "KubeSphere Advantages"
 
 weight: 1400
 ---
 
 ## Vision
 
-{{< notice note >}}
-### This is a simple note.
-{{}}
+Kubernetes has become the de facto standard for deploying containerized applications at scale in private, public and hybrid cloud environments. However, many people can easily get confused when they start to use Kubernetes as it is complicated and has many additional components to manage. Some components need to be installed and deployed by users themselves, such as storage and network services. At present, Kubernetes only provides open-source solutions or projects, which can be difficult to install, maintain and operate to some extent. For users, it is not always easy to quickly get started as they are faced with a steep learning curve.
 
-{{< notice tip >}}
-This is a simple tip.
-{{}}
+KubeSphere is designed to reduce or eliminate many Kubernetes headaches related to building, deployment, management, observability and so on. It provides comprehensive services and automates provisioning, scaling and management of applications so that you can focus on code writing. More specifically, KubeSphere boasts an extensive portfolio of features including multi-cluster management, application lifecycle management, multi-tenant management, CI/CD pipelines, service mesh, and observability (monitoring, logging, alerting, auditing, events and notification).
 
-{{< notice info >}}
-This is a simple info.
-{{}}
+As a comprehensive open-source platform, KubeSphere strives to make the container platform more user-friendly and powerful. With a highly responsive web console, KubeSphere provides a graphical interface for developing, testing and operating, which can be easily accessed in a browser. Users who are accustomed to command-line tools can also quickly get familiar with KubeSphere, as kubectl is integrated into the fully functioning web console. With the responsive UI design, users can create, modify and manage their apps and resources with a minimal learning curve.
 
-{{< notice warning >}}
-This is a simple warning.
-{{}}
-
-{{< tabs >}}
-
-{{< tab "first" >}}
-### Why KubeSphere
-{{}}
-
-{{< tab "second" >}}
-```
-console.log('test')
-```
-{{}}
-
-{{< tab "third" >}}
-this is third tab
-{{}}
-
-{{}}
-
-KubeSphere is a distributed operating system that provides full stack system services and a pluggable framework for third-party software integration for enterprise-critical containerized workloads running in data center.
-
-Kubernetes has now become the de facto standard for deploying containerized applications at scale in private, public and hybrid cloud environments. However, many people easily get confused when they start to use Kubernetes as it is complicated and has many additional components to manage, some of which need to be installed and deployed by users themselves, such as storage service and network service. At present, Kubernetes only provides open source solutions or projects, which may be difficult to install, maintain and operate to some extent. For users, learning costs and barrier to entry are both high. In a word, it is not easy to get started quickly.
-
-If you want to deploy your cloud-native applications on the cloud, it is a good practice to adopt KubeSphere to help you run Kubernetes since KubeSphere already provides rich and required services for running your applications successfully so that you can focus on your core business. More specifically, KubeSphere provides application lifecycle management, infrastructure management, CI/CD pipeline, service mesh, observability such as monitoring, logging, alerting, events and notification. In another word, Kubernetes is a wonderful open-source platform. KubeSphere steps further to make the container platform more user-friendly and powerful not only to ease the learning curve and drive the adoption of Kubernetes, but also to help users deliver cloud-native applications faster and easier.
+In addition, KubeSphere offers excellent solutions to storage and network. Apart from the major open-source storage solutions such as Ceph RBD and GlusterFS, users are also provided with [QingCloud Block Storage](https://docs.qingcloud.com/product/storage/volume/) and [QingStor NeonSAN](https://docs.qingcloud.com/product/storage/volume/super_high_performance_shared_volume/), developed by QingCloud for persistent storage. With the integrated QingCloud CSI and NeonSAN CSI plugins, enterprises can enjoy more stable and secure services for their apps and data.
 
 ## Why KubeSphere
 
-KubeSphere provides high-performance and scalable container service management for enterprise users, It aims to help enterprises accomplish the digital transformation driven by the new generation of Internet technology, and accelerate the speed of iteration and delivery of business to meet the ever-changing business needs of enterprises.
+KubeSphere provides high-performance and scalable container service management for enterprises. It aims to help them accomplish digital transformation driven by cutting-edge technologies, and accelerate app iteration and business delivery to meet the ever-changing needs of enterprises.
 
-## Awesome User Experience and Wizard UI
+Here are the six major advantages that make KubeSphere stand out among its counterparts.
 
-- KubeSphere provides user-friendly web console for developing, testing and operating. With the wizard UI, users greatly reduce the learning and operating cost of Kubernetes.
-- Users can deploy an enterprise application with one click from template, and use the application lifecycle management service to deliver their products in the console.
 
-## High Reliability and Availability
+### Unified Management of Clusters across Cloud Providers
 
+As container usage ramps up, enterprises are faced with increased complexity of cluster management as they deploy clusters across cloud and on-premises environments. To address the urgent need of users for a uniform platform to manage heterogeneous clusters, KubeSphere sees a major feature enhancement with substantial benefits. Users can leverage KubeSphere to manage, monitor, import, operate and retire clusters across regions, clouds and environments.
-- Automatic elastic scaling: Deployment is able to scale the number of Pods horizontally, and Pod is able to scale vertically based on observed metrics such as CPU utilization when user requests change, which guarantees applications keep running without crash because of resource pressure. -- Health check service: Supporting visually setting health check probes for containers to ensure the reliability of business. +The feature can be enabled both before and after the installation, giving users great flexibility as they make their own decisions to use KubeSphere for their specific issues. In particular, it features: -## Containerized DevOps Delivery +**Unified Management**. Users can import Kubernetes clusters either through direct connection or with an agent. With simple configurations, the process can be done within minutes in the interactive console. Once clusters are imported, users are able to monitor the status and operate on cluster resources in a unified way. -- Easy-to-use pipeline: CI/CD pipeline management is visualized without user configuring, also the system ships many built-in pipeline templates. -- Source to Image (S2I):Through S2I, users do not need to write Dockerfile. The system can get source code from code repository and build the image automatically, deploy the workload into Kubernetes environment and push it to image registry automatically as well. -- Binary to Image (B2I):exactly same as S2I except the input is binary artifacts instead of source code which is much useful for developers without Docker skills or legacy applications dockerized. -- End-to-end pipeline configuration: supports end-to-end pipeline configuration from pulling source code from repository such as GitHub, SVN and Git, to compiling code, to packaging image, to scanning image in terms of security, then to pushing image to registry, and to releasing the application. -- Source code quality management: supports static analysis scanning for code quality for the application in DevOps project. -- Logging: Logs all steps of CI/CD pipeline. +**High Availability**. This is extremely useful when it comes to disaster recovery. A cluster can run major services with another one serving as the backup. When the major one goes down, services can be quickly taken over by another cluster. The logic is quite similar to the case when clusters are deployed in different regions, as requests can be sent to the closest one for low latency. In short, high availability is achieved across zones and clusters. -## Out-of-Box Microservice Governance +For more information, see Multi-cluster Management. -- Flexible micro-service framework: provides visual micro-service governance capabilities based on Istio micro-service framework, and divides Kubernetes services into finer-grained services to support non-intrusive micro-service governance. -- Comprehensive governance services: offers excellent microservice governance such as grayscale releasing, circuit break, traffic monitoring, traffic control, rate limit, tracing, intelligent routing, etc. +### Powerful Observability -## Multiple Persistent Storage Support +The observability feature of KubeSphere has been greatly improved with key building blocks enhanced, including monitoring, logging, auditing, events, alerting and notification. The highly functional system allows users to observe virtually everything that happens in the platform. 
It has much to offer, with distinct advantages listed below: -- Support GlusterFS, CephRBD, NFS, etc., open source storage solutions. -- Provide NeonSAN CSI plug-in to connect commercial QingStor NeonSAN service to meet core business requirements, i.e., low latency, strong resilient, high performance. -- Provide QingCloud CSI plug-in that accesses commercial QingCloud block storage services. +**Customized**. Users are allowed to customize their own monitoring dashboard with multiple display forms available. They can set their own templates based on their needs, add the metrics they want to monitor and even choose the display color they prefer. Alerting policies and rules can all be customized as well, including repetition interval, time and threshold. -## Flexible Network Solution Support +**Diversified**. Ops teams are freed from the complicated work of recording massive data as KubeSphere monitors resources from virtually all dimensions. It also features an efficient notification system with diversified channels for users to choose from. -- Support open-source network solutions such as Calico and Flannel. -- A bare metal load balancer plug-in [Porter](https://github.com/kubesphere/porter) for Kubernetes installed on physical machines. +**Visualized and Interactive**. KubeSphere presents users with a graphic web console, especially for the monitoring of different resources. They are displayed in highly interactive graphs that give users a clear view of what is happening inside a cluster. Resources at different levels can also be sorted based on their usage, which is convenient for users to compare for further data analysis. -## Multi-tenant and Multi-dimensional Monitoring and Logging +**Accurate**. The entire monitoring system functions at second-level precision, allowing users to quickly locate any component failure. In terms of events and auditing, all activities are accurately recorded for future reference. -- Monitoring system is fully visualized, and provides open standard APIs for enterprises to integrate their existing operating platforms such as alerting, monitoring, logging etc. in order to have a unified system for their daily operating work. -- Multi-dimensional and second-level precision monitoring metrics. -- Provide resource usage ranking by node, workspace and project. -- Provide service component monitoring for user to quickly locate component failures. -- Provide rich alerting rules based on multi-tenancy and multi-dimensional monitoring metrics. Currently, the system supports two types of alerting. One is infrastructure alerting for cluster administrator. The other one is workload alerting for tenants. -- Provide multi-tenant log management. In KubeSphere log search system, different tenants can only see their own log information. +For more information, see Project Administration and Usage. + +### Automated DevOps + +Automation represents a key part of implementing DevOps. With automatic, streamlined pipelines in place, users are better positioned to distribute apps in terms of continuous delivery and integration. + +**Jenkins-powered**. The KubeSphere DevOps system is built with Jenkins as its engine, which boasts an abundant plugin ecosystem. On top of that, Jenkins provides an enabling environment for extension development, making it possible for the DevOps team to work smoothly across the whole process (developing, testing, building, deploying, monitoring, logging, notifying, etc.) in a unified platform.
The KubeSphere account can also be used for the built-in Jenkins, meeting the demand of enterprises for multi-tenant isolation of CI/CD pipelines and unified authentication. + +**Convenient built-in tools**. Users can easily take advantage of automation tools (e.g. Binary-to-Image and Source-to-Image) even without a thorough understanding of how Docker or Kubernetes works. They only need to submit a source code repository address or upload binary files (e.g. JAR/WAR packages). Ultimately, services will be released to Kubernetes automatically without a Dockerfile being written. + +For more information, see DevOps Administration. + +### Fine-grained Access Control + +KubeSphere users are allowed to implement fine-grained access control across different levels, including clusters, workspaces and projects. Users with specific roles can operate on different resources if they are authorized to do so. + +**Self-defined**. Apart from system roles, KubeSphere empowers users to define their roles with a spectrum of operations that they can assign to tenants. This meets the need of enterprises for detailed task allocation as they can decide who should be responsible for what while not being affected by irrelevant resources. + +**Secure**. As tenants at different levels are completely isolated from each other, they can share resources while not affecting one another. The network can also be completely isolated to ensure data security. + +For more information, see Role and Member Management in Workspace. + +### Out-of-Box Microservices Governance + +Built on Istio, KubeSphere features major grayscale release strategies. All these features are out of the box, which means consistent user experiences without any code hacking. Traffic control, for example, plays an essential role in microservices governance. In this connection, Ops teams, in particular, are able to implement operational patterns (e.g. circuit breaking) to compensate for poorly behaving services. Here are two major reasons why you should use microservices governance, or service mesh, in KubeSphere: + +- **Comprehensive**. KubeSphere provides users with a well-diversified portfolio of solutions to traffic management, including canary release, blue-green deployment, traffic mirroring and circuit breaking. In addition, the distributed tracing feature also helps users monitor apps, locate failures, and improve performance. +- **Visualized**. With a highly responsive web console, KubeSphere allows users to view how microservices interconnect with each other in a straightforward way. + +KubeSphere aims to make service-to-service calls within the microservices architecture reliable and fast. For more information, see Project Administration and Usage. + +### Vibrant Open Source Community + +As an open-source project, KubeSphere represents more than just a container platform for app deployment and distribution. We believe that a true open-source model focuses more on sharing, discussions and problem solving with everyone involved. Together with partners, ambassadors and contributors, and other community members, we file issues, submit pull requests, participate in meetups, and exchange ideas of innovation. + +At KubeSphere, we have the capabilities and technical know-how to help you share the benefits that the open-source model can offer. More importantly, we have community members from around the world who make everything here possible. + +**Partners**. KubeSphere partners play a critical role in KubeSphere's go-to-market strategy.
They can be app developers, technology companies, cloud providers or go-to-market partners, all of whom drive the community ahead in their respective aspects. + +**Ambassadors**. As community representatives, ambassadors promote KubeSphere in a variety of ways (e.g. activities, blogs and user cases) so that more people can join us. + +**Contributors**. KubeSphere contributors help the whole community by contributing to code or documentation. You don't need to be an expert to make a difference; even a minor code fix or language improvement counts. + +For more information, see [Partner Program](https://kubesphere.io/partner/) and [Community Governance](https://kubesphere.io/contribution/). \ No newline at end of file diff --git a/content/zh/docs/introduction/features.md b/content/zh/docs/introduction/features.md index 7911df620..5eb91e490 100644 --- a/content/zh/docs/introduction/features.md +++ b/content/zh/docs/introduction/features.md @@ -1,7 +1,7 @@ --- -title: "Features and Benefits" -keywords: "kubesphere, kubernetes, docker, helm, jenkins, istio, prometheus" -description: "The document describes the features and benefits of KubeSphere" +title: "Features" +keywords: "KubeSphere, Kubernetes, Docker, Jenkins, Istio, Features" +description: "KubeSphere Key Features" linkTitle: "Features" weight: 1200 @@ -9,120 +9,164 @@ weight: 1200 ## Overview -As an open source container platform, KubeSphere provides enterprises with a robust, secure and feature-rich platform, including most common functionalities needed for enterprise adopting Kubernetes, such as workload management, Service Mesh (Istio-based), DevOps projects (CI/CD), Source to Image and Binary to Image, multi-tenancy management, multi-dimensional monitoring, log query and collection, alerting and notification, service and network management, application management, infrastructure management, image registry management, application management. It also supports various open source storage and network solutions, as well as cloud storage services. Meanwhile, KubeSphere provides an easy-to-use web console to ease the learning curve and drive the adoption of Kubernetes. +As an open source container platform, KubeSphere provides enterprises with a robust, secure and feature-rich platform, boasting the most common functionalities needed for enterprises adopting Kubernetes, such as multi-cluster deployment and management, network policy configuration, Service Mesh (Istio-based), DevOps projects (CI/CD), security management, Source-to-Image and Binary-to-Image, multi-tenant management, multi-dimensional monitoring, log query and collection, alerting and notification, auditing, application management, and image registry management. + +It also supports various open source storage and network solutions, as well as cloud storage services. For example, KubeSphere presents users with a powerful cloud-native tool [Porter](https://porterlb.io/), a CNCF-certified load balancer developed for bare metal Kubernetes clusters. + +With an easy-to-use web console in place, KubeSphere eases the learning curve for users and drives the adoption of Kubernetes. ![Overview](https://pek3b.qingstor.com/kubesphere-docs/png/20200202153355.png) -The following modules elaborate the key features and benefits provided by KubeSphere container platform. +The following modules elaborate on the key features and benefits provided by KubeSphere. For detailed information, see the respective chapter in this guide.
## Provisioning and Maintaining Kubernetes -### Provisioning Kubernetes Cluster +### Provisioning Kubernetes Clusters -KubeSphere Installer allows you to deploy Kubernetes on your infrastructure out of box, provisioning Kubernetes cluster with high availability. It is recommended that at least three master nodes are configured behind a load balancer for production environment. +[KubeKey](https://github.com/kubesphere/kubekey) allows you to deploy Kubernetes on your infrastructure out of the box, provisioning Kubernetes clusters with high availability. It is recommended that at least three master nodes are configured behind a load balancer for production environments. ### Kubernetes Resource Management -KubeSphere provides graphical interface for creating and managing Kubernetes resources, including Pods and Containers, Workloads, Secrets and ConfigMaps, Services and Ingress, Jobs and CronJobs, HPA, etc. As well as powerful observability including resources monitoring, events, logging, alerting and notification. +KubeSphere provides a graphical web console, giving users a clear view of a variety of Kubernetes resources, including Pods and containers, clusters and nodes, workloads, secrets and ConfigMaps, services and Ingress, jobs and CronJobs, and applications. With wizard user interfaces, users can easily interact with these resources for service discovery, HPA, image management, scheduling, high availability implementation, container health check and more. + +As KubeSphere 3.0 features enhanced observability, users are able to keep track of resources from multi-tenant perspectives, such as custom monitoring, events, auditing logs, alerts and notifications. ### Cluster Upgrade and Scaling -KubeSphere Installer provides ease of setup, installation, management and maintenance. Moreover, it supports rolling upgrades of Kubernetes clusters so that the cluster service is always available while being upgraded. Additionally, it provides the ability to roll back to previous stable version in case of failure. Also, you can add new nodes to a Kubernetes cluster in order to support more workloads by using KubeSphere Installer. +The next-gen installer [KubeKey](https://github.com/kubesphere/kubekey) provides an easy way of installation, management and maintenance. Moreover, it supports rolling upgrades of Kubernetes clusters so that the cluster service is always available while being upgraded. Also, you can add new nodes to a Kubernetes cluster to include more workloads by using KubeKey. + +## Multi-cluster Management and Deployment + +As the IT world sees a growing number of cloud-native applications reshaping software portfolios for enterprises, users tend to deploy their clusters across locations, geographies, and clouds. Against this backdrop, KubeSphere has undergone a significant upgrade to address the pressing need of users with its brand-new multi-cluster feature. + +With KubeSphere, users can manage the infrastructure underneath, such as adding or deleting clusters. Heterogeneous clusters deployed on any infrastructure (e.g. Amazon EKS and Google Kubernetes Engine) can be managed in a unified way. This is made possible by a central control plane of KubeSphere with two efficient management approaches available. + +- **Solo**. Independently deployed Kubernetes clusters can be maintained and managed together in the KubeSphere container platform. +- **Federation**. Multiple Kubernetes clusters can be aggregated together as a Kubernetes resource pool.
When users deploy applications, replicas can be deployed on different Kubernetes clusters in the pool. In this regard, high availability is achieved across zones and clusters. + +KubeSphere allows users to deploy applications across clusters. More importantly, an application can also be configured to run on a certain cluster. Besides, the multi-cluster feature, paired with [OpenPitrix](https://github.com/openpitrix/openpitrix), an industry-leading application management platform, enables users to manage apps across their whole lifecycle, including release, removal and distribution. + +For more information, see Multi-cluster Management. ## DevOps Support -KubeSphere provides pluggable DevOps component based on popular CI/CD tools such as Jenkins, and offers automated workflow and tools including binary-to-image (B2I) and source-to-image (S2I) to get source code or binary artifacts into ready-to-run container images. The following are the detailed description of CI/CD pipeline, S2I and B2I. +KubeSphere provides a pluggable DevOps component based on popular CI/CD tools such as Jenkins. It features automated workflows and tools including binary-to-image (B2I) and source-to-image (S2I) to package source code or binary artifacts into ready-to-run container images. ![DevOps](https://pek3b.qingstor.com/kubesphere-docs/png/20200202220455.png) ### CI/CD Pipeline -- CI/CD pipelines and build strategies are based on Jenkins, which streamlines the creation and automation of development, test and production process, and supports dependency cache to accelerate build and deployment. -- Ship out-of-box Jenkins build strategy and client plugin to create a Jenkins pipeline based on Git repository/SVN. You can define any step and stage in your built-in Jenkinsfile. -- Design a visualized control panel to create CI/CD pipelines, and deliver complete visibility to simplify user interaction. -- Integrate source code quality analysis, also support output and collect logs of each step. +- **Automation**. CI/CD pipelines and build strategies are based on Jenkins, streamlining and automating the development, test and production process. Dependency caches are used to accelerate build and deployment. +- **Out-of-box**. Users can use the built-in Jenkins build strategy and client plugin to create a Jenkins pipeline based on a Git repository or SVN. They can define any step and stage in the built-in Jenkinsfile. Common agent types are embedded, such as Maven, Node.js and Go. Users can customize the agent type as well. +- **Visualization**. Users can easily interact with a visualized control panel to set conditions and manage CI/CD pipelines. +- **Quality Management**. Static code analysis is supported to detect bugs, code smells and security vulnerabilities. +- **Logs**. The entire running process of CI/CD pipelines is recorded. -### Source to Image +### Source-to-Image Source-to-Image (S2I) is a toolkit and automated workflow for building reproducible container images from source code. S2I produces ready-to-run images by injecting source code into a container image and making the container ready to execute from source code. -S2I allows you to publish your service to Kubernetes without writing Dockerfile. You just need to provide source code repository address, and specify the target image registry. All configurations will be stored as different resources in Kubernetes. Your service will be automatically published to Kubernetes, and the image will be pushed to target registry as well.
+S2I allows you to publish your service to Kubernetes without writing a Dockerfile. You just need to provide a source code repository address, and specify the target image registry. All configurations will be stored as different resources in Kubernetes. Your service will be automatically published to Kubernetes, and the image will be pushed to the target registry as well. ![S2I](https://pek3b.qingstor.com/kubesphere-docs/png/20200204131749.png) -### Binary to Image +### Binary-to-Image -As similar as S2I, Binary to Image (B2I) is a toolkit and automated workflow for building reproducible container images from binary (e.g. Jar, War, Binary package). +Similar to S2I, Binary-to-Image (B2I) is a toolkit and automated workflow for building reproducible container images from binaries (e.g. JAR or WAR packages). -You just need to upload your application binary package, and specify the image registry to which you want to push. The rest is exactly same as S2I. +You just need to upload your application binary package, and specify the image registry to which you want to push. The rest is exactly the same as S2I. + +For more information, see DevOps Administration. ## Istio-based Service Mesh -KubeSphere service mesh is composed of a set of ecosystem projects, including Istio, Envoy and Jaeger, etc. We design a unified user interface to use and manage these tools. Most features are out-of-box and have been designed from developer's perspective, which means KubeSphere can help you to reduce the learning curve since you do not need to deep dive into those tools individually. +KubeSphere service mesh is composed of a set of ecosystem projects, such as Istio, Envoy and Jaeger. We have designed a unified user interface to use and manage these tools. Most features are out-of-box and have been designed from the developer's perspective, which means KubeSphere can help you to reduce the learning curve since you do not need to deep dive into those tools individually. -KubeSphere service mesh provides fine-grained traffic management, observability, tracing, and service identity and security for a distributed microservice application, so the developer can focus on core business. With a service mesh management on KubeSphere, users can better track, route and optimize communications within Kubernetes for cloud native apps. +KubeSphere service mesh provides fine-grained traffic management, observability, tracing, and service identity and security management for a distributed application. Therefore, developers can focus on core business. With service mesh management of KubeSphere, users can better track, route and optimize communications within Kubernetes for cloud-native apps. ### Traffic Management -- **Canary release** provides canary rollouts, and staged rollouts with percentage-based traffic splits. -- **Blue-green deployment** allows the new version of the application to be deployed in the green environment and tested for functionality and performance. Once the testing results are successful, application traffic is routed from blue to green. Green then becomes the new production. +- **Canary release** represents an important strategy for deploying new versions for testing purposes. Traffic is separated with a pre-configured ratio into a canary release and a production release respectively. If everything goes well, users can change the percentage and gradually replace the old version with the new one. +- **Blue-green deployment** allows users to run two versions of an application at the same time.
Blue stands for the current app version and green represents the new version tested for functionality and performance. Once the testing results are successful, application traffic is routed from the in-production version (blue) to the new one (green). - **Traffic mirroring** enables teams to bring changes to production with as little risk as possible. Mirroring sends a copy of live traffic to a mirrored service. -- **Circuit breakers** allows users to set limits for calls to individual hosts within a service, such as the number of concurrent connections or how many times calls to this host have failed. +- **Circuit breaker** allows users to set limits for calls to individual hosts within a service, such as the number of concurrent connections or how many times calls to this host have failed. + +For more information, see Grayscale Release. ### Visualization -KubeSphere service mesh has the ability to visualize the connections between microservices and the topology of how they interconnect. As we know, observability is extremely useful in understanding cloud-native microservice interconnections. +KubeSphere service mesh has the ability to visualize the connections between microservices and the topology of how they interconnect. In this regard, observability is extremely useful in understanding the interconnection of cloud-native microservices. ### Distributed Tracing -Based on Jaeger, KubeSphere service mesh enables users to track how each service interacts with other services. It brings a deeper understanding about request latency, bottlenecks, serialization and parallelism via visualization. +Based on Jaeger, KubeSphere service mesh enables users to track how services interact with each other. It helps users gain a deeper understanding of request latency, bottlenecks, serialization and parallelism via visualization. ## Multi-tenant Management -- Multi-tenancy: provides unified authentication with fine-grained roles and three-tier authorization system. -- Unified authentication: supports docking to a central enterprise authentication system that is LDAP/AD based protocol. And supports single sign-on (SSO) to achieve unified authentication of tenant identity. -- Authorization system: It is organized into three levels, namely, cluster, workspace and project. We ensure the resource sharing as well as isolation among different roles at multiple levels to fully guarantee resource security. +In KubeSphere, resources (e.g. clusters) can be shared between tenants. First, administrators or managers need to set different account roles with different authorizations. After that, members in the platform can be assigned with these roles to perform specific actions on varied resources. Meanwhile, as KubeSphere completely isolates tenants, they will not affect each other at all. -## Multi-dimensional Monitoring +- **Multi-tenancy**. It provides role-based fine-grained authentication in a unified way and a three-tier authorization system. +- **Unified authentication**. For enterprises, KubeSphere is compatible with their central authentication system that is base on LDAP or AD protocol. Single sign-on (SSO) is also supported to achieve unified authentication of tenant identity. +- **Authorization system**. It is organized into three levels: cluster, workspace and project. KubeSphere ensures resources can be shared while different roles at multiple levels are completely isolated for resource security. 
-- Monitoring system is fully visualized, and provides open standard APIs for enterprises to integrate their existing operating platforms such as alerting, monitoring, logging etc. in order to have a unified system for their daily operating work. -- Comprehensive and second-level precision monitoring metrics. - - In the aspect of infrastructure monitoring, the system provides many metrics including CPU utilization, memory utilization, CPU load average, disk usage, inode utilization, disk throughput, IOPS, network interface outbound/inbound rate, Pod status, ETCD service status, API Server status, etc. - - In the aspect of application resources, the system provides five monitoring metrics, i.e., CPU utilization, memory consumption, the number of Pods of applications, network outbound/inbound rate of an application. Besides, it supports sorting according to resource consumption, user-defined time range query and quickly locating the place where exception happens. -- Provide resource usage ranking by node, workspace and project. -- Provide service component monitoring for user to quickly locate component failures. +For more information, see Role and Member Management in Workspace. -## Alerting and Notification System +## Observability -- Provide rich alerting rules based on multi-tenancy and multi-dimensional monitoring metrics. Currently, the system supports two types of alerting. One is infrastructure alerting for cluster administrator. The other one is workload alerting for tenants. -- Flexible alerting policy: You can customize an alerting policy that contains multiple alerting rules, and you can specify notification rules and repeat alerting rules. -- Rich monitoring metrics for alerting: Provide alerting for infrastructure and workloads. -- Flexible alerting rules: You can customize the detection period, duration and alerting level of monitoring metrics. -- Flexible notification rules: You can customize the notification delivery period and receiver list. Mail notification is currently supported. -- Custom repeat alerting rules: Support to set the repeat alerting cycle, maximum repeat times, and the alerting level. +### Multi-dimensional Monitoring + +KubeSphere features a self-updating monitoring system with graphical interfaces that streamline the whole process of operation and maintenance. It provides customized monitoring of a variety of resources and includes a set of alerts that can immediately notify users of any occurring issues. + +- **Customized monitoring dashboard**. Users can decide exactly what metrics need to be monitored in what kind of form. Different templates are available in KubeSphere for users to select, such as Elasticsearch, MySQL, and Redis. Alternatively, they can also create their own monitoring templates, including charts, colors, intervals and units. +- **O&M-friendly**. The monitoring system can be operated in a visualized interface with open standard APIs for enterprises to integrate their existing systems. Therefore, they can implement operation and maintenance in a unified way. +- **Third-party compatibility**. KubeSphere is compatible with Prometheus, which is the de facto metrics collection platform for monitoring in Kubernetes environments. Monitoring data can be seamlessly displayed in the web console of KubeSphere. + +- **Multi-dimensional monitoring at second-level precision**.
+ - For infrastructure monitoring, the system provides comprehensive metrics such as CPU utilization, memory utilization, CPU load average, disk usage, inode utilization, disk throughput, IOPS, network outbound/inbound rate, Pod status, ETCD service status, and API Server status. - For application resource monitoring, the system provides five key monitoring metrics: CPU utilization, memory consumption, Pod number, network outbound and inbound rate. Besides, users can sort data based on resource consumption and search metrics by customizing the time range. In this way, occurring problems can be quickly located so that users can take necessary action. +- **Ranking**. Users can sort data by node, workspace and project, which gives them a graphical view of how their resources are running in a straightforward way. +- **Component monitoring**. It allows users to quickly locate any component failures to avoid unnecessary business downtime. + +### Alerting, Events, Auditing and Notifications + +- **Customized alerting policies and rules**. The alerting system is based on multi-tenant monitoring of multi-dimensional metrics. The system will send alerts related to a wide spectrum of resources such as Pods, networks and workloads. In this regard, users can customize their own alerting policy by setting specific rules, such as repetition interval and time. The threshold and alerting level can also be defined by users themselves. +- **Accurate event tracking**. KubeSphere allows users to know what is happening inside a cluster, such as container running status (successful or failed), node scheduling, and image pulling result. They will be accurately recorded with the specific reason, status and message displayed in the web console. In a production environment, this will help users to respond to any issues in time. +- **Enhanced auditing security**. As KubeSphere features fine-grained management of user authorization, resources and network can be completely isolated to ensure data security. The comprehensive auditing feature allows users to search for activities related to any operation or alert. +- **Diversified notification methods**. Emails represent a key approach for users to receive notifications of relevant activities they want to know. They can be sent based on the rule set by users themselves, who are able to customize the sender email address and their receiver lists. Besides, other channels, such as Slack and WeChat, are also supported to meet the needs of our users. In this connection, KubeSphere provides users with more notification preferences as they are updated on the latest development in KubeSphere no matter what channel they select. + +For more information, please see Project Administration and Usage. ## Log Query and Collection -- Provide multi-tenant log management. In KubeSphere log search system, different tenants can only see their own log information. -- Contain multi-level log queries (project/workload/container group/container and keywords) as well as flexible and convenient log collection configuration options. -- Support multiple log collection platforms such as Elasticsearch, Kafka, Fluentd. +- **Multi-tenant log management**. In KubeSphere log search system, different tenants can only see their own log information. Logs can be exported as records for future reference. +- **Multi-level log query**. Users can search for logs related to various resources, such as projects, workloads, and pods. Flexible and convenient log collection configuration options are available.
+- **Multiple log collectors**. Users can choose log collectors such as Elasticsearch, Kafka, and Fluentd. +- **On-disk log collection**. For applications whose logs are saved as files in a Pod instead of being written to stdout, users can enable Disk Log Collection, which collects these files through a sidecar. ## Application Management and Orchestration -- Use open source [OpenPitrix](https://github.com/openpitrix/openpitrix) to set up app store and app repository services which provides full lifecycle of application management. -- Users can easily deploy an application from templates with one click. +- **App Store**. KubeSphere provides an app store based on [OpenPitrix](https://github.com/openpitrix/openpitrix), an industry-leading open source system for app management across the whole lifecycle, including release, removal, and distribution. +- **App repository**. In KubeSphere, users can create an app repository hosted either in object storage (such as [QingStor](https://www.qingcloud.com/products/qingstor/) or [AWS S3](https://aws.amazon.com/what-is-cloud-object-storage/)) or in [GitHub](https://github.com/). App packages submitted to the app repository are composed of Helm Chart template files of the app. +- **App template**. With app templates, KubeSphere provides a visualized way for app deployment with just one click. Internally, app templates can help different teams in the enterprise to share middleware and business systems. Externally, they can serve as an industry standard for application delivery based on different scenarios and needs. -## Infrastructure Management +## Multiple Storage Solutions -Support storage management, host management and monitoring, resource quota management, image registry management, authorization management. +- Open source storage solutions are available such as GlusterFS, CephRBD, and NFS. +- NeonSAN CSI plugin connects to QingStor NeonSAN to meet core business requirements for low latency, high resilience, and high performance. +- QingCloud CSI plugin connects to various block storage services in the QingCloud platform. -## Multiple Storage Solutions Support +## Multiple Network Solutions -- Support GlusterFS, CephRBD, NFS, etc., open source storage solutions. -- Provide NeonSAN CSI plug-in to connect QingStor NeonSAN service to meet core business requirements, i.e., low latency, strong resilient, high performance. -- Provide QingCloud CSI plug-in that accesses QingCloud block storage services. +- Open source network solutions are available such as Calico and Flannel. -## Multiple Network Solutions Support +- [Porter](https://github.com/kubesphere/porter), a load balancer developed for bare metal Kubernetes clusters, is designed by the KubeSphere development team. This CNCF-certified tool serves as an important solution for developers. It mainly features: -- Support Calico, Flannel, etc., open source network solutions. -- A bare metal load balancer plug-in [Porter](https://github.com/kubesphere/porter) for Kubernetes installed on physical machines. + 1. ECMP routing load balancing + 2. BGP dynamic routing configuration + 3. VIP management + 4. LoadBalancerIP assignment in Kubernetes services (v0.3.0) + 5. Installation with Helm Chart (v0.3.0) (see the sketch after this list) + 6. Dynamic BGP server configuration through CRD (v0.3.0) + 7. Dynamic BGP peer configuration through CRD (v0.3.0) + + For more information, please see [this article](https://kubesphere.io/conferences/porter/).
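+As a quick illustration of item 5 in the list above, Porter can be installed with its Helm chart. The sketch below is hedged: the chart repository URL and chart name are assumptions that may change between releases, so check the Porter repository for the current ones.
+
+```shell
+# Add the chart repository that hosts Porter (URL is an assumption),
+# then install the load balancer into its own namespace.
+helm repo add kubesphere-test https://charts.kubesphere.io/test
+helm repo update
+helm install porter kubesphere-test/porter --namespace porter-system --create-namespace
+```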
diff --git a/content/zh/docs/introduction/scenarios.md b/content/zh/docs/introduction/scenarios.md new file mode 100644 index 000000000..7edc3bba2 --- /dev/null +++ b/content/zh/docs/introduction/scenarios.md @@ -0,0 +1,105 @@ +--- +title: "Use Cases" +keywords: 'KubeSphere, Kubernetes, Multi-cluster, Observability, DevOps' +description: 'Applicable in a variety of scenarios, KubeSphere provides enterprises with containerized environments with a complete set of features for management and operation.' + +weight: 1498 +--- + +KubeSphere is applicable in a variety of scenarios. For enterprises that deploy their business system on bare metal, their business modules are tightly coupled with each other. That means it is extremely difficult for resources to be horizontally scaled. In this connection, KubeSphere provides enterprises with containerized environments with a complete set of features for management and operation. It empowers enterprises to rise to the challenges in the middle of their digital transformation, including agile software development, automated operation and maintenance, microservices governance, traffic management, autoscaling, high availability, as well as DevOps and CI/CD. + +At the same time, with the strong support for network and storage offered by QingCloud, KubeSphere is highly compatible with the existing monitoring and O&M system of enterprises. This is how they can upgrade their system for IT containerization. + +## Multi-cluster Deployment + +It is generally believed that using as few clusters as possible can reduce costs with less pressure for O&M. That said, both individuals and organizations tend to deploy multiple clusters for various reasons. For instance, the majority of enterprises may deploy their services across clusters as they need to be tested in non-production environments. Another typical example is that enterprises may separate their services based on regions, departments, and infrastructure providers by adopting multiple clusters. + +The main reasons for employing this method fall into the following four categories: + +### High Availability + +Users can deploy workloads on multiple clusters by using a global VIP or DNS to send requests to corresponding backend clusters. When a cluster malfunctions or fails to handle requests, the VIP or DNS records can be transferred to a healthy cluster. + +![high-availability](https://ap3.qingstor.com/kubesphere-website/docs/ha.png) + +### Low Latency + +When clusters are deployed in various regions, user requests can be forwarded to the nearest cluster, greatly reducing network latency. For example, suppose we have three Kubernetes clusters deployed in New York, Houston and Los Angeles respectively. For users in California, their requests can be forwarded to Los Angeles. This will reduce the network latency due to geographical distance, providing the best user experience possible for users in different areas. + +### Isolation + +**Failure Isolation**. Generally, it is much easier for multiple small clusters to isolate failures than a large cluster. In case of outages, network failures, insufficient resources or other possible resulting issues, the failure can be isolated within a certain cluster without spreading to others. + +**Business Isolation**. Although Kubernetes provides namespaces as a solution to app isolation, this method only provides logical isolation. This is because different namespaces are connected through the network, which means the issue of resource preemption still exists.
To achieve further isolation, users need to create additional network isolation policies or set resource quotas. Using multiple clusters can achieve complete physical isolation that is more secure and reliable than the isolation through namespaces. For example, this is extremely effective when different departments within an enterprise use multiple clusters for the deployment of development, testing or production environments. + +![pipeline](https://ap3.qingstor.com/kubesphere-website/docs/pipeline.png) + +### Avoid Vendor Lock-in + +Kubernetes has become the de facto standard in container orchestration. Against this backdrop, many enterprises avoid putting all their eggs in one basket as they deploy clusters by using services of different cloud providers. That means they can transfer and scale their business anytime between clusters. However, transferring their business is not that easy in terms of costs, as different cloud providers feature varied Kubernetes services, including storage and network interfaces. + +KubeSphere provides its unique feature as a solution to the above four cases. Based on the Federation pattern of KubeSphere's multi-cluster feature, multiple heterogeneous Kubernetes clusters can be aggregated within a unified Kubernetes resource pool. When users deploy applications, they can decide which Kubernetes cluster in the pool their app replicas should be scheduled to. The whole process is managed and maintained through KubeSphere. This is how KubeSphere helps users achieve multi-site high availability (across zones and clusters). + +For more information, see Multi-cluster Management. + +## Full-stack Observability with Streamlined O&M + +Observability represents an important part in the work of Ops teams. In this regard, enterprises see increasing pressure on their Ops teams as they deploy their business on Kubernetes directly or on the platforms of other cloud providers. This poses considerable challenges to Ops teams since they need to cope with extensive data. + +### Multi-dimensional Cluster Monitoring + +Again, the adoption of multi-cluster deployment across clouds is on the rise both among individuals and enterprises. However, because they run different services, users need to learn, deploy and, especially, monitor across different cloud environments. After all, the tool provided by one cloud vendor for observability may not be applicable to another. In short, Ops teams are in desperate need of a unified view across different clouds for cluster monitoring covering metrics across the board. + +### Log Query + +A comprehensive monitoring feature is meaningless without a flexible log query system. This is because users need to be able to track all the information related to their resources, such as alerting messages, node scheduling status, app deployment success, or network policy modification. All these records play an important role in making sure users can keep up with the latest development, which will inform policy decisions of their business. + +### Customization + +Even for resource monitoring on the same platform, the tool provided by the cloud vendor may not be a panacea. In some cases, users need to create their own standard of observability, such as the specific monitoring metrics and display form. Moreover, they need to integrate common tools to the cloud for special use, such as Prometheus, which is the de facto standard for Kubernetes monitoring.
In other words, customization has become a necessity in the industry as cloud-powered applications drive business on the one hand while requiring fine-grained monitoring on the other in case of any failure. + +KubeSphere features a unified platform for the management of clusters deployed across cloud providers. Apps can be deployed automatically, streamlining the process of operation and maintenance. At the same time, KubeSphere boasts powerful observability features (alerting, events, auditing, logging and notifications) with a comprehensive customized monitoring system for a wide range of resources. Users themselves can decide what resources they want to monitor in what kind of forms. + +With KubeSphere, enterprises can focus more on business innovation as they are freed from the complicated process of data collection and analysis. + +## Implement DevOps Practices + +DevOps represents an important set of practices or methods that engage both development and Ops teams for more coordinated and efficient cooperation between them. Therefore, development, testing and release can be faster, more efficient and more reliable. CI/CD pipelines in KubeSphere provide enterprises with agile development and automated O&M. Besides, the microservices feature (service mesh) in KubeSphere enables enterprises to develop, test and release services in a fine-grained way, creating an enabling environment for their implementation of DevOps. With KubeSphere, enterprises can make full use of DevOps by: + +- Testing service robustness through fault injection without code hacking. +- Decoupling Kubernetes services with credential management and access control. +- Visualizing the end-to-end monitoring process. + +## Service Mesh and Cloud-native Architecture + +Enterprises are now under increasing pressure to accelerate innovation amid their digital transformation. Specifically, they need to speed up in terms of development cycle, delivery time and deployment frequency. As application architectures evolve from monolithic to microservices, enterprises are faced with a multitude of resulting challenges. For example, microservices communicate with each other frequently, which entails smooth and stable network connectivity. Among others, latency represents a key factor that affects the entire architecture and user experience. In case of any failure, a troubleshooting and identifying system also needs to be in place to respond in time. Besides, deploying distributed applications is never an easy job without highly-functional tools and infrastructure. + +KubeSphere service mesh addresses a series of microservices use cases. + +### Multi-cloud App Distribution + +As mentioned above, it is not uncommon for individuals or organizations to deploy apps across Kubernetes clusters, whether on premises, public or hybrid. This may bring about significant challenges in unified traffic management, application and service scalability, DevOps pipeline automation, monitoring and so on. + +### Visualization + +As users deploy microservices which will communicate among themselves considerably, it will help users gain a better understanding of topological relations between microservices if the connection is highly visualized. Besides, distributed tracing is also essential for each service, providing operators with a detailed understanding of call flows and service dependencies within a mesh. + +### Rolling Updates + +When enterprises introduce a new version of a service, they may adopt a canary upgrade or blue-green deployment.
The new one runs side by side with the old one and a set percentage of traffic is moved to the new service for error detection and latency monitoring. If everything works fine, the traffic to the new one will gradually increase until 100% of customers are using the new version. For this type of update, KubeSphere provides three categories of grayscale release: + +**Blue-green Deployment**. The blue-green release provides a zero downtime deployment, which means the new version can be deployed with the old one preserved. It enables both versions to run at the same time. If there is a problem at runtime, you can quickly roll back to the old version. + +**Canary Release**. This method brings part of the actual traffic into a new version to test its performance and reliability. It can help detect potential problems in the actual environment while not affecting the overall system stability. + +**Traffic Mirroring**. Traffic mirroring provides a more accurate way to test new versions as problems can be detected in advance while not affecting the production environment. + +With a lightweight, highly scalable microservices architecture offered by KubeSphere, enterprises are well-positioned to build their own cloud-native applications for the above scenarios. Based on Istio, a major solution to microservices, KubeSphere provides a platform for microservices governance without any hacking into code. Spring Cloud is also integrated for enterprises to build Java apps. KubeSphere also offers microservices upgrade consultations and technical support services, helping enterprises implement microservices architectures for their cloud-native transformation. + +## Bare Metal Deployment + +Sometimes, the cloud is not necessarily the ideal place for the deployment of resources. For example, physical, dedicated servers tend to function better when it comes to the cases that require considerable compute resources and high disk I/O. Besides, for some specialized workloads that are difficult to migrate to a cloud environment, certified hardware and complicated licensing and support agreements may be required. + +KubeSphere can help enterprises deploy a containerized architecture on bare metal, load balancing traffic with a physical switch. In this connection, [Porter](https://github.com/kubesphere/porter), a CNCF-certified cloud-native tool, was created for exactly this purpose. At the same time, KubeSphere, together with QingCloud VPC and QingStor NeonSAN, provides users with a complete set of features covering load balancing, container platform building, network management, and storage. This means virtually all aspects of the containerized architecture can be fully controlled and uniformly managed, without sacrificing the performance in virtualization. + +For detailed information about how KubeSphere drives the development of numerous industries, please see [Case Studies](https://kubesphere.io/case/).
diff --git a/content/zh/docs/introduction/what-is-kubesphere.md b/content/zh/docs/introduction/what-is-kubesphere.md index fac311a54..885e19807 100644 --- a/content/zh/docs/introduction/what-is-kubesphere.md +++ b/content/zh/docs/introduction/what-is-kubesphere.md @@ -1,35 +1,50 @@ --- title: "What is KubeSphere" -keywords: 'Kubernetes, docker, jenkins, devops, istio, service mesh, devops, microservice' +keywords: 'Kubernetes, KubeSphere, Introduction' description: 'What is KubeSphere' -linkTitle: "Introduction" weight: 1100 --- ## Overview -[KubeSphere](https://kubesphere.io) is a **distributed operating system providing cloud native stack** with [Kubernetes](https://kubernetes.io) as its kernel, and aims to be plug-and-play architecture for third-party applications seamless integration to boost its ecosystem. KubeSphere is also a multi-tenant enterprise-grade container platform with full-stack automated IT operation and streamlined DevOps workflows. It provides developer-friendly wizard web UI, helping enterprises to build out a more robust and feature-rich platform, which includes most common functionalities needed for enterprise Kubernetes strategy, such as the Kubernetes resource management, DevOps (CI/CD), application lifecycle management, monitoring, logging, service mesh, multi-tenancy, alerting and notification, storage and networking, autoscaling, access control, GPU support, etc., as well as multi-cluster management, network policy, registry management, more security enhancements in upcoming releases. +[KubeSphere](https://kubesphere.io) is a **distributed operating system managing cloud-native applications** with [Kubernetes](https://kubernetes.io) as its kernel, providing a plug-and-play architecture for the seamless integration of third-party applications to boost its ecosystem. -KubeSphere delivers **consolidated views while integrating a wide breadth of ecosystem tools** around Kubernetes and offers consistent user experience to reduce complexity, and develops new features and capabilities that are not yet available in upstream Kubernetes in order to alleviate the pain points of Kubernetes including storage, network, security and ease of use. Not only does KubeSphere allow developers and DevOps teams use their favorite tools in a unified console, but, most importantly, these functionalities are loosely coupled with the platform since they are pluggable and optional. +KubeSphere also represents a multi-tenant enterprise-grade container platform with full-stack automated IT operation and streamlined DevOps workflows. It provides a developer-friendly wizard web UI, helping enterprises to build out a more robust and feature-rich platform. It boasts the most common functionalities needed for enterprise Kubernetes strategies, such as Kubernetes resource management, DevOps (CI/CD), application lifecycle management, monitoring, logging, service mesh, multi-tenancy, alerting and notification, auditing, storage and networking, autoscaling, access control, GPU support, multi-cluster deployment and management, network policy, registry management, and security management. -Last but not least, KubeSphere does not change Kubernetes itself at all. In another word, KubeSphere can be deployed **on any existing version-compatible Kubernetes cluster across any infrastructure** including virtual machine, bare metal, on-premise, public cloud and hybrid cloud.
KubeSphere screens users from the infrastructure underneath and helps your enterprise modernize, migrate, deploy and manage existing and containerized apps seamlessly across a variety of infrastructure, so that developers and Ops team can focus on application development and accelerate DevOps automated workflows and delivery processes with enterprise-level observability and troubleshooting, unified monitoring and logging, centralized storage and networking management, easy-to-use CI/CD pipelines. +KubeSphere delivers **consolidated views while integrating a wide breadth of ecosystem tools** around Kubernetes, thus providing consistent user experiences to reduce complexity. At the same time, it also features new capabilities that are not yet available in upstream Kubernetes, alleviating the pain points of Kubernetes including storage, network, security and usability. Not only does KubeSphere allow developers and DevOps teams to use their favorite tools in a unified console, but, most importantly, these functionalities are loosely coupled with the platform since they are pluggable and optional. + +## Run KubeSphere Everywhere + +As a lightweight platform, KubeSphere has become more friendly to different cloud ecosystems as it does not change Kubernetes itself at all. In other words, KubeSphere can be deployed **on any existing version-compatible Kubernetes cluster on any infrastructure** including virtual machine, bare metal, on-premises, public cloud and hybrid cloud. KubeSphere users have the choice of installing KubeSphere on cloud and container platforms, such as Alibaba Cloud, AWS, QingCloud, Tencent Cloud, Huawei Cloud and Rancher, and even importing and managing their existing Kubernetes clusters created using major Kubernetes distributions. The seamless integration of KubeSphere into existing Kubernetes platforms means that users' business will not be affected and no modification to their current resources or assets is required. For more information, see Installation. + +KubeSphere screens users from the infrastructure underneath and helps enterprises modernize, migrate, deploy and manage existing and containerized apps seamlessly across a variety of infrastructure types. This is how KubeSphere empowers developers and Ops teams to focus on application development and accelerate DevOps automated workflows and delivery processes with enterprise-level observability and troubleshooting, unified monitoring and logging, centralized storage and networking management, easy-to-use CI/CD pipelines, and so on. ![KubeSphere Overview](https://pek3b.qingstor.com/kubesphere-docs/png/20200224091526.png) -## Video on Youtube - +## What's New in 3.0 - +- **Multi-cluster Management**. As we usher in an era of hybrid cloud, multi-cluster management has emerged as the call of our times. It represents one of the most necessary features on top of Kubernetes as it addresses the pressing need of our users. In the latest version 3.0, we have equipped KubeSphere with its unique multi-cluster feature that is able to provide a central control plane for clusters deployed in different clouds. Users can import and manage their existing Kubernetes clusters created on the platform of mainstream infrastructure providers (e.g. Amazon EKS and Google Kubernetes Engine). This will greatly reduce the learning cost for our users with operation and maintenance process streamlined as well. Solo and Federation are the two featured patterns for multi-cluster management, making KubeSphere stand out among its counterparts.
-## What is New in 2.1 +- **Improved Observability**. We have enhanced observability as it becomes more powerful to include custom monitoring, tenant event management, diversified notification methods (e.g. WeChat and Slack) and more features. Among others, users can now customize monitoring dashboards, with a variety of metrics and graphs to choose from for their own needs. It is also worth mentioning that KubeSphere 3.0 is compatible with Prometheus, which is the de facto standard for Kubernetes monitoring in the cloud-native industry. -We decouple some main feature components and make them pluggable and optional to choose so that users can install a default KubeSphere with resource requirements down to 2 cores CPU and 4G memory. Meanwhile, there are great enhancements in application store, especially in application lifecycle management. +- **Enhanced Security**. Security has always remained one of our focuses in KubeSphere. In this connection, feature enhancements can be summarized as follows: -It is worth mentioning that both DevOps and observability components have been improved significantly. For example, we add lots of new features including Binary-to-Image, dependency caching support in pipeline, branch switch support and Git logs output within DevOps component. We also bring upgrade, enhancements and bugfix in storage, authentication and security, as well as user experience improvements. See [Release Notes For 2.1.0](../../release/release-v210) for details. + - **Auditing**. Records will be kept to track who does what at what time. The support of auditing is extremely important especially for traditional industries such as finance and banking. + + - **Network Policy and Isolation**. Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods). By configuring network isolation to control traffic among Pods within the same cluster and traffic from outside, users can isolate applications with security enhanced. They can also decide whether services are accessible externally. + + - **Open Policy Agent**. KubeSphere provides flexible, fine-grained access control based on [Open Policy Agent](https://www.openpolicyagent.org/). Users can manage their security and authorization policies in a unified way with a general architecture. + + - **OAuth 2.0**. Users can now easily integrate third-party applications with the OAuth 2.0 protocol. + +- **Multilingual Support of Web Console**. KubeSphere has been designed for users around the world from the very beginning. Thanks to our community members across the globe, KubeSphere 3.0 now supports four official languages for its web console: English, Simplified Chinese, Traditional Chinese, and Spanish. More languages are expected to be supported going forward. + +In addition to the above highlights, KubeSphere 3.0 also features other functionality upgrades. For more detailed information, see Release Notes for 3.0.0. ## Open Source -As we adopt open source model, development is taking in the open way and driven by KubeSphere community. KubeSphere is **100% open source** and available on [GitHub](https://github.com/kubesphere/) where you can find all source code, documents and discussions. It has been widely installed and used in development testing and production environments, and a large number of services are running smoothly in KubeSphere. +As we adopt the open source model, development is proceeding in an open way and driven by the KubeSphere community.
KubeSphere is **100% open source** and available on [GitHub](https://github.com/kubesphere/) where you can find all the source code, documents and discussions. It has been widely installed and used in development, testing and production environments, and a large number of services are running smoothly in KubeSphere.

## Roadmap

@@ -37,10 +52,9 @@ As we adopt open source model, development is taking in the open way and driven

 ![Roadmap](https://pek3b.qingstor.com/kubesphere-docs/png/20190926000413.png)

-## Landscapes
+## Landscape

-KubeSphere is a member of CNCF and a [Kubernetes Conformance Certified platform
-](https://www.cncf.io/certification/software-conformance/#logos), which enriches the [CNCF CLOUD NATIVE Landscape.
+KubeSphere is a member of CNCF and a [Kubernetes Conformance Certified platform](https://www.cncf.io/certification/software-conformance/#logos), further enriching the [CNCF CLOUD NATIVE Landscape](https://landscape.cncf.io/landscape=observability-and-analysis&license=apache-license-2-0).

 ![CNCF Landscape](https://pek3b.qingstor.com/kubesphere-docs/png/20191011233719.png)
diff --git a/content/zh/docs/multicluster-management/_index.md b/content/zh/docs/multicluster-management/_index.md
index da9e078dd..24a32d2f8 100644
--- a/content/zh/docs/multicluster-management/_index.md
+++ b/content/zh/docs/multicluster-management/_index.md
@@ -1,6 +1,6 @@
---
title: "Multi-cluster Management"
-description: "Import a hosted or on-premise Kubernetes cluster into KubeSphere"
+description: "Import a hosted or on-premises Kubernetes cluster into KubeSphere"
layout: "single"

linkTitle: "Multi-cluster Management"
@@ -11,9 +11,13 @@ icon: "/images/docs/docs.svg"

---

-## Installing KubeSphere and Kubernetes on Linux
+Today, it's very common for organizations to run and manage multiple Kubernetes clusters on different cloud providers or infrastructures. Each Kubernetes cluster is a relatively self-contained unit. Meanwhile, the upstream community has been striving to research and develop multi-cluster management solutions, such as [kubefed](https://github.com/kubernetes-sigs/kubefed).

-In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. Kubekey can help you to quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you to easily scale the cluster and install pluggable components on existing architecture.
+The most common use cases in multi-cluster management include **service traffic load balancing, development and production isolation, decoupling of data processing and data storage, cross-cloud backup and disaster recovery, flexible allocation of computing resources, low-latency access for cross-region services, and no vendor lock-in**, etc.
+
+KubeSphere is developed to address the multi-cluster and multi-cloud management challenges and implement the preceding user scenarios, providing users with a unified control plane to distribute applications and their replicas to multiple clusters from public clouds to on-premises environments. KubeSphere also provides rich observability across multiple clusters, including centralized monitoring, logging, events, and auditing logs.
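+
+Under the hood, this distribution model builds on Kubernetes federation. The sketch below is for illustration only — the cluster names, namespace, and image are hypothetical, and the exact API version depends on the kubefed release in use — but it shows the general shape of a federated workload: a common template, a placement list, and per-cluster overrides:
+
+```yaml
+apiVersion: types.kubefed.io/v1beta1
+kind: FederatedDeployment
+metadata:
+  name: nginx-demo
+  namespace: demo                  # must be a federated namespace
+spec:
+  template:                        # an ordinary Deployment applied to each placed cluster
+    metadata:
+      labels:
+        app: nginx-demo
+    spec:
+      replicas: 2
+      selector:
+        matchLabels:
+          app: nginx-demo
+      template:
+        metadata:
+          labels:
+            app: nginx-demo
+        spec:
+          containers:
+          - name: nginx
+            image: nginx:1.19
+  placement:
+    clusters:                      # which member clusters receive the workload
+    - name: beijing
+    - name: frankfurt
+  overrides:                       # per-cluster tweaks, e.g. more replicas in one region
+  - clusterName: frankfurt
+    clusterOverrides:
+    - path: "/spec/replicas"
+      value: 4
+```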
+
+![KubeSphere Multi-cluster Management](/images/docs/multi-cluster-overview.jpg)

## Most Popular Pages

diff --git a/content/zh/docs/multicluster-management/enable-multicluster/_index.md b/content/zh/docs/multicluster-management/enable-multicluster/_index.md
new file mode 100644
index 000000000..594cae3de
--- /dev/null
+++ b/content/zh/docs/multicluster-management/enable-multicluster/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Enable Multi-cluster in KubeSphere"
+weight: 3010
+
+_build:
+  render: false
+---
diff --git a/content/zh/docs/multicluster-management/enable-multicluster/agent-connection.md b/content/zh/docs/multicluster-management/enable-multicluster/agent-connection.md
new file mode 100644
index 000000000..69c78318f
--- /dev/null
+++ b/content/zh/docs/multicluster-management/enable-multicluster/agent-connection.md
@@ -0,0 +1,214 @@
+---
+title: "Agent Connection"
+keywords: 'kubernetes, kubesphere, multicluster, agent-connection'
+description: 'Overview'
+
+
+weight: 2343
+---
+
+## Prerequisites
+
+You have already installed at least two KubeSphere clusters. If not, please refer to [Installing on Linux](../../../installing-on-linux) or [Installing on Kubernetes](../../../installing-on-kubernetes) first.
+
+{{< notice note >}}
+Multi-cluster management requires KubeSphere to be installed on the target clusters. If you have an existing cluster, please install a minimal KubeSphere on it as an agent; see [Installing Minimal KubeSphere on Kubernetes](../../installing-on-kubernetes/minimal-kubesphere-on-k8s) for details.
+{{}}
+
+## Agent Connection
+
+The KubeSphere component [Tower](https://github.com/kubesphere/tower) is used for the agent connection. Tower is a tool for network connections between clusters through an agent. If the H Cluster cannot access the M Cluster directly, you can expose the proxy service address of the H Cluster. This enables the M Cluster to connect to the H Cluster through the agent. This method is applicable when the M Cluster is in a private environment (e.g. IDC) and the H Cluster is able to expose the proxy service. The agent connection is also applicable when your clusters are distributed across different cloud providers.
+
+### Prepare a Host Cluster
+
+{{< tabs >}}
+
+{{< tab "KubeSphere has been installed" >}}
+
+If you already have a standalone KubeSphere installed, you can change the `clusterRole` to `host` by editing the cluster configuration, then **wait for a while**.
+
+- Option A - Use Web Console:
+
+Use the `cluster-admin` account to enter **Cluster Management → CRDs**, search for the keyword `ClusterConfiguration`, enter its detail page, and edit the YAML of `ks-installer`. This is similar to Enable Pluggable Components.
+
+- Option B - Use Kubectl:
+
+```shell
+kubectl edit cc ks-installer -n kubesphere-system
+```
+
+Scroll down and change the value of `clusterRole` to `host`, then click **Update** to make it effective:
+
+```yaml
+multicluster:
+  clusterRole: host
+```
+
+{{}}
+
+{{< tab "KubeSphere has not been installed" >}}
+
+There is no big difference if you just start the installation. Please note that the `clusterRole` in `config-sample.yaml` or `cluster-configuration.yaml` has to be set as follows:
+
+```yaml
+multicluster:
+  clusterRole: host
+```
+
+{{}}
+
+{{}}
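+
+To confirm the change has taken effect, you can print the current role from the `ClusterConfiguration` — a quick check using the same `cc` short name as above; the field path is inferred from the YAML shown and may differ across versions:
+
+```bash
+# Expected to print "host" once the cluster configuration has been updated
+kubectl -n kubesphere-system get cc ks-installer -o jsonpath='{.spec.multicluster.clusterRole}'
+```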
+Then you can use kubectl to retrieve the installation logs to verify the status. After a while, if the host cluster is ready, you will see a success message in the logs.
+
+```
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+```
+
+#### Set Proxy Service Address
+
+After the installation of the Host Cluster, a proxy service called tower will be created in `kubesphere-system`, whose type is **LoadBalancer**.
+
+{{< tabs >}}
+
+{{< tab "There is a LoadBalancer in your cluster" >}}
+
+If a LoadBalancer plugin is available for the cluster, you can see a corresponding address for `EXTERNAL-IP`, which will be acquired by KubeSphere automatically. In this case, you can skip the step of setting the proxy address.
+
+```shell
+$ kubectl -n kubesphere-system get svc
+NAME    TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
+tower   LoadBalancer   10.233.63.191   139.198.110.23   8080:30721/TCP   16h
+```
+
+> Generally, there is always a LoadBalancer solution in the public cloud, and the external IP will be allocated by the load balancer automatically. If your clusters are running in an on-premises environment (especially a **bare metal environment**), we recommend using [Porter](https://github.com/kubesphere/porter) as the LB solution.
+
+{{}}
+
+{{< tab "There is no LoadBalancer in your cluster" >}}
+
+1. If you cannot see a corresponding address displayed (the `EXTERNAL-IP` is pending), you need to set the proxy address manually. For example, suppose you have an available public IP address `139.198.120.120`, and port `8080` of this IP address has been forwarded to port `30721` of the cluster.
+
+```shell
+kubectl -n kubesphere-system get svc
+```
+
+```
+NAME    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
+tower   LoadBalancer   10.233.63.191   <pending>     8080:30721/TCP   16h
+```
+
+2. Modify the ConfigMap `kubesphere-config` and input the address you set before. You can also edit the ConfigMap from **Configuration → ConfigMaps**: search for the keyword `kubesphere-config`, then edit its YAML and add the following configuration:
+
+```bash
+kubectl -n kubesphere-system edit cm kubesphere-config
+```
+
+```
+multicluster:
+  clusterRole: host
+  proxyPublishAddress: http://139.198.120.120:8080 # Add this line to set the address to access tower
+```
+
+3. Save and update the ConfigMap, then restart the Deployment `ks-apiserver`.
+
+```shell
+kubectl -n kubesphere-system rollout restart deployment ks-apiserver
+```
+
+{{}}
+
+{{}}
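+
+If you restarted `ks-apiserver` in the previous step, you can optionally wait until the rollout finishes — plain kubectl, shown here for convenience:
+
+```bash
+# Blocks until the new ks-apiserver Pods are up
+kubectl -n kubesphere-system rollout status deployment ks-apiserver
+```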
+
+
+### Prepare a Member Cluster
+
+In order to manage the member cluster within the host cluster, the `jwtSecret` needs to be the same between them. So first, retrieve it from the host cluster with the following command:
+
+```bash
+kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
+```
+
+```yaml
+jwtSecret: "gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU"
+```
+
+{{< tabs >}}
+
+{{< tab "KubeSphere has been installed" >}}
+
+If you already have a standalone KubeSphere installed, you can change the `clusterRole` to `member` by editing the cluster configuration, then **wait for a while**.
+
+- Option A - Use Web Console:
+
+Use the `cluster-admin` account to enter **Cluster Management → CRDs**, search for the keyword `ClusterConfiguration`, enter its detail page, and edit the YAML of `ks-installer`. This is similar to Enable Pluggable Components.
+
+- Option B - Use Kubectl:
+
+```shell
+kubectl edit cc ks-installer -n kubesphere-system
+```
+
+Then input the corresponding `jwtSecret` shown above:
+
+```yaml
+authentication:
+  jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
+```
+
+Then scroll down and change the value of `clusterRole` to `member`, then click **Update** to make it effective:
+
+```yaml
+multicluster:
+  clusterRole: member
+```
+
+{{}}
+
+{{< tab "KubeSphere has not been installed" >}}
+
+There is no big difference if you just start the installation. Please fill in the `jwtSecret` with the value shown above in `config-sample.yaml` or `cluster-configuration.yaml`:
+
+```yaml
+authentication:
+  jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
+```
+
+Then scroll down and change the `clusterRole` to `member`:
+
+```yaml
+multicluster:
+  clusterRole: member
+```
+
+{{}}
+
+{{}}
+
+
+### Import Cluster
+
+1. Open the H Cluster Dashboard and click **Add Cluster**.
+
+![Add Cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200827231611.png)
+
+2. Enter the basic information of the imported cluster and click **Next**.
+
+![Import Cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200827211842.png)
+
+3. In **Connection Method**, select **Cluster connection agent** and click **Import**.
+
+![agent-en](/images/docs/agent-en.png)
+
+4. Create an `agent.yaml` file in the M Cluster based on the instructions, then copy and paste the deployment manifest into the file. Execute `kubectl create -f agent.yaml` on the node and wait for the agent to be up and running. Please make sure the proxy address is accessible to the M Cluster.
+
+5. You can see the cluster you have imported in the H Cluster when the cluster agent is up and running.
+
+![Azure AKS](https://ap3.qingstor.com/kubesphere-website/docs/20200827231650.png)
diff --git a/content/zh/docs/multicluster-management/enable-multicluster/direct-connection.md b/content/zh/docs/multicluster-management/enable-multicluster/direct-connection.md
new file mode 100644
index 000000000..9f953eab9
--- /dev/null
+++ b/content/zh/docs/multicluster-management/enable-multicluster/direct-connection.md
@@ -0,0 +1,160 @@
+---
+title: "Direct Connection"
+keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud'
+description: 'Overview'
+
+
+weight: 2340
+---
+
+## Prerequisites
+
+You have already installed at least two KubeSphere clusters. If not, please refer to [Installing on Linux](../../../installing-on-linux) or [Installing on Kubernetes](../../../installing-on-kubernetes) first.
+
+{{< notice note >}}
+Multi-cluster management requires KubeSphere to be installed on the target clusters. If you have an existing cluster, please install a minimal KubeSphere on it as an agent; see [Installing Minimal KubeSphere on Kubernetes](../../installing-on-kubernetes/minimal-kubesphere-on-k8s) for details.
+{{}}
+
+## Direct Connection
+
+If the kube-apiserver address of the Member Cluster (hereafter referred to as **M** Cluster) is accessible on any node of the Host Cluster (hereafter referred to as **H** Cluster), you can adopt **Direct Connection**. This method is applicable when the kube-apiserver address of the M Cluster can be exposed, or when the H Cluster and the M Cluster are in the same private network or subnet.
+
+### Prepare a Host Cluster
+
+{{< tabs >}}
+
+{{< tab "KubeSphere has been installed" >}}
+
+If you already have a standalone KubeSphere installed, you can change the `clusterRole` to `host` by editing the cluster configuration, then **wait for a while**.
+
+- Option A - Use Web Console:
+
+Use the `cluster-admin` account to enter **Cluster Management → CRDs**, search for the keyword `ClusterConfiguration`, enter its detail page, and edit the YAML of `ks-installer`. This is similar to Enable Pluggable Components.
+
+- Option B - Use Kubectl:
+
+```shell
+kubectl edit cc ks-installer -n kubesphere-system
+```
+
+Scroll down and change the value of `clusterRole` to `host`, then click **Update** to make it effective:
+
+```yaml
+multicluster:
+  clusterRole: host
+```
+
+{{}}
+
+{{< tab "KubeSphere has not been installed" >}}
+
+There is no big difference if you just start the installation. Please note that the `clusterRole` in `config-sample.yaml` or `cluster-configuration.yaml` has to be set as follows:
+
+```yaml
+multicluster:
+  clusterRole: host
+```
+
+{{}}
+
+{{}}
+
+Then you can use kubectl to retrieve the installation logs to verify the status. After a while, if the host cluster is ready, you will see a success message in the logs.
+
+```
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+```
+
+### Prepare a Member Cluster
+
+In order to manage the member cluster within the host cluster, the `jwtSecret` needs to be the same between them. So first, retrieve it from the host cluster with the following command:
+
+```bash
+kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
+```
+
+```yaml
+jwtSecret: "gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU"
+```
+
+{{< tabs >}}
+
+{{< tab "KubeSphere has been installed" >}}
+
+If you already have a standalone KubeSphere installed, you can change the `clusterRole` to `member` by editing the cluster configuration, then **wait for a while**.
+
+- Option A - Use Web Console:
+
+Use the `cluster-admin` account to enter **Cluster Management → CRDs**, search for the keyword `ClusterConfiguration`, enter its detail page, and edit the YAML of `ks-installer`. This is similar to Enable Pluggable Components.
+
+- Option B - Use Kubectl:
+
+```shell
+kubectl edit cc ks-installer -n kubesphere-system
+```
+
+Then input the corresponding `jwtSecret` shown above:
+
+```yaml
+authentication:
+  jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
+```
+
+Then scroll down and change the value of `clusterRole` to `member`, then click **Update** to make it effective:
+
+```yaml
+multicluster:
+  clusterRole: member
+```
+
+{{}}
+
+{{< tab "KubeSphere has not been installed" >}}
+
+There is no big difference if you just start the installation. Please fill in the `jwtSecret` with the value shown above in `config-sample.yaml` or `cluster-configuration.yaml`:
+
+```yaml
+authentication:
+  jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
+```
+
+Then scroll down and change the `clusterRole` to `member`:
+
+```yaml
+multicluster:
+  clusterRole: member
+```
+
+{{}}
+
+{{}}
+
+Then you can use kubectl to retrieve the installation logs to verify the status. After a while, if the member cluster is ready, you will see a success message in the logs.
+
+```
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+```
+
+### Import Cluster
+
+1. Open the H Cluster Dashboard and click **Add Cluster**.
+
+![Add Cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200827231611.png)
+
+2. Enter the basic information of the cluster and click **Next**.
+ +![Import Cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200827211842.png) + +3. In **Connection Method**, select **Direct Connection to Kubernetes cluster**. + +4. [Retrieve the KubeConfig](../retrieve-kubeconfig), then copy the KubeConfig of the Member Cluster and paste it into the box. + +{{< notice tip >}} +Please make sure the `server` address in KubeConfig is accessible on any node of the H Cluster. For `KubeSphere API Server` address, you can fill in the KubeSphere APIServer address or leave it blank. +{{}} + +![import a cluster - direct connection](/images/docs/direct_import_en.png) + +5. Click **Import** and wait for cluster initialization to finish. + +![Azure AKS](https://ap3.qingstor.com/kubesphere-website/docs/20200827231650.png) diff --git a/content/zh/docs/multicluster-management/enable-multicluster/retrieve-kubeconfig.md b/content/zh/docs/multicluster-management/enable-multicluster/retrieve-kubeconfig.md new file mode 100644 index 000000000..19f6306bd --- /dev/null +++ b/content/zh/docs/multicluster-management/enable-multicluster/retrieve-kubeconfig.md @@ -0,0 +1,42 @@ +--- +title: "Retrieve KubeConfig" +keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud' +description: 'Overview' + + +weight: 2345 +--- + +## Prerequisites + +You have a KubeSphere cluster. + +## Explore KubeConfig File + +Go to `$HOME/.kube`, and see what files are there. Typically, there is a file named config. Use the following command to retrieve the KubeConfig file: + +```bash +cat $HOME/.kube/config +``` + +``` +apiVersion: v1 +clusters: +- cluster: + certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EZ3dPREE1hqaVE3NXhwbGFQNUgwSm5ySk5peTBacFh6QWxjYzZlV2JlaXJ1VgpUbmZUVjZRY3pxaVcrS3RBdFZVbkl4MCs2VTgzL3FiKzdINHk2RnA0aVhUaDJxRHJ6Qkd4dG1UeFlGdC9OaFZlCmhqMHhEbHVMOTVUWkRjOUNmSFgzdGZJeVh5WFR3eWpnQ2g1RldxbGwxVS9qVUo2RjBLVVExZ1pRTFp4TVJMV0MKREM2ZFhvUGlnQ3BNaVRPVXl5SVNhWUVjYVNBMEo5VWZmSGd4ditVcXVleTc0cEM2emszS0lOT2tGMkI1MllxeApUa09OT2VkV2hDUExMZkUveVJqeGw1aFhPL1Z4REFaVC9HQ1Y1a0JZN0toNmRhendmUllOa21IQkhDMD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=hqaVE3NXhwbGFQNUgwSm5ySk5peTBacFh6QWxjYzZlV2JlaXJ1VgpUbmZUVjZRY3pxaVcrS3RBdFZVbkl4MCs2VTgzL3FiKzdINHk2RnA0aVhUaDJxRHJ6Qkd4dG1UeFlGdC9OaFZlCmhqMHhEbHVMOTVUWkRjOUNmSFgzdGZJeVh5WFR3eWpnQ2g1RldxbGwxVS9qVUo2RjBLVVExZ1pRTFp4TVJMV0MKREM2ZFhvUGlnQ3BNaVRPVXl5SVNhWUVjYVNBMEo5VWZmSGd4ditVcXVleTc0cEM2emszS0lOT2tGMkI1MllxeApUa09OT2VkV2hDUExMZkUveVJqeGw1aFhPL1Z4REFaVC9HQ1Y1a0JZN0toNmRhendmUllOa21IQkhDMD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= + server: https://lb.kubesphere.local:6443 + name: cluster.local +contexts: +- context: + cluster: cluster.local + user: kubernetes-admin + name: kubernetes-admin@cluster.local +current-context: kubernetes-admin@cluster.local +kind: Config +preferences: {} +users: +- name: kubernetes-admin + user: + client-certificate-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJRzd5REpscVdjdTh3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBNE1EZ3dPVEkzTXpkYUZ3MHlNVEE0TURnd09USTNNemhhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnsOTJBUkJDNTRSR3BsZ3VmCmw5a0hPd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFEQ2FUTXNBR1Vhdnhrazg0NDZnOGNRQUJpSmk5RTZiREV5TwphRnJubC8reGRzRmgvOTFiMlNpM3ZwaHFkZ2k5bXRYWkhhaWI5dnQ3aXdtSEFwbGQxUkhBU25sMFoxWFh1dkhzCmMzcXVIU0puY3dmc3JKT0I4UG9NRjVnaG10a0dPV3g0M2RHTTNHQnpGTVJ4ZGcrNmttNjRNUGhneXl6NTJjYUoKbzhPajNja1Uzd1NWNkxvempRcFVaUnZHV25qQjEwUXFPWXBtQUk4VCtlZkxKZzhuY0drK3V3UUVTeXBYWExpYwoxWVQ2QkFJeFhEK2tUUU1hOFhjdUhHZzlWRkdsUm9yK1EvY3l0S3RDeHVncFlxQ2xvbHVpckFUUnpsemRXamxYCkVQaHVjRWs2UUdIZEpObjd0M2NwRGkzSUdYYXJFdGxQQmFwck9nSGpkOHZVOStpWXdoQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=TJBUkJDNTRSR3BsZ3VmCmw5a0hPd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFEQ2FUTXNBR1Vhdnhrazg0NDZnOGNRQUJpSmk5RTZiREV5TwphRnJubC8reGRzRmgvOTFiMlNpM3ZwaHFkZ2k5bXRYWkhhaWI5dnQ3aXdtSEFwbGQxUkhBU25sMFoxWFh1dkhzCmMzcXVIU0puY3dmc3JKT0I4UG9NRjVnaG10a0dPV3g0M2RHTTNHQnpGTVJ4ZGcrNmttNjRNUGhneXl6NTJjYUoKbzhPajNja1Uzd1NWNkxvempRcFVaUnZHV25qQjEwUXFPWXBtQUk4VCtlZkxKZzhuY0drK3V3UUVTeXBYWExpYwoxWVQ2QkFJeFhEK2tUUU1hOFhjdUhHZzlWRkdsUm9yK1EvY3l0S3RDeHVncFlxQ2xvbHVpckFUUnpsemRXamxYCkVQaHVjRWs2UUdIZEpObjd0M2NwRGkzSUdYYXJFdGxQQmFwck9nSGpkOHZVOStpWXdoQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= + client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeXBLWkdtdmdiSHdNaU9pVU80UHZKZXB2MTJaaE1yRUIxK2xlVnM0dHIzMFNGQ0p1Ck8wc09jL2lUNmFuWEJzUU1XNDF6V3hwV1B5elkzWXlUWEJMTlIrM01pWTl2SFhUeWJ6eitTWnNlTzVENytHL3MKQnR5NkovNGpJb2pZZlRZNTFzUUxyRVJydStmVnNGeUU0U2dXbE1HYWdqV0RIMFltM0VJsOTJBUkJDNTRSR3BsZ3VmCmw5a0hPd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFEQ2FUTXNBR1Vhdnhrazg0NDZnOGNRQUJpSmk5RTZiREV5TwphRnJubC8reGRzRmgvOTFiMlNpM3ZwaHFkZ2k5bXRYWkhhaWI5dnQ3aXdtSEFwbGQxUkhBU25sMFoxWFh1dkhzCmMzcXVIU0puY3dmc3JKT0I4UG9NRjVnaG10a0dPV3g0M2RHTTNHQnpGTVJ4ZGcrNmttNjRNUGhneXl6NTJjYUoKbzhPajNja1Uzd1NWNkxvempRcFVaUnZHV25qQjEwUXFPWXBtQUk4VCtlZkxKZzhuY0drK3V3UUVTeXBYWExpYwoxWVQ2QkFJeFhEK2tUUU1hOFhjdUhHZzlWRkdsUm9yK1EvY3l0S3RDeHVncFlxQ2xvbHVpckFUUnpsemRXamxYCkVQaHVjRWs2UUdIZEpObjd0M2NwRGkzSUdYYXJFdGxQQmFwck9nSGpkOHZVOStpWXdoQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=Ygo3THE3a2tBMURKNTBld2pMUTNTd1Yxd2p6N2ZjeDYvbzUwRnJnK083dEJMVVdQNTNHaDQ1VjJpUEp2NkdPYk1uCjhIWElmem83cW5XRFQvU20ybW5HbitUdVY4THdLVWFXL2wya3FkRUNnWUVBcS9zRmR1RDk2Z3VoT2ZaRnczcWMKblZGekNGQ3JsMkUvVkdYQy92SmV1WnJLQnFtSUtNZFI3ajdLWS9WRFVlMnJocVd6MFh2Wm9Sa1FoMkdwWkdIawpDd3NzcENKTVl4L0hETTVaWlBvcittb1J6VE5HNHlDNGhTRGJ2VEFaTmV1VTZTK1hzL1JSTDJ6WnUwemNQQXk1CjJJRVgwelFpZ1JzK3VzS3Jkc1FVZXZrQ2dZQUUrQUNWeDJnMC94bmFsMVFJNmJsK3Y2TDJrZVJtVGppcHB4Wm0KS1JEd2xnaXpsWGxsTjhyQmZwSGNiK1ZnZ282anN2eHFrb0pkTEhBLzFDME5IMWVuS1NoUTlpZVFpeWNsZngwdQpKOE1oeW1JM0RBZUg1REJyOG1rZ0pwNnJwUXNBc1paYmVhOHlLTzV5eVdCYTN6VGxOVnQvNDRibGg5alpnTWNMCjNyUXFVUUtCZ1FETVlXdEt2S0hOQllXV0p5enFERnFPbS9qY3Z3andvcURibUZVMlU3UGs2aUdNVldBV3VYZ3cKSm5qQWtES01GN0JXSnJRUjR6RHVoQlhvQVMxWVhiQ2lGd2hTcXVjWGhFSGlwQ3Nib0haVVRtT1pXUUh4Vlp4bQowU1NiRXFZU2MvZHBDZ1BHRk9IaW1FdUVic05kc2JjRmRETDQyODZHb0psQUxCOGc3VWRUZUE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo= +``` diff --git a/content/zh/docs/multicluster-management/import-cloud-hosted-k8s/_index.md 
b/content/zh/docs/multicluster-management/import-cloud-hosted-k8s/_index.md
new file mode 100644
index 000000000..545c12498
--- /dev/null
+++ b/content/zh/docs/multicluster-management/import-cloud-hosted-k8s/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Import Cloud-hosted Kubernetes Cluster"
+weight: 3010
+
+_build:
+  render: false
+---
diff --git a/content/zh/docs/multicluster-management/import-cloud-hosted-k8s/import-aliyun-ack.md b/content/zh/docs/multicluster-management/import-cloud-hosted-k8s/import-aliyun-ack.md
new file mode 100644
index 000000000..1b4ce5659
--- /dev/null
+++ b/content/zh/docs/multicluster-management/import-cloud-hosted-k8s/import-aliyun-ack.md
@@ -0,0 +1,10 @@
+---
+title: "Import Aliyun ACK"
+keywords: 'kubernetes, kubesphere, multicluster, ACK'
+description: 'Import Aliyun ACK'
+
+
+weight: 2340
+---
+
+TBD
diff --git a/content/zh/docs/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md b/content/zh/docs/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
new file mode 100644
index 000000000..c1dc7ab9a
--- /dev/null
+++ b/content/zh/docs/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
@@ -0,0 +1,10 @@
+---
+title: "Import AWS EKS"
+keywords: 'kubernetes, kubesphere, multicluster, aws-eks'
+description: 'Import AWS EKS'
+
+
+weight: 2340
+---
+
+TBD
diff --git a/content/zh/docs/multicluster-management/import-on-prem-k8s/_index.md b/content/zh/docs/multicluster-management/import-on-prem-k8s/_index.md
new file mode 100644
index 000000000..a5583e5da
--- /dev/null
+++ b/content/zh/docs/multicluster-management/import-on-prem-k8s/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Import On-prem Kubernetes Cluster"
+weight: 3010
+
+_build:
+  render: false
+---
diff --git a/content/zh/docs/multicluster-management/import-on-prem-k8s/import-kubeadm-k8s.md b/content/zh/docs/multicluster-management/import-on-prem-k8s/import-kubeadm-k8s.md
new file mode 100644
index 000000000..23ebf51b1
--- /dev/null
+++ b/content/zh/docs/multicluster-management/import-on-prem-k8s/import-kubeadm-k8s.md
@@ -0,0 +1,10 @@
+---
+title: "Import Kubeadm Kubernetes"
+keywords: 'kubernetes, kubesphere, multicluster, kubeadm'
+description: 'Overview'
+
+
+weight: 2340
+---
+
+TBD
diff --git a/content/zh/docs/multicluster-management/introduction/_index.md b/content/zh/docs/multicluster-management/introduction/_index.md
new file mode 100644
index 000000000..44efc6f9c
--- /dev/null
+++ b/content/zh/docs/multicluster-management/introduction/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Introduction"
+weight: 3005
+
+_build:
+  render: false
+---
diff --git a/content/zh/docs/multicluster-management/introduction/kubefed-in-kubesphere.md b/content/zh/docs/multicluster-management/introduction/kubefed-in-kubesphere.md
new file mode 100644
index 000000000..7a85a3334
--- /dev/null
+++ b/content/zh/docs/multicluster-management/introduction/kubefed-in-kubesphere.md
@@ -0,0 +1,12 @@
+---
+title: "Kubernetes Federation in KubeSphere"
+keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud'
+description: 'Overview'
+
+
+weight: 2340
+---
+
+The multi-cluster feature involves network connections among multiple clusters, so it is important to understand the topological relations of the clusters before you start — this can save you a lot of work.
+
+Before you use the multi-cluster feature, you need to create a Host Cluster (hereafter referred to as **H** Cluster), which is actually a KubeSphere cluster with the multi-cluster feature enabled. All the clusters managed by the H Cluster are called Member Clusters (hereafter referred to as **M** Clusters). They are common KubeSphere clusters without the multi-cluster feature enabled. There can only be one H Cluster, while multiple M Clusters can exist at the same time. In a multi-cluster architecture, the network between the H Cluster and an M Cluster can be connected directly or through an agent. The networks between M Clusters can be completely isolated from one another.
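+
+On the H Cluster, managed clusters surface as ordinary API objects once the multi-cluster feature is enabled, so you can list them with kubectl. The names and versions below are illustrative only:
+
+```bash
+kubectl get clusters
+```
+
+```
+NAME       FEDERATED   PROVIDER     ACTIVE   VERSION
+host       true        kubesphere   true     v1.17.9
+beijing    true                     true     v1.17.9
+```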
diff --git a/content/zh/docs/multicluster-management/introduction/overview.md b/content/zh/docs/multicluster-management/introduction/overview.md
new file mode 100644
index 000000000..818f2cfd4
--- /dev/null
+++ b/content/zh/docs/multicluster-management/introduction/overview.md
@@ -0,0 +1,16 @@
+---
+title: "Overview"
+keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud'
+description: 'Overview'
+
+
+weight: 2335
+---
+
+Today, it's very common for organizations to run and manage multiple Kubernetes clusters on different cloud providers or infrastructures. Each Kubernetes cluster is a relatively self-contained unit. Meanwhile, the upstream community has been striving to research and develop multi-cluster management solutions, such as [kubefed](https://github.com/kubernetes-sigs/kubefed).
+
+The most common use cases in multi-cluster management include **service traffic load balancing, development and production isolation, decoupling of data processing and data storage, cross-cloud backup and disaster recovery, flexible allocation of computing resources, low-latency access for cross-region services, and no vendor lock-in**, etc.
+
+KubeSphere is developed to address the multi-cluster and multi-cloud management challenges and implement the preceding user scenarios, providing users with a unified control plane to distribute applications and their replicas to multiple clusters from public clouds to on-premises environments. KubeSphere also provides rich observability across multiple clusters, including centralized monitoring, logging, events, and auditing logs.
+ +![KubeSphere Multi-cluster Management](/images/docs/multi-cluster-overview.jpg) diff --git a/content/zh/docs/multicluster-management/release-v210.md b/content/zh/docs/multicluster-management/release-v210.md deleted file mode 100644 index 1eb9cedb7..000000000 --- a/content/zh/docs/multicluster-management/release-v210.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "Enable Multicluster Management" -keywords: "kubernetes, StorageClass, kubesphere, PVC" -description: "Enable Multicluster Management in KubeSphere" - -linkTitle: "Enable Multicluster Management" -weight: 200 ---- - -TBD diff --git a/content/zh/docs/multicluster-management/release-v211.md b/content/zh/docs/multicluster-management/release-v211.md deleted file mode 100644 index 66048687f..000000000 --- a/content/zh/docs/multicluster-management/release-v211.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: "Kubernetes Federation in KubeSphere" -keywords: "kubernetes, multicluster, kubesphere, federation, hybridcloud" -description: "Kubernetes and KubeSphere node management" - -linkTitle: "Kubernetes Federation in KubeSphere" -weight: 100 ---- diff --git a/content/zh/docs/multicluster-management/release-v300.md b/content/zh/docs/multicluster-management/release-v300.md deleted file mode 100644 index e52dee1e1..000000000 --- a/content/zh/docs/multicluster-management/release-v300.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "Introduction" -keywords: "kubernetes, multicluster, kubesphere, hybridcloud" -description: "Upgrade KubeSphere" - -linkTitle: "Introduction" -weight: 50 ---- - -TBD diff --git a/content/zh/docs/multicluster-management/remove-cluster/_index.md b/content/zh/docs/multicluster-management/remove-cluster/_index.md new file mode 100644 index 000000000..b303ded0a --- /dev/null +++ b/content/zh/docs/multicluster-management/remove-cluster/_index.md @@ -0,0 +1,7 @@ +--- +linkTitle: "Remove Cluster" +weight: 3010 + +_build: + render: false +--- diff --git a/content/zh/docs/multicluster-management/remove-cluster/kubefed-in-kubesphere.md b/content/zh/docs/multicluster-management/remove-cluster/kubefed-in-kubesphere.md new file mode 100644 index 000000000..f9a72caac --- /dev/null +++ b/content/zh/docs/multicluster-management/remove-cluster/kubefed-in-kubesphere.md @@ -0,0 +1,10 @@ +--- +title: "Remove a Cluster from KubeSphere" +keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud' +description: 'Overview' + + +weight: 2340 +--- + +TBD diff --git a/content/zh/docs/pluggable-components/app-store.md b/content/zh/docs/pluggable-components/app-store.md new file mode 100644 index 000000000..4045d6207 --- /dev/null +++ b/content/zh/docs/pluggable-components/app-store.md @@ -0,0 +1,144 @@ +--- +title: "KubeSphere App Store" +keywords: "Kubernetes, KubeSphere, app-store, OpenPitrix" +description: "How to Enable KubeSphere App Store" + +linkTitle: "KubeSphere App Store" +weight: 3515 +--- + +## What is KubeSphere App Store + +As an open-source and app-centric container platform, KubeSphere provides users with a Helm-based app store for application lifecycle management on the back of [OpenPitrix](https://github.com/openpitrix/openpitrix), an open-source web-based system to package, deploy and manage different types of apps. KubeSphere App Store allows ISVs, developers and users to upload, test, deploy and release apps with just several clicks in a one-stop shop. + +Internally, KubeSphere App Store can serve as a place for different teams to share data, middleware, and office applications. 
Externally, it helps set industry standards for app building and delivery. By default, there are 15 apps in the App Store. After you enable this feature, you can add more apps with app templates.
+
+![app-store](https://ap3.qingstor.com/kubesphere-website/docs/20200828170503.png)
+
+For more information, see App Store.
+
+## Enable App Store before Installation
+
+### Installing on Linux
+
+When you install KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](https://kubesphere-v3.netlify.app/docs/installing-on-linux/introduction/multioverview/), you create a default file **config-sample.yaml**. Modify the file by executing the following command:
+
+```bash
+vi config-sample.yaml
+```
+
+{{< notice note >}}
+
+If you adopt [All-in-one Installation](https://kubesphere-v3.netlify.app/docs/quick-start/all-in-one-on-linux/), you do not need to create a config-sample.yaml file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the App Store in this mode (e.g. for testing purposes), refer to the following section to see how the App Store can be installed after installation.
+
+{{}}
+
+2. In this file, navigate to `openpitrix` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+```bash
+openpitrix:
+  enabled: true # Change "false" to "true"
+```
+
+3. Create a cluster using the configuration file:
+
+```bash
+./kk create cluster -f config-sample.yaml
+```
+
+### Installing on Kubernetes
+
+When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for cluster settings. If you want to install the App Store, do not use `kubectl apply -f` directly for this file.
+
+1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable the App Store, create a local file cluster-configuration.yaml.
+
+```bash
+vi cluster-configuration.yaml
+```
+
+2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it into the local file just created.
+3. In this local cluster-configuration.yaml file, navigate to `openpitrix` and enable the App Store by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+```bash
+openpitrix:
+  enabled: true # Change "false" to "true"
+```
+
+4. Execute the following command to start installation:
+
+```bash
+kubectl apply -f cluster-configuration.yaml
+```
+
+## Enable App Store after Installation
+
+1. Log in to the console as `admin`. Click **Platform** at the top left corner and select **Clusters Management**.
+
+![clusters-management](https://ap3.qingstor.com/kubesphere-website/docs/20200828111130.png)
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+{{< notice info >}}
+
+A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server.
They can use these resources like any other native Kubernetes objects.
+
+{{}}
+
+3. In **Resource List**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
+
+![edit-yaml](https://ap3.qingstor.com/kubesphere-website/docs/20200827182002.png)
+
+4. In this YAML file, navigate to `openpitrix` and change `false` to `true` for `enabled`. After you finish, click **Update** at the bottom right corner to save the configuration.
+
+```bash
+openpitrix:
+  enabled: true # Change "false" to "true"
+```
+
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+```bash
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+```
+
+{{< notice tip >}}
+
+You can find the web kubectl tool by clicking the hammer icon at the bottom right corner of the console.
+
+{{}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the Component in Dashboard" >}}
+
+Go to **Components** and check the status of OpenPitrix. You may see an image as follows:
+
+![openpitrix](https://ap3.qingstor.com/kubesphere-website/docs/20200829124018.png)
+
+{{}}
+
+{{< tab "Verify the Component through kubectl" >}}
+
+Execute the following command to check the status of pods:
+
+```bash
+kubectl get pod -n openpitrix-system
+```
+
+The output may look as follows if the component runs successfully:
+
+```bash
+NAME                                                READY   STATUS      RESTARTS   AGE
+hyperpitrix-generate-kubeconfig-pznht               0/2     Completed   0          1h6m
+hyperpitrix-release-app-job-hzdjf                   0/1     Completed   0          1h6m
+openpitrix-hyperpitrix-deployment-fb76645f4-crvmm   1/1     Running     0          1h6m
+```
+
+{{}}
+
+{{}}
+
diff --git a/content/zh/docs/pluggable-components/auditing-logs.md b/content/zh/docs/pluggable-components/auditing-logs.md
new file mode 100644
index 000000000..ce801d30e
--- /dev/null
+++ b/content/zh/docs/pluggable-components/auditing-logs.md
@@ -0,0 +1,203 @@
+---
+title: "KubeSphere Auditing Logs"
+keywords: "Kubernetes, auditing, KubeSphere, logs"
+description: "How to enable KubeSphere Auditing Logs"
+
+linkTitle: "KubeSphere Auditing Logs"
+weight: 3525
+---
+
+## What are KubeSphere Auditing Logs?
+
+The KubeSphere Auditing Log System provides a security-relevant, chronological set of records documenting the sequence of activities related to individual users, managers, or other components of the system. Each request to KubeSphere generates an event that is then written to a webhook and processed according to a certain rule.
+
+For more information, see Logging, Events, and Auditing.
+
+## Enable Auditing Logs before Installation
+
+### Installing on Linux
+
+When you install KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](https://kubesphere-v3.netlify.app/docs/installing-on-linux/introduction/multioverview/), you create a default file **config-sample.yaml**. Modify the file by executing the following command:
+
+```bash
+vi config-sample.yaml
+```
+
+{{< notice note >}}
+
+If you adopt [All-in-one Installation](https://kubesphere-v3.netlify.app/docs/quick-start/all-in-one-on-linux/), you do not need to create a config-sample.yaml file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Auditing in this mode (e.g.
for testing purposes), refer to the following section to see how Auditing can be installed after installation.
+
+{{}}
+
+2. In this file, navigate to `auditing` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+```bash
+auditing:
+  enabled: true # Change "false" to "true"
+```
+
+{{< notice note >}}
+
+By default, KubeKey will install Elasticsearch internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in **config-sample.yaml** if you want to enable Auditing, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one.
+
+{{}}
+
+```bash
+es: # Storage backend for logging, tracing, events and auditing.
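+  # The sizing fields below apply to the built-in Elasticsearch only; they are not
+  # used once the two external Elasticsearch fields at the end are filled in.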
+  elasticsearchMasterReplicas: 1 # total number of master nodes; an even number is not allowed
+  elasticsearchDataReplicas: 1 # total number of data nodes
+  elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes
+  elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes
+  logMaxAge: 7 # Log retention time in built-in Elasticsearch; it is 7 days by default.
+  elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
+  externalElasticsearchUrl: # The URL of external Elasticsearch
+  externalElasticsearchPort: # The port of external Elasticsearch
+```
+
+3. Create a cluster using the configuration file:
+
+```bash
+./kk create cluster -f config-sample.yaml
+```
+
+### Installing on Kubernetes
+
+When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for cluster settings. If you want to install Auditing, do not use `kubectl apply -f` directly for this file.
+
+1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable Auditing, create a local file cluster-configuration.yaml.
+
+```bash
+vi cluster-configuration.yaml
+```
+
+2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it into the local file just created.
+3. In this local cluster-configuration.yaml file, navigate to `auditing` and enable Auditing by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+```bash
+auditing:
+  enabled: true # Change "false" to "true"
+```
+
+{{< notice note >}}
+
+By default, ks-installer will install Elasticsearch internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in **cluster-configuration.yaml** if you want to enable Auditing, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one.
+
+{{}}
+
+```bash
+es: # Storage backend for logging, tracing, events and auditing.
+  elasticsearchMasterReplicas: 1 # total number of master nodes; an even number is not allowed
+  elasticsearchDataReplicas: 1 # total number of data nodes
+  elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes
+  elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes
+  logMaxAge: 7 # Log retention time in built-in Elasticsearch; it is 7 days by default.
+  elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
+  externalElasticsearchUrl: # The URL of external Elasticsearch
+  externalElasticsearchPort: # The port of external Elasticsearch
+```
+
+4. Execute the following command to start installation:
+
+```bash
+kubectl apply -f cluster-configuration.yaml
+```
+
+## Enable Auditing Logs after Installation
+
+1. Log in to the console as `admin`. Click **Platform** at the top left corner and select **Clusters Management**.
+
+![clusters-management](https://ap3.qingstor.com/kubesphere-website/docs/20200828111130.png)
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+{{< notice info >}}
+
+A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
+
+{{}}
+
+3. In **Resource List**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
+
+![edit-yaml](https://ap3.qingstor.com/kubesphere-website/docs/20200827182002.png)
+
+4. In this YAML file, navigate to `auditing` and change `false` to `true` for `enabled`. After you finish, click **Update** at the bottom right corner to save the configuration.
+
+```bash
+auditing:
+  enabled: true # Change "false" to "true"
+```
+
+{{< notice note >}}
+
+By default, Elasticsearch will be installed internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in this YAML file if you want to enable Auditing, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
+
+{{}}
+
+```bash
+es: # Storage backend for logging, tracing, events and auditing.
+  elasticsearchMasterReplicas: 1 # total number of master nodes; an even number is not allowed
+  elasticsearchDataReplicas: 1 # total number of data nodes
+  elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes
+  elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes
+  logMaxAge: 7 # Log retention time in built-in Elasticsearch; it is 7 days by default.
+  elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
+  externalElasticsearchUrl: # The URL of external Elasticsearch
+  externalElasticsearchPort: # The port of external Elasticsearch
+```
+
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+```bash
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+```
+
+{{< notice tip >}}
+
+You can find the web kubectl tool by clicking the hammer icon at the bottom right corner of the console.
+
+{{}}
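+
+If you prefer not to edit the resource interactively, the same change can be applied with a one-line patch — a sketch that assumes the field path `spec.auditing.enabled` matches the ClusterConfiguration YAML shown above:
+
+```bash
+# Merge-patch the ClusterConfiguration instead of editing it in the console
+kubectl -n kubesphere-system patch cc ks-installer --type merge -p '{"spec":{"auditing":{"enabled":true}}}'
+```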
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the Component in Dashboard" >}}
+
+If you enable both Logging and Auditing, you can check the status of Auditing in **Logging** in **Components**. You may see an image as follows:
+
+![auditing](https://ap3.qingstor.com/kubesphere-website/docs/20200829121140.png)
+
+If you only enable Auditing without Logging installed, you cannot see the image above as the button **Logging** will not be displayed.
+
+{{}}
+
+{{< tab "Verify the Component through kubectl" >}}
+
+Execute the following command to check the status of pods:
+
+```bash
+kubectl get pod -n kubesphere-logging-system
+```
+
+The output may look as follows if the component runs successfully:
+
+```bash
+NAME                                                              READY   STATUS      RESTARTS   AGE
+elasticsearch-logging-curator-elasticsearch-curator-159872n9g9g   0/1     Completed   0          2d10h
+elasticsearch-logging-curator-elasticsearch-curator-159880tzb7x   0/1     Completed   0          34h
+elasticsearch-logging-curator-elasticsearch-curator-1598898q8w7   0/1     Completed   0          10h
+elasticsearch-logging-data-0                                      1/1     Running     1          2d20h
+elasticsearch-logging-data-1                                      1/1     Running     1          2d20h
+elasticsearch-logging-discovery-0                                 1/1     Running     1          2d20h
+fluent-bit-6v5fs                                                  1/1     Running     1          2d20h
+fluentbit-operator-5bf7687b88-44mhq                               1/1     Running     1          2d20h
+kube-auditing-operator-7574bd6f96-p4jvv                           1/1     Running     1          2d20h
+kube-auditing-webhook-deploy-6dfb46bb6c-hkhmx                     1/1     Running     1          2d20h
+kube-auditing-webhook-deploy-6dfb46bb6c-jp77q                     1/1     Running     1          2d20h
+```
+
+{{}}
+
+{{}}
diff --git a/content/zh/docs/pluggable-components/devops.md b/content/zh/docs/pluggable-components/devops.md
new file mode 100644
index 000000000..3622f299a
--- /dev/null
+++ b/content/zh/docs/pluggable-components/devops.md
@@ -0,0 +1,141 @@
+---
+title: "KubeSphere DevOps System"
+keywords: "Kubernetes, Jenkins, KubeSphere, DevOps, cicd"
+description: "How to Enable KubeSphere DevOps System"
+
+linkTitle: "KubeSphere DevOps System"
+weight: 3520
+---
+
+## What is KubeSphere DevOps System
+
+The KubeSphere DevOps System is designed for CI/CD workflows in Kubernetes. Based on [Jenkins](https://jenkins.io/), it provides one-stop solutions to help both development and Ops teams build, test and publish apps to Kubernetes in a straightforward way. It also features plugin management, Binary-to-Image (B2I), Source-to-Image (S2I), code dependency caching, code quality analysis, pipeline logging, etc.
+
+The DevOps system offers an enabling environment for users as apps can be automatically released to the same platform. It is also compatible with third-party private image registries (e.g. Harbor) and code repositories (e.g. GitLab/GitHub/SVN/BitBucket). As such, it creates excellent user experiences by providing users with comprehensive, visualized CI/CD pipelines, which are extremely useful in air-gapped environments.
+
+For more information, see DevOps Administration.
+
+## Enable DevOps before Installation
+
+### Installing on Linux
+
+When you install KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](https://kubesphere-v3.netlify.app/docs/installing-on-linux/introduction/multioverview/), you create a default file **config-sample.yaml**.
Modify the file by executing the following command:
+
+```bash
+vi config-sample.yaml
+```
+
+{{< notice note >}}
+
+If you adopt [All-in-one Installation](https://kubesphere-v3.netlify.app/docs/quick-start/all-in-one-on-linux/), you do not need to create a config-sample.yaml file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable DevOps in this mode (e.g. for testing purposes), refer to the following section to see how DevOps can be installed after installation.
+
+{{}}
+
+2. In this file, navigate to `devops` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+```bash
+devops:
+  enabled: true # Change "false" to "true"
+```
+
+3. Create a cluster using the configuration file:
+
+```bash
+./kk create cluster -f config-sample.yaml
+```
+
+### Installing on Kubernetes
+
+When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for cluster settings. If you want to install DevOps, do not use `kubectl apply -f` directly for this file.
+
+1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable DevOps, create a local file cluster-configuration.yaml.
+
+```bash
+vi cluster-configuration.yaml
+```
+
+2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it into the local file just created.
+3. In this local cluster-configuration.yaml file, navigate to `devops` and enable DevOps by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+```bash
+devops:
+  enabled: true # Change "false" to "true"
+```
+
+4. Execute the following command to start installation:
+
+```bash
+kubectl apply -f cluster-configuration.yaml
+```
+
+## Enable DevOps after Installation
+
+1. Log in to the console as `admin`. Click **Platform** at the top left corner and select **Clusters Management**.
+
+![clusters-management](https://ap3.qingstor.com/kubesphere-website/docs/20200828111130.png)
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+{{< notice info >}}
+
+A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
+
+{{}}
+
+3. In **Resource List**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
+
+![edit-yaml](https://ap3.qingstor.com/kubesphere-website/docs/20200827182002.png)
+
+4. In this YAML file, navigate to `devops` and change `false` to `true` for `enabled`. After you finish, click **Update** at the bottom right corner to save the configuration.
+
+```bash
+devops:
+  enabled: true # Change "false" to "true"
+```
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+```bash
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+```
+
+{{< notice tip >}}
+
+You can find the web kubectl tool by clicking the hammer icon at the bottom right corner of the console.
+
+{{}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the Component in Dashboard" >}}
+
+Go to **Components** and check the status of DevOps. You may see an image as follows:
+
+![devops](https://ap3.qingstor.com/kubesphere-website/docs/20200829125245.png)
+
+{{}}
+
+{{< tab "Verify the Component through kubectl" >}}
+
+Execute the following command to check the status of pods:
+
+```bash
+kubectl get pod -n kubesphere-devops-system
+```
+
+The output may look as follows if the component runs successfully:
+
+```bash
+NAME                                       READY   STATUS    RESTARTS   AGE
+ks-jenkins-68b8949bb-jcvkt                 1/1     Running   0          1h3m
+s2ioperator-0                              1/1     Running   1          1h3m
+uc-jenkins-update-center-8c898f44f-hqv78   1/1     Running   0          1h14m
+```
+
+{{}}
+
+{{}}
diff --git a/content/zh/docs/pluggable-components/logging.md b/content/zh/docs/pluggable-components/logging.md
new file mode 100644
index 000000000..18451e2d6
--- /dev/null
+++ b/content/zh/docs/pluggable-components/logging.md
@@ -0,0 +1,196 @@
+---
+title: "KubeSphere Logging System"
+keywords: "Kubernetes, Elasticsearch, KubeSphere, Logging, logs"
+description: "How to Enable KubeSphere Logging System"
+
+linkTitle: "KubeSphere Logging System"
+weight: 3535
+---
+
+## What is KubeSphere Logging System
+
+KubeSphere provides a powerful, holistic and easy-to-use logging system for log collection, query and management. It covers logs at various levels, including tenants, infrastructure resources, and applications. Users can search logs across different dimensions, such as project, workload, Pod and keyword. Compared with Kibana, the tenant-based logging system of KubeSphere features better isolation and security among tenants, as each tenant can only view his or her own logs. Apart from KubeSphere's own logging system, the container platform also allows users to add third-party log collectors, such as Elasticsearch, Kafka and Fluentd.
+
+For more information, see Logging, Events and Auditing.
+
+## Enable Logging before Installation
+
+### Installing on Linux
+
+When you install KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](https://kubesphere-v3.netlify.app/docs/installing-on-linux/introduction/multioverview/), you create a default file **config-sample.yaml**. Modify the file by executing the following command:
+
+```bash
+vi config-sample.yaml
+```
+
+{{< notice note >}}
+
+If you adopt [All-in-one Installation](https://kubesphere-v3.netlify.app/docs/quick-start/all-in-one-on-linux/), you do not need to create a config-sample.yaml file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Logging in this mode (e.g. for testing purposes), refer to the following section to see how Logging can be installed after installation.
+
+{{}}
+
+2. In this file, navigate to `logging` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+```bash
+logging:
+  enabled: true # Change "false" to "true"
+```
+
+{{< notice note >}}
+
+By default, KubeKey will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in **config-sample.yaml** if you want to enable Logging, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one.
+
+{{}}
+
+```bash
+es: # Storage backend for logging, tracing, events and auditing.
+  elasticsearchMasterReplicas: 1 # total number of master nodes; an even number is not allowed
+  elasticsearchDataReplicas: 1 # total number of data nodes
+  elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes
+  elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes
+  logMaxAge: 7 # Log retention time in built-in Elasticsearch; it is 7 days by default.
+  elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
+  externalElasticsearchUrl: # The URL of external Elasticsearch
+  externalElasticsearchPort: # The port of external Elasticsearch
+```
+3. Create a cluster using the configuration file:
+
+```bash
+./kk create cluster -f config-sample.yaml
+```
+
+### Installing on Kubernetes
+
+When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for cluster settings. If you want to install Logging, do not use `kubectl apply -f` directly for this file.
+
+1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable Logging, create a local file cluster-configuration.yaml.
+
+```bash
+vi cluster-configuration.yaml
+```
+
+2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it into the local file just created.
+3. In this local cluster-configuration.yaml file, navigate to `logging` and enable Logging by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+```bash
+logging:
+  enabled: true # Change "false" to "true"
+```
+
+{{< notice note >}}
+
+By default, ks-installer will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in **cluster-configuration.yaml** if you want to enable Logging, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one.
+
+{{}}
+
+```bash
+es: # Storage backend for logging, tracing, events and auditing.
+  elasticsearchMasterReplicas: 1 # total number of master nodes; even numbers are not allowed
+  elasticsearchDataReplicas: 1 # total number of data nodes
+  elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes
+  elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes
+  logMaxAge: 7 # Log retention time in built-in Elasticsearch; 7 days by default.
+  elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
+  externalElasticsearchUrl: # The URL of external Elasticsearch
+  externalElasticsearchPort: # The port of external Elasticsearch
+```
+
+4. Execute the following command to start installation:
+
+```bash
+kubectl apply -f cluster-configuration.yaml
+```
+
+## Enable Logging after Installation
+
+1. Log in to the console as `admin`. Click **Platform** at the top left corner and select **Clusters Management**.
+
+![clusters-management](https://ap3.qingstor.com/kubesphere-website/docs/20200828111130.png)
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detailed page.
+
+{{< notice info >}}
+
+A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
+
+{{</ notice >}}
+
+3. In **Resource List**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
+
+![edit-yaml](https://ap3.qingstor.com/kubesphere-website/docs/20200827182002.png)
+
+4. In this YAML file, navigate to `logging` and change `false` to `true` for `enabled`. After you finish, click **Update** at the bottom right corner to save the configuration.
+
+```bash
+logging:
+  enabled: true # Change "false" to "true"
+```
+
+{{< notice note >}}
+
+By default, Elasticsearch will be installed internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in this YAML file if you want to enable Logging, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide this information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
+
+{{</ notice >}}
+
+```bash
+es: # Storage backend for logging, tracing, events and auditing.
+  elasticsearchMasterReplicas: 1 # total number of master nodes; even numbers are not allowed
+  elasticsearchDataReplicas: 1 # total number of data nodes
+  elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes
+  elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes
+  logMaxAge: 7 # Log retention time in built-in Elasticsearch; 7 days by default.
+  elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
+  externalElasticsearchUrl: # The URL of external Elasticsearch
+  externalElasticsearchPort: # The port of external Elasticsearch
+```
+
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+```bash
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+```
+
+{{< notice tip >}}
+
+You can find the web kubectl tool by clicking the hammer icon at the bottom right corner of the console.
+
+{{</ notice >}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the Component in Dashboard" >}}
+
+Go to **Components** and check the status of Logging.
You may see an image as follows: + +![logging](https://ap3.qingstor.com/kubesphere-website/docs/20200829104152.png) + +{{}} + +{{< tab "Verify the Component through kubectl" >}} + +Execute the following command to check the status of pods: + +```bash +kubectl get pod -n kubesphere-logging-system +``` + +The output may look as follows if the component runs successfully: + +```bash +NAME READY STATUS RESTARTS AGE +elasticsearch-logging-data-0 1/1 Running 0 9m33s +elasticsearch-logging-data-1 1/1 Running 0 5m12s +elasticsearch-logging-discovery-0 1/1 Running 0 9m33s +fluent-bit-qpvrf 1/1 Running 0 4m56s +fluentbit-operator-5bf7687b88-z7bgg 1/1 Running 0 9m26s +logsidecar-injector-deploy-667c6c9579-662pm 2/2 Running 0 8m56s +logsidecar-injector-deploy-667c6c9579-tjckn 2/2 Running 0 8m56s +``` + +{{}} + +{{}} \ No newline at end of file diff --git a/content/zh/docs/pluggable-components/release-v200.md b/content/zh/docs/pluggable-components/release-v200.md index ba048fe22..7f0c64c6c 100644 --- a/content/zh/docs/pluggable-components/release-v200.md +++ b/content/zh/docs/pluggable-components/release-v200.md @@ -1,20 +1,20 @@ --- -title: "Release Notes For 2.0.0" -keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" -description: "KubeSphere Release Notes For 2.0.0" +title: "KubeSphere Alerting and Notification" +keywords: "kubernetes, alertmanager, kubesphere, alerting, notification" +description: "How to Enable Alerting and Notification System" -linkTitle: "Release Notes - 2.0.0" +linkTitle: "KubeSphere Alerting and Notification" weight: 500 --- -KubeSphere 2.0.0 was released on **May 18th, 2019**. +KubeSphere 2.0.0 was released on **May 18th, 2019**. ## What's New in 2.0.0 ### Component Upgrades - Support Kubernetes [Kubernetes 1.13.5](https://github.com/kubernetes/kubernetes/releases/tag/v1.13.5) -- Integrate [QingCloud Cloud Controller](https://github.com/yunify/qingcloud-cloud-controller-manager). After installing load balancer, QingCloud load balancer can be created through KubeSphere console and the backend workload is bound automatically.  +- Integrate [QingCloud Cloud Controller](https://github.com/yunify/qingcloud-cloud-controller-manager). After installing load balancer, QingCloud load balancer can be created through KubeSphere console and the backend workload is bound automatically.  - Integrate [QingStor CSI v0.3.0](https://github.com/yunify/qingstor-csi/tree/v0.3.0) storage plugin and support physical NeonSAN storage system. Support SAN storage service with high availability and high performance. - Integrate [QingCloud CSI v0.2.1](https://github.com/yunify/qingcloud-csi/tree/v0.2.1) storage plugin and support many types of volume to create QingCloud block services. - Harbor is upgraded to 1.7.5. diff --git a/content/zh/docs/pluggable-components/release-v2001.md b/content/zh/docs/pluggable-components/release-v2001.md new file mode 100644 index 000000000..f47697406 --- /dev/null +++ b/content/zh/docs/pluggable-components/release-v2001.md @@ -0,0 +1,92 @@ +--- +title: "KubeSphere Events System" +keywords: "kubernetes, events, kubesphere, k8s-events" +description: "How to enable KubeSphere events system" + +linkTitle: "KubeSphere Events System" +weight: 700 +--- + +KubeSphere 2.0.0 was released on **May 18th, 2019**. 
+
+## What's New in 2.0.0
+
+### Component Upgrades
+
+- Support [Kubernetes 1.13.5](https://github.com/kubernetes/kubernetes/releases/tag/v1.13.5)
+- Integrate [QingCloud Cloud Controller](https://github.com/yunify/qingcloud-cloud-controller-manager). After installing the cloud controller, QingCloud load balancers can be created through the KubeSphere console and the backend workloads are bound automatically.
+- Integrate the [QingStor CSI v0.3.0](https://github.com/yunify/qingstor-csi/tree/v0.3.0) storage plugin and support the physical NeonSAN storage system. Support SAN storage services with high availability and high performance.
+- Integrate the [QingCloud CSI v0.2.1](https://github.com/yunify/qingcloud-csi/tree/v0.2.1) storage plugin and support many volume types for creating QingCloud block storage services.
+- Harbor is upgraded to 1.7.5.
+- GitLab is upgraded to 11.8.1.
+- Prometheus is upgraded to 2.5.0.
+
+### Microservice Governance
+
+- Integrate Istio 1.1.1 and support visualization of service mesh management.
+- Enable access to the project's external websites and application traffic governance.
+- Provide the built-in sample microservice [Bookinfo Application](https://istio.io/docs/examples/bookinfo/).
+- Support traffic governance.
+- Support traffic mirroring.
+- Provide load balancing for microservices based on Istio.
+- Support canary release.
+- Enable blue-green deployment.
+- Enable circuit breaking.
+- Enable microservice tracing.
+
+### DevOps (CI/CD Pipeline)
+
+- CI/CD pipelines provide email notifications, including notifications during builds.
+- Enhance graphical editing of CI/CD pipelines, with more common plugins and execution conditions.
+- Provide source code vulnerability scanning based on SonarQube 7.4.
+- Support the [Source to Image](https://github.com/kubesphere/s2ioperator) feature.
+
+### Monitoring
+
+- Provide independent monitoring pages for Kubernetes components, including etcd, kube-apiserver and kube-scheduler.
+- Optimize several monitoring algorithms.
+- Optimize monitoring resource usage, reducing Prometheus storage and disk usage by up to 80%.
+
+### Logging
+
+- Provide a unified, tenant-based log console.
+- Support exact and fuzzy log search.
+- Support real-time and historical logs.
+- Support combined log queries based on namespace, workload, Pod, container, keywords and time range.
+- Support drilling down to the detail page of a single log, where Pods and containers can be switched.
+- [FluentBit Operator](https://github.com/kubesphere/fluentbit-operator) supports log collection settings: ElasticSearch, Kafka and Fluentd can be added, activated or turned off as log collectors. Before sending logs to collectors, you can configure filtering conditions for the logs you need.
+
+### Alerting and Notifications
+
+- Email notifications are available for cluster nodes and workload resources.
+- Notification rules can combine multiple monitoring resources; different warning levels, detection cycles, push times and thresholds can be configured.
+- Notification time and notifiers can be set.
+- Enable notification repeating rules for different levels.
+
+### Security Enhancement
+
+- Fix the runC container escape vulnerability [Runc container breakout](https://log.qingcloud.com/archives/5127)
+- Fix the Alpine Docker image vulnerability [Alpine container shadow breakout](https://www.alpinelinux.org/posts/Docker-image-vulnerability-CVE-2019-5021.html)
+- Support single-login and multi-login configuration items.
+- Verification code is required after multiple invalid logins.
+- Enhance the password policy to prevent weak passwords.
+- Other security enhancements.
+
+### Interface Optimization
+
+- Optimize multiple aspects of the console user experience, such as switching between DevOps projects and other projects.
+- Optimize many Chinese and English web pages.
+
+### Others
+
+- Support etcd backup and recovery.
+- Support regular cleanup of Docker images.
+
+## Bug Fixes
+
+- Fix delayed updates of resource and deleted pages.
+- Fix dirty data left over after deleting HPA workloads.
+- Fix incorrect Job status display.
+- Correct resource quota, Pod usage and storage metrics algorithms.
+- Adjust CPU usage percentages.
+- Many more bug fixes.
diff --git a/content/zh/docs/pluggable-components/release-v201.md b/content/zh/docs/pluggable-components/release-v201.md
deleted file mode 100644
index 2407dce8a..000000000
--- a/content/zh/docs/pluggable-components/release-v201.md
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: "Release Notes For 2.0.1"
-keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
-description: "KubeSphere Release Notes For 2.0.1"
-
-linkTitle: "Release Notes - 2.0.1"
-weight: 400
----
-
-KubeSphere 2.0.1 was released on **June 9th, 2019**.
-
-## Bug Fix
-
-- Fix the issue that CI/CD pipeline cannot recognize correct special characters in the code branch.
-- Fix CI/CD pipeline's issue of being unable to check logs.
-- Fix no-log data output problem caused by index document fragmentation abnormity during the log query.
-- Fix prompt exceptions when searching for logs that do not exist.
-- Fix the line-overlap problem on traffic governance topology and fixed invalid image strategy application.
-- Many more bugfix
diff --git a/content/zh/docs/pluggable-components/release-v202.md b/content/zh/docs/pluggable-components/release-v202.md
deleted file mode 100644
index 3c8fec965..000000000
--- a/content/zh/docs/pluggable-components/release-v202.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: "Release Notes For 2.0.2"
-keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
-description: "KubeSphere Release Notes For 2.0.2"
-
-linkTitle: "Release Notes - 2.0.2"
-weight: 300
----
-
-KubeSphere 2.0.2 was released on July 9, 2019, which fixes known bugs and enhances existing feature. If you have installed versions of 1.0.x, 2.0.0 or 2.0.1, please download KubeSphere installer v2.0.2 to upgrade.
-
-## What's New in 2.0.2
-
-### Enhanced Features
-
-- [API docs](/api-reference/api-docs/) are available on the official website.
-- Block brute-force attacks.
-- Standardize the maximum length of resource names.
-- Upgrade the gateway of project (Ingress Controller) to the version of 0.24.1. Support Ingress grayscale release.
-
-## List of Fixed Bugs
-
-- Fix the issue that traffic topology displays resources outside of this project.
-- Fix the extra service component issue from traffic topology under specific circumstances.
-- Fix the execution issue when "Source to Image" reconstructs images under specific circumstances.
-- Fix the page display problem when "Source to Image" job fails.
-- Fix the log checking problem when Pod status is abnormal.
-- Fix the issue that disk monitor cannot detect some types of volume mounting, such as LVM volume.
-- Fix the problem of detecting deployed applications.
-- Fix incorrect status of application component.
-- Fix host node's number calculation errors.
-- Fix input data loss caused by switching reference configuration buttons when adding environmental variables.
-- Fix the rerun job issue that the Operator role cannot execute. -- Fix the initialization issue on IPv4 environment uuid. -- Fix the issue that the log detail page cannot be scrolled down to check past logs. -- Fix wrong APIServer addresses in KubeConfig files. -- Fix the issue that DevOps project's name cannot be changed. -- Fix the issue that container logs cannot specify query time. -- Fix the saving problem on relevant repository's secrets under certain circumstances. -- Fix the issue that application's service component creation page does not have image registry's secrets. diff --git a/content/zh/docs/pluggable-components/release-v210.md b/content/zh/docs/pluggable-components/release-v210.md deleted file mode 100644 index ae876bee6..000000000 --- a/content/zh/docs/pluggable-components/release-v210.md +++ /dev/null @@ -1,155 +0,0 @@ ---- -title: "Release Notes For 2.1.0" -keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" -description: "KubeSphere Release Notes For 2.1.0" - -linkTitle: "Release Notes - 2.1.0" -weight: 200 ---- - -KubeSphere 2.1.0 was released on Nov 11th, 2019, which fixes known bugs, adds some new features and brings some enhancement. If you have installed versions of 2.0.x, please upgrade it and enjoy the better user experience of v2.1.0. - -## Installer Enhancement - -- Decouple some components and make components including DevOps, service mesh, app store, logging, alerting and notification optional and pluggable -- Add Grafana (v5.2.4) as the optional component -- Upgrade Kubernetes to 1.15.5. It is also compatible with 1.14.x and 1.13.x -- Upgrade [OpenPitrix](https://openpitrix.io/) to v0.4.5 -- Upgrade the log forwarder Fluent Bit to v1.3.2 -- Upgrade Jenkins to v2.176.2 -- Upgrade Istio to 1.3.3 -- Optimize the high availability for core components - -## App Store - -### Features - -Support upload / test / review / deploy / publish/ classify / upgrade / deploy and delete apps, and provide nine built-in applications - -### Upgrade & Enhancement - -- The application repository configuration is moved from global to each workspace -- Support adding application repository to share applications in a workspace - -## Storage - -### Features - -- Support Local Volume with dynamic provisioning -- Provide the real-time monitoring feature for QingCloud block storage - -### Upgrade & Enhancement - -QingCloud CSI is adapted to CSI 1.1.0, supports upgrade, topology, create or delete a snapshot. It also supports creating PVC based on a snapshot - -### BUG Fixes - -Fix the StorageClass list display problem - -## Observability - -### Features - -- Support for collecting the file logs on the disk. 
It is used for the Pod which preserves the logs as the file on the disk -- Support integrating with external ElasticSearch 7.x -- Ability to search logs containinh Chinese words -- Add initContainer log display -- Ability to export logs -- Support for canceling the notification from alerting - -### UPGRADE & ENHANCEMENT - -- Improve the performance of log search -- Refine the hints when the logging service is abnormal -- Optimize the information when the monitoring metrics request is abnormal -- Support pod anti-affinity rule for Prometheus - -### BUG FIXES - -- Fix the mistaken highlights in the logs search result -- Fix log search not matching phrases correctly -- Fix the issue that log could not be retrieved for a deleted workload when it is searched by workload name -- Fix the issue where the results were truncated when the log is highlighted -- Fix some metrics exceptions: node `inode`, maximum pod tolerance -- Fix the issue with an incorrect number of alerting targets -- Fix filter failure problem of multi-metric monitoring -- Fix the problem of no logging and monitoring information on taint nodes (Adjust the toleration attributes of node-exporter and fluent-bit to deploy on all nodes by default, ignoring taints) - -## DevOps - -### Features - -- Add support for branch exchange and git log export in S2I -- Add B2I, ability to build Binary/WAR/JAR package and release to Kubernetes -- Support dependency cache for the pipeline, S2I, and B2I -- Support delete Kubernetes resource action in `kubernetesDeploy` step -- Multi-branch pipeline supports trigger other pipelines when create or delete the branch - -### Upgrades & Enhancement - -- Support BitBucket in the pipeline -- Support Cron script validation in the pipeline -- Support Jenkinsfile syntax validation -- Support custom the link in SonarQube -- Support event trigger build in the pipeline -- Optimize the agent node selection in the pipeline -- Accelerate the start speed of the pipeline -- Use dynamical volume as the work directory of the Agent in the pipeline, also contributes to Jenkins [#589](https://github.com/jenkinsci/kubernetes-plugin/pull/598) -- Optimize the Jenkins kubernetesDeploy plugin, add more resources and versions (v1, app/v1, extensions/v1beta1、apps/v1beta2、apps/v1beta1、autoscaling/v1、autoscaling/v2beta1、autoscaling/v2beta2、networking.k8s.io/v1、batch/v1beta1、batch/v2alpha1), also contributes to Jenkins [#614](https://github.com/jenkinsci/kubernetes-plugin/pull/614) -- Add support for PV, PVC, Network Policy in deploy step of the pipeline, also contributes to Jenkins [#87](https://github.com/jenkinsci/kubernetes-cd-plugin/pull/87)、[#88](https://github.com/jenkinsci/kubernetes-cd-plugin/pull/88) - -### Bug Fixes - -- Fix the issue that 400 bad request in GitHub Webhook -- incompatible change: DevOps Webhook's URL prefix is changed from `/webhook/xxx` to `/devops_webhook/xxx` - -## Authentication and authority - -### Features - -Support sync and authenticate with AD account - -### Upgrades & Enhancement - -- Reduce the LDAP component's RAM consumption -- Add protection against brute force attacks - -### Bug Fixes - -- Fix LDAP connection pool leak -- Fix the issue where users could not be added in the workspace -- Fix sensitive data transmission leaks - -## User Experience - -### Features - -Ability to wizard management of projects (namespace) that are not assigned to the workspace - -### Upgrades & Enhancement - -- Support bash-completion in web kubectl -- Optimize the host information display -- Add connection test 
of the email server -- Add prompt on resource list page -- Optimize the project overview page and project basic information -- Simplify the service creation process -- Simplify the workload creation process -- Support real-time status update in the resource list -- optimize YAML editing -- Support image search and image information display -- Add the pod list to the workload page -- Update the web terminal theme -- Support container switching in container terminal -- Optimize Pod information display, and add Pod scheduling information -- More detailed workload status display - -### Bug Fixes - -- Fix the issue where the default request resource of the project is displayed incorrectly -- Optimize the web terminal design, make it much easier to find -- Fix the Pod status update delay -- Fix the issue where a host could not be searched based on roles -- Fix DevOps project quantity error in workspace detail page -- Fix the issue with the workspace list pages not turning properly -- Fix the problem of inconsistent result ordering after query on workspace list page diff --git a/content/zh/docs/pluggable-components/release-v211.md b/content/zh/docs/pluggable-components/release-v211.md deleted file mode 100644 index d8acba698..000000000 --- a/content/zh/docs/pluggable-components/release-v211.md +++ /dev/null @@ -1,122 +0,0 @@ ---- -title: "Release Notes For 2.1.1" -keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" -description: "KubeSphere Release Notes For 2.1.1" - -linkTitle: "Release Notes - 2.1.1" -weight: 100 ---- - -KubeSphere 2.1.1 was released on Feb 23rd, 2020, which has fixed known bugs and brought some enhancements. For the users who have installed versions of 2.0.x or 2.1.0, make sure to read the user manual carefully about how to upgrade before doing that, and feel free to raise any questions on [GitHub](https://github.com/kubesphere/kubesphere/issues). 
- -## What's New in 2.1.1 - -## Installer - -### UPGRADE & ENHANCEMENT - -- Support Kubernetes v1.14.x、v1.15.x、v1.16.x、v1.17.x,also solve the issue of Kubernetes API Compatibility#[1829](https://github.com/kubesphere/kubesphere/issues/1829) -- Simplify the steps of installation on existing Kubernetes, and remove the step of specifying cluster's CA certification, also specifying Etcd certification is no longer mandatory step if users don't need Etcd monitoring metrics -- Backup the configuration of CoreDNS before upgrading - -### BUG FIXES - -- Fix the issue of importing apps to App Store - -## App Store - -### UPGRADE & ENHANCEMENT - -- Upgrade OpenPitrix to v0.4.8 - -### BUG FIXES - -- Fix the latest version display issue for the published app #[1130](https://github.com/kubesphere/kubesphere/issues/1130) -- Fix the column name display issue in app approval list page #[1498](https://github.com/kubesphere/kubesphere/issues/1498) -- Fix the searching issue by app name/workspace #[1497](https://github.com/kubesphere/kubesphere/issues/1497) -- Fix the issue of failing to create app with the same name of previously deleted app #[1821](https://github.com/kubesphere/kubesphere/pull/1821) #[1564](https://github.com/kubesphere/kubesphere/issues/1564) -- Fix the issue of failing to deploy apps in some cases #[1619](https://github.com/kubesphere/kubesphere/issues/1619) #[1730](https://github.com/kubesphere/kubesphere/issues/1730) - -## Storage - -### UPGRADE & ENHANCEMENT - -- Support CSI plugins of Alibaba Cloud and Tencent Cloud - -### BUG FIXES - -- Fix the paging issue of storage class list page #[1583](https://github.com/kubesphere/kubesphere/issues/1583) #[1591](https://github.com/kubesphere/kubesphere/issues/1591) -- Fix the issue that the value of imageFeatures parameter displays '2' when creating ceph storage class #[1593](https://github.com/kubesphere/kubesphere/issues/1593) -- Fix the issue that search filter fails to work in persistent volumes list page #[1582](https://github.com/kubesphere/kubesphere/issues/1582) -- Fix the display issue for abnormal persistent volume #[1581](https://github.com/kubesphere/kubesphere/issues/1581) -- Fix the display issue for the persistent volumes which associated storage class is deleted #[1580](https://github.com/kubesphere/kubesphere/issues/1580) #[1579](https://github.com/kubesphere/kubesphere/issues/1579) - -## Observability - -### UPGRADE & ENHANCEMENT - -- Upgrade Fluent Bit to v1.3.5 #[1505](https://github.com/kubesphere/kubesphere/issues/1505) -- Upgrade Kube-state-metrics to v1.7.2 -- Upgrade Elastic Curator to v5.7.6 #[517](https://github.com/kubesphere/ks-installer/issues/517) -- Fluent Bit Operator support to detect the location of soft linked docker log folder dynamically on host machines -- Fluent Bit Operator support to manage the instance of Fluent Bit by declarative configuration through updating the ConfigMap of Operator -- Fix the issue of sort orders in alert list page #[1397](https://github.com/kubesphere/kubesphere/issues/1397) -- Adjust the metric of container memory usage with 'container_memory_working_set_bytes' - -### BUG FIXES - -- Fix the lag issue of container logs #[1650](https://github.com/kubesphere/kubesphere/issues/1650) -- Fix the display issue that some replicas of workload have no logs on container detail log page #[1505](https://github.com/kubesphere/kubesphere/issues/1505) -- Fix the compatibility issue of Curator to support ElasticSearch 7.x #[517](https://github.com/kubesphere/ks-installer/issues/517) -- Fix the 
display issue of container log page during container initialization #[1518](https://github.com/kubesphere/kubesphere/issues/1518) -- Fix the blank node issue when these nodes are resized #[1464](https://github.com/kubesphere/kubesphere/issues/1464) -- Fix the display issue of components status in monitor center, to keep them up-to date #[1858](https://github.com/kubesphere/kubesphere/issues/1858) -- Fix the wrong monitoring targets number in alert detail page #[61](https://github.com/kubesphere/console/issues/61) - -## DevOps - -### BUG FIXES - -- Fix the issue of UNSTABLE state not visible in the pipeline #[1428](https://github.com/kubesphere/kubesphere/issues/1428) -- Fix the format issue of KubeConfig in DevOps pipeline #[1529](https://github.com/kubesphere/kubesphere/issues/1529) -- Fix the image repo compatibility issue in B2I, to support image repo of Alibaba Cloud #[1500](https://github.com/kubesphere/kubesphere/issues/1500) -- Fix the paging issue in DevOps pipelines' branches list page #[1517](https://github.com/kubesphere/kubesphere/issues/1517) -- Fix the issue of failing to display pipeline configuration after modifying it #[1522](https://github.com/kubesphere/kubesphere/issues/1522) -- Fix the issue of failing to download generated artifact in S2I job #[1547](https://github.com/kubesphere/kubesphere/issues/1547) -- Fix the issue of [data loss occasionally after restarting Jenkins]( https://kubesphere.com.cn/forum/d/283-jenkins) -- Fix the issue that only 'PR-HEAD' is fetched when binding pipeline with GitHub #[1780](https://github.com/kubesphere/kubesphere/issues/1780) -- Fix 414 issue when updating DevOps credential #[1824](https://github.com/kubesphere/kubesphere/issues/1824) -- Fix wrong s2ib/s2ir naming issue from B2I/S2I #[1840](https://github.com/kubesphere/kubesphere/issues/1840) -- Fix the issue of failing to drag and drop tasks on pipeline editing page #[62](https://github.com/kubesphere/console/issues/62) - -## Authentication and Authorization - -### UPGRADE & ENHANCEMENT - -- Generate client certification through CSR #[1449](https://github.com/kubesphere/kubesphere/issues/1449) - -### BUG FIXES - -- Fix content loss issue in KubeConfig token file #[1529](https://github.com/kubesphere/kubesphere/issues/1529) -- Fix the issue that users with different permission fail to log in on the same browser #[1600](https://github.com/kubesphere/kubesphere/issues/1600) - -## User Experience - -### UPGRADE & ENHANCEMENT - -- Support to edit SecurityContext in workload editing page #[1530](https://github.com/kubesphere/kubesphere/issues/1530) -- Support to configure init container in workload editing page #[1488](https://github.com/kubesphere/kubesphere/issues/1488) -- Add support of startupProbe, also add periodSeconds, successThreshold, failureThreshold parameters in probe editing page #[1487](https://github.com/kubesphere/kubesphere/issues/1487) -- Optimize the status update display of Pods #[1187](https://github.com/kubesphere/kubesphere/issues/1187) -- Optimize the error message report on console #[43](https://github.com/kubesphere/console/issues/43) - -### BUG FIXES - -- Fix the status display issue for the Pods that are not under running status #[1187](https://github.com/kubesphere/kubesphere/issues/1187) -- Fix the issue that the added annotation can't be deleted when creating service of QingCloud LoadBalancer #[1395](https://github.com/kubesphere/kubesphere/issues/1395) -- Fix the display issue when selecting workload on service editing page 
#[1596](https://github.com/kubesphere/kubesphere/issues/1596)
-- Fix the issue of failing to edit configuration file when editing 'Job' #[1521](https://github.com/kubesphere/kubesphere/issues/1521)
-- Fix the issue of failing to update the service of 'StatefulSet' #[1513](https://github.com/kubesphere/kubesphere/issues/1513)
-- Fix the issue of image searching for QingCloud and Alibaba Cloud image repos #[1627](https://github.com/kubesphere/kubesphere/issues/1627)
-- Fix resource ordering issue with the same creation timestamp #[1750](https://github.com/kubesphere/kubesphere/pull/1750)
-- Fix the issue of failing to edit configuration file when editing service #[41](https://github.com/kubesphere/console/issues/41)
diff --git a/content/zh/docs/pluggable-components/release-v300.md b/content/zh/docs/pluggable-components/release-v300.md
index 98c787c91..15eacc468 100644
--- a/content/zh/docs/pluggable-components/release-v300.md
+++ b/content/zh/docs/pluggable-components/release-v300.md
@@ -1,9 +1,9 @@
 ---
-title: "Release Notes For 3.0.0"
+title: "Overview"
 keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
 description: "KubeSphere Release Notes For 3.0.0"
 
-linkTitle: "Release Notes - 3.0.0"
+linkTitle: "Overview"
 weight: 50
 ---
 
diff --git a/content/zh/docs/pluggable-components/service-mesh.md b/content/zh/docs/pluggable-components/service-mesh.md
new file mode 100644
index 000000000..2035f722a
--- /dev/null
+++ b/content/zh/docs/pluggable-components/service-mesh.md
@@ -0,0 +1,150 @@
+---
+title: "KubeSphere Service Mesh"
+keywords: "Kubernetes, istio, KubeSphere, service-mesh, microservices"
+description: "How to Enable KubeSphere Service Mesh"
+
+linkTitle: "KubeSphere Service Mesh"
+weight: 3540
+---
+
+## What is KubeSphere Service Mesh
+
+On the basis of [Istio](https://istio.io/), KubeSphere Service Mesh visualizes microservices governance and traffic management. It features a powerful toolkit including **circuit breaking, blue-green deployment, canary release, traffic mirroring, distributed tracing, observability and traffic control**. Developers can easily get started with Service Mesh without any code hacking, with the learning curve of Istio greatly reduced. All features of KubeSphere Service Mesh are designed to meet users' business demands.
+
+For more information, see related sections in Project Administration and Usage.
+
+## Enable Service Mesh before Installation
+
+### Installing on Linux
+
+When you install KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](https://kubesphere-v3.netlify.app/docs/installing-on-linux/introduction/multioverview/), you create a default file **config-sample.yaml**. Modify the file by executing the following command:
+
+```bash
+vi config-sample.yaml
+```
+
+{{< notice note >}}
+
+If you adopt [All-in-one Installation](https://kubesphere-v3.netlify.app/docs/quick-start/all-in-one-on-linux/), you do not need to create a config-sample.yaml file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and want to get familiar with the system. If you want to enable Service Mesh in this mode (e.g. for testing purposes), refer to the following section to see how Service Mesh can be enabled after installation.
+
+{{</ notice >}}
+
+2. In this file, navigate to `servicemesh` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+```bash
+servicemesh:
+  enabled: true # Change "false" to "true"
+```
+
+3. Create a cluster using the configuration file:
+
+```bash
+./kk create cluster -f config-sample.yaml
+```
+
+### Installing on Kubernetes
+
+When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for cluster settings. If you want to install Service Mesh, do not use `kubectl apply -f` directly for this file.
+
+1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable Service Mesh, create a local file cluster-configuration.yaml.
+
+```bash
+vi cluster-configuration.yaml
+```
+
+2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it into the local file just created.
+3. In this local cluster-configuration.yaml file, navigate to `servicemesh` and enable Service Mesh by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+```bash
+servicemesh:
+  enabled: true # Change "false" to "true"
+```
+
+4. Execute the following command to start installation:
+
+```bash
+kubectl apply -f cluster-configuration.yaml
+```
+
+## Enable Service Mesh after Installation
+
+1. Log in to the console as `admin`. Click **Platform** at the top left corner and select **Clusters Management**.
+
+![clusters-management](https://ap3.qingstor.com/kubesphere-website/docs/20200828111130.png)
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detailed page.
+
+{{< notice info >}}
+
+A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
+
+{{</ notice >}}
+
+3. In **Resource List**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
+
+![edit-yaml](https://ap3.qingstor.com/kubesphere-website/docs/20200827182002.png)
+
+4. In this YAML file, navigate to `servicemesh` and change `false` to `true` for `enabled`. After you finish, click **Update** at the bottom right corner to save the configuration.
+
+```bash
+servicemesh:
+  enabled: true # Change "false" to "true"
+```
+
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+```bash
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+```
+
+{{< notice tip >}}
+
+You can find the web kubectl tool by clicking the hammer icon at the bottom right corner of the console.
+
+{{</ notice >}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the Component in Dashboard" >}}
+
+Go to **Components** and check the status of Istio.
You may see an image as follows: + +![Istio](https://ap3.qingstor.com/kubesphere-website/docs/20200829130918.png) + +{{}} + +{{< tab "Verify the Component through kubectl" >}} + +Execute the following command to check the status of pods: + +```bash +kubectl get pod -n istio-system +``` + +The output may look as follows if the component runs successfully: + +```bash +NAME READY STATUS RESTARTS AGE +istio-citadel-7f676f76d7-n2rsr 1/1 Running 0 1h29m +istio-galley-78688b475c-kvkbx 1/1 Running 0 1h29m +istio-ingressgateway-8569f8dcb-rmvl5 1/1 Running 0 1h29m +istio-init-crd-10-1.4.8-fpvwg 0/1 Completed 0 1h43m +istio-init-crd-11-1.4.8-5rc4g 0/1 Completed 0 1h43m +istio-init-crd-12-1.4.8-62zmp 0/1 Completed 0 1h43m +istio-init-crd-14-1.4.8-ngq4d 0/1 Completed 0 1h43m +istio-pilot-67fd55d974-g5bn2 2/2 Running 4 1h29m +istio-policy-668894cffc-8tpt4 2/2 Running 7 1h29m +istio-sidecar-injector-9c4d79658-g7fzf 1/1 Running 0 1h29m +istio-telemetry-57fc886bf8-kx5rj 2/2 Running 7 1h29m +jaeger-collector-76bf54b467-2fh2v 1/1 Running 0 1h17m +jaeger-operator-7559f9d455-k26xz 1/1 Running 0 1h29m +jaeger-query-b478c5655-s57k8 2/2 Running 0 1h17m +``` + +{{}} + +{{}} \ No newline at end of file diff --git a/content/zh/docs/project-user-guide/application-workloads/_index.md b/content/zh/docs/project-user-guide/application-workloads/_index.md index d28bdca57..92bb3627c 100644 --- a/content/zh/docs/project-user-guide/application-workloads/_index.md +++ b/content/zh/docs/project-user-guide/application-workloads/_index.md @@ -1,6 +1,6 @@ --- linkTitle: "Application Workloads" -weight: 2200 +weight: 2080 _build: render: false diff --git a/content/zh/docs/project-user-guide/configuration/_index.md b/content/zh/docs/project-user-guide/configuration/_index.md index 2cf101ca5..cb8d4a686 100644 --- a/content/zh/docs/project-user-guide/configuration/_index.md +++ b/content/zh/docs/project-user-guide/configuration/_index.md @@ -1,7 +1,7 @@ --- -linkTitle: "Installation" +linkTitle: "ConfigMap and Secrets" weight: 2100 _build: render: false ---- \ No newline at end of file +--- diff --git a/content/zh/docs/project-user-guide/grayscale-release/_index.md b/content/zh/docs/project-user-guide/grayscale-release/_index.md index 2cf101ca5..cdd13a9e1 100644 --- a/content/zh/docs/project-user-guide/grayscale-release/_index.md +++ b/content/zh/docs/project-user-guide/grayscale-release/_index.md @@ -1,7 +1,7 @@ --- -linkTitle: "Installation" +linkTitle: "Grayscale Release" weight: 2100 _build: render: false ---- \ No newline at end of file +--- diff --git a/content/zh/docs/project-user-guide/project-administration/_index.md b/content/zh/docs/project-user-guide/project-administration/_index.md index 2cf101ca5..a13eb7e12 100644 --- a/content/zh/docs/project-user-guide/project-administration/_index.md +++ b/content/zh/docs/project-user-guide/project-administration/_index.md @@ -1,7 +1,7 @@ --- -linkTitle: "Installation" -weight: 2100 +linkTitle: "Project Settings" +weight: 2150 _build: render: false ---- \ No newline at end of file +--- diff --git a/content/zh/docs/project-user-guide/storage/_index.md b/content/zh/docs/project-user-guide/storage/_index.md index 2cf101ca5..9ccf64d90 100644 --- a/content/zh/docs/project-user-guide/storage/_index.md +++ b/content/zh/docs/project-user-guide/storage/_index.md @@ -1,7 +1,7 @@ --- -linkTitle: "Installation" +linkTitle: "Volume Management" weight: 2100 _build: render: false ---- \ No newline at end of file +--- diff --git a/content/zh/docs/quick-start/_index.md 
b/content/zh/docs/quick-start/_index.md
index 7ee7efbb8..e1b810318 100644
--- a/content/zh/docs/quick-start/_index.md
+++ b/content/zh/docs/quick-start/_index.md
@@ -11,12 +11,30 @@ icon: "/images/docs/docs.svg"
 
 ---
 
-## Installing KubeSphere and Kubernetes on Linux
+Quickstarts include six hands-on lab exercises that help you quickly get started with KubeSphere. It is highly recommended that you go through all of these parts to explore the basic features of KubeSphere.
 
-In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. Kubekey can help you to quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you to easily scale the cluster and install pluggable components on existing architecture.
+## [All-in-one Installation on Linux](https://kubesphere-v3.netlify.app/docs/quick-start/all-in-one-on-linux/)
 
-## Most Popular Pages
+Learn how to install KubeSphere on Linux with a minimal installation package. The tutorial serves as a basic kick-starter for you to understand the container platform, paving the way for learning the following guides.
+
+## [Minimal KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/quick-start/minimal-kubesphere-on-k8s/)
+
+Learn how to install KubeSphere on existing Kubernetes clusters with a minimal installation package. Your Kubernetes clusters can be hosted in the cloud or on-premises.
+
+## [Create Workspace, Project, Account and Role](https://kubesphere-v3.netlify.app/docs/quick-start/create-workspace-and-project/)
+
+Understand how you can take advantage of the multi-tenant system in KubeSphere for fine-grained access control at different levels.
+
+## [Deploy Bookinfo](https://kubesphere-v3.netlify.app/docs/quick-start/deploy-bookinfo-to-k8s/)
+
+Explore KubeSphere service mesh by deploying Bookinfo and using different traffic management strategies, such as canary release.
+
+## [Compose and Deploy WordPress](https://kubesphere-v3.netlify.app/docs/quick-start/composing-an-app/)
+
+Learn the entire process of deploying an example app in KubeSphere, including credential creation, volume creation, and component setting.
+
+## [Enable Pluggable Components](https://kubesphere-v3.netlify.app/docs/quick-start/enable-pluggable-components/)
+
+Install pluggable components on the platform so that you can explore KubeSphere in an all-around way. Pluggable components can be enabled both before and after the installation.
 
-Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first.
-{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
diff --git a/content/zh/docs/quick-start/all-in-one-on-linux.md b/content/zh/docs/quick-start/all-in-one-on-linux.md
index 4237501c5..44b48bfa7 100644
--- a/content/zh/docs/quick-start/all-in-one-on-linux.md
+++ b/content/zh/docs/quick-start/all-in-one-on-linux.md
@@ -1,8 +1,8 @@
 ---
-title: "All-in-one on Linux"
-keywords: 'kubesphere, kubernetes, docker, multi-tenant'
-description: 'All-in-one on Linux'
+title: "All-in-one Installation on Linux"
+keywords: 'KubeSphere, Kubernetes, All-in-one, Installation'
+description: 'All-in-one Installation on Linux'
 
-linkTitle: "All-in-one on Linux"
+linkTitle: "All-in-one Installation on Linux"
 weight: 3010
 ---
+
+For those who are new to KubeSphere and looking for a quick way to discover the platform, the all-in-one mode is your best choice to get started. It features rapid deployment and hassle-free configuration, with KubeSphere and Kubernetes both provisioned on your machine.
+
+## Prerequisites
+
+If your machine is behind a firewall, you need to open relevant ports by following the document [Port Requirements](../port-firewall).
+
+## Step 1: Prepare Linux Machine
+
+See the requirements for hardware and operating system shown below. To get started with all-in-one installation, you only need to prepare one host according to the following requirements.
+
+### Hardware Recommendation
+
+| System                                                  | Minimum Requirements                          |
+| ------------------------------------------------------- | --------------------------------------------- |
+| **Ubuntu** *16.04, 18.04*                               | CPU: 2 Cores, Memory: 4 GB, Disk Space: 40 GB |
+| **Debian** *Buster, Stretch*                            | CPU: 2 Cores, Memory: 4 GB, Disk Space: 40 GB |
+| **CentOS** *7.x*                                        | CPU: 2 Cores, Memory: 4 GB, Disk Space: 40 GB |
+| **Red Hat Enterprise Linux 7**                          | CPU: 2 Cores, Memory: 4 GB, Disk Space: 40 GB |
+| **SUSE Linux Enterprise Server 15/openSUSE Leap 15.2**  | CPU: 2 Cores, Memory: 4 GB, Disk Space: 40 GB |
+
+{{< notice note >}}
+
+The system requirements above and the instructions below are for the default minimal installation without any optional components enabled. If your machine has at least 8 cores and 16 GB memory, it is recommended that you enable all components. For more information, see Enable Pluggable Components.
+
+{{</ notice >}}
+
+### Node Requirements
+
+- The node can be accessed through `SSH`.
+- `sudo`/`curl`/`openssl` should be available.
+- `ebtables`/`socat`/`ipset`/`conntrack` should be installed in advance.
+- `docker` can be installed by yourself or by KubeKey.
+
+### Network and DNS Requirements
+
+- Make sure the DNS address in `/etc/resolv.conf` is available. Otherwise, it may cause DNS issues in the cluster.
+- If your network configuration uses a firewall or security group, you must ensure infrastructure components can communicate with each other through specific ports. It's recommended that you turn off the firewall or follow the guide [Network Access](https://github.com/kubesphere/kubekey/blob/master/docs/network-access.md).
+
+{{< notice tip >}}
+
+- It is recommended that your OS be clean (without any other software installed). Otherwise, there may be conflicts.
+- It is recommended that a container image mirror (accelerator) be prepared if you have trouble downloading images from dockerhub.io. See [Configure registry mirrors for the Docker daemon](https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon) and the example after this note.
+
+{{</ notice >}}
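+
+The snippet below is a minimal sketch of such a mirror configuration. It assumes Docker is already installed on the node, and the endpoint `https://registry.docker-cn.com` is only a placeholder; substitute the accelerator address provided by your registry vendor:
+
+```bash
+# Write the mirror address into Docker's daemon configuration and restart Docker.
+sudo mkdir -p /etc/docker
+cat <<EOF | sudo tee /etc/docker/daemon.json
+{
+  "registry-mirrors": ["https://registry.docker-cn.com"]
+}
+EOF
+sudo systemctl restart docker
+```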
+
+## Step 2: Download KubeKey
+
+{{< tabs >}}
+
+{{< tab "For users with a poor network connection to GitHub" >}}
+
+For users in China, you can download the installer using this link.
+
+```bash
+wget https://kubesphere.io/kubekey/releases/v1.0.0
+```
+
+{{</ tab >}}
+
+{{< tab "For users with a good network connection to GitHub" >}}
+
+For users with a good network connection to GitHub, you can download it from the [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly.
+
+```bash
+wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
+
+Extract it:
+
+```bash
+tar -zxvf v1.0.0
+```
+
+Grant the execution permission to `kk`:
+
+```bash
+chmod +x kk
+```
+
+{{< notice info >}}
+
+Developed in Go, KubeKey represents a brand-new installation tool as a replacement for the ansible-based installer used before. KubeKey provides users with flexible installation choices, as they can install KubeSphere and Kubernetes separately or install them at one time, which is convenient and efficient.
+
+{{</ notice >}}
+
+## Step 3: Get Started with Installation
+
+In this QuickStart tutorial, you only need to execute one command for installation, the template of which is shown below:
+
+```bash
+./kk create cluster [--with-kubernetes version] [--with-kubesphere version]
+```
+
+To create a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v3.0.0`), refer to the following example:
+
+```bash
+./kk create cluster --with-kubernetes v1.17.9 --with-kubesphere [version]
+```
+
+{{< notice note >}}
+
+- Supported Kubernetes versions: *v1.15.12*, *v1.16.13*, *v1.17.9* (default), *v1.18.6*.
+- For all-in-one installation, generally speaking, you do not need to change any configuration.
+- KubeKey will install [OpenEBS](https://openebs.io/) to provision LocalPV for development and testing environments by default, which is convenient for new users. For other storage classes, see Storage Class Configuration.
+
+{{</ notice >}}
+
+After you execute the command, you will see a table as below for the environment check.
+
+![environment-check](https://ap3.qingstor.com/kubesphere-website/docs/environment-check.png)
+
+Make sure the above components marked with `y` are installed and input `yes` to continue.
+
+{{< notice note >}}
+
+If you download the binary file directly in Step 2, you do not need to install `docker` as KubeKey will install it automatically.
+
+{{</ notice >}}
+
+## Step 4: Verify the Installation
+
+When you see the output as below, it means the installation has finished.
+
+![installation-complete](https://ap3.qingstor.com/kubesphere-website/docs/Installation-complete.png)
+
+Input the following command to check the result.
+
+```bash
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+```
+
+The output displays the IP address and port number of the web console, which is exposed through `NodePort 30880` by default. Now, you can access the console through `EIP:30880` with the default account and password (`admin/P@88w0rd`).
+
+```bash
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.2:30880
+Account: admin
+Password: P@88w0rd
+
+NOTES:
+  1. After logging into the console, please check the
+     monitoring status of service components in
+     the "Cluster Management". If any service is not
+     ready, please wait patiently until all components
+     are ready.
+  2. Please modify the default password after login.
+
+#####################################################
+https://kubesphere.io             20xx-xx-xx xx:xx:xx
+#####################################################
+```
+
+{{< notice note >}}
+
+You may need to bind EIP and configure port forwarding in your environment for external users to access the console. Besides, make sure port 30880 is opened in your security groups.
+
+{{</ notice >}}
+
+After logging in to the console, you can check the status of different components in **Components**. You may need to wait for some components to be up and running if you want to use related services. You can also use `kubectl get pod --all-namespaces` to inspect the running status of KubeSphere workloads.
+
+![components](https://ap3.qingstor.com/kubesphere-website/docs/components.png)
+
+## Enable Pluggable Components (Optional)
+
+The guide above covers only the default minimal installation. To enable other components in KubeSphere, see Enable Pluggable Components for more details.
diff --git a/content/zh/docs/quick-start/create-workspace-and-project.md b/content/zh/docs/quick-start/create-workspace-and-project.md
index 954f8648d..2b80e89b3 100644
--- a/content/zh/docs/quick-start/create-workspace-and-project.md
+++ b/content/zh/docs/quick-start/create-workspace-and-project.md
@@ -1,8 +1,252 @@
 ---
-title: "Create Workspace, Project, Account, Role"
-keywords: 'kubesphere, kubernetes, docker, multi-tenant'
-description: 'Create Workspace, Project, Account, and Role'
+title: "Create Workspace, Project, Account and Role"
+keywords: 'KubeSphere, Kubernetes, Multi-tenant, Workspace, Account, Role, Project'
+description: 'Create Workspace, Project, Account and Role'
 
-linkTitle: "Create Workspace, Project, Account, Role"
+linkTitle: "Create Workspace, Project, Account and Role"
 weight: 3030
 ---
+
+
+## Objective
+
+This guide demonstrates how to create the roles and user accounts that are required for the following tutorials. Meanwhile, you will learn how to create projects and DevOps projects within your workspace where your workloads are running. After this tutorial, you will be familiar with the KubeSphere multi-tenant management system.
+
+## Prerequisites
+
+KubeSphere needs to be installed on your machine.
+
+## Estimated Time
+
+About 15 minutes.
+
+## Architecture
+
+The multi-tenant system of KubeSphere features a **three**-level hierarchical structure: cluster, workspace and project. A project in KubeSphere is a Kubernetes namespace.
+
+You can create multiple workspaces within a Kubernetes cluster. Under each workspace, you can also create multiple projects.
+
+Each level has multiple built-in roles. Besides, KubeSphere allows you to create roles with customized authorization as well. The KubeSphere hierarchy is applicable for enterprise users with different teams or groups, and different roles within each team.
+
+## Hands-on Lab
+
+### Task 1: Create an Account
+
+After KubeSphere is installed, you need to add different users with varied roles to the platform so that they can work at different levels on various resources. Initially, you only have one default account, `admin`, granted the role `platform-admin`. In the first task, you will create an account `user-manager` and further create more accounts as `user-manager`.
+1. Log in to the web console as `admin` with the default account and password (`admin/P@88w0rd`).
+
+{{< notice tip >}}
+
+For account security, it is highly recommended that you change your password the first time you log in to the console. To change your password, select **User Settings** in the drop-down menu at the top right corner. In **Password Setting**, set a new password.
+
+{{</ notice >}}
+
+2. After you log in to the console, click **Platform** at the top left corner and select **Access Control**.
+
+   ![access-control](https://ap3.qingstor.com/kubesphere-website/docs/access-control.png)
+
+In **Account Roles**, there are four available built-in roles as shown below. The account to be created next will be assigned the role `users-manager`.
+
+| Built-in Roles     | Description                                                  |
+| ------------------ | ------------------------------------------------------------ |
+| workspaces-manager | Workspace manager in the platform who manages all workspaces in the platform. |
+| users-manager      | User manager in the platform who manages all users.          |
+| platform-regular   | Normal user in the platform who has no access to any resources before joining a workspace or cluster. |
+| platform-admin     | Platform administrator who can manage all resources in the platform. |
+
+{{< notice note >}}
+
+Built-in roles are created automatically by KubeSphere and cannot be edited or deleted.
+
+{{</ notice >}}
+
+3. In **Accounts**, click **Create**. In the pop-up window, provide all the necessary information (marked with *) and select `users-manager` for **Role**. Refer to the image below as an example.
+
+![create-account](https://ap3.qingstor.com/kubesphere-website/docs/create-account.jpg)
+
+Click **OK** after you finish. The newly-created account will be displayed in the account list in **Accounts**.
+
+4. Log out of the console and log back in with the account `user-manager` to create four accounts that will be used in the following tutorials.
+
+{{< notice tip >}}
+
+To log out, click your username at the top right corner and select **Log Out**.
+
+{{</ notice >}}
+
+For detailed information about the four accounts you need to create, refer to the table below.
+
+| Account         | Role               | Description                                                  |
+| --------------- | ------------------ | ------------------------------------------------------------ |
+| ws-manager      | workspaces-manager | Create and manage all workspaces.                            |
+| ws-admin        | platform-regular   | Manage all resources in a specified workspace (This account is used to invite new members to a workspace in this example). |
+| project-admin   | platform-regular   | Create and manage projects and DevOps projects, and invite new members into the projects. |
+| project-regular | platform-regular   | `project-regular` will be invited to a project or DevOps project by `project-admin`. This account will be used to create workloads, pipelines and other resources in a specified project. |
+
+5. Verify the four accounts created.
+
+![account-list](https://ap3.qingstor.com/kubesphere-website/docs/account-list.png)
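+
+If you have `kubectl` access to the cluster, you can also double-check the result from the command line. The command below is a sketch based on the assumption that KubeSphere 3.0 stores accounts as `User` custom resources under the `iam.kubesphere.io` API group:
+
+```bash
+# List the accounts known to KubeSphere (admin plus the four created above)
+kubectl get users.iam.kubesphere.io
+```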
+
+### Task 2: Create a Workspace
+
+In this task, you need to create a workspace using the account `ws-manager` created in the previous task. As the basic logic unit for the management of projects, DevOps projects and organization members, workspaces underpin the multi-tenant system of KubeSphere.
+
+1. Log in to KubeSphere as `ws-manager`, which has the authorization to manage all workspaces on the platform. Click **Platform** at the top left corner. In **Workspaces**, you can see there is only one default workspace **system-workspace** listed, where system-related components and services run. You are not allowed to delete this workspace.
+
+![create-workspace](https://ap3.qingstor.com/kubesphere-website/docs/create-workspace.jpg)
+
+2. Click **Create** on the right, name the new workspace `demo-workspace` and set the user `ws-admin` as the workspace manager, as shown in the screenshot below:
+
+![create-workspace](https://ap3.qingstor.com/kubesphere-website/docs/create-workspace.png)
+
+Click **Create** after you finish.
+
+3. Log out of the console and log back in as `ws-admin`. In **Workspace Settings**, select **Workspace Members** and click **Invite Member**.
+
+![invite-member](https://ap3.qingstor.com/kubesphere-website/docs/20200827111048.png)
+
+4. Invite both `project-admin` and `project-regular` to the workspace. Grant them the role `workspace-self-provisioner` and `workspace-viewer` respectively.
+
+{{< notice note >}}
+
+The actual role name follows a naming convention: `workspace name-role name`. For example, in this workspace named `demo-workspace`, the actual name of the role `workspace-viewer` is `demo-workspace-viewer`.
+
+{{</ notice >}}
+
+![invite-member](https://ap3.qingstor.com/kubesphere-website/docs/20200827113124.png)
+
+5. After you add both `project-admin` and `project-regular` to the workspace, click **OK**. In **Workspace Members**, you can see three members listed.
+
+| Account         | Role                       | Description                                                  |
+| --------------- | -------------------------- | ------------------------------------------------------------ |
+| ws-admin        | workspace-admin            | Manage all resources under the workspace (We use this account to invite new members to the workspace). |
+| project-admin   | workspace-self-provisioner | Create and manage projects and DevOps projects, and invite new members to join the projects. |
+| project-regular | workspace-viewer           | `project-regular` will be invited by `project-admin` to join a project or DevOps project. The account can be used to create workloads, pipelines, etc. |
+
+### Task 3: Create a Project
+
+In this task, you need to create a project using the account `project-admin` created in the previous task. A project in KubeSphere is the same as a namespace in Kubernetes, which provides virtual isolation for resources. For more information, see [Namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).
+
+1. Log in to KubeSphere as `project-admin`. In **Projects**, click **Create**.
+
+![kubesphere-projects](https://ap3.qingstor.com/kubesphere-website/docs/kubesphere-projects.png)
+
+2. Enter the project name (e.g. `demo-project`) and click **OK** to finish. You can also add an alias and description for the project.
+
+![demo-project](https://ap3.qingstor.com/kubesphere-website/docs/demo-project.png)
+
+3. In **Projects**, click the project created just now to view its detailed information.
+
+![click-demo-project](https://ap3.qingstor.com/kubesphere-website/docs/click-demo-project.png)
+
+4. In the overview page of the project, the project quota remains unset by default. You can click **Set** and specify resource requests and limits based on your needs (e.g. 1 core for CPU and 1000Gi for memory), as shown in the screenshots and the sketch below.
+
+![project-overview](https://ap3.qingstor.com/kubesphere-website/docs/quota.png)
+
+![set-quota](https://ap3.qingstor.com/kubesphere-website/docs/20200827134613.png)
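+
+For reference, the quota set on this page corresponds to a Kubernetes ResourceQuota object in the project's namespace. The following is a rough, hypothetical sketch of the equivalent object using the example values above (the object name is assumed; KubeSphere may generate a different one):
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: ResourceQuota
+metadata:
+  name: demo-project-quota   # hypothetical name
+  namespace: demo-project
+spec:
+  hard:
+    requests.cpu: "1"
+    requests.memory: 1000Gi
+    limits.cpu: "1"
+    limits.memory: 1000Gi
+EOF
+```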
+1. Log in to KubeSphere as `project-admin`. In **Projects**, click **Create**.
+
+![kubesphere-projects](https://ap3.qingstor.com/kubesphere-website/docs/kubesphere-projects.png)
+
+2. Enter the project name (e.g. `demo-project`) and click **OK** to finish. You can also add an alias and description for the project.
+
+![demo-project](https://ap3.qingstor.com/kubesphere-website/docs/demo-project.png)
+
+3. In **Projects**, click the project just created to view its detailed information.
+
+![click-demo-project](https://ap3.qingstor.com/kubesphere-website/docs/click-demo-project.png)
+
+4. On the overview page of the project, the project quota remains unset by default. You can click **Set** and specify resource requests and limits based on your needs (e.g. 1 core for CPU and 1000Gi for memory).
+
+![project-overview](https://ap3.qingstor.com/kubesphere-website/docs/quota.png)
+
+![set-quota](https://ap3.qingstor.com/kubesphere-website/docs/20200827134613.png)
+
+5. Invite `project-regular` to this project and grant this user the role `operator`. Refer to the image below for specific steps.
+
+![](https://ap3.qingstor.com/kubesphere-website/docs/20200827135424.png)
+
+{{< notice info >}}
+
+A user granted the role `operator` is a project maintainer who can manage resources other than users and roles in the project.
+
+{{</ notice >}}
+
+#### Set Gateway
+
+Before creating a route, you need to enable a gateway for this project. The gateway is an [NGINX Ingress controller](https://github.com/kubernetes/ingress-nginx) running in the project.
+
+{{< notice info >}}
+
+A route refers to an Ingress in Kubernetes, an API object that manages external access to the services in a cluster, typically over HTTP.
+
+{{</ notice >}}
+
+6. To set a gateway, go to **Advanced Settings** in **Project Settings** and click **Set Gateway**. The account `project-admin` is still used in this step.
+
+![set-gateway](https://ap3.qingstor.com/kubesphere-website/docs/20200827141823.png)
+
+7. Choose the access method **NodePort** and click **Save**.
+
+![nodeport](https://ap3.qingstor.com/kubesphere-website/docs/20200827141958.png)
+
+8. Under **Internet Access**, you can see the gateway address and the NodePorts for HTTP and HTTPS displayed on the page.
+
+{{< notice note >}}
+
+If you want to expose services using the type `LoadBalancer`, you need to use the [LoadBalancer plugin of cloud providers](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/). If your Kubernetes cluster is running in a bare-metal environment, it is recommended that you use [Porter](https://github.com/kubesphere/porter) as the LoadBalancer plugin.
+
+{{</ notice >}}
+
+![NodePort-setting-done](https://ap3.qingstor.com/kubesphere-website/docs/20200827142411.png)
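+You can also confirm the gateway from the command line by looking for its Service object. The namespace and naming pattern below assume KubeSphere 3.0 behavior, where a project gateway runs as `kubesphere-router-<project>` in the `kubesphere-controls-system` namespace; adjust the names if your environment differs:
+
+```bash
+# Show the gateway Service for the demo project, including its HTTP/HTTPS NodePorts.
+kubectl get svc -n kubesphere-controls-system kubesphere-router-demo-project
+```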
+### Task 4: Create a Role
+
+After you finish the above tasks, you know that users can be granted different roles at different levels. The roles used in the previous tasks are all built-in ones created by KubeSphere itself. In this task, you will learn how to define a role yourself to meet the needs of your work.
+
+1. Log in to the console as `admin` again and go to **Access Control**.
+2. In **Account Roles**, there are four system roles listed, which cannot be deleted or edited. Click **Create** and set a **Role Identifier**. In this example, a role named `roles-manager` will be created.
+
+![create-role](https://ap3.qingstor.com/kubesphere-website/docs/20200827153339.png)
+
+{{< notice note >}}
+
+It is recommended that you enter a description for the role, as it explains what the role is used for. The role created here will be responsible for role management only, including adding and deleting roles.
+
+{{</ notice >}}
+
+Click **Edit Authorization** to continue.
+
+3. In **Access Control**, select the authorization that you want users granted this role to have. For example, **Users View**, **Roles Management** and **Roles View** are selected for this role. Click **OK** to finish.
+
+![edit-authorization](https://ap3.qingstor.com/kubesphere-website/docs/20200827153651.png)
+
+{{< notice note >}}
+
+**Depend on** means the major authorization (the one listed after **Depend on**) needs to be selected first so that the affiliated authorization can be assigned.
+
+{{</ notice >}}
+
+4. Newly created roles will be listed in **Account Roles**. You can click the three dots on the right to edit them.
+
+![roles-manager](https://ap3.qingstor.com/kubesphere-website/docs/20200827154723.png)
+
+5. In **Accounts**, you can add a new account and grant it the role `roles-manager`, or change the role of an existing account to `roles-manager` by editing it.
+
+![edit-role](https://ap3.qingstor.com/kubesphere-website/docs/20200827155205.png)
+
+{{< notice note >}}
+
+The role `roles-manager` overlaps with `users-manager`, as the latter is also capable of user management. This example is only for demonstration purposes. You can create customized roles based on your needs.
+
+{{</ notice >}}
+
+### Task 5: Create a DevOps Project (Optional)
+
+{{< notice note >}}
+
+To create a DevOps project, you need to install the KubeSphere DevOps system in advance, a pluggable component that provides CI/CD pipelines, Binary-to-Image, Source-to-Image and more. For more information about how to enable DevOps, see KubeSphere DevOps System.
+
+{{</ notice >}}
+
+1. Log in to the console as `project-admin` for this task. In **DevOps Projects**, click **Create**.
+
+![devops-project](https://ap3.qingstor.com/kubesphere-website/docs/20200827145521.png)
+
+2. Enter the DevOps project name (e.g. `demo-devops`) and click **OK**. You can also add an alias and description for the project.
+
+![devops-project](https://ap3.qingstor.com/kubesphere-website/docs/20200827145755.png)
+
+3. In **DevOps Projects**, click the project just created to view its detailed information.
+
+![new-devops-project](https://ap3.qingstor.com/kubesphere-website/docs/20200827150523.png)
+
+4. Go to **Project Management** and select **Project Members**. Click **Invite Member** and grant `project-regular` the role `maintainer`, which allows the user to create pipelines and credentials.
+
+![devops-invite-member](https://ap3.qingstor.com/kubesphere-website/docs/20200827150704.png)
+
+Congratulations! You are now familiar with the multi-tenant management system of KubeSphere. In the next several tutorials, the account `project-regular` will also be used to demonstrate how to create applications and resources in a project or DevOps project.
diff --git a/content/zh/docs/quick-start/enable-pluggable-compoents.md b/content/zh/docs/quick-start/enable-pluggable-compoents.md
deleted file mode 100644
index 390d6dd9e..000000000
--- a/content/zh/docs/quick-start/enable-pluggable-compoents.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: "Enable Pluggable Components"
-keywords: 'kubesphere, kubernetes, docker, multi-tenant'
-description: 'Enable Pluggable Components'
-
-linkTitle: "Enable Pluggable Components"
-weight: 3060
----
diff --git a/content/zh/docs/quick-start/enable-pluggable-components.md b/content/zh/docs/quick-start/enable-pluggable-components.md
new file mode 100644
index 000000000..e0caa45a9
--- /dev/null
+++ b/content/zh/docs/quick-start/enable-pluggable-components.md
@@ -0,0 +1,152 @@
+---
+title: "Enable Pluggable Components"
+keywords: 'KubeSphere, Kubernetes, pluggable, components'
+description: 'Enable Pluggable Components'
+
+linkTitle: "Enable Pluggable Components"
+weight: 3060
+---
+
+This tutorial demonstrates how to enable pluggable components of KubeSphere both before and after the installation. KubeSphere features ten pluggable components, as listed below.
+
+| Configuration Item | Corresponding Component | Description |
+| ------------------ | ------------------------------------- | ------------------------------------------------------------ |
+| alerting | KubeSphere alerting system | Enables users to customize alerting policies and send messages to receivers in time, with different time intervals and alerting levels to choose from. |
+| auditing | KubeSphere audit log system | Provides a security-relevant chronological set of records, documenting the sequence of activities that happen on the platform, initiated by different tenants. |
+| devops | KubeSphere DevOps system | Provides an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image and Binary-to-Image. |
+| events | KubeSphere events system | Provides a graphical web console for the exporting, filtering and alerting of Kubernetes events in multi-tenant Kubernetes clusters. |
+| logging | KubeSphere logging system | Provides flexible logging functions for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd. |
+| metrics_server | HPA | The Horizontal Pod Autoscaler automatically scales the number of pods based on demand. |
+| networkpolicy | Network policy | Allows network isolation within the same cluster, which means firewalls can be set up between certain instances (pods). |
+| notification | KubeSphere notification system | Allows users to set `AlertManager` as the sender. Receivers include Email, WeChat Work and Slack. |
+| openpitrix | KubeSphere App Store | Provides an app store for Helm-based applications and allows users to manage apps throughout their entire lifecycle. |
+| servicemesh | KubeSphere Service Mesh (Istio-based) | Provides fine-grained traffic management, observability and tracing, and visualized traffic topology. |
+
+For more information about each component, see Overview of Enable Pluggable Components.
+
+{{< notice note >}}
+
+- By default, the above components are not enabled except `metrics_server`. In some cases, you need to manually disable it by changing `true` to `false` in the configuration. This is because the component may already be installed in your environment, especially for cloud-hosted Kubernetes clusters.
+- `multicluster` is not covered in this tutorial. If you want to enable this feature, you need to set a corresponding value for `clusterRole`. For more information, see [Multi-cluster Management](https://kubesphere-v3.netlify.app/docs/multicluster-management/).
+- Make sure your machine meets the hardware requirements before the installation. Here is the recommendation if you want to enable all pluggable components: CPU ≥ 8 Cores, Memory ≥ 16 G, Disk Space ≥ 100 G.
+
+{{</ notice >}}
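+Whether `metrics_server` should stay enabled is easy to check before you touch the configuration. A quick probe, assuming kubectl access to the target cluster (on cloud-hosted clusters, metrics-server usually already runs in `kube-system`):
+
+```bash
+# If this prints an existing Deployment, keep metrics_server set to false in
+# the KubeSphere configuration so it is not installed twice.
+kubectl get deployment --all-namespaces | grep -i metrics-server
+```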
+## Enable Pluggable Components before Installation
+
+### Installing on Linux
+
+When you install KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](https://kubesphere-v3.netlify.app/docs/installing-on-linux/introduction/multioverview/), you create a default file **config-sample.yaml**. Modify the file by executing the following command:
+
+```bash
+vi config-sample.yaml
+```
+
+{{< notice note >}}
+
+If you adopt [All-in-one Installation](https://kubesphere-v3.netlify.app/docs/quick-start/all-in-one-on-linux/), you do not need to create a config-sample.yaml file, as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and want to get familiar with the system. If you want to enable pluggable components in this mode (e.g. for testing purposes), refer to the following section to see how pluggable components can be enabled after installation.
+
+{{</ notice >}}
+
+2. In this file, enable the pluggable components you want to install by changing `false` to `true` for `enabled`. Here is [an example file](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md) for your reference. Save the file after you finish.
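+In the example file, each component is a top-level key carrying an `enabled` flag (e.g. `devops:` followed by `enabled: true` once switched on). Assuming that two-line layout, you can review all the flags at once:
+
+```bash
+# Print every enabled flag in config-sample.yaml together with the key above it.
+grep -B1 'enabled:' config-sample.yaml
+```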
+3. Create a cluster using the configuration file:
+
+```bash
+./kk create cluster -f config-sample.yaml
+```
+
+### Installing on Kubernetes
+
+When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for cluster settings. If you want to install pluggable components, do not apply this file directly with `kubectl apply -f`; edit it first as described below.
+
+1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you first execute `kubectl apply -f` for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable pluggable components, create a local file cluster-configuration.yaml.
+
+```bash
+vi cluster-configuration.yaml
+```
+
+2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it into the local file just created.
+3. In this local cluster-configuration.yaml file, enable the pluggable components you want to install by changing `false` to `true` for `enabled`. Here is [an example file](https://github.com/kubesphere/ks-installer/blob/master/deploy/cluster-configuration.yaml) for your reference. Save the file after you finish.
+4. Execute the following command to start the installation:
+
+```bash
+kubectl apply -f cluster-configuration.yaml
+```
+
+Whether you install KubeSphere on Linux or on Kubernetes, you can check the status of the components you have enabled in the web console of KubeSphere after installation. Go to **Components**, and you will see a page as shown below:
+
+![KubeSphere-components](https://ap3.qingstor.com/kubesphere-website/docs/20200828145506.png)
+
+## Enable Pluggable Components after Installation
+
+The KubeSphere web console provides a convenient way for users to view and operate on different resources. To enable pluggable components after installation, you only need to make a few adjustments directly in the console. Users who are accustomed to the Kubernetes command-line tool, kubectl, will have no difficulty using KubeSphere, as the tool is integrated into the console.
+
+1. Log in to the console as `admin`. Click **Platform** at the top left corner and select **Clusters Management**.
+
+![clusters-management](https://ap3.qingstor.com/kubesphere-website/docs/20200828111130.png)
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+![crds](https://ap3.qingstor.com/kubesphere-website/docs/20200828111321.png)
+
+{{< notice info >}}
+
+A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes object.
+
+{{</ notice >}}
+
+3. In **Resource List**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
+
+![edit-ks-installer](https://ap3.qingstor.com/kubesphere-website/docs/20200827182002.png)
+
+4. In this YAML file, enable the pluggable components you want to install by changing `false` to `true` for `enabled`. After you finish, click **Update** to save the configuration.
+
+![enable-components](https://ap3.qingstor.com/kubesphere-website/docs/20200828112036.png)
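+The console edit above is equivalent to patching the ClusterConfiguration object from the command line. A sketch, assuming the `ks-installer` object lives in the `kubesphere-system` namespace as deployed by KubeSphere 3.0, with `devops` as the example component:
+
+```bash
+# Flip a single component flag; ks-installer watches the object and starts
+# installing the newly enabled component.
+kubectl -n kubesphere-system patch clusterconfiguration ks-installer \
+  --type merge -p '{"spec":{"devops":{"enabled":true}}}'
+```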
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+```bash
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+```
+
+{{< notice tip >}}
+
+You can find the web kubectl tool by clicking the hammer icon at the bottom right corner of the console.
+
+{{</ notice >}}
+
+6. The output will display a message as shown below if the component is successfully installed.
+
+```bash
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.2:30880
+Account: admin
+Password: P@88w0rd
+
+NOTES:
+  1. After logging into the console, please check the
+     monitoring status of service components in
+     the "Cluster Management". If any service is not
+     ready, please wait patiently until all components
+     are ready.
+  2. Please modify the default password after login.
+
+#####################################################
+https://kubesphere.io             20xx-xx-xx xx:xx:xx
+#####################################################
+```
+
+7. In **Components**, you can see the status of the different components.
+
+![components](https://ap3.qingstor.com/kubesphere-website/docs/20200828115111.png)
+
+{{< notice tip >}}
+
+If you do not see the relevant components on the page above, some pods may not be ready yet. You can execute `kubectl get pod --all-namespaces` through kubectl to see the status of the pods.
+
+{{</ notice >}}
\ No newline at end of file
diff --git a/content/zh/docs/quick-start/minimal-kubesphere-on-k8s.md b/content/zh/docs/quick-start/minimal-kubesphere-on-k8s.md
index 36fd2ce80..666e90c89 100644
--- a/content/zh/docs/quick-start/minimal-kubesphere-on-k8s.md
+++ b/content/zh/docs/quick-start/minimal-kubesphere-on-k8s.md
@@ -1,8 +1,62 @@
 ---
 title: "Minimal KubeSphere on Kubernetes"
-keywords: 'kubesphere, kubernetes, docker, multi-tenant'
-description: 'Install a Minimal KubeSphere on Kubernetes'
+keywords: 'KubeSphere, Kubernetes, Minimal, Installation'
+description: 'Minimal Installation of KubeSphere on Kubernetes'
 
 linkTitle: "Minimal KubeSphere on Kubernetes"
 weight: 3020
 ---
+
+In addition to installing KubeSphere on a Linux machine, you can also deploy it directly on existing Kubernetes clusters. This QuickStart guide walks you through the general steps of completing a minimal KubeSphere installation on Kubernetes. For more information, see [Installing on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/).
+
+{{< notice note >}}
+
+- To install KubeSphere on Kubernetes, your Kubernetes version must be 1.15.x, 1.16.x, 1.17.x or 1.18.x.
+- Make sure your machine meets the minimal hardware requirements: CPU > 1 Core, Memory > 2 G.
+- A default StorageClass in your Kubernetes cluster needs to be configured before the installation.
+- The CSR signing feature needs to be activated in kube-apiserver, which means it is started with the `--cluster-signing-cert-file` and `--cluster-signing-key-file` parameters. See [RKE installation issue](https://github.com/kubesphere/kubesphere/issues/1925#issuecomment-591698309).
+- For more information about the prerequisites of installing KubeSphere on Kubernetes, see [Prerequisites](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/prerequisites/).
+
+{{</ notice >}}
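+The version and StorageClass prerequisites are quick to verify up front from any machine with kubectl access to the cluster:
+
+```bash
+# The server version should fall within the supported 1.15.x-1.18.x range.
+kubectl version --short
+# Exactly one StorageClass should be marked "(default)" in this output.
+kubectl get storageclass
+```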
+
+## Deploy KubeSphere
+
+After you make sure your machine meets the prerequisites, you can follow the steps below to install KubeSphere.
+
+- Please read the note below before you execute the commands to start the installation:
+
+{{< notice note >}}
+
+- If your server has trouble accessing GitHub, you can copy the content in [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml) and [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) respectively and paste it into local files. You can then use `kubectl apply -f` on the local files to install KubeSphere.
+- In cluster-configuration.yaml, you need to disable `metrics_server` manually by changing `true` to `false` if the component has already been installed in your environment, especially for cloud-hosted Kubernetes clusters.
+
+{{</ notice >}}
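+Whether you apply the files from the URLs below or from local copies, the result is the same; fetching them first also lets you review or edit cluster-configuration.yaml (e.g. the `metrics_server` flag mentioned in the note above) before applying:
+
+```bash
+# Download the two manifests, then apply the local copies.
+curl -LO https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
+curl -LO https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
+kubectl apply -f kubesphere-installer.yaml
+kubectl apply -f cluster-configuration.yaml
+```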
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
+```
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
+```
+
+- Inspect the installation logs:
+
+```bash
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+```
+
+- Use `kubectl get pod --all-namespaces` to see whether all pods are running normally in the relevant namespaces of KubeSphere. If they are, check the port (30880 by default) of the console with the following command:
+
+```bash
+kubectl get svc/ks-console -n kubesphere-system
+```
+
+- Make sure port 30880 is open in your security groups, then access the web console through the NodePort (`IP:30880`) with the default account and password (`admin/P@88w0rd`).
+- After logging in to the console, you can check the status of the different components in **Components**. You may need to wait for some components to be up and running if you want to use related services.
+
+![components](https://ap3.qingstor.com/kubesphere-website/docs/components.png)
+
+## Enable Pluggable Components (Optional)
+
+The guide above is only for a default minimal installation. To enable other components in KubeSphere, see Enable Pluggable Components for more details.
diff --git a/content/zh/docs/release/release-v300.md b/content/zh/docs/release/release-v300.md
index 11dddee95..e459798ca 100644
--- a/content/zh/docs/release/release-v300.md
+++ b/content/zh/docs/release/release-v300.md
@@ -1,5 +1,5 @@
 ---
-title: "Release Notes for 3.0.0"
+title: "Release Notes For 3.0.0"
 keywords: "Kubernetes, KubeSphere, release-notes"
 description: "KubeSphere Release Notes for 3.0.0"
 
@@ -7,99 +7,100 @@ linkTitle: "Release Notes - 3.0.0"
 weight: 50
 ---
 
-## 如何获取 v3.0.0
+## How to Get v3.0.0
 
 - [Install KubeSphere v3.0.0 on Linux](https://github.com/kubesphere/kubekey)
 - [Install KubeSphere v3.0.0 on existing Kubernetes](https://github.com/kubesphere/ks-installer)
 
-# 发行注记
+## Release Notes
 
-## **安装**
+## **Installer**
 
-### 功能
+### FEATURES
 
-- 全新的开箱即用的 installer: [KubeKey](https://github.com/kubesphere/kubekey)，v1.0.0，极大降低对不同操作系统环境的依赖，通过更简单、高效的方式快速部署 Kubernetes + KubeSphere 环境
+- A brand-new installer: [KubeKey](https://github.com/kubesphere/kubekey) v1.0.0, a turnkey solution for installing Kubernetes with KubeSphere on different platforms. It is easier to use and reduces the dependency on the OS environment.
 
-### 升级和优化
+### UPGRADES & ENHANCEMENTS
 
-- 新版 [ks-installer](https://github.com/kubesphere/ks-installer)，v3.0.0，兼容 Kubernetes 1.15.x、1.16.x、1.17.x 和 1.18.x
-- [KubeKey](https://github.com/kubesphere/kubekey) 官方验证并支持 Kubernetes 1.15.12、1.16.13、1.17.9 和 1.18.6（注意，请避免使用 KubeKey 安装 Kubernetes 1.15~1.15.5 和 1.16~1.16.2，因为这些版本的 Kubernetes 有 [API 验证失败的问题](https://github.com/kubernetes/kubernetes/issues/83778)）
-- 增加对开源操作系统 EulerOS, UOS 和 KylinOS 的支持
-- 增加对鲲鹏和飞腾 CPU 的支持
-- 使用 ClusterConfiguration 替代之前的 ConfigMap 资源对象存储 ks-installer 相关的安装配置信息
+- [ks-installer](https://github.com/kubesphere/ks-installer) v3.0.0 is compatible with Kubernetes 1.15.x, 1.16.x, 1.17.x and 1.18.x
+- [KubeKey](https://github.com/kubesphere/kubekey) officially supports Kubernetes 1.15.12, 1.16.13, 1.17.9 and 1.18.6 (please avoid using KubeKey to install Kubernetes 1.15.0 to 1.15.5 and 1.16.0 to 1.16.2, because these versions have an [API validation issue](https://github.com/kubernetes/kubernetes/issues/83778))
+- Add support for EulerOS, UOS and KylinOS
+- Add support for Kunpeng and Phytium CPUs
+- Use ClusterConfiguration to store ks-installer's configuration instead of a ConfigMap
 
-## **集群管理**
+## **Cluster Management**
 
-### 功能
+### FEATURES
 
-- 支持多集群统一化管理
-- 支持跨集群联邦部署
+- Support unified management of multiple Kubernetes clusters
+- Support federated deployments across multiple clusters
 
-## **可观察性**
+## **Observability**
 
-### 功能
+### FEATURES
 
-- 支持在 KubeSphere 控制台添加第三方应用监控指标
-- 支持 K8s 及 KubeSphere 操作审计，并支持审计记录的归档、检索和告警
-- 支持 K8s 事件管理，并支持基于 [kube-events](https://github.com/kubesphere/kube-events) 的事件的归档、检索和告警
-- 支持租户级操作审计和 K8s 事件的检索，授权用户仅能检索自己权限允许范围内的操作审计记录和 K8s 事件
-- 支持将审计记录和 K8s 事件归档至 Elasticsearch，Kafka 或者 Fluentd
-- 基于 [Notification Manager](https://github.com/kubesphere/notification-manager) 支持多租户通知
-- 支持 Alertmanager v0.21.0
+- Support custom monitoring of third-party application metrics in the KubeSphere console
+- Add Kubernetes and KubeSphere auditing support, including audit event archiving, searching and alerting
+- Add Kubernetes event management support, including Kubernetes event archiving, searching and alerting based on [kube-events](https://github.com/kubesphere/kube-events)
+- Add tenant control to auditing log and Kubernetes event searching: a tenant user can only search the auditing logs and Kubernetes events within his or her own permission scope
+- Support archiving auditing logs and Kubernetes events to Elasticsearch, Kafka or Fluentd
+- Add multi-tenant notification support via [Notification Manager](https://github.com/kubesphere/notification-manager)
+- Support Alertmanager v0.21.0
+
-### 升级和优化
+### UPGRADES & ENHANCEMENTS
 
-- 升级 Prometheus Operator 至 v0.38.3（KubeSphere 定制版）
-- 升级 Prometheus 至 v2.20.1
-- 升级 Node Exporter 至 v0.18.1
-- 升级 kube-state-metrics 至 v1.9.6
-- 升级 metrics server 至 v0.3.7
-- metrics-server 调整为缺省开启
-- 升级 Fluent Bit Operator 至 v0.2.0
-- 升级 Fluent Bit 至 v1.4.6
-- 极大改善日志检索效率
-- 允许平台管理员查看已被删除的项目（namespace）下的 pod 的日志
-- 优化落盘日志收集配置
+- Upgrade Prometheus Operator to v0.38.3 (KubeSphere customized version)
+- Upgrade Prometheus to v2.20.1
+- Upgrade Node Exporter to v0.18.1
+- Upgrade kube-state-metrics to v1.9.6
+- Upgrade metrics-server to v0.3.7
+- metrics-server is enabled by default
+- Upgrade Fluent Bit Operator to v0.2.0
+- Upgrade Fluent Bit to v1.4.6
+- Significantly improve log searching performance
+- Allow platform admins to view pod logs from deleted namespaces
+- Adjust the display style of log searching results in the Toolbox
+- Optimize log collection configuration for log files on pod volumes
 
-### 问题修复
+### BUG FIXES
 
-- 修复新创建项目的监控数据图时间轴偏移问题 (#[2868](https://github.com/kubesphere/kubesphere/issues/2868))
-- 修复工作负载级别的告警在某些场景下无法正常工作的问题 (#[2834](https://github.com/kubesphere/kubesphere/issues/2834))
-- 修复节点在 NotReady 状态下没有监控数据的问题
+- Fix time skew in metric graphs for newly created namespaces (#[2868](https://github.com/kubesphere/kubesphere/issues/2868))
+- Fix workload-level alerting not working as expected (#[2834](https://github.com/kubesphere/kubesphere/issues/2834))
+- Fix missing metric data for nodes in the NotReady state
 
 ## **DevOps**
 
-### 功能
+### FEATURES
 
-- 重构 DevOps 模块的架构，使用 CRDs 方式管理 DevOps 资源
+- Refactor the DevOps framework and use CRDs to manage DevOps resources
 
-### 升级和优化
+### UPGRADES & ENHANCEMENTS
 
-- 在安装包中删除 Sonarqube，调整为支持对接外部 Sonarqube
+- Remove SonarQube from the installer's default packages and support connecting to an external SonarQube
 
-### 问题修复
+### BUG FIXES
 
-- 修复 DevOps 权限数据在偶发场景下丢失的问题
+- Fix the issue that DevOps permission data is missing in a very limited number of cases
+
-- 修复 DevOps 的 Stage 页面按钮无法正常工作的问题 (#[449](https://github.com/kubesphere/console/issues/449))
-- 修复流水线参数无法正常提交保存的问题 (#[2699](https://github.com/kubesphere/kubesphere/issues/2699))
+- Fix the issue that the button on the Stage page does not work (#[449](https://github.com/kubesphere/console/issues/449))
+- Fix the issue that parameterized pipelines fail to save parameter values (#[2699](https://github.com/kubesphere/kubesphere/issues/2699))
 
-## **应用商店**
+## **App Store**
 
-### 功能
+### FEATURES
 
-- 支持 Helm V3
-- 支持将应用模板部署到多集群之中
-- 支持应用模板升级
-- 支持查看应用仓库同步过程中产生的事件
+- Support Helm V3
+- Support deploying application templates onto multiple clusters
+- Support application template upgrades
+- Users can view events that occur during repository synchronization
 
-### 升级和优化
+### UPGRADES & ENHANCEMENTS
 
-- 用户能使用相同的应用仓库名称
-- 支持应用模板中的 CRD 资源
-- 将 OpenPitrix 下的所有 Service 对象整合到一个 Service 之中
-- 在添加应用仓库时，支持 HTTP 验证方式
-- 应用仓库中新增和升级以下应用:
+- Users can use the same application repository name
+- Support application templates that contain CRDs
+- Merge all OpenPitrix services into one service
+- Support HTTP basic authentication when adding an application repository
+- Add and upgrade the following apps in the App Store:
 AWS EBS CSI Driver 0.5.0 - Helm 0.3.0
 AWS EFS CSI Driver 0.3.0 - Helm 0.1.0
 AWS FSX CSI Driver 0.1.0 - Helm 0.1.0
@@ -119,81 +120,81 @@ weight: 50
 Redis Exporter 1.3.4 - Helm 3.4.1
 Tomcat 8.5.41 - Helm 0.4.1+1
 
-### 问题修复
+### BUG FIXES
 
-- 修复 attachment IDs 字段长度不足的问题
+- Fix the insufficient field length for attachment IDs
 
-## **网络**
+## **Network**
 
-### 功能
+### FEATURES
 
-- 支持项目级租户网络隔离和网络防火墙策略管理
-- 支持企业空间级租户网络隔离
-- 支持增删改和查看原生 K8s 网络策略
+- Support project network isolation by adding controllers to manage custom project network policies
+- Support workspace network isolation
+- Support adding, viewing, modifying and deleting native Kubernetes network policies
 
-## 微服务治理
+## **Service Mesh**
 
-### 功能
+### FEATURES
 
-- 支持清理 Jaeger ES 索引
+- Support cleaning Jaeger Elasticsearch indices
 
-### 升级和优化
+### UPGRADES & ENHANCEMENTS
 
-- 升级 Istio 至 v1.4.8
+- Upgrade Istio to v1.4.8
 
-## **存储**
+## **Storage**
 
-### 功能
+### FEATURES
 
-- 支持存储卷快照管理
-- 支持存储容量管理
-- 支持存储卷监控
+- Support volume snapshot management
+- Support storage capacity management
+- Support volume monitoring
 
-## **安全**
+## **Security**
 
-### 功能
+### FEATURES
 
-- 支持 LDAP，OAuth2 认证插件
-- 支持自定义企业空间角色
-- 支持自定义 DevOps 工程角色
-- 支持跨集群安全权限控制
-- 支持 pod security context (#[1453](https://github.com/kubesphere/kubesphere/issues/1453))
+- Support LDAP and OAuth2 login
+- Support custom workspace roles
+- Support custom DevOps project roles
+- Support access control across multiple clusters
+- Support pod security context (#[1453](https://github.com/kubesphere/kubesphere/issues/1453))
 
-### 升级和优化
+### UPGRADES & ENHANCEMENTS
 
-- 简化了角色的自定义方式，将关联紧密的权限项聚合为权限组
-- 优化内置角色
+- Simplify role definition by aggregating closely related permissions into permission groups
+- Optimize built-in roles
 
-### 问题修复
+### BUG FIXES
 
-- 修复由于集群节点时间不同步导致的登录失败问题
+- Fix login failures caused by clock skew across cluster nodes
 
-## **全球化**
+## **Globalization**
 
-### 功能
+### FEATURES
 
-- Web 控制台增加对西班牙语、繁体中文的支持
+- Add support for new languages in the web console, including Spanish and Traditional Chinese
 
-## **用户体验**
+## **User Experience**
 
-### 功能
+### FEATURES
 
-- 工具箱新增支持“访问历史”快捷操作，用户可以查看自己之前访问过的集群、企业空间、项目和 DevOps 工程，并且支持通过键盘快捷键方式快速启动
+- Add access history to the Toolbox: users can revisit the clusters, workspaces, projects and DevOps projects they recently viewed, and launch them through keyboard shortcuts
 
-### 升级和优化
+### UPGRADES & ENHANCEMENTS
 
-- 重构和优化全局导航栏
-- 重构和优化详情页的痕迹导航
-- 重构和优化资源列表页的数据自刷新
-- 简化项目（namespace）的创建过程
-- 重构和优化应用的创建，支持通过 YAML 创建应用
-- 支持通过 YAML 方式修正工作负载
-- 调整工具箱中日志检索页面的数据展示方式
-- 重构和优化应用商店中应用部署的表单页
-- 支持 helm chart schema (#[schema-files](https://helm.sh/docs/topics/charts/#schema-files))
+- Refactor global navigation
+- Refactor breadcrumbs on detail pages
+- Refactor automatic data refreshing in resource list pages
+- Simplify project creation
+- Refactor composing application creation, and support creating composing applications through YAML
+- Support editing workloads through YAML
+- Optimize the display of log query results
+- Refactor the App Store deployment form
+- Support Helm chart schemas (#[schema-files](https://helm.sh/docs/topics/charts/#schema-files))
 
-### 问题修复
+### BUG FIXES
 
-- 修复编辑 ingress annotations 的报错问题 (#[1931](https://github.com/kubesphere/kubesphere/issues/1931))
-- 修复编辑工作负载容器探针的报错问题
-- 修复 XSS 安全问题
\ No newline at end of file
+- Fix the error when editing Ingress annotations (#[1931](https://github.com/kubesphere/kubesphere/issues/1931))
+- Fix the error when editing container probes of workloads
+- Fix XSS security issues in server-side templates
\ No newline at end of file