Merge pull request #264 from Sherlock113/multiwording

Update multicluster guide
This commit is contained in:
pengfei 2020-09-14 18:13:10 +08:00 committed by GitHub
commit bf9c93fe32
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
4 changed files with 44 additions and 53 deletions


@@ -1,23 +1,22 @@
---
title: "Agent Connection"
keywords: 'kubernetes, kubesphere, multicluster, agent-connection'
keywords: 'Kubernetes, KubeSphere, multicluster, agent-connection'
description: 'Overview'
weight: 2343
weight: 3013
---
## Prerequisites
You have already installed at least two KubeSphere clusters, please refer to [Installing on Linux](../../../installing-on-linux) or [Installing on Kubernetes](../../../installing-on-kubernetes) if not yet.
You have already installed at least two KubeSphere clusters. Please refer to [Installing on Linux](../../../installing-on-linux) or [Installing on Kubernetes](../../../installing-on-kubernetes) if they are not ready yet.
{{< notice note >}}
Multi-cluster management requires Kubesphere to be installed on the target clusters. If you have an existing cluster, please install a minimal KubeSphere on it as an agent, see [Installing Minimal KubeSphere on Kubernetes](../../installing-on-kubernetes/minimal-kubesphere-on-k8s) for details.
Multi-cluster management requires KubeSphere to be installed on the target clusters. If you have an existing cluster, you can deploy KubeSphere on it with a minimal installation so that it can be imported. See [Minimal KubeSphere on Kubernetes](../../../quick-start/minimal-kubesphere-on-k8s/) for details.
{{</ notice >}}
## Agent Connection
The component [Tower](https://github.com/kubesphere/tower) of KubeSphere is used for agent connection. Tower is a tool for network connection between clusters through the agent. If the H Cluster cannot access the M Cluster directly, you can expose the proxy service address of the H cluster. This enables the M Cluster to connect to the H cluster through the agent. This method is applicable when the M Cluster is in a private environment (e.g. IDC) and the H Cluster is able to expose the proxy service. The agent connection is also applicable when your clusters are distributed in different cloud providers.
KubeSphere uses the component [Tower](https://github.com/kubesphere/tower) for agent connection. Tower is a tool that establishes the network connection between clusters through an agent. If the H Cluster cannot access the M Cluster directly, you can expose the proxy service address of the H Cluster, which enables the M Cluster to connect to the H Cluster through the agent. This method is applicable when the M Cluster is in a private environment (e.g. IDC) and the H Cluster is able to expose the proxy service. The agent connection is also applicable when your clusters are distributed across different cloud providers.
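If the proxy service of the H Cluster has not been exposed yet, the sketch below shows one possible way to check and expose it. This is only a hedged example: the service name `tower`, its namespace, and the `LoadBalancer` type are assumptions, so use whatever exposure method fits your environment.

```bash
# Check whether the proxy (Tower) service already has an externally reachable address.
# The service name and namespace here are assumptions, not taken from this guide.
kubectl -n kubesphere-system get svc tower

# One possible way to expose it when no external address exists yet:
kubectl -n kubesphere-system patch svc tower -p '{"spec":{"type":"LoadBalancer"}}'
```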
### Prepare a Host Cluster
@@ -25,11 +24,11 @@ The component [Tower](https://github.com/kubesphere/tower) of KubeSphere is used
{{< tab "KubeSphere has been installed" >}}
If you already have a standalone KubeSphere installed, you can change the `clusterRole` to a host cluster by editing the cluster configuration and **wait for a while**.
If you already have a standalone KubeSphere installed, you can set the value of `clusterRole` to `host` by editing the cluster configuration. You need to **wait for a while** so that the change can take effect.
- Option A - Use Web Console:
Use `cluster-admin` account to enter **Cluster Management → CRDs**, search for the keyword `ClusterConfiguration` and enter its detailed page, edit the YAML of `ks-installer`. This is similar to Enable Pluggable Components.
Log in to the console with the `admin` account and go to **CRDs** on the **Cluster Management** page. Enter the keyword `ClusterConfiguration` and go to its detail page. Edit the YAML of `ks-installer`, which is similar to [Enable Pluggable Components](../../../pluggable-components/).
- Option B - Use Kubectl:
@@ -37,7 +36,7 @@ Use `cluster-admin` account to enter **Cluster Management → CRDs**, search for
kubectl edit cc ks-installer -n kubesphere-system
```
Scroll down and change the value of `clusterRole` to `host`, then click **Update** to make it effective:
Scroll down and set the value of `clusterRole` to `host`, then click **Update** (if you use the web console) to make it effective:
```yaml
multicluster:
@@ -48,27 +47,20 @@ multicluster:
{{< tab "KubeSphere has not been installed" >}}
There is no big difference if you just start the installation. Please fill in the `jwtSecret` with the value shown as above in `config-sample.yaml` or `cluster-configuration.yaml`:
```yaml
authentication:
jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
```
Then scroll down and change the `clusterRole` to `member`:
There is no big difference if you define a host cluster before installation. Please note that the `clusterRole` in `config-sample.yaml` or `cluster-configuration.yaml` has to be set as follows:
```yaml
multicluster:
clusterRole: member
clusterRole: host
```
{{</ tab >}}
{{</ tabs >}}
Then you can use the **kubectl** to retrieve the installation logs to verify the status. Wait for a while, you will be able to see the successful logs return if the host cluster is ready.
You can use **kubectl** to retrieve the installation logs and verify the status by running the following command. Wait for a while, and you will see a success log if the host cluster is ready.
```
```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```
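Besides the installation logs, you can also do a quick check of the multi-cluster components. The sketch below is only an assumption about where these components run (a default KubeFed control plane plus the Tower proxy), not an official verification step.

```bash
# Hypothetical follow-up check: list the federation control plane pods
# (the namespace name kube-federation-system is an assumption).
kubectl get pod -n kube-federation-system

# Check whether the Tower proxy pod is running on the host cluster.
kubectl get pod -n kubesphere-system | grep tower
```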


@@ -1,18 +1,17 @@
---
title: "Direct Connection"
keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud'
keywords: 'Kubernetes, KubeSphere, multicluster, hybrid-cloud, direct-connection'
description: 'Overview'
weight: 2340
weight: 3011
---
## Prerequisites
You have already installed at least two KubeSphere clusters, please refer to [Installing on Linux](../../../installing-on-linux) or [Installing on Kubernetes](../../../installing-on-kubernetes) if not yet.
You have already installed at least two KubeSphere clusters. Please refer to [Installing on Linux](../../../installing-on-linux) or [Installing on Kubernetes](../../../installing-on-kubernetes) if they are not ready yet.
{{< notice note >}}
Multi-cluster management requires Kubesphere to be installed on the target clusters. If you have an existing cluster, please install a minimal KubeSphere on it as an agent, see [Installing Minimal KubeSphere on Kubernetes](../../installing-on-kubernetes/minimal-kubesphere-on-k8s) for details.
Multi-cluster management requires KubeSphere to be installed on the target clusters. If you have an existing cluster, you can deploy KubeSphere on it with a minimal installation so that it can be imported. See [Minimal KubeSphere on Kubernetes](../../../quick-start/minimal-kubesphere-on-k8s/) for details.
{{</ notice >}}
## Direct Connection
@@ -25,11 +24,11 @@ If the kube-apiserver address of Member Cluster (hereafter referred to as **M**
{{< tab "KubeSphere has been installed" >}}
If you already have a standalone KubeSphere installed, you can change the `clusterRole` to a host cluster by editing the cluster configuration and **wait for a while**.
If you already have a standalone KubeSphere installed, you can set the value of `clusterRole` to `host` by editing the cluster configuration. You need to **wait for a while** so that the change can take effect.
- Option A - Use Web Console:
Use `cluster-admin` account to enter **Cluster Management → CRDs**, search for the keyword `ClusterConfiguration` and enter its detailed page, edit the YAML of `ks-installer`. This is similar to Enable Pluggable Components.
Log in to the console with the `admin` account and go to **CRDs** on the **Cluster Management** page. Enter the keyword `ClusterConfiguration` and go to its detail page. Edit the YAML of `ks-installer`, which is similar to [Enable Pluggable Components](../../../pluggable-components/).
- Option B - Use Kubectl:
@@ -37,7 +36,7 @@ Use `cluster-admin` account to enter **Cluster Management → CRDs**, search for
kubectl edit cc ks-installer -n kubesphere-system
```
Scroll down and change the value of `clusterRole` to `host`, then click **Update** to make it effective:
Scroll down and set the value of `clusterRole` to `host`, then click **Update** (if you use the web console) to make it effective:
```yaml
multicluster:
@@ -48,7 +47,7 @@ multicluster:
{{< tab "KubeSphere has not been installed" >}}
There is no big difference if you just start the installation. Please note that the `clusterRole` in `config-sample.yaml` or `cluster-configuration.yaml` has to be set like following:
There is no big difference if you define a host cluster before installation. Please note that the `clusterRole` in `config-sample.yaml` or `cluster-configuration.yaml` has to be set as follows:
```yaml
multicluster:
@@ -59,20 +58,22 @@ multicluster:
{{</ tabs >}}
Then you can use the **kubectl** to retrieve the installation logs to verify the status. Wait for a while, you will be able to see the successful logs return if the host cluster is ready.
You can use **kubectl** to retrieve the installation logs and verify the status by running the following command. Wait for a while, and you will see a success log if the host cluster is ready.
```
```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```
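If you only want to confirm that the `clusterRole` change has been picked up, the following sketch prints the relevant part of the cluster configuration. It reuses the `cc` shorthand from the command above; the grep context size is just an assumption about the YAML layout.

```bash
# Print the multicluster section of the ks-installer configuration;
# it should now show clusterRole: host.
kubectl -n kubesphere-system get cc ks-installer -o yaml | grep -A 2 multicluster
```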
### Prepare a Member Cluster
In order to manage the member cluster within the host cluster, we need to make the jwtSecret same between them. So first you need to get it from the host by the following command.
In order to manage the member cluster from the host cluster, you need to make sure `jwtSecret` is the same on both of them. Therefore, get it first from the host cluster by running the following command.
```bash
kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
```
The output may look like this:
```yaml
jwtSecret: "gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU"
```
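If you prefer not to copy the value manually, the sketch below stores it in a shell variable for later use on the member cluster. The parsing assumes the exact output format shown above.

```bash
# Extract the jwtSecret value (second field of the line, with quotes stripped)
# and keep it in a variable for the member cluster configuration.
JWT_SECRET=$(kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep jwtSecret | awk '{print $2}' | tr -d '"')
echo "$JWT_SECRET"
```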
@@ -81,11 +82,11 @@ jwtSecret: "gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU"
{{< tab "KubeSphere has been installed" >}}
If you already have a standalone KubeSphere installed, you can change the `clusterRole` to a host cluster by editing the cluster configuration and **wait for a while**.
If you already have a standalone KubeSphere installed, you can set the value of `clusterRole` to `member` by editing the cluster configuration. You need to **wait for a while** so that the change can take effect.
- Option A - Use Web Console:
Use `cluster-admin` account to enter **Cluster Management → CRDs**, search for the keyword `ClusterConfiguration` and enter its detailed page, edit the YAML of `ks-installer`. This is similar to Enable Pluggable Components.
Log in to the console with the `admin` account and go to **CRDs** on the **Cluster Management** page. Enter the keyword `ClusterConfiguration` and go to its detail page. Edit the YAML of `ks-installer`, which is similar to [Enable Pluggable Components](../../../pluggable-components/).
- Option B - Use Kubectl:
@@ -93,14 +94,14 @@ Use `cluster-admin` account to enter **Cluster Management → CRDs**, search for
kubectl edit cc ks-installer -n kubesphere-system
```
Then input the corresponding jwtSecret shown above:
Input the corresponding `jwtSecret` shown above:
```yaml
authentication:
jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
```
Then scroll down and change the value of `clusterRole` to `member`, then click **Update** to make it effective:
Scroll down and set the value of `clusterRole` to `member`, then click **Update** (if you use the web console) to make it effective:
```yaml
multicluster:
@@ -111,16 +112,16 @@ multicluster:
{{< tab "KubeSphere has not been installed" >}}
There is no big difference if you just start the installation. Please fill in the `jwtSecret` with the value shown as above in `config-sample.yaml` or `cluster-configuration.yaml`:
There is no big difference if you define a member cluster before installation. Please fill in the `jwtSecret` in `config-sample.yaml` or `cluster-configuration.yaml` with the value shown above:
```yaml
authentication:
jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
```
Then scroll down and change the `clusterRole` to `member`:
Scroll down and set the value of `clusterRole` to `member`:
```
```yaml
multicluster:
clusterRole: member
```
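Before you start the installation with this file, a quick sanity check can help catch a missing field. This is a minimal sketch: the file name follows this guide, and the grep pattern is an assumption about how the fields appear in the YAML.

```bash
# Confirm that both jwtSecret and clusterRole are set in the configuration file
# (use cluster-configuration.yaml instead if that is the file you edited).
grep -E "jwtSecret|clusterRole" config-sample.yaml
```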
@@ -129,15 +130,15 @@ multicluster:
{{</ tabs >}}
Then you can use the **kubectl** to retrieve the installation logs to verify the status. Wait for a while, you will be able to see the successful logs return if the host cluster is ready.
You can use **kubectl** to retrieve the installation logs and verify the status by running the following command. Wait for a while, and you will see a success log if the member cluster is ready.
```
```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```
### Import Cluster
1. Open the H Cluster Dashboard and click **Add Cluster**.
1. Open the H Cluster dashboard and click **Add Cluster**.
![Add Cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200827231611.png)
@@ -147,7 +148,7 @@ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=
3. In **Connection Method**, select **Direct Connection to Kubernetes cluster**.
4. [Retrieve the KubeConfig](../retrieve-kubeconfig), then copy the KubeConfig of the Member Cluster and paste it into the box.
4. [Retrieve the KubeConfig](../retrieve-kubeconfig) of the Member Cluster, then copy and paste it into the box.
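The following is a hedged sketch of one way to export a self-contained KubeConfig on the M Cluster, assuming `kubectl` there currently points at that cluster; the dedicated guide linked above remains the authoritative procedure.

```bash
# Export a portable kubeconfig of the Member Cluster:
# --minify keeps only the current context, --flatten embeds certificate data,
# and --raw includes credentials that would otherwise be redacted.
kubectl config view --minify --flatten --raw
```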
{{< notice tip >}}
Please make sure the `server` address in the KubeConfig is accessible from any node of the H Cluster. For the `KubeSphere API Server` address, you can fill in the KubeSphere API server address or leave it blank.


@@ -1,14 +1,13 @@
---
title: "Kubernetes Federation in KubeSphere"
keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud'
keywords: 'Kubernetes, KubeSphere, federation, multicluster, hybrid-cloud'
description: 'Overview'
weight: 2340
weight: 3007
---
The multi-cluster feature involves the network connection among multiple clusters. Therefore, it is important to understand the topological relations of clusters beforehand, as this can reduce your workload.
Before you use the multi-cluster feature, you need to create a Host Cluster (hereafter referred to as **H** Cluster), which is actually a KubeSphere cluster that has enabled the multi-cluster feature. All the clusters managed by the H Cluster are called Member Cluster (hereafter referred to as **M** Cluster). They are common KubeSphere clusters that do not have the multi-cluster feature enabled. There can only be one H Cluster while multiple M Clusters can exist at the same time. In a multi-cluster architecture, the network between the H Cluster and the M Cluster can be connected directly or through an agent. The network between M Clusters can be set in a completely isolated environment.
Before you use the multi-cluster feature, you need to create a Host Cluster (hereafter referred to as **H** Cluster), which is actually a KubeSphere cluster with the multi-cluster feature enabled. All the clusters managed by the H Cluster are called Member Clusters (hereafter referred to as **M** Clusters). They are common KubeSphere clusters that do not have the multi-cluster feature enabled. There can only be one H Cluster, while multiple M Clusters can exist at the same time. In a multi-cluster architecture, the network between the H Cluster and an M Cluster can be connected directly or through an agent, and the networks of M Clusters can even be completely isolated from each other.
![Kubernetes Federation in KubeSphere](https://ap3.qingstor.com/kubesphere-website/docs/20200907232319.png)


@@ -1,16 +1,15 @@
---
title: "Overview"
keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud'
keywords: 'Kubernetes, KubeSphere, multicluster, hybrid-cloud'
description: 'Overview'
weight: 2335
weight: 3006
---
Today, it's very common for organizations to run and manage multiple Kubernetes Clusters on different cloud providers or infrastructures. Each Kubernetes cluster is a relatively self-contained unit. And the upstream community is struggling to research and develop the multi-cluster management solution, such as [kubefed](https://github.com/kubernetes-sigs/kubefed).
Today, it's very common for organizations to run and manage multiple Kubernetes clusters across different cloud providers or infrastructures. As each Kubernetes cluster is a relatively self-contained unit, the upstream community has been working hard on researching and developing a multi-cluster management solution. Kubernetes Cluster Federation ([KubeFed](https://github.com/kubernetes-sigs/kubefed) for short) is one possible approach among others.
The most common use cases in multi-cluster management including **service traffic load balancing, development and production isolation, decoupling of data processing and data storage, cross-cloud backup and disaster recovery, flexible allocation of computing resources, low latency access with cross-region services, and no vendor lock-in,** etc.
The most common use cases of multi-cluster management include service traffic load balancing, development and production isolation, decoupling of data processing and data storage, cross-cloud backup and disaster recovery, flexible allocation of computing resources, low-latency access for cross-region services, and avoidance of vendor lock-in.
KubeSphere is developed to address the multi-cluster and multi-cloud management challenges and implement the proceeding user scenarios, providing users with a unified control plane to distribute applications and its replicas to multiple clusters from public cloud to on-premise environment. KubeSphere also provides rich observability cross multiple clusters including centralized monitoring, logging, events, and auditing logs.
KubeSphere is developed to address multi-cluster and multi-cloud management challenges and implement the preceding user scenarios, providing users with a unified control plane to distribute applications and their replicas to multiple clusters, from public cloud to on-premises environments. KubeSphere also provides rich observability across multiple clusters, including centralized monitoring, logging, events, and auditing logs.
![KubeSphere Multi-cluster Management](/images/docs/multi-cluster-overview.jpg)