diff --git a/content/en/docs/installing-on-kubernetes/_index.md b/content/en/docs/installing-on-kubernetes/_index.md
index 51adfedde..32096c401 100644
--- a/content/en/docs/installing-on-kubernetes/_index.md
+++ b/content/en/docs/installing-on-kubernetes/_index.md
@@ -1,9 +1,9 @@
 ---
-title: "Installing on Kubernetes"
+title: "Installing KubeSphere on Kubernetes"
 description: "Help you to better understand KubeSphere with detailed graphics and contents"
 layout: "single"
-linkTitle: "Installing on Kubernetes"
+linkTitle: "Installing KubeSphere on Kubernetes"
 weight: 2500
 icon: "/images/docs/docs.svg"

diff --git a/content/en/docs/installing-on-kubernetes/uninstalling/_index.md b/content/en/docs/installing-on-kubernetes/uninstalling/_index.md
new file mode 100644
index 000000000..55d950cfd
--- /dev/null
+++ b/content/en/docs/installing-on-kubernetes/uninstalling/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Uninstalling"
+weight: 2300
+
+_build:
+  render: false
+---

diff --git a/content/en/docs/installing-on-kubernetes/uninstalling/uninstalling-kubesphere-from-k8s.md b/content/en/docs/installing-on-kubernetes/uninstalling/uninstalling-kubesphere-from-k8s.md
new file mode 100644
index 000000000..6f4531add
--- /dev/null
+++ b/content/en/docs/installing-on-kubernetes/uninstalling/uninstalling-kubesphere-from-k8s.md
@@ -0,0 +1,224 @@
---
title: "Uninstalling KubeSphere from Kubernetes"
keywords: 'kubernetes, kubesphere, air gapped, installation'
description: 'How to uninstall KubeSphere from Kubernetes'

weight: 2240
---

The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. This guide demonstrates how to install KubeSphere and Kubernetes in an air-gapped environment.

> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).

## Prerequisites

- If your machine is behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for details.
- The installer uses `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend adding additional storage with at least 100 GB mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference, and the mount example after this list.
- The installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning, which is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare a [high-availability configuration](../master-ha) before installation.
- Since air-gapped machines cannot connect to an apt or yum source, please use a clean Linux machine to avoid dependency problems.
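If you need to add the storage described above, a spare data disk can be formatted and mounted before installation. The following is a minimal sketch only; the device name `/dev/vdb` is an assumption, so check `lsblk` for the actual device on your machine.

```bash
# Illustrative sketch: prepare a spare data disk for the local registry.
# /dev/vdb is an assumed device name; verify with `lsblk` before running.
sudo mkfs.ext4 /dev/vdb                 # format the disk
sudo mkdir -p /mnt/registry             # create the mount point
sudo mount /dev/vdb /mnt/registry       # mount it
# Persist the mount across reboots.
echo '/dev/vdb /mnt/registry ext4 defaults 0 0' | sudo tee -a /etc/fstab
```

The same approach applies to a disk mounted at `/var/lib/docker`.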
## Step 1: Prepare Linux Hosts

The following describes the requirements for hardware and the operating system. To get started with multi-node installation, you need to prepare at least three hosts according to the following requirements.

- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
- Time synchronization is required across all nodes, otherwise the installation may not succeed.
- For `Ubuntu 16.04`, it is recommended to select `16.04.5`.
- If you are using `Ubuntu 18.04`, you need to use the `root` user.
- Ensure the disk of each node is at least 100 GB.
- Total CPU and memory across all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.

The following example demonstrates a multi-node installation with three hosts, using the `master` node as the taskbox to execute the installation. The cluster consists of one master and two nodes.

> Note: KubeSphere supports a high-availability configuration of the master and etcd nodes. Please refer to [Creating a High Availability KubeSphere Cluster](../master-ha) for guidance.

| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|

### Cluster Architecture

#### Single Master, Single Etcd, Two Nodes

![Architecture](/cluster-architecture.svg)

## Step 2: Download Installer Package

Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`:

```bash
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
```

## Step 3: Configure Host Template

> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.

Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note that you must not manually wrap any line in the file.

> Note:
>
> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
> - master, node1, and node2 are the host names of the nodes; all host names must be lowercase.

### hosts.ini

```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD

[local-registry]
master

[kube-master]
master

[kube-node]
node1
node2

[etcd]
master

[k8s-cluster:children]
kube-node
kube-master
```

> Note:
>
> - You need to replace the node information (IP address, password, etc.) in the group `[all]` with the real values. The master node is the taskbox, so you do not need to add a password field for it.
> - The installer uses one node as the local registry for Docker images; this defaults to "master" in the group `[local-registry]`.
> - The "master" node also takes the roles of master and etcd, so "master" is filled in under the groups `[kube-master]` and `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled in under the group `[kube-node]`.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
> - `ansible_host`: The name or address of the host to connect to.
> - `ip`: The IP address of the host to connect to.
> - `ansible_user`: The default SSH user name to use.
> - `ansible_become_pass`: The privilege escalation password.
> - `ansible_ssh_pass`: The SSH password used to connect to the host as root.

## Step 4: Enable All Components

> This step is for the complete installation. You can skip it if you choose a minimal installation.

Edit `conf/common.yaml` and change the following values, which are `false` by default, to `true`.

```yaml
# LOGGING CONFIGURATION
# Logging is an optional component when installing KubeSphere, and the
# Kubernetes built-in logging APIs will be used if logging_enabled is set to false.
# Built-in logging only provides limited functions, so it is recommended to enable logging.
logging_enabled: true # Whether to install the logging system
elasticsearch_master_replica: 1 # Total number of master nodes; an even number is not allowed
elasticsearch_data_replica: 2 # Total number of data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in the built-in Elasticsearch, 7 days by default
elk_prefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false # Whether to install the built-in Kibana
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address. KubeSphere supports integration with an Elasticsearch outside the cluster, which can reduce resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port

# DevOps Configuration
devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary-to-Image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, 8 Gi by default
jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install the built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address. KubeSphere supports integration with a SonarQube outside the cluster, which can reduce resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token

# The following components are all optional for KubeSphere and can be
# turned on before installation, or later by updating their values to true
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # For KubeSphere HPA to use
servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```

## Step 5: Install KubeSphere to Linux Machines

> Note:
>
> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
> - If you want to enable the pluggable feature components, modify `common.yaml` and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - The installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
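Before executing the installer, it can be worth confirming that the taskbox can reach every other node over SSH and that their clocks agree, since time synchronization is required across all nodes. A minimal sketch, reusing the sample node IPs from `hosts.ini` above:

```bash
# Optional pre-flight check from the taskbox. The IPs are the sample values
# from hosts.ini; replace them with your actual node IPs.
for ip in 192.168.0.2 192.168.0.3; do
  ssh -o ConnectTimeout=5 root@"$ip" 'hostname; date'   # reachability and clock
done
```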
**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:

```bash
cd ../scripts
./install.sh
```

**2.** Type `2` to select the multi-node mode to start the installation. The installer will ask whether you have set up a persistent storage service; just type `yes`, since we are going to use a local volume.

```bash
################################################
         KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/       2020-02-24
################################################
Please input an option: 2
```

**3.** Verify the multi-node installation:

**(1).** If "successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.

```bash
successful!
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd

NOTE: Please modify the default password after login.
#####################################################
```

> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).

**(2).** You can use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.

![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)

Note: After logging in to the console, please verify the monitoring status of the service components in "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.

![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)

## Enable Pluggable Components

If you have set up a minimal installation, you can still enable components later by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).

```bash
kubectl edit cm -n kubesphere-system ks-installer
```

## FAQ

If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).

diff --git a/content/en/docs/installing-on-linux/_index.md b/content/en/docs/installing-on-linux/_index.md
index 2442646b9..08045fdd9 100644
--- a/content/en/docs/installing-on-linux/_index.md
+++ b/content/en/docs/installing-on-linux/_index.md
@@ -18,6 +18,6 @@ In this chapter, we will demonstrate how to use KubeKey to provision a new Kuber

 Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
-{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
+{{< popularPage icon="/images/docs/qingcloud-2.svg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}

 {{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}

diff --git a/content/en/docs/installing-on-linux/cluster-operation/_index.md b/content/en/docs/installing-on-linux/cluster-operation/_index.md
index f5b0c58a2..f57fde055 100644
--- a/content/en/docs/installing-on-linux/cluster-operation/_index.md
+++ b/content/en/docs/installing-on-linux/cluster-operation/_index.md
@@ -1,6 +1,6 @@
 ---
 linkTitle: "Cluster Operation"
-weight: 2200
+weight: 2445

 _build:
   render: false

diff --git a/content/en/docs/installing-on-linux/uninstalling/uninstalling-kubesphere-and-Kubernetes.md b/content/en/docs/installing-on-linux/uninstalling/uninstalling-kubesphere-and-Kubernetes.md
index 53d70eacb..a6af5a24c 100644
--- a/content/en/docs/installing-on-linux/uninstalling/uninstalling-kubesphere-and-Kubernetes.md
+++ b/content/en/docs/installing-on-linux/uninstalling/uninstalling-kubesphere-and-Kubernetes.md
@@ -1,5 +1,5 @@
 ---
-title: "Uninstalling KubeSphere and kubernetes"
+title: "Uninstalling KubeSphere and Kubernetes"
 keywords: 'kubernetes, kubesphere, uninstalling, remove-cluster'
 description: 'How to uninstall KubeSphere and kubernetes'

diff --git a/content/en/docs/multicluster-management/_index.md b/content/en/docs/multicluster-management/_index.md
index dab34fb0e..24a32d2f8 100644
--- a/content/en/docs/multicluster-management/_index.md
+++ b/content/en/docs/multicluster-management/_index.md
@@ -11,9 +11,13 @@ icon: "/images/docs/docs.svg"
 ---

-## Installing KubeSphere and Kubernetes on Linux
+Today, it is very common for organizations to run and manage multiple Kubernetes clusters on different cloud providers or infrastructures. Each Kubernetes cluster is a relatively self-contained unit, and the upstream community has been working hard on researching and developing multi-cluster management solutions such as [kubefed](https://github.com/kubernetes-sigs/kubefed).

-In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. Kubekey can help you to quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you to easily scale the cluster and install pluggable components on existing architecture.
+The most common use cases in multi-cluster management include **service traffic load balancing, isolation of development and production environments, decoupling of data processing and data storage, cross-cloud backup and disaster recovery, flexible allocation of computing resources, low-latency access for cross-region services, and avoiding vendor lock-in**.
+
+KubeSphere is developed to address these multi-cluster and multi-cloud management challenges and implement the preceding user scenarios, providing users with a unified control plane to distribute applications and their replicas to multiple clusters, from public clouds to on-premises environments. KubeSphere also provides rich observability across multiple clusters, including centralized monitoring, logging, events, and auditing logs.
+
+![KubeSphere Multi-cluster Management](/images/docs/multi-cluster-overview.jpg)

 ## Most Popular Pages

diff --git a/content/en/docs/multicluster-management/enable-multicluster/agent-connection.md b/content/en/docs/multicluster-management/enable-multicluster/agent-connection.md
index a208c432e..9fea17bbd 100644
--- a/content/en/docs/multicluster-management/enable-multicluster/agent-connection.md
+++ b/content/en/docs/multicluster-management/enable-multicluster/agent-connection.md
@@ -1,10 +1,214 @@
 ---
-title: "Kubernetes Federation in KubeSphere"
-keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud'
+title: "Agent Connection"
+keywords: 'kubernetes, kubesphere, multicluster, agent-connection'
 description: 'Overview'

-weight: 2340
+weight: 2343
 ---

-TBD

## Prerequisites

You have installed at least two KubeSphere clusters. If not, please refer to [Installing on Linux](../../../installing-on-linux) or [Installing on Kubernetes](../../../installing-on-kubernetes) first.

{{< notice note >}}
Multi-cluster management requires KubeSphere to be installed on the target clusters. If you have an existing cluster, please install a minimal KubeSphere on it as an agent; see [Installing Minimal KubeSphere on Kubernetes](../../../installing-on-kubernetes/minimal-kubesphere-on-k8s) for details.
{{</ notice >}}

## Agent Connection

The KubeSphere component [Tower](https://github.com/kubesphere/tower) is used for the agent connection. Tower is a tool for network connection between clusters through an agent. If the Host Cluster (hereafter referred to as **H** Cluster) cannot access the Member Cluster (hereafter referred to as **M** Cluster) directly, you can expose the proxy service address of the H Cluster. This enables the M Cluster to connect to the H Cluster through the agent. This method is applicable when the M Cluster is in a private environment (e.g. an IDC) and the H Cluster is able to expose the proxy service. The agent connection is also applicable when your clusters are distributed across different cloud providers.

### Prepare a Host Cluster

{{< tabs >}}

{{< tab "KubeSphere has been installed" >}}

If you already have a standalone KubeSphere installed, you can make it a host cluster by editing the cluster configuration and **waiting for a while**.

- Option A - Use Web Console:

Log in with the `cluster-admin` account, go to **Cluster Management → CRDs**, search for the keyword `ClusterConfiguration`, enter its detail page, and edit the YAML of `ks-installer`. This is similar to enabling pluggable components.

- Option B - Use Kubectl:

```shell
kubectl edit cc ks-installer -n kubesphere-system
```

Scroll down and change the value of `clusterRole` to `host`, then save (or click **Update** in the console) to make it effective:

```yaml
multicluster:
  clusterRole: host
```

{{</ tab >}}

{{< tab "KubeSphere has not been installed" >}}

There is no big difference if you are just starting the installation. Please note that the `clusterRole` in `config-sample.yaml` or `cluster-configuration.yaml` has to be set as follows:

```yaml
multicluster:
  clusterRole: host
```

{{</ tab >}}

{{</ tabs >}}

Then you can use **kubectl** to retrieve the installation logs and verify the status. After a while, you will see logs indicating success once the host cluster is ready.
```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

#### Set Proxy Service Address

After the installation of the H Cluster, a proxy service named `tower` of type **LoadBalancer** is created in the namespace `kubesphere-system`.

{{< tabs >}}

{{< tab "There is a LoadBalancer in your cluster" >}}

If a LoadBalancer plugin is available in the cluster, you will see a corresponding address under `EXTERNAL-IP`, which is acquired by KubeSphere automatically. In that case you can skip the step of setting the proxy address.

```shell
$ kubectl -n kubesphere-system get svc
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
tower   LoadBalancer   10.233.63.191   139.198.110.23   8080:30721/TCP   16h
```

> Generally, there is always a LoadBalancer solution in the public cloud, and the external IP will be allocated by the load balancer automatically. If your clusters are running in an on-premises environment (especially a **bare-metal environment**), we recommend using [Porter](https://github.com/kubesphere/porter) as the LB solution.

{{</ tab >}}

{{< tab "There is not a LoadBalancer in your cluster" >}}

1. If no corresponding address is displayed (the `EXTERNAL-IP` is pending), you need to set the proxy address manually. For example, suppose you have an available public IP address `139.198.120.120`, and the port `8080` of this IP address has been forwarded to the port `30721` of the cluster.

```shell
kubectl -n kubesphere-system get svc
```

```
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
tower   LoadBalancer   10.233.63.191   <pending>     8080:30721/TCP   16h
```

2. Edit the ConfigMap `kubesphere-config` and add the address you set above. You can also edit the ConfigMap from **Configuration → ConfigMaps**: search for the keyword `kubesphere-config`, then edit its YAML and add the following configuration:

```bash
kubectl -n kubesphere-system edit cm kubesphere-config
```

```
multicluster:
  clusterRole: host
  proxyPublishAddress: http://139.198.120.120:8080 # Add this line to set the address used to access tower
```

3. Save and update the ConfigMap, then restart the Deployment `ks-apiserver`:

```shell
kubectl -n kubesphere-system rollout restart deployment ks-apiserver
```

{{</ tab >}}

{{</ tabs >}}

### Prepare a Member Cluster

In order to manage the member cluster from the host cluster, the `jwtSecret` needs to be the same between them. Therefore, first retrieve it from the host cluster with the following command:

```bash
kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
```

```yaml
jwtSecret: "gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU"
```

{{< tabs >}}

{{< tab "KubeSphere has been installed" >}}

If you already have a standalone KubeSphere installed, you can make it a member cluster by editing the cluster configuration and **waiting for a while**.

- Option A - Use Web Console:

Log in with the `cluster-admin` account, go to **Cluster Management → CRDs**, search for the keyword `ClusterConfiguration`, enter its detail page, and edit the YAML of `ks-installer`. This is similar to enabling pluggable components.
- Option B - Use Kubectl:

```shell
kubectl edit cc ks-installer -n kubesphere-system
```

Then input the corresponding `jwtSecret` shown above:

```yaml
authentication:
  jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
```

Scroll down and change the value of `clusterRole` to `member`, then save (or click **Update** in the console) to make it effective:

```yaml
multicluster:
  clusterRole: member
```

{{</ tab >}}

{{< tab "KubeSphere has not been installed" >}}

There is no big difference if you are just starting the installation. Please fill in the `jwtSecret` in `config-sample.yaml` or `cluster-configuration.yaml` with the value shown above:

```yaml
authentication:
  jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
```

Then scroll down and change the `clusterRole` to `member`:

```yaml
multicluster:
  clusterRole: member
```

{{</ tab >}}

{{</ tabs >}}

### Import Cluster

1. Open the H Cluster dashboard and click **Add Cluster**.

![Add Cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200827231611.png)

2. Enter the basic information of the cluster to import and click **Next**.

![Import Cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200827211842.png)

3. In **Connection Method**, select **Cluster connection agent** and click **Import**.

![agent-en](/images/docs/agent-en.png)

4. Create an `agent.yaml` file in the M Cluster based on the instructions, then copy and paste the deployment into the file. Execute `kubectl create -f agent.yaml` on the node and wait for the agent to be up and running. Please make sure the proxy address is accessible from the M Cluster.

5. When the cluster agent is up and running, you will see the imported cluster in the H Cluster.

![Azure AKS](https://ap3.qingstor.com/kubesphere-website/docs/20200827231650.png)

diff --git a/content/en/docs/multicluster-management/enable-multicluster/direct-connection.md b/content/en/docs/multicluster-management/enable-multicluster/direct-connection.md
index 133a174c8..ac9a7a534 100644
--- a/content/en/docs/multicluster-management/enable-multicluster/direct-connection.md
+++ b/content/en/docs/multicluster-management/enable-multicluster/direct-connection.md
@@ -1,5 +1,5 @@
 ---
-title: "Overview"
+title: "Direct Connection"
 keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud'
 description: 'Overview'

@@ -7,4 +7,154 @@
 weight: 2340
 ---

-TBD

## Prerequisites

You have installed at least two KubeSphere clusters. If not, please refer to [Installing on Linux](../../../installing-on-linux) or [Installing on Kubernetes](../../../installing-on-kubernetes) first.

{{< notice note >}}
Multi-cluster management requires KubeSphere to be installed on the target clusters. If you have an existing cluster, please install a minimal KubeSphere on it as an agent; see [Installing Minimal KubeSphere on Kubernetes](../../../installing-on-kubernetes/minimal-kubesphere-on-k8s) for details.
{{</ notice >}}

## Direct Connection

If the kube-apiserver address of the Member Cluster (hereafter referred to as **M** Cluster) is accessible from any node of the Host Cluster (hereafter referred to as **H** Cluster), you can adopt **Direct Connection**. This method is applicable when the kube-apiserver address of the M Cluster can be exposed, or when the H Cluster and the M Cluster are in the same private network or subnet.
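One way to verify this condition is to call the M Cluster's kube-apiserver from a node of the H Cluster; any HTTP response, even an authentication error, shows that the endpoint is reachable. The address below is a placeholder, not a value from this guide:

```bash
# Run on any node of the H Cluster. Replace the placeholder address with the
# `server` field from the M Cluster's KubeConfig. Any HTTP response,
# including 401/403, means the endpoint is reachable.
curl -k https://172.16.0.10:6443/version
```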
### Prepare a Host Cluster

{{< tabs >}}

{{< tab "KubeSphere has been installed" >}}

If you already have a standalone KubeSphere installed, you can make it a host cluster by editing the cluster configuration and **waiting for a while**.

- Option A - Use Web Console:

Log in with the `cluster-admin` account, go to **Cluster Management → CRDs**, search for the keyword `ClusterConfiguration`, enter its detail page, and edit the YAML of `ks-installer`. This is similar to enabling pluggable components.

- Option B - Use Kubectl:

```shell
kubectl edit cc ks-installer -n kubesphere-system
```

Scroll down and change the value of `clusterRole` to `host`, then save (or click **Update** in the console) to make it effective:

```yaml
multicluster:
  clusterRole: host
```

{{</ tab >}}

{{< tab "KubeSphere has not been installed" >}}

There is no big difference if you are just starting the installation. Please note that the `clusterRole` in `config-sample.yaml` or `cluster-configuration.yaml` has to be set as follows:

```yaml
multicluster:
  clusterRole: host
```

{{</ tab >}}

{{</ tabs >}}

Then you can use **kubectl** to retrieve the installation logs and verify the status. After a while, you will see logs indicating success once the host cluster is ready.

```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

### Prepare a Member Cluster

In order to manage the member cluster from the host cluster, the `jwtSecret` needs to be the same between them. Therefore, first retrieve it from the host cluster with the following command:

```bash
kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
```

```yaml
jwtSecret: "gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU"
```

{{< tabs >}}

{{< tab "KubeSphere has been installed" >}}

If you already have a standalone KubeSphere installed, you can make it a member cluster by editing the cluster configuration and **waiting for a while**.

- Option A - Use Web Console:

Log in with the `cluster-admin` account, go to **Cluster Management → CRDs**, search for the keyword `ClusterConfiguration`, enter its detail page, and edit the YAML of `ks-installer`. This is similar to enabling pluggable components.

- Option B - Use Kubectl:

```shell
kubectl edit cc ks-installer -n kubesphere-system
```

Then input the corresponding `jwtSecret` shown above:

```yaml
authentication:
  jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
```

Scroll down and change the value of `clusterRole` to `member`, then save (or click **Update** in the console) to make it effective:

```yaml
multicluster:
  clusterRole: member
```

{{</ tab >}}

{{< tab "KubeSphere has not been installed" >}}

There is no big difference if you are just starting the installation. Please fill in the `jwtSecret` in `config-sample.yaml` or `cluster-configuration.yaml` with the value shown above:

```yaml
authentication:
  jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
```

Then scroll down and change the `clusterRole` to `member`:

```yaml
multicluster:
  clusterRole: member
```

{{</ tab >}}

{{</ tabs >}}

Then you can use **kubectl** to retrieve the installation logs and verify the status. After a while, you will see logs indicating success once the member cluster is ready.

```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```
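As a final sanity check before importing, you can confirm that the member cluster now carries the same `jwtSecret` as the host cluster, reusing the command shown earlier:

```bash
# Run with kubectl pointed at the member cluster; the output should match
# the jwtSecret retrieved from the host cluster above.
kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep jwtSecret
```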
### Import Cluster

1. Open the H Cluster dashboard and click **Add Cluster**.

![Add Cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200827231611.png)

2. Enter the basic information of the cluster and click **Next**.

![Import Cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200827211842.png)

3. In **Connection Method**, select **Direct Connection to Kubernetes cluster**.

4. [Retrieve the KubeConfig](../retrieve-kubeconfig), then copy the KubeConfig of the M Cluster and paste it into the box.

{{< notice tip >}}
Please make sure the `server` address in the KubeConfig is accessible from any node of the H Cluster. For the `KubeSphere API Server` address, you can fill in the KubeSphere API server address or leave it blank.
{{</ notice >}}

![import a cluster - direct connection](/images/docs/direct_import_en.png)

5. Click **Import** and wait for the cluster initialization to finish.

![Azure AKS](https://ap3.qingstor.com/kubesphere-website/docs/20200827231650.png)

diff --git a/content/en/docs/multicluster-management/enable-multicluster/retrieve-kubeconfig.md b/content/en/docs/multicluster-management/enable-multicluster/retrieve-kubeconfig.md
index 133a174c8..19f6306bd 100644
--- a/content/en/docs/multicluster-management/enable-multicluster/retrieve-kubeconfig.md
+++ b/content/en/docs/multicluster-management/enable-multicluster/retrieve-kubeconfig.md
@@ -1,10 +1,42 @@
 ---
-title: "Overview"
+title: "Retrieve KubeConfig"
 keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud'
 description: 'Overview'

-weight: 2340
+weight: 2345
 ---

-TBD

## Prerequisites

You have a KubeSphere cluster.

## Explore KubeConfig File

Go to `$HOME/.kube` and list the files there. Typically, there is a file named `config`. Use the following command to retrieve the KubeConfig file:

```bash
cat $HOME/.kube/config
```

```
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EZ3dPREE1hqaVE3NXhwbGFQNUgwSm5ySk5peTBacFh6QWxjYzZlV2JlaXJ1VgpUbmZUVjZRY3pxaVcrS3RBdFZVbkl4MCs2VTgzL3FiKzdINHk2RnA0aVhUaDJxRHJ6Qkd4dG1UeFlGdC9OaFZlCmhqMHhEbHVMOTVUWkRjOUNmSFgzdGZJeVh5WFR3eWpnQ2g1RldxbGwxVS9qVUo2RjBLVVExZ1pRTFp4TVJMV0MKREM2ZFhvUGlnQ3BNaVRPVXl5SVNhWUVjYVNBMEo5VWZmSGd4ditVcXVleTc0cEM2emszS0lOT2tGMkI1MllxeApUa09OT2VkV2hDUExMZkUveVJqeGw1aFhPL1Z4REFaVC9HQ1Y1a0JZN0toNmRhendmUllOa21IQkhDMD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=hqaVE3NXhwbGFQNUgwSm5ySk5peTBacFh6QWxjYzZlV2JlaXJ1VgpUbmZUVjZRY3pxaVcrS3RBdFZVbkl4MCs2VTgzL3FiKzdINHk2RnA0aVhUaDJxRHJ6Qkd4dG1UeFlGdC9OaFZlCmhqMHhEbHVMOTVUWkRjOUNmSFgzdGZJeVh5WFR3eWpnQ2g1RldxbGwxVS9qVUo2RjBLVVExZ1pRTFp4TVJMV0MKREM2ZFhvUGlnQ3BNaVRPVXl5SVNhWUVjYVNBMEo5VWZmSGd4ditVcXVleTc0cEM2emszS0lOT2tGMkI1MllxeApUa09OT2VkV2hDUExMZkUveVJqeGw1aFhPL1Z4REFaVC9HQ1Y1a0JZN0toNmRhendmUllOa21IQkhDMD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://lb.kubesphere.local:6443
  name: cluster.local
contexts:
- context:
    cluster: cluster.local
    user: kubernetes-admin
  name: kubernetes-admin@cluster.local
current-context: kubernetes-admin@cluster.local
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJRzd5REpscVdjdTh3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBNE1EZ3dPVEkzTXpkYUZ3MHlNVEE0TURnd09USTNNemhhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnsOTJBUkJDNTRSR3BsZ3VmCmw5a0hPd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFEQ2FUTXNBR1Vhdnhrazg0NDZnOGNRQUJpSmk5RTZiREV5TwphRnJubC8reGRzRmgvOTFiMlNpM3ZwaHFkZ2k5bXRYWkhhaWI5dnQ3aXdtSEFwbGQxUkhBU25sMFoxWFh1dkhzCmMzcXVIU0puY3dmc3JKT0I4UG9NRjVnaG10a0dPV3g0M2RHTTNHQnpGTVJ4ZGcrNmttNjRNUGhneXl6NTJjYUoKbzhPajNja1Uzd1NWNkxvempRcFVaUnZHV25qQjEwUXFPWXBtQUk4VCtlZkxKZzhuY0drK3V3UUVTeXBYWExpYwoxWVQ2QkFJeFhEK2tUUU1hOFhjdUhHZzlWRkdsUm9yK1EvY3l0S3RDeHVncFlxQ2xvbHVpckFUUnpsemRXamxYCkVQaHVjRWs2UUdIZEpObjd0M2NwRGkzSUdYYXJFdGxQQmFwck9nSGpkOHZVOStpWXdoQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=TJBUkJDNTRSR3BsZ3VmCmw5a0hPd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFEQ2FUTXNBR1Vhdnhrazg0NDZnOGNRQUJpSmk5RTZiREV5TwphRnJubC8reGRzRmgvOTFiMlNpM3ZwaHFkZ2k5bXRYWkhhaWI5dnQ3aXdtSEFwbGQxUkhBU25sMFoxWFh1dkhzCmMzcXVIU0puY3dmc3JKT0I4UG9NRjVnaG10a0dPV3g0M2RHTTNHQnpGTVJ4ZGcrNmttNjRNUGhneXl6NTJjYUoKbzhPajNja1Uzd1NWNkxvempRcFVaUnZHV25qQjEwUXFPWXBtQUk4VCtlZkxKZzhuY0drK3V3UUVTeXBYWExpYwoxWVQ2QkFJeFhEK2tUUU1hOFhjdUhHZzlWRkdsUm9yK1EvY3l0S3RDeHVncFlxQ2xvbHVpckFUUnpsemRXamxYCkVQaHVjRWs2UUdIZEpObjd0M2NwRGkzSUdYYXJFdGxQQmFwck9nSGpkOHZVOStpWXdoQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeXBLWkdtdmdiSHdNaU9pVU80UHZKZXB2MTJaaE1yRUIxK2xlVnM0dHIzMFNGQ0p1Ck8wc09jL2lUNmFuWEJzUU1XNDF6V3hwV1B5elkzWXlUWEJMTlIrM01pWTl2SFhUeWJ6eitTWnNlTzVENytHL3MKQnR5NkovNGpJb2pZZlRZNTFzUUxyRVJydStmVnNGeUU0U2dXbE1HYWdqV0RIMFltM0VJsOTJBUkJDNTRSR3BsZ3VmCmw5a0hPd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFEQ2FUTXNBR1Vhdnhrazg0NDZnOGNRQUJpSmk5RTZiREV5TwphRnJubC8reGRzRmgvOTFiMlNpM3ZwaHFkZ2k5bXRYWkhhaWI5dnQ3aXdtSEFwbGQxUkhBU25sMFoxWFh1dkhzCmMzcXVIU0puY3dmc3JKT0I4UG9NRjVnaG10a0dPV3g0M2RHTTNHQnpGTVJ4ZGcrNmttNjRNUGhneXl6NTJjYUoKbzhPajNja1Uzd1NWNkxvempRcFVaUnZHV25qQjEwUXFPWXBtQUk4VCtlZkxKZzhuY0drK3V3UUVTeXBYWExpYwoxWVQ2QkFJeFhEK2tUUU1hOFhjdUhHZzlWRkdsUm9yK1EvY3l0S3RDeHVncFlxQ2xvbHVpckFUUnpsemRXamxYCkVQaHVjRWs2UUdIZEpObjd0M2NwRGkzSUdYYXJFdGxQQmFwck9nSGpkOHZVOStpWXdoQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=Ygo3THE3a2tBMURKNTBld2pMUTNTd1Yxd2p6N2ZjeDYvbzUwRnJnK083dEJMVVdQNTNHaDQ1VjJpUEp2NkdPYk1uCjhIWElmem83cW5XRFQvU20ybW5HbitUdVY4THdLVWFXL2wya3FkRUNnWUVBcS9zRmR1RDk2Z3VoT2ZaRnczcWMKblZGekNGQ3JsMkUvVkdYQy92SmV1WnJLQnFtSUtNZFI3ajdLWS9WRFVlMnJocVd6MFh2Wm9Sa1FoMkdwWkdIawpDd3NzcENKTVl4L0hETTVaWlBvcittb1J6VE5HNHlDNGhTRGJ2VEFaTmV1VTZTK1hzL1JSTDJ6WnUwemNQQXk1CjJJRVgwelFpZ1JzK3VzS3Jkc1FVZXZrQ2dZQUUrQUNWeDJnMC94bmFsMVFJNmJsK3Y2TDJrZVJtVGppcHB4Wm0KS1JEd2xnaXpsWGxsTjhyQmZwSGNiK1ZnZ282anN2eHFrb0pkTEhBLzFDME5IMWVuS1NoUTlpZVFpeWNsZngwdQpKOE1oeW1JM0RBZUg1REJyOG1rZ0pwNnJwUXNBc1paYmVhOHlLTzV5eVdCYTN6VGxOVnQvNDRibGg5alpnTWNMCjNyUXFVUUtCZ1FETVlXdEt2S0hOQllXV0p5enFERnFPbS9qY3Z3andvcURibUZVMlU3UGs2aUdNVldBV3VYZ3cKSm5qQWtES01GN0JXSnJRUjR6RHVoQlhvQVMxWVhiQ2lGd2hTcXVjWGhFSGlwQ3Nib0haVVRtT1pXUUh4Vlp4bQowU1NiRXFZU2MvZHBDZ1BHRk9IaW1FdUVic05kc2JjRmRETDQyODZHb0psQUxCOGc3VWRUZUE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
```
b/content/en/docs/multicluster-management/introduction/kubefed-in-kubesphere.md index a208c432e..7a85a3334 100644 --- a/content/en/docs/multicluster-management/introduction/kubefed-in-kubesphere.md +++ b/content/en/docs/multicluster-management/introduction/kubefed-in-kubesphere.md @@ -7,4 +7,6 @@ description: 'Overview' weight: 2340 --- -TBD +The multi-cluster feature relates to the network connection among multiple clusters. Therefore, it is important to understand the topological relations of clusters as the workload can be reduced. + +Before you use the multi-cluster feature, you need to create a Host Cluster (hereafter referred to as **H** Cluster), which is actually a KubeSphere cluster that has enabled the multi-cluster feature. All the clusters managed by the H Cluster are called Member Cluster (hereafter referred to as **M** Cluster). They are common KubeSphere clusters that do not have the multi-cluster feature enabled. There can only be one H Cluster while multiple M Clusters can exist at the same time. In a multi-cluster architecture, the network between the H Cluster and the M Cluster can be connected directly or through an agent. The network between M Clusters can be set in a completely isolated environment. diff --git a/content/en/docs/multicluster-management/introduction/overview.md b/content/en/docs/multicluster-management/introduction/overview.md index 133a174c8..818f2cfd4 100644 --- a/content/en/docs/multicluster-management/introduction/overview.md +++ b/content/en/docs/multicluster-management/introduction/overview.md @@ -4,7 +4,13 @@ keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud' description: 'Overview' -weight: 2340 +weight: 2335 --- -TBD +Today, it's very common for organizations to run and manage multiple Kubernetes Clusters on different cloud providers or infrastructures. Each Kubernetes cluster is a relatively self-contained unit. And the upstream community is struggling to research and develop the multi-cluster management solution, such as [kubefed](https://github.com/kubernetes-sigs/kubefed). + +The most common use cases in multi-cluster management including **service traffic load balancing, development and production isolation, decoupling of data processing and data storage, cross-cloud backup and disaster recovery, flexible allocation of computing resources, low latency access with cross-region services, and no vendor lock-in,** etc. + +KubeSphere is developed to address the multi-cluster and multi-cloud management challenges and implement the proceeding user scenarios, providing users with a unified control plane to distribute applications and its replicas to multiple clusters from public cloud to on-premise environment. KubeSphere also provides rich observability cross multiple clusters including centralized monitoring, logging, events, and auditing logs. 
![KubeSphere Multi-cluster Management](/images/docs/multi-cluster-overview.jpg)

diff --git a/static/images/docs/agent-en.png b/static/images/docs/agent-en.png
new file mode 100644
index 000000000..6ba82edab
Binary files /dev/null and b/static/images/docs/agent-en.png differ

diff --git a/static/images/docs/direct_import_en.png b/static/images/docs/direct_import_en.png
new file mode 100644
index 000000000..ba517cd04
Binary files /dev/null and b/static/images/docs/direct_import_en.png differ

diff --git a/static/images/docs/multi-cluster-overview.jpg b/static/images/docs/multi-cluster-overview.jpg
new file mode 100644
index 000000000..f9b666860
Binary files /dev/null and b/static/images/docs/multi-cluster-overview.jpg differ

diff --git a/static/images/docs/qingcloud-2.svg b/static/images/docs/qingcloud-2.svg
new file mode 100644
index 000000000..1d90038aa
--- /dev/null
+++ b/static/images/docs/qingcloud-2.svg
@@ -0,0 +1,6 @@
(6 lines of SVG markup omitted)