---
title: "Node Management"
keywords: "Kubernetes, KubeSphere, taints, nodes, labels, requests, limits"
description: "Monitor node status and learn how to add node labels or taints."

linkTitle: "Node Management"
weight: 8100
---

Kubernetes runs your workloads by placing containers into Pods to run on nodes. A node may be a virtual or physical machine, depending on the cluster. Each node contains the services necessary to run Pods, managed by the control plane. For more information about nodes, see the [official documentation of Kubernetes](https://kubernetes.io/docs/concepts/architecture/nodes/).

This tutorial demonstrates what a cluster administrator can view and do for nodes within a cluster.

## Prerequisites

You need a user granted a role that includes the **Cluster Management** authorization. For example, you can log in to the console as `admin` directly, or create a new role with the authorization and assign it to a user.

## Node Status

Cluster nodes are only accessible to cluster administrators. Some node metrics are very important to a cluster, so it is the administrator's responsibility to watch over these numbers and make sure nodes are available. Follow the steps below to view node status.

1. Click **Platform** in the upper-left corner and select **Cluster Management**.

2. If you have enabled the [multi-cluster feature](../../multicluster-management/) with member clusters imported, you can select a specific cluster to view its nodes. If you have not enabled the feature, go directly to the next step.

3. Choose **Cluster Nodes** under **Nodes**, where you can see detailed information about node status.

- **Name**: The node name and subnet IP address.
- **Status**: The current status of a node, indicating whether a node is available or not.
- **Role**: The role of a node, indicating whether a node is a worker or a control plane node.
- **CPU Usage**: The real-time CPU usage of a node.
- **Memory Usage**: The real-time memory usage of a node.
- **Pods**: The real-time Pod usage of a node.
- **Allocated CPU**: This metric is calculated based on the total CPU requests of Pods on a node. It represents the amount of CPU reserved for workloads on this node, even if workloads are using fewer CPU resources. This figure is vital to the Kubernetes scheduler (kube-scheduler), which in most cases favors nodes with lower allocated CPU resources when scheduling a Pod. For more details, refer to [Managing Resources for Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
- **Allocated Memory**: This metric is calculated based on the total memory requests of Pods on a node. It represents the amount of memory reserved for workloads on this node, even if workloads are using fewer memory resources.
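
If you prefer the command line, you can cross-check these two metrics with `kubectl`: the **Allocated resources** section of `kubectl describe node` shows the summed requests that KubeSphere reports as **Allocated CPU** and **Allocated Memory**. The node name `node1` below is only a placeholder:

```bash
# Show the summed CPU/memory requests and limits of Pods on the node.
kubectl describe node node1 | grep -A 8 "Allocated resources"
```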

{{< notice note >}}
**CPU Usage** and **Allocated CPU** usually differ, as do **Memory Usage** and **Allocated Memory**, which is normal. As a cluster administrator, you need to pay attention to both metrics instead of just one. It is always a good practice to set resource requests and limits for workloads that match their actual usage. Over-allocating resources can lead to low cluster utilization, while under-allocating may put the cluster under high pressure and leave it unhealthy.
{{</ notice >}}

## Node Management

Click a node from the list to go to its detail page.

- **Cordon/Uncordon**: Marking a node as unschedulable is very useful during a node reboot or other maintenance. The Kubernetes scheduler will not schedule new Pods to a node marked as unschedulable, and existing workloads on the node are not affected. In KubeSphere, you mark a node as unschedulable by clicking **Cordon** on the node detail page. Click the same button (**Uncordon**) again to make the node schedulable.
- **Labels**: Node labels can be very useful when you want to assign Pods to specific nodes. Label a node first (for example, label GPU nodes with `node-role.kubernetes.io/gpu-node`), and then add the label in **Advanced Settings** [when you create a workload](../../project-user-guide/application-workloads/deployments/#step-5-configure-advanced-settings) so that Pods are explicitly allowed to run on GPU nodes. To add node labels, click **More** and select **Edit Labels**.

- **Taints**: Taints allow a node to repel a set of Pods. To add or delete taints on the node detail page, click **More** and select **Edit Taints** from the drop-down menu.

{{< notice note >}}
Be careful when you add taints, as they may cause unexpected behavior and make services unavailable. For more information, see [Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/).
{{</ notice >}}
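
If you also manage nodes from the command line, the console operations above map to standard `kubectl` commands. The node name `node1` and the `gpu=true` taint below are placeholders:

```bash
# Mark a node as unschedulable, then make it schedulable again.
kubectl cordon node1
kubectl uncordon node1

# Add a label to a node, then remove it.
kubectl label nodes node1 node-role.kubernetes.io/gpu-node=
kubectl label nodes node1 node-role.kubernetes.io/gpu-node-

# Add a taint to a node, then remove it.
kubectl taint nodes node1 gpu=true:NoSchedule
kubectl taint nodes node1 gpu=true:NoSchedule-
```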

## Add and Remove Nodes

Currently, you cannot add or remove nodes directly from the KubeSphere console, but you can do it by using [KubeKey](https://github.com/kubesphere/kubekey). For more information, see [Add New Nodes](../../installing-on-linux/cluster-operation/add-new-nodes/) and [Remove Nodes](../../installing-on-linux/cluster-operation/remove-nodes/).
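
For reference, adding and removing nodes with KubeKey generally follows the pattern below. Treat it as a sketch and refer to the linked guides for the exact flags; `config-sample.yaml` stands for the cluster configuration file created by KubeKey:

```bash
# Add the new hosts to the cluster configuration file first, then run:
./kk add nodes -f config-sample.yaml

# Remove a node (replace "node3" with the node to delete):
./kk delete node node3 -f config-sample.yaml
```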

---
title: "Cluster Shutdown and Restart"
description: "Learn how to gracefully shut down your cluster and restart it."
layout: "single"

linkTitle: "Cluster Shutdown and Restart"
weight: 8800

icon: "/images/docs/docs.svg"
---
This document describes how to gracefully shut down your Kubernetes cluster and how to restart it. You might need to temporarily shut down your cluster for maintenance reasons.

{{< notice warning >}}
Shutting down a cluster is very dangerous. You must fully understand the operation and its consequences. Please make an etcd backup before you proceed.
Usually, it is recommended to maintain your nodes one by one instead of restarting the whole cluster.
{{</ notice >}}

## Prerequisites
- Take an [etcd backup](https://etcd.io/docs/current/op-guide/recovery/#snapshotting-the-keyspace) prior to shutting down the cluster.
- SSH [passwordless login](https://man.openbsd.org/ssh.1#AUTHENTICATION) is set up between hosts (see the example below).
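
A minimal way to set this up, assuming an `ubuntu` user and treating the node names below as placeholders, is to distribute a key pair from the host where you will run the commands:

```bash
# Generate a key pair on the control host (skip if one already exists).
ssh-keygen -t ed25519

# Copy the public key to every node so SSH no longer prompts for a password.
for node in node1 node2 node3; do
    ssh-copy-id ubuntu@$node
done
```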

## Shut Down a Kubernetes Cluster
{{< notice tip >}}

- You must back up your etcd data before you shut down the cluster so that the cluster can be restored if you encounter any issues when restarting it.
- The method in this tutorial shuts down a cluster gracefully, but the possibility of data corruption still exists.

{{</ notice >}}

### Step 1: Get the node list
```bash
# Collect the node names (space-separated) so they can be used directly as SSH targets.
nodes=$(kubectl get nodes -o jsonpath='{.items[*].metadata.name}')
```
### Step 2: Shut down all nodes
```bash
for node in ${nodes[@]}
do
    echo "==== Shut down $node ===="
    # Halt each node one minute after the command is issued.
    ssh $node sudo shutdown -h 1
done
```
Then you can shut down other cluster dependencies, such as external storage.

## Restart a Cluster Gracefully
You can restart a cluster gracefully after it has been shut down gracefully.

### Prerequisites
You have shut down your cluster gracefully.

{{< notice tip >}}
Usually, a cluster can be used after restarting, but it may be unavailable due to unexpected conditions, for example:

- etcd data corruption during the shutdown.
- Node failures.
- Unexpected network errors.

{{</ notice >}}

### Step 1: Check all cluster dependencies' status
Ensure all cluster dependencies are ready, such as external storage.
### Step 2: Power on cluster machines
Wait for the cluster to be up and running, which may take about 10 minutes.
### Step 3: Check the status of all control plane nodes
Check the status of core components, such as etcd services, and make sure everything is ready.
```bash
kubectl get nodes -l node-role.kubernetes.io/master
```
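
Besides the node status, it can help to confirm that the core components themselves are back. On clusters set up with kubeadm or KubeKey they usually run as Pods in the `kube-system` namespace, so a quick (non-exhaustive) check is:

```bash
# etcd, kube-apiserver, kube-controller-manager, and kube-scheduler should all be Running.
kubectl get pods -n kube-system -o wide
```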

### Step 4: Check all worker nodes' status
```bash
kubectl get nodes -l node-role.kubernetes.io/worker
```

If your cluster fails to restart, please try to [restore the etcd cluster](https://etcd.io/docs/current/op-guide/recovery/#restoring-a-cluster).

---
title: "SSH Connection Failure"
keywords: "Installation, SSH, KubeSphere, Kubernetes"
description: "Understand why the SSH connection may fail when you use KubeKey to create a cluster."
linkTitle: "SSH Connection Failure"
weight: 16600
---

When you use KubeKey to set up a cluster, you create a configuration file that contains the necessary host information. Here is an example of the field `hosts`:

```yaml
spec:
  hosts:
  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
  - {name: node1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
  - {name: node2, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
```

Before you run the `./kk` command to create your cluster, it is recommended that you test the SSH connection from the taskbox to the other instances.
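
For example, a quick manual check from the taskbox, reusing the user, address, and port defined in the configuration file above, could look like this:

```bash
# A successful run prints "ok" without any unexpected prompt.
ssh -p 22 ubuntu@192.168.0.3 'echo ok'
```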

## Possible Error Message

```bash
Failed to connect to xx.xxx.xx.xxx: could not establish connection to xx.xxx.xx.xxx:xx: ssh: handshake failed: ssh: unable to authenticate , attempted methods [none], no supported methods remain node=xx.xxx.xx.xxx
```

If you see an error message like the one above, verify the following:

- You are using the correct port number. Port `22` is the default SSH port, and you need to add the port number after the IP address if your host uses a different one. For example:

  ```yaml
  hosts:
  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, port: 8022, user: ubuntu, password: Testing123}
  ```

- SSH connections are not restricted in `/etc/ssh/sshd_config`. For example, `PasswordAuthentication` should be set to `yes` if you authenticate with a password.

- You are using the correct username, password, or key. Note that the user must have sudo privileges.

- Your firewall configurations allow SSH connections (see the example below).
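
As an illustration, allowing the SSH port through two common firewalls could look like the following; port `8022` is just the non-default example used above:

```bash
# Ubuntu with ufw
sudo ufw allow 8022/tcp

# CentOS with firewalld
sudo firewall-cmd --permanent --add-port=8022/tcp
sudo firewall-cmd --reload
```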

---
title: "Deploy KubeSphere on DigitalOcean Kubernetes"
keywords: 'Kubernetes, KubeSphere, DigitalOcean, Installation'
description: 'Learn how to deploy KubeSphere on DigitalOcean.'

weight: 4230
---

![DigitalOcean+KubeSphere](/images/docs/do/do-page.png)

This guide walks you through the steps of deploying KubeSphere on [DigitalOcean Kubernetes](https://www.digitalocean.com/products/kubernetes/).

## Prepare a DOKS Cluster

A Kubernetes cluster in DO is a prerequisite for installing KubeSphere. Go to your [DO account](https://cloud.digitalocean.com/) and refer to the image below to create a cluster from the navigation menu.

![do-dashboard](/images/docs/do/do-dashboard.png)

You need to select:

1. Kubernetes version (for example, *1.18.6-do.0*)
2. Datacenter region (for example, *Frankfurt*)
3. VPC network (for example, *default-fra1*)
4. Cluster capacity (for example, 2 standard nodes with 2 vCPUs and 4 GB of RAM each)
5. A name for the cluster (for example, *kubesphere-3*)

![create-cluster](/images/docs/do/create-cluster.png)

{{< notice note >}}

- To install KubeSphere 3.2.1 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, or v1.22.x (experimental).
- 2 nodes are included in this example. You can add more nodes based on your own needs, especially in a production environment.
- The machine type Standard / 4 GB / 2 vCPUs is for minimal installation. If you plan to enable several pluggable components or use the cluster for production, you can upgrade your nodes to a more powerful type (such as CPU-Optimized / 8 GB / 4 vCPUs). It seems that DigitalOcean provisions the control plane nodes based on the type of the worker nodes, and for Standard ones the API server can become unresponsive quite soon.

{{</ notice >}}

When the cluster is ready, you can download the config file for kubectl.

![download-config-file](/images/docs/do/download-config-file.png)

## Install KubeSphere on DOKS

Now that the cluster is ready, you can install KubeSphere following the steps below:

- Install KubeSphere using kubectl. The following commands are only for the default minimal installation.

  ```bash
  kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml

  kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml
  ```

- Inspect the installation logs:

  ```bash
  kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
  ```
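
Optionally, you can also confirm that the KubeSphere workloads are starting up; this quick check is not part of the original steps:

```bash
# All Pods in the KubeSphere namespaces should eventually reach the Running state.
kubectl get pods -A | grep kubesphere
```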

When the installation finishes, you can see the following message:

```bash
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://10.XXX.XXX.XXX:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After logging into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are ready.
  2. Please modify the default password after login.
#####################################################
https://kubesphere.io             2020-xx-xx xx:xx:xx
```

## Access KubeSphere Console

Now that KubeSphere is installed, you can access the web console of KubeSphere by following the steps below.

- Go to the Kubernetes Dashboard provided by DigitalOcean.

![kubernetes-dashboard](/images/docs/do/kubernetes-dashboard.png)

- Select the **kubesphere-system** namespace.

![namespace-kubesphere-system](/images/docs/do/namespace-kubesphere-system.png)

- In **Services** under **Service**, edit the service **ks-console**.

![edit-ks-console-service](/images/docs/do/edit-ks-console-service.png)

- Change the type from `NodePort` to `LoadBalancer`. Save the file when you finish.

![ks-console-loadbalancer](/images/docs/do/ks-console-loadbalancer.png)

- Access the KubeSphere web console using the endpoint generated by DO.

![access-kubesphere-console](/images/docs/do/access-kubesphere-console.png)

{{< notice tip >}}

Instead of changing the service type to `LoadBalancer`, you can also access the KubeSphere console via `NodeIP:NodePort` (with the service type left as `NodePort`). You need to get the public IP of one of your nodes.

{{</ notice >}}
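
If you take the `NodePort` route, an easy way to look up a node's public IP is shown below; this is only a convenience command, not a required step:

```bash
# The EXTERNAL-IP column shows each worker node's public address; the console listens on NodePort 30880 by default.
kubectl get nodes -o wide
```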

- Log in to the console with the default account and password (`admin/P@88w0rd`). On the cluster overview page, you can see the dashboard.

## Enable Pluggable Components (Optional)

The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see [Enable Pluggable Components](../../../pluggable-components/) for more details.

---
title: "Add Edge Nodes"
keywords: 'Kubernetes, KubeSphere, KubeEdge'
description: 'Add edge nodes to your cluster.'
linkTitle: "Add Edge Nodes"
weight: 3630
---

KubeSphere leverages [KubeEdge](https://kubeedge.io/en/) to extend native containerized application orchestration capabilities to hosts at the edge. With separate cloud and edge core modules, KubeEdge provides a complete edge computing solution, although installing it on your own can be complex and difficult.

![kubeedge](/images/docs/installing-on-linux/installing-on-linux/introduction/kubeedge.png)

{{< notice note >}}

For more information about different components of KubeEdge, see [the KubeEdge documentation](https://docs.kubeedge.io/en/docs/kubeedge/#components).

{{</ notice >}}

After an edge node joins your cluster, the native KubeEdge cloud component requires you to manually configure iptables so that you can use commands such as `kubectl logs` and `kubectl exec`. To address this, KubeSphere provides an efficient and convenient way to add edge nodes to a Kubernetes cluster. It uses supporting components (for example, EdgeWatcher) to configure iptables automatically.

![edge-node-arch](/images/docs/installing-on-linux/add-and-delete-nodes/add-edge-nodes/edge-node-arch.png)

This tutorial demonstrates how to add an edge node to your cluster.

## Prerequisites

- You have enabled [KubeEdge](../../../pluggable-components/kubeedge/).
- You have an available node to serve as an edge node. The node can run either Ubuntu (recommended) or CentOS. This tutorial uses Ubuntu 18.04 as an example.
- Edge nodes, unlike Kubernetes cluster nodes, should work in a separate network.

## Configure an Edge Node

You need to install a container runtime and configure EdgeMesh on your edge node.

### Install a container runtime

[KubeEdge](https://docs.kubeedge.io/en/docs/) supports several container runtimes, including Docker, containerd, CRI-O, and Virtlet. For more information, see [the KubeEdge documentation](https://docs.kubeedge.io/en/docs/advanced/cri/).

{{< notice note >}}

If you use Docker as the container runtime for your edge node, Docker v19.3.0 or later must be installed so that KubeSphere can collect its Pod metrics.

{{</ notice >}}

### Configure EdgeMesh

Perform the following steps to configure [EdgeMesh](https://kubeedge.io/en/docs/advanced/edgemesh/) on your edge node.

1. Edit `/etc/nsswitch.conf`.

   ```bash
   vi /etc/nsswitch.conf
   ```

2. Add the following content to this file:

   ```bash
   hosts: dns files mdns4_minimal [NOTFOUND=return]
   ```

3. Save the file and run the following command to enable IP forwarding:

   ```bash
   # "sudo echo ... >> /etc/sysctl.conf" would fail for a non-root user because the redirect
   # is performed by the current shell, so append the line with tee instead.
   echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
   ```

4. Verify your modification:

   ```bash
   sudo sysctl -p | grep ip_forward
   ```

   Expected result:

   ```bash
   net.ipv4.ip_forward = 1
   ```

## Create Firewall Rules and Port Forwarding Rules

To make sure edge nodes can successfully talk to your cluster, you must forward ports so that outside traffic can get into your network. Specifically, map each external port to the corresponding internal IP address (the control plane node) and port based on the table below. You also need to create firewall rules to allow traffic to these ports (`10000` to `10004`).

| Fields              | External Ports | Fields                  | Internal Ports |
| ------------------- | -------------- | ----------------------- | -------------- |
| `cloudhubPort`      | `10000`        | `cloudhubNodePort`      | `30000`        |
| `cloudhubQuicPort`  | `10001`        | `cloudhubQuicNodePort`  | `30001`        |
| `cloudhubHttpsPort` | `10002`        | `cloudhubHttpsNodePort` | `30002`        |
| `cloudstreamPort`   | `10003`        | `cloudstreamNodePort`   | `30003`        |
| `tunnelPort`        | `10004`        | `tunnelNodePort`        | `30004`        |
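
How you implement this mapping depends on your environment (a cloud load balancer, a router, or iptables on a gateway host). As a rough sketch only, a DNAT rule for the first row of the table, assuming the control plane node's internal address is `192.168.0.2`, could look like this:

```bash
# Forward external port 10000 to NodePort 30000 on the control plane node (the address is an assumption).
sudo iptables -t nat -A PREROUTING -p tcp --dport 10000 -j DNAT --to-destination 192.168.0.2:30000

# Allow the forwarded traffic through the filter table.
sudo iptables -A FORWARD -p tcp -d 192.168.0.2 --dport 30000 -j ACCEPT
```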

## Add an Edge Node

1. Log in to the console as `admin` and click **Platform** in the upper-left corner.

2. Select **Cluster Management** and navigate to **Edge Nodes** under **Nodes**.

   {{< notice note >}}

   If you have enabled [multi-cluster management](../../../multicluster-management/), you need to select a cluster first.

   {{</ notice >}}

3. Click **Add**. In the dialog that appears, set a node name and enter an internal IP address of your edge node. Click **Validate** to continue.

   {{< notice note >}}

   - The internal IP address is only used for inter-node communication, so you do not necessarily need to use the actual internal IP address of the edge node. As long as the IP address is successfully validated, you can use it.
   - It is recommended that you check the box to add the default taint.

   {{</ notice >}}

4. Copy the command automatically created under **Edge Node Configuration Command** and run it on your edge node.

   {{< notice note >}}

   Make sure `wget` is installed on your edge node before you run the command.

   {{</ notice >}}

5. Close the dialog and refresh the page. The edge node will appear in the list.

   {{< notice note >}}

   After an edge node is added, if you cannot see CPU and memory resource usage on the **Edge Nodes** page, make sure [Metrics Server](../../../pluggable-components/metrics-server/) 0.4.1 or later is installed in your cluster.

   {{</ notice >}}

6. After an edge node joins your cluster, some Pods may be scheduled to it and remain in the `Pending` state there. Due to the tolerations that some DaemonSets (for example, Calico) have, in the current version (KubeSphere 3.2.1) you need to manually patch such Pods so that they will not be scheduled to the edge node. The following script patches the controllers of all Pods found on the edge node:

   ```bash
   #!/bin/bash

   # Node selector and node affinity patches that keep workloads off edge nodes.
   NodeSelectorPatchJson='{"spec":{"template":{"spec":{"nodeSelector":{"node-role.kubernetes.io/master": "","node-role.kubernetes.io/worker": ""}}}}}'

   NoShedulePatchJson='{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}}}}'

   # The edge node name defaults to "edgenode" and can be passed as the first argument.
   edgenode="edgenode"
   if [ -n "$1" ]; then
       edgenode="$1"
   fi

   # Collect the namespaces and names of the Pods currently placed on the edge node.
   namespaces=($(kubectl get pods -A -o wide | egrep -i $edgenode | awk '{print $1}' ))
   pods=($(kubectl get pods -A -o wide | egrep -i $edgenode | awk '{print $2}' ))
   length=${#namespaces[@]}

   for((i=0;i<$length;i++));
   do
       ns=${namespaces[$i]}
       pod=${pods[$i]}
       # Patch the controller (for example, a DaemonSet) that owns each Pod.
       resources=$(kubectl -n $ns describe pod $pod | grep "Controlled By" | awk '{print $3}')
       echo "Patching for ns:"${namespaces[$i]}",resources:"$resources
       kubectl -n $ns patch $resources --type merge --patch "$NoShedulePatchJson"
       sleep 1
   done
   ```

## Custom Configurations

To customize some configurations of an edge node, such as the download URL and the KubeEdge version, create a [ConfigMap](../../../project-user-guide/configuration/configmaps/) as below:

```yaml
apiVersion: v1
data:
  region: zh # Download region.
  version: v1.6.1 # The version of KubeEdge to be installed. Allowed values are v1.5.0, v1.6.0, v1.6.1 (default) and v1.6.2.
kind: ConfigMap
metadata:
  name: edge-watcher-config
  namespace: kubeedge
```

{{< notice note >}}

- You can specify `zh` or `en` for the field `region`. `zh` is the default value and the default download link is `https://kubeedge.pek3b.qingstor.com/bin/v1.6.1/$arch/keadm-v1.6.1-linux-$arch.tar.gz`. If you set `region` to `en`, the download link will be `https://github.com/kubesphere/kubeedge/releases/download/v1.6.1-kubesphere/keadm-v1.6.1-linux-amd64.tar.gz`.
- The ConfigMap does not affect the configuration of existing edge nodes in your cluster. It is only used to change the KubeEdge configuration used on a new edge node. More specifically, it determines [the command automatically created by KubeSphere mentioned above](#add-an-edge-node), which needs to be executed on the edge node.
- While you can change the KubeEdge version to be installed on an edge node, it is recommended that the cloud and edge modules have the same KubeEdge version.

{{</ notice >}}

## Remove an Edge Node

Before you remove an edge node, delete all your workloads running on it.

1. On your edge node, run the following commands:

   ```bash
   ./keadm reset
   ```

   ```bash
   apt remove mosquitto
   ```

   ```bash
   rm -rf /var/lib/kubeedge /var/lib/edged /etc/kubeedge/ca /etc/kubeedge/certs
   ```

   {{< notice note >}}

   If you cannot delete the tmpfs-mounted folder, restart the node or unmount the folder first.

   {{</ notice >}}

2. Run the following command to remove the edge node from your cluster:

   ```bash
   kubectl delete node <edgenode-name>
   ```

3. To uninstall KubeEdge from your cluster, run the following commands:

   ```bash
   helm uninstall kubeedge -n kubeedge
   ```

   ```bash
   kubectl delete ns kubeedge
   ```

   {{< notice note >}}

   After the uninstallation, you will not be able to add edge nodes to your cluster.

   {{</ notice >}}
|
||||
|
|
@ -0,0 +1,226 @@
|
|||
---
|
||||
title: "Add Edge Nodes"
|
||||
keywords: 'Kubernetes, KubeSphere, KubeEdge'
|
||||
description: 'Add edge nodes to your cluster.'
|
||||
linkTitle: "Add Edge Nodes"
|
||||
weight: 3630
|
||||
---
|
||||
|
||||
KubeSphere leverages [KubeEdge](https://kubeedge.io/en/), to extend native containerized application orchestration capabilities to hosts at edge. With separate cloud and edge core modules, KubeEdge provides complete edge computing solutions while the installation may be complex and difficult.
|
||||
|
||||

|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
For more information about different components of KubeEdge, see [the KubeEdge documentation](https://docs.kubeedge.io/en/docs/kubeedge/#components).
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
After an edge node joins your cluster, the native KubeEdge cloud component requires you to manually configure iptables so that you can use commands such as `kubectl logs` and `kubectl exec`. In this connection, KubeSphere features an efficient and convenient way to add edge nodes to a Kubernetes cluster. It uses supporting components (for example, EdgeWatcher) to automatically configure iptables.
|
||||
|
||||

|
||||
|
||||
This tutorial demonstrates how to add an edge node to your cluster.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- You have enabled [KubeEdge](../../../pluggable-components/kubeedge/).
|
||||
- You have an available node to serve as an edge node. The node can run either Ubuntu (recommended) or CentOS. This tutorial uses Ubuntu 18.04 as an example.
|
||||
- Edge nodes, unlike Kubernetes cluster nodes, should work in a separate network.
|
||||
|
||||
## Configure an Edge Node
|
||||
|
||||
You need to install a container runtime and configure EdgeMesh on your edge node.
|
||||
|
||||
### Install a container runtime
|
||||
|
||||
[KubeEdge](https://docs.kubeedge.io/en/docs/) supports several container runtimes including Docker, containerd, CRI-O and Virtlet. For more information, see [the KubeEdge documentation](https://docs.kubeedge.io/en/docs/advanced/cri/).
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
If you use Docker as the container runtime for your edge node, Docker v19.3.0 or later must be installed so that KubeSphere can get Pod metrics of it.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### Configure EdgeMesh
|
||||
|
||||
Perform the following steps to configure [EdgeMesh](https://kubeedge.io/en/docs/advanced/edgemesh/) on your edge node.
|
||||
|
||||
1. Edit `/etc/nsswitch.conf`.
|
||||
|
||||
```bash
|
||||
vi /etc/nsswitch.conf
|
||||
```
|
||||
|
||||
2. Add the following content to this file:
|
||||
|
||||
```bash
|
||||
hosts: dns files mdns4_minimal [NOTFOUND=return]
|
||||
```
|
||||
|
||||
3. Save the file and run the following command to enable IP forwarding:
|
||||
|
||||
```bash
|
||||
sudo echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
|
||||
```
|
||||
|
||||
4. Verify your modification:
|
||||
|
||||
```bash
|
||||
sudo sysctl -p | grep ip_forward
|
||||
```
|
||||
|
||||
Expected result:
|
||||
|
||||
```bash
|
||||
net.ipv4.ip_forward = 1
|
||||
```
|
||||
|
||||
## Create Firewall Rules and Port Forwarding Rules
|
||||
|
||||
To make sure edge nodes can successfully talk to your cluster, you must forward ports for outside traffic to get into your network. Specifically, map an external port to the corresponding internal IP address (control plane node) and port based on the table below. Besides, you also need to create firewall rules to allow traffic to these ports (`10000` to `10004`).
|
||||
|
||||
| Fields | External Ports | Fields | Internal Ports |
|
||||
| ------------------- | -------------- | ----------------------- | -------------- |
|
||||
| `cloudhubPort` | `10000` | `cloudhubNodePort` | `30000` |
|
||||
| `cloudhubQuicPort` | `10001` | `cloudhubQuicNodePort` | `30001` |
|
||||
| `cloudhubHttpsPort` | `10002` | `cloudhubHttpsNodePort` | `30002` |
|
||||
| `cloudstreamPort` | `10003` | `cloudstreamNodePort` | `30003` |
|
||||
| `tunnelPort` | `10004` | `tunnelNodePort` | `30004` |
|
||||
|
||||
## Add an Edge Node
|
||||
|
||||
1. Log in to the console as `admin` and click **Platform** in the upper-left corner.
|
||||
|
||||
2. Select **Cluster Management** and navigate to **Edge Nodes** under **Nodes**.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
If you have enabled [multi-cluster management](../../../multicluster-management/), you need to select a cluster first.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
3. Click **Add**. In the dialog that appears, set a node name and enter an internal IP address of your edge node. Click **Validate** to continue.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- The internal IP address is only used for inter-node communication and you do not necessarily need to use the actual internal IP address of the edge node. As long as the IP address is successfully validated, you can use it.
|
||||
- It is recommended that you check the box to add the default taint.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
4. Copy the command automatically created under **Edge Node Configuration Command** and run it on your edge node.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
Make sure `wget` is installed on your edge node before you run the command.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
5. Close the dialog, refresh the page, and the edge node will appear in the list.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After an edge node is added, if you cannot see CPU and memory resource usage on the **Edge Nodes** page, make sure [Metrics Server](../../../pluggable-components/metrics-server/) 0.4.1 or later is installed in your cluster.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
6. After an edge node joins your cluster, some Pods may be scheduled to it while they remains in the `Pending` state on the edge node. Due to the tolerations some DaemonSets (for example, Calico) have, in the current version (KubeSphere 3.2.1), you need to manually patch some Pods so that they will not be schedule to the edge node.
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
|
||||
NodeSelectorPatchJson='{"spec":{"template":{"spec":{"nodeSelector":{"node-role.kubernetes.io/master": "","node-role.kubernetes.io/worker": ""}}}}}'
|
||||
|
||||
NoShedulePatchJson='{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}}}}'
|
||||
|
||||
edgenode="edgenode"
|
||||
if [ $1 ]; then
|
||||
edgenode="$1"
|
||||
fi
|
||||
|
||||
|
||||
namespaces=($(kubectl get pods -A -o wide |egrep -i $edgenode | awk '{print $1}' ))
|
||||
pods=($(kubectl get pods -A -o wide |egrep -i $edgenode | awk '{print $2}' ))
|
||||
length=${#namespaces[@]}
|
||||
|
||||
|
||||
for((i=0;i<$length;i++));
|
||||
do
|
||||
ns=${namespaces[$i]}
|
||||
pod=${pods[$i]}
|
||||
resources=$(kubectl -n $ns describe pod $pod | grep "Controlled By" |awk '{print $3}')
|
||||
echo "Patching for ns:"${namespaces[$i]}",resources:"$resources
|
||||
kubectl -n $ns patch $resources --type merge --patch "$NoShedulePatchJson"
|
||||
sleep 1
|
||||
done
|
||||
```
|
||||
|
||||
## Custom Configurations
|
||||
|
||||
To customize some configurations of an edge node, such as download URL and KubeEdge version, create a [ConfigMap](../../../project-user-guide/configuration/configmaps/) as below:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
region: zh # Download region.
|
||||
version: v1.6.1 # The version of KubeEdge to be installed. Allowed values are v1.5.0, v1.6.0, v1.6.1 (default) and v1.6.2.
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: edge-watcher-config
|
||||
namespace: kubeedge
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- You can specify `zh` or `en` for the field `region`. `zh` is the default value and the default download link is `https://kubeedge.pek3b.qingstor.com/bin/v1.6.1/$arch/keadm-v1.6.1-linux-$arch.tar.gz`. If you set `region` to `en`, the download link will be `https://github.com/kubesphere/kubeedge/releases/download/v1.6.1-kubesphere/keadm-v1.6.1-linux-amd64.tar.gz`.
|
||||
- The ConfigMap does not affect the configurations of exiting edge nodes in your cluster. It is only used to change the KubeEdge configurations to be used on a new edge node. More specifically, it decides [the command automatically created by KubeSphere mentioned above](#add-an-edge-node) which needs to be executed on the edge node.
|
||||
- While you can change the KubeEdge version to be installed on an edge node, it is recommended that the cloud and edge modules have the same KubeEdge version.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
## Remove an Edge Node
|
||||
|
||||
Before you remove an edge node, delete all your workloads running on it.
|
||||
|
||||
1. On your edge node, run the following commands:
|
||||
|
||||
```bash
|
||||
./keadm reset
|
||||
```
|
||||
|
||||
```
|
||||
apt remove mosquitto
|
||||
```
|
||||
|
||||
```bash
|
||||
rm -rf /var/lib/kubeedge /var/lib/edged /etc/kubeedge/ca /etc/kubeedge/certs
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
If you cannot delete the tmpfs-mounted folder, restart the node or unmount the folder first.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
2. Run the following command to remove the edge node from your cluster:
|
||||
|
||||
```bash
|
||||
kubectl delete node <edgenode-name>
|
||||
```
|
||||
|
||||
3. To uninstall KubeEdge from your cluster, run the following commands:
|
||||
|
||||
```bash
|
||||
helm uninstall kubeedge -n kubeedge
|
||||
```
|
||||
|
||||
```bash
|
||||
kubectl delete ns kubeedge
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After the uninstallation, you will not be able to add edge nodes to your cluster.
|
||||
|
||||
{{</ notice >}}
|
||||
|
|
@ -0,0 +1,226 @@
|
|||
---
|
||||
title: "Add Edge Nodes"
|
||||
keywords: 'Kubernetes, KubeSphere, KubeEdge'
|
||||
description: 'Add edge nodes to your cluster.'
|
||||
linkTitle: "Add Edge Nodes"
|
||||
weight: 3630
|
||||
---
|
||||
|
||||
KubeSphere leverages [KubeEdge](https://kubeedge.io/en/), to extend native containerized application orchestration capabilities to hosts at edge. With separate cloud and edge core modules, KubeEdge provides complete edge computing solutions while the installation may be complex and difficult.
|
||||
|
||||

|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
For more information about different components of KubeEdge, see [the KubeEdge documentation](https://docs.kubeedge.io/en/docs/kubeedge/#components).
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
After an edge node joins your cluster, the native KubeEdge cloud component requires you to manually configure iptables so that you can use commands such as `kubectl logs` and `kubectl exec`. In this connection, KubeSphere features an efficient and convenient way to add edge nodes to a Kubernetes cluster. It uses supporting components (for example, EdgeWatcher) to automatically configure iptables.
|
||||
|
||||

|
||||
|
||||
This tutorial demonstrates how to add an edge node to your cluster.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- You have enabled [KubeEdge](../../../pluggable-components/kubeedge/).
|
||||
- You have an available node to serve as an edge node. The node can run either Ubuntu (recommended) or CentOS. This tutorial uses Ubuntu 18.04 as an example.
|
||||
- Edge nodes, unlike Kubernetes cluster nodes, should work in a separate network.
|
||||
|
||||
## Configure an Edge Node
|
||||
|
||||
You need to install a container runtime and configure EdgeMesh on your edge node.
|
||||
|
||||
### Install a container runtime
|
||||
|
||||
[KubeEdge](https://docs.kubeedge.io/en/docs/) supports several container runtimes including Docker, containerd, CRI-O and Virtlet. For more information, see [the KubeEdge documentation](https://docs.kubeedge.io/en/docs/advanced/cri/).
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
If you use Docker as the container runtime for your edge node, Docker v19.3.0 or later must be installed so that KubeSphere can get Pod metrics of it.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### Configure EdgeMesh
|
||||
|
||||
Perform the following steps to configure [EdgeMesh](https://kubeedge.io/en/docs/advanced/edgemesh/) on your edge node.
|
||||
|
||||
1. Edit `/etc/nsswitch.conf`.
|
||||
|
||||
```bash
|
||||
vi /etc/nsswitch.conf
|
||||
```
|
||||
|
||||
2. Add the following content to this file:
|
||||
|
||||
```bash
|
||||
hosts: dns files mdns4_minimal [NOTFOUND=return]
|
||||
```
|
||||
|
||||
3. Save the file and run the following command to enable IP forwarding:
|
||||
|
||||
```bash
|
||||
sudo echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
|
||||
```
|
||||
|
||||
4. Verify your modification:
|
||||
|
||||
```bash
|
||||
sudo sysctl -p | grep ip_forward
|
||||
```
|
||||
|
||||
Expected result:
|
||||
|
||||
```bash
|
||||
net.ipv4.ip_forward = 1
|
||||
```
|
||||
|
||||
## Create Firewall Rules and Port Forwarding Rules

To make sure edge nodes can successfully talk to your cluster, you must forward ports so that outside traffic can reach your network. Specifically, map each external port to the corresponding internal IP address (the control plane node) and port based on the table below. You also need to create firewall rules that allow traffic to these ports (`10000` to `10004`).

| Fields              | External Ports | Fields                  | Internal Ports |
| ------------------- | -------------- | ----------------------- | -------------- |
| `cloudhubPort`      | `10000`        | `cloudhubNodePort`      | `30000`        |
| `cloudhubQuicPort`  | `10001`        | `cloudhubQuicNodePort`  | `30001`        |
| `cloudhubHttpsPort` | `10002`        | `cloudhubHttpsNodePort` | `30002`        |
| `cloudstreamPort`   | `10003`        | `cloudstreamNodePort`   | `30003`        |
| `tunnelPort`        | `10004`        | `tunnelNodePort`        | `30004`        |

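
For example, if the gateway in front of your cluster is a Linux machine using iptables, DNAT rules along the lines of the following sketch could implement this mapping. This is only an illustration that assumes `192.168.0.2` is the internal IP address of your control plane node; adapt it to your own firewall appliance or cloud console.

```bash
# Hypothetical sketch: forward external ports 10000-10004 to NodePorts 30000-30004
# on the control plane node (assumed to be 192.168.0.2).
for i in 0 1 2 3 4; do
  iptables -t nat -A PREROUTING -p tcp --dport "1000${i}" \
    -j DNAT --to-destination "192.168.0.2:3000${i}"
  iptables -A FORWARD -p tcp -d 192.168.0.2 --dport "3000${i}" -j ACCEPT
done
```
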
## Add an Edge Node

1. Log in to the console as `admin` and click **Platform** in the upper-left corner.

2. Select **Cluster Management** and navigate to **Edge Nodes** under **Nodes**.

   {{< notice note >}}

   If you have enabled [multi-cluster management](../../../multicluster-management/), you need to select a cluster first.

   {{</ notice >}}

3. Click **Add**. In the dialog that appears, set a node name and enter an internal IP address of your edge node. Click **Validate** to continue.

   {{< notice note >}}

   - The internal IP address is only used for inter-node communication, so you do not necessarily need to use the actual internal IP address of the edge node. As long as the IP address is successfully validated, you can use it.
   - It is recommended that you select the checkbox to add the default taint.

   {{</ notice >}}

4. Copy the command automatically created under **Edge Node Configuration Command** and run it on your edge node.

   {{< notice note >}}

   Make sure `wget` is installed on your edge node before you run the command.

   {{</ notice >}}

5. Close the dialog and refresh the page. The edge node will appear in the list.

   {{< notice note >}}

   After an edge node is added, if you cannot see CPU and memory resource usage on the **Edge Nodes** page, make sure [Metrics Server](../../../pluggable-components/metrics-server/) 0.4.1 or later is installed in your cluster.

   {{</ notice >}}

6. After an edge node joins your cluster, some Pods may be scheduled to it and remain in the `Pending` state there. Because of the tolerations that some DaemonSets (for example, Calico) have, in the current version (KubeSphere 3.2.1) you need to manually patch the affected workloads so that their Pods are not scheduled to the edge node. You can use the following script; a usage example follows it.

   ```bash
   #!/bin/bash

   # Patch JSON that pins workloads to master/worker nodes via nodeSelector (defined for reference; not used below).
   NodeSelectorPatchJson='{"spec":{"template":{"spec":{"nodeSelector":{"node-role.kubernetes.io/master": "","node-role.kubernetes.io/worker": ""}}}}}'

   # Patch JSON that adds node affinity so Pods are only scheduled to nodes without the edge role label.
   NoShedulePatchJson='{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}}}}'

   # The edge node name defaults to "edgenode"; pass a different name as the first argument.
   edgenode="edgenode"
   if [ -n "$1" ]; then
       edgenode="$1"
   fi

   # Collect the namespaces and names of all Pods currently placed on the edge node.
   namespaces=($(kubectl get pods -A -o wide | egrep -i "$edgenode" | awk '{print $1}'))
   pods=($(kubectl get pods -A -o wide | egrep -i "$edgenode" | awk '{print $2}'))
   length=${#namespaces[@]}

   # Patch the controller (for example, a DaemonSet) of each Pod so that it avoids edge nodes.
   for ((i = 0; i < length; i++)); do
       ns=${namespaces[$i]}
       pod=${pods[$i]}
       resources=$(kubectl -n "$ns" describe pod "$pod" | grep "Controlled By" | awk '{print $3}')
       echo "Patching for ns:${ns},resources:${resources}"
       kubectl -n "$ns" patch "$resources" --type merge --patch "$NoShedulePatchJson"
       sleep 1
   done
   ```

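
   For example, assuming you saved the script above as `patch-edge-workloads.sh` (a hypothetical file name) and your edge node is named `edge-node-1`, you could run it as follows:

   ```bash
   chmod +x patch-edge-workloads.sh
   ./patch-edge-workloads.sh edge-node-1
   ```
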
## Custom Configurations

To customize some configurations of an edge node, such as the download URL and the KubeEdge version, create a [ConfigMap](../../../project-user-guide/configuration/configmaps/) as below:

```yaml
apiVersion: v1
data:
  region: zh # Download region.
  version: v1.6.1 # The version of KubeEdge to be installed. Allowed values are v1.5.0, v1.6.0, v1.6.1 (default) and v1.6.2.
kind: ConfigMap
metadata:
  name: edge-watcher-config
  namespace: kubeedge
```

{{< notice note >}}

- You can specify `zh` or `en` for the field `region`. `zh` is the default value and the default download link is `https://kubeedge.pek3b.qingstor.com/bin/v1.6.1/$arch/keadm-v1.6.1-linux-$arch.tar.gz`. If you set `region` to `en`, the download link will be `https://github.com/kubesphere/kubeedge/releases/download/v1.6.1-kubesphere/keadm-v1.6.1-linux-amd64.tar.gz`.
- The ConfigMap does not affect the configurations of existing edge nodes in your cluster. It is only used to change the KubeEdge configurations to be used on a new edge node. More specifically, it determines [the command automatically created by KubeSphere mentioned above](#add-an-edge-node), which needs to be executed on the edge node.
- While you can change the KubeEdge version to be installed on an edge node, it is recommended that the cloud and edge modules have the same KubeEdge version.

{{</ notice >}}

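
As a minimal usage sketch, assuming you save the manifest above as `edge-watcher-config.yaml` (a hypothetical file name), you can apply and then inspect it with `kubectl`:

```bash
kubectl apply -f edge-watcher-config.yaml
kubectl -n kubeedge get configmap edge-watcher-config -o yaml
```
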
## Remove an Edge Node

Before you remove an edge node, delete all your workloads running on it.

1. On your edge node, run the following commands:

   ```bash
   ./keadm reset
   ```

   ```bash
   apt remove mosquitto
   ```

   ```bash
   rm -rf /var/lib/kubeedge /var/lib/edged /etc/kubeedge/ca /etc/kubeedge/certs
   ```

   {{< notice note >}}

   If you cannot delete the tmpfs-mounted folder, restart the node or unmount the folder first.

   {{</ notice >}}

2. Run the following command to remove the edge node from your cluster:

   ```bash
   kubectl delete node <edgenode-name>
   ```

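
   You can confirm the removal by listing the cluster nodes again and checking that the edge node no longer appears:

   ```bash
   kubectl get nodes
   ```
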
3. To uninstall KubeEdge from your cluster, run the following commands:

   ```bash
   helm uninstall kubeedge -n kubeedge
   ```

   ```bash
   kubectl delete ns kubeedge
   ```

   {{< notice note >}}

   After the uninstallation, you will not be able to add edge nodes to your cluster.

   {{</ notice >}}

---
title: "Set up an HA Cluster Using a Load Balancer"
keywords: 'KubeSphere, Kubernetes, HA, high availability, installation, configuration'
description: 'Learn how to create a highly available cluster using a load balancer.'
linkTitle: "Set up an HA Cluster Using a Load Balancer"
weight: 3220
---

You can set up a single-master Kubernetes cluster with KubeSphere installed by following the tutorial of [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/). Single-master clusters may be sufficient for development and testing in most cases. For a production environment, however, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) all run on the same master node, Kubernetes and KubeSphere become unavailable once that node goes down. Therefore, you need to set up a high-availability cluster by provisioning a load balancer in front of multiple master nodes. You can use any cloud or hardware load balancer (for example, F5). In addition, Keepalived with [HAProxy](https://www.haproxy.com/) or Nginx is also an alternative for creating high-availability clusters.

This tutorial demonstrates the general configurations of a high-availability cluster as you install KubeSphere on Linux.

## Architecture

Make sure you have prepared six Linux machines before you begin, with three of them serving as master nodes and the other three as worker nodes. The following image shows details of these machines, including their private IP addresses and roles. For more information about system and network requirements, see [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/#step-1-prepare-linux-hosts).



## Configure a Load Balancer

You must create a load balancer in your environment to listen on key ports (the listening rules are also known as listeners on some cloud platforms). Here is a table of recommended ports that need to be listened on.

| Service    | Protocol | Port  |
| ---------- | -------- | ----- |
| apiserver  | TCP      | 6443  |
| ks-console | TCP      | 30880 |
| http       | TCP      | 80    |
| https      | TCP      | 443   |

{{< notice note >}}

- Make sure your load balancer at least listens on the apiserver port.
- Depending on where your cluster is deployed, you may need to open ports in your security group to ensure external traffic is not blocked. For more information, see [Port Requirements](../../../installing-on-linux/introduction/port-firewall/).
- You can configure both internal and external load balancers on some cloud platforms. After assigning a public IP address to the external load balancer, you can use that IP address to access the cluster.
- For specific steps on how to configure load balancers on major public cloud platforms, see [Installing on Public Cloud](../../../installing-on-linux/public-cloud/install-kubesphere-on-azure-vms/).

{{</ notice >}}

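
If you choose to run your own load balancer with HAProxy (one of the alternatives mentioned above), a minimal sketch of the listening configuration for the apiserver port might look like the following. The backend addresses are assumptions based on the architecture above (`192.168.0.2` to `192.168.0.4` as the master nodes); adapt them to your environment.

```bash
# /etc/haproxy/haproxy.cfg (fragment) -- hypothetical example
frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcp-check
    balance roundrobin
    server master1 192.168.0.2:6443 check
    server master2 192.168.0.3:6443 check
    server master3 192.168.0.4:6443 check
```

With a self-managed load balancer, Keepalived is typically used to float a virtual IP address across two HAProxy instances so that the load balancer itself is not a single point of failure.
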
## Download KubeKey

[KubeKey](https://github.com/kubesphere/kubekey) is the next-generation installer that provides an easy, fast, and flexible way to install Kubernetes and KubeSphere. Follow the steps below to download KubeKey.

{{< tabs >}}

{{< tab "Good network connections to GitHub/Googleapis" >}}

Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
```

{{</ tab >}}

{{< tab "Poor network connections to GitHub/Googleapis" >}}

Run the following command first to make sure you download KubeKey from the correct zone.

```bash
export KKZONE=cn
```

Run the following command to download KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
```

{{< notice note >}}

If you transfer KubeKey to a new machine that also has poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.

{{</ notice >}}

{{</ tab >}}

{{</ tabs >}}

{{< notice note >}}

The commands above download the latest release (v2.0.0) of KubeKey. You can change the version number in the command to download a specific version.

{{</ notice >}}

Make `kk` executable:

```bash
chmod +x kk
```

Create an example configuration file with default configurations. Here Kubernetes v1.21.5 is used as an example.

```bash
./kk create config --with-kubesphere v3.2.1 --with-kubernetes v1.21.5
```

{{< notice note >}}

- Recommended Kubernetes versions for KubeSphere 3.2.1: v1.19.x, v1.20.x, v1.21.x, or v1.22.x (experimental). If you do not specify a Kubernetes version, KubeKey installs Kubernetes v1.21.5 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

{{</ notice >}}

## Deploy KubeSphere and Kubernetes

After you run the commands above, a configuration file `config-sample.yaml` will be created. Edit the file to add machine information, configure the load balancer, and more.

{{< notice note >}}

The file name may be different if you customize it.

{{</ notice >}}

### config-sample.yaml example

```yaml
spec:
  hosts:
  - {name: master1, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
  - {name: master2, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
  - {name: master3, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
  - {name: node1, address: 192.168.0.5, internalAddress: 192.168.0.5, user: ubuntu, password: Testing123}
  - {name: node2, address: 192.168.0.6, internalAddress: 192.168.0.6, user: ubuntu, password: Testing123}
  - {name: node3, address: 192.168.0.7, internalAddress: 192.168.0.7, user: ubuntu, password: Testing123}
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    control-plane:
    - master1
    - master2
    - master3
    worker:
    - node1
    - node2
    - node3
```

For more information about different fields in this configuration file, see [Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/) and [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/#2-edit-the-configuration-file).

### Configure the load balancer

```yaml
spec:
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: "192.168.0.xx"
    port: 6443
```

{{< notice note >}}

- The address and port should be indented by two spaces in `config-sample.yaml`.
- In most cases, you need to provide the **private IP address** of the load balancer for the field `address`. However, different cloud providers may have different configurations for load balancers. For example, if you configure a Server Load Balancer (SLB) on Alibaba Cloud, the platform assigns a public IP address to the SLB, which means you need to specify the public IP address for the field `address`.
- The domain name of the load balancer is `lb.kubesphere.local` by default for internal access.
- To use an internal load balancer, uncomment the field `internalLoadbalancer`.

{{</ notice >}}

### Persistent storage plugin configurations

For a production environment, you need to prepare persistent storage and configure the storage plugin (for example, CSI) in `config-sample.yaml` to define which storage service you want to use. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).

### Enable pluggable components (Optional)

KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable them either before or after installation. By default, KubeSphere will be installed with the minimal package if you do not enable them.

You can enable any of them according to your needs. It is highly recommended that you install these pluggable components to discover the full-stack features and capabilities provided by KubeSphere. Make sure your machines have sufficient CPU and memory before enabling them. See [Enable Pluggable Components](../../../pluggable-components/) for details.

### Start installation

After you complete the configuration, run the following command to start the installation:

```bash
./kk create cluster -f config-sample.yaml
```

### Verify installation

1. Run the following command to inspect the installation logs:

   ```bash
   kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
   ```

2. When you see the following message, your HA cluster has been successfully created.

   ```bash
   #####################################################
   ###              Welcome to KubeSphere!           ###
   #####################################################

   Console: http://192.168.0.3:30880
   Account: admin
   Password: P@88w0rd

   NOTES:
     1. After you log into the console, please check the
        monitoring status of service components in
        the "Cluster Management". If any service is not
        ready, please wait patiently until all components
        are up and running.
     2. Please change the default password after login.

   #####################################################
   https://kubesphere.io             2020-xx-xx xx:xx:xx
   #####################################################
   ```

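
3. Optionally, check that all six machines have joined the cluster and are ready. With the example configuration in this tutorial, `master1` to `master3` should carry the control plane (master) role and `node1` to `node3` the worker role, all in the `Ready` status.

   ```bash
   kubectl get node -o wide
   ```
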
|
||||
|
|
@ -0,0 +1,216 @@
|
|||
---
|
||||
title: "Set up an HA Cluster Using a Load Balancer"
|
||||
keywords: 'KubeSphere, Kubernetes, HA, high availability, installation, configuration'
|
||||
description: 'Learn how to create a highly available cluster using a load balancer.'
|
||||
linkTitle: "Set up an HA Cluster Using a Load Balancer"
|
||||
weight: 3220
|
||||
---
|
||||
|
||||
You can set up a single-master Kubernetes cluster with KubeSphere installed based on the tutorial of [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/). Single-master clusters may be sufficient for development and testing in most cases. For a production environment, however, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple control plane nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
|
||||
|
||||
This tutorial demonstrates the general configurations of a high-availability cluster as you install KubeSphere on Linux.
|
||||
|
||||
## Architecture
|
||||
|
||||
Make sure you have prepared six Linux machines before you begin, with three of them serving as master nodes and the other three as worker nodes. The following image shows details of these machines, including their private IP address and role. For more information about system and network requirements, see [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/#step-1-prepare-linux-hosts).
|
||||
|
||||

|
||||
|
||||
## Configure a Load Balancer
|
||||
|
||||
You must create a load balancer in your environment to listen (also known as listeners on some cloud platforms) on key ports. Here is a table of recommended ports that need to be listened on.
|
||||
|
||||
| Service | Protocol | Port |
|
||||
| ---------- | -------- | ----- |
|
||||
| apiserver | TCP | 6443 |
|
||||
| ks-console | TCP | 30880 |
|
||||
| http | TCP | 80 |
|
||||
| https | TCP | 443 |
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- Make sure your load balancer at least listens on the port of apiserver.
|
||||
|
||||
- You may need to open ports in your security group to ensure external traffic is not blocked depending on where your cluster is deployed. For more information, see [Port Requirements](../../../installing-on-linux/introduction/port-firewall/).
|
||||
- You can configure both internal and external load balancers on some cloud platforms. After assigning a public IP address to the external load balancer, you can use the IP address to access the cluster.
|
||||
- For more information about how to configure load balancers, see [Installing on Public Cloud](../../../installing-on-linux/public-cloud/install-kubesphere-on-azure-vms/) to see specific steps on major public cloud platforms.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
## Download KubeKey
|
||||
|
||||
[Kubekey](https://github.com/kubesphere/kubekey) is the next-gen installer which provides an easy, fast and flexible way to install Kubernetes and KubeSphere. Follow the steps below to download KubeKey.
|
||||
|
||||
{{< tabs >}}
|
||||
|
||||
{{< tab "Good network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{< tab "Poor network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Run the following command first to make sure you download KubeKey from the correct zone.
|
||||
|
||||
```bash
|
||||
export KKZONE=cn
|
||||
```
|
||||
|
||||
Run the following command to download KubeKey:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{</ tabs >}}
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The commands above download the latest release (v2.0.0) of KubeKey. You can change the version number in the command to download a specific version.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
Make `kk` executable:
|
||||
|
||||
```bash
|
||||
chmod +x kk
|
||||
```
|
||||
|
||||
Create an example configuration file with default configurations. Here Kubernetes v1.21.5 is used as an example.
|
||||
|
||||
```bash
|
||||
./kk create config --with-kubesphere v3.2.1 --with-kubernetes v1.21.5
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- Recommended Kubernetes versions for KubeSphere 3.2.1: v1.19.x, v1.20.x, v1.21.x or v1.22.x (experimental). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.21.5 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
|
||||
|
||||
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
|
||||
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
## Deploy KubeSphere and Kubernetes
|
||||
|
||||
After you run the commands above, a configuration file `config-sample.yaml` will be created. Edit the file to add machine information, configure the load balancer and more.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The file name may be different if you customize it.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### config-sample.yaml example
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
hosts:
|
||||
- {name: master1, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
|
||||
- {name: master2, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
|
||||
- {name: master3, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
|
||||
- {name: node1, address: 192.168.0.5, internalAddress: 192.168.0.5, user: ubuntu, password: Testing123}
|
||||
- {name: node2, address: 192.168.0.6, internalAddress: 192.168.0.6, user: ubuntu, password: Testing123}
|
||||
- {name: node3, address: 192.168.0.7, internalAddress: 192.168.0.7, user: ubuntu, password: Testing123}
|
||||
roleGroups:
|
||||
etcd:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
control-plane:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
worker:
|
||||
- node1
|
||||
- node2
|
||||
- node3
|
||||
```
|
||||
|
||||
For more information about different fields in this configuration file, see [Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/) and [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/#2-edit-the-configuration-file).
|
||||
|
||||
### Configure the load balancer
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
controlPlaneEndpoint:
|
||||
##Internal loadbalancer for apiservers
|
||||
#internalLoadbalancer: haproxy
|
||||
|
||||
domain: lb.kubesphere.local
|
||||
address: "192.168.0.xx"
|
||||
port: 6443
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- The address and port should be indented by two spaces in `config-sample.yaml`.
|
||||
- In most cases, you need to provide the **private IP address** of the load balancer for the field `address`. However, different cloud providers may have different configurations for load balancers. For example, if you configure a Server Load Balancer (SLB) on Alibaba Cloud, the platform assigns a public IP address to the SLB, which means you need to specify the public IP address for the field `address`.
|
||||
- The domain name of the load balancer is `lb.kubesphere.local` by default for internal access.
|
||||
- To use an internal load balancer, uncomment the field `internalLoadbalancer`.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### Persistent storage plugin configurations
|
||||
|
||||
For a production environment, you need to prepare persistent storage and configure the storage plugin (for example, CSI) in `config-sample.yaml` to define which storage service you want to use. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
|
||||
|
||||
### Enable pluggable components (Optional)
|
||||
|
||||
KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable which means you can enable them either before or after installation. By default, KubeSphere will be installed with the minimal package if you do not enable them.
|
||||
|
||||
You can enable any of them according to your demands. It is highly recommended that you install these pluggable components to discover the full-stack features and capabilities provided by KubeSphere. Make sure your machines have sufficient CPU and memory before enabling them. See [Enable Pluggable Components](../../../pluggable-components/) for details.
|
||||
|
||||
### Start installation
|
||||
|
||||
After you complete the configuration, you can execute the following command to start the installation:
|
||||
|
||||
```bash
|
||||
./kk create cluster -f config-sample.yaml
|
||||
```
|
||||
|
||||
### Verify installation
|
||||
|
||||
1. Run the following command to inspect the logs of installation.
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
2. When you see the following message, it means your HA cluster is successfully created.
|
||||
|
||||
```bash
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
|
||||
Console: http://192.168.0.3:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
|
||||
NOTES:
|
||||
1. After you log into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are up and running.
|
||||
2. Please change the default password after login.
|
||||
|
||||
#####################################################
|
||||
https://kubesphere.io 2020-xx-xx xx:xx:xx
|
||||
#####################################################
|
||||
```
|
||||
|
||||
|
|
@ -0,0 +1,216 @@
|
|||
---
|
||||
title: "Set up an HA Cluster Using a Load Balancer"
|
||||
keywords: 'KubeSphere, Kubernetes, HA, high availability, installation, configuration'
|
||||
description: 'Learn how to create a highly available cluster using a load balancer.'
|
||||
linkTitle: "Set up an HA Cluster Using a Load Balancer"
|
||||
weight: 3220
|
||||
---
|
||||
|
||||
You can set up a single-master Kubernetes cluster with KubeSphere installed based on the tutorial of [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/). Clusters with a control plane node may be sufficient for development and testing in most cases. For a production environment, however, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same control plane node, Kubernetes and KubeSphere will be unavailable once the control plane node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple control plane nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
|
||||
|
||||
This tutorial demonstrates the general configurations of a high-availability cluster as you install KubeSphere on Linux.
|
||||
|
||||
## Architecture
|
||||
|
||||
Make sure you have prepared six Linux machines before you begin, with three of them serving as master nodes and the other three as worker nodes. The following image shows details of these machines, including their private IP address and role. For more information about system and network requirements, see [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/#step-1-prepare-linux-hosts).
|
||||
|
||||

|
||||
|
||||
## Configure a Load Balancer
|
||||
|
||||
You must create a load balancer in your environment to listen (also known as listeners on some cloud platforms) on key ports. Here is a table of recommended ports that need to be listened on.
|
||||
|
||||
| Service | Protocol | Port |
|
||||
| ---------- | -------- | ----- |
|
||||
| apiserver | TCP | 6443 |
|
||||
| ks-console | TCP | 30880 |
|
||||
| http | TCP | 80 |
|
||||
| https | TCP | 443 |
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- Make sure your load balancer at least listens on the port of apiserver.
|
||||
|
||||
- You may need to open ports in your security group to ensure external traffic is not blocked depending on where your cluster is deployed. For more information, see [Port Requirements](../../../installing-on-linux/introduction/port-firewall/).
|
||||
- You can configure both internal and external load balancers on some cloud platforms. After assigning a public IP address to the external load balancer, you can use the IP address to access the cluster.
|
||||
- For more information about how to configure load balancers, see [Installing on Public Cloud](../../../installing-on-linux/public-cloud/install-kubesphere-on-azure-vms/) to see specific steps on major public cloud platforms.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
## Download KubeKey
|
||||
|
||||
[Kubekey](https://github.com/kubesphere/kubekey) is the next-gen installer which provides an easy, fast and flexible way to install Kubernetes and KubeSphere. Follow the steps below to download KubeKey.
|
||||
|
||||
{{< tabs >}}
|
||||
|
||||
{{< tab "Good network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{< tab "Poor network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Run the following command first to make sure you download KubeKey from the correct zone.
|
||||
|
||||
```bash
|
||||
export KKZONE=cn
|
||||
```
|
||||
|
||||
Run the following command to download KubeKey:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{</ tabs >}}
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The commands above download the latest release (v2.0.0) of KubeKey. You can change the version number in the command to download a specific version.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
Make `kk` executable:
|
||||
|
||||
```bash
|
||||
chmod +x kk
|
||||
```
|
||||
|
||||
Create an example configuration file with default configurations. Here Kubernetes v1.21.5 is used as an example.
|
||||
|
||||
```bash
|
||||
./kk create config --with-kubesphere v3.2.1 --with-kubernetes v1.21.5
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- Recommended Kubernetes versions for KubeSphere 3.2.1: v1.19.x, v1.20.x, v1.21.x or v1.22.x (experimental). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.21.5 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
|
||||
|
||||
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
|
||||
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
## Deploy KubeSphere and Kubernetes
|
||||
|
||||
After you run the commands above, a configuration file `config-sample.yaml` will be created. Edit the file to add machine information, configure the load balancer and more.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The file name may be different if you customize it.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### config-sample.yaml example
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
hosts:
|
||||
- {name: master1, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
|
||||
- {name: master2, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
|
||||
- {name: master3, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
|
||||
- {name: node1, address: 192.168.0.5, internalAddress: 192.168.0.5, user: ubuntu, password: Testing123}
|
||||
- {name: node2, address: 192.168.0.6, internalAddress: 192.168.0.6, user: ubuntu, password: Testing123}
|
||||
- {name: node3, address: 192.168.0.7, internalAddress: 192.168.0.7, user: ubuntu, password: Testing123}
|
||||
roleGroups:
|
||||
etcd:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
control-plane:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
worker:
|
||||
- node1
|
||||
- node2
|
||||
- node3
|
||||
```
|
||||
|
||||
For more information about different fields in this configuration file, see [Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/) and [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/#2-edit-the-configuration-file).
|
||||
|
||||
### Configure the load balancer
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
controlPlaneEndpoint:
|
||||
##Internal loadbalancer for apiservers
|
||||
#internalLoadbalancer: haproxy
|
||||
|
||||
domain: lb.kubesphere.local
|
||||
address: "192.168.0.xx"
|
||||
port: 6443
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- The address and port should be indented by two spaces in `config-sample.yaml`.
|
||||
- In most cases, you need to provide the **private IP address** of the load balancer for the field `address`. However, different cloud providers may have different configurations for load balancers. For example, if you configure a Server Load Balancer (SLB) on Alibaba Cloud, the platform assigns a public IP address to the SLB, which means you need to specify the public IP address for the field `address`.
|
||||
- The domain name of the load balancer is `lb.kubesphere.local` by default for internal access.
|
||||
- To use an internal load balancer, uncomment the field `internalLoadbalancer`.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### Persistent storage plugin configurations
|
||||
|
||||
For a production environment, you need to prepare persistent storage and configure the storage plugin (for example, CSI) in `config-sample.yaml` to define which storage service you want to use. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
|
||||
|
||||
### Enable pluggable components (Optional)
|
||||
|
||||
KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable which means you can enable them either before or after installation. By default, KubeSphere will be installed with the minimal package if you do not enable them.
|
||||
|
||||
You can enable any of them according to your demands. It is highly recommended that you install these pluggable components to discover the full-stack features and capabilities provided by KubeSphere. Make sure your machines have sufficient CPU and memory before enabling them. See [Enable Pluggable Components](../../../pluggable-components/) for details.
|
||||
|
||||
### Start installation
|
||||
|
||||
After you complete the configuration, you can execute the following command to start the installation:
|
||||
|
||||
```bash
|
||||
./kk create cluster -f config-sample.yaml
|
||||
```
|
||||
|
||||
### Verify installation
|
||||
|
||||
1. Run the following command to inspect the logs of installation.
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
2. When you see the following message, it means your HA cluster is successfully created.
|
||||
|
||||
```bash
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
|
||||
Console: http://192.168.0.3:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
|
||||
NOTES:
|
||||
1. After you log into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are up and running.
|
||||
2. Please change the default password after login.
|
||||
|
||||
#####################################################
|
||||
https://kubesphere.io 2020-xx-xx xx:xx:xx
|
||||
#####################################################
|
||||
```
|
||||
|
||||
|
|
@ -0,0 +1,216 @@
|
|||
---
|
||||
title: "Set up an HA Cluster Using a Load Balancer"
|
||||
keywords: 'KubeSphere, Kubernetes, HA, high availability, installation, configuration'
|
||||
description: 'Learn how to create a highly available cluster using a load balancer.'
|
||||
linkTitle: "Set up an HA Cluster Using a Load Balancer"
|
||||
weight: 3220
|
||||
---
|
||||
|
||||
You can set up Kubernetes cluster (a control plane node) with KubeSphere installed based on the tutorial of [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/). Clusters with a control plane node may be sufficient for development and testing in most cases. For a production environment, however, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same control plane node, Kubernetes and KubeSphere will be unavailable once the control plane node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple control plane nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
|
||||
|
||||
This tutorial demonstrates the general configurations of a high-availability cluster as you install KubeSphere on Linux.
|
||||
|
||||
## Architecture
|
||||
|
||||
Make sure you have prepared six Linux machines before you begin, with three of them serving as master nodes and the other three as worker nodes. The following image shows details of these machines, including their private IP address and role. For more information about system and network requirements, see [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/#step-1-prepare-linux-hosts).
|
||||
|
||||

|
||||
|
||||
## Configure a Load Balancer
|
||||
|
||||
You must create a load balancer in your environment to listen (also known as listeners on some cloud platforms) on key ports. Here is a table of recommended ports that need to be listened on.
|
||||
|
||||
| Service | Protocol | Port |
|
||||
| ---------- | -------- | ----- |
|
||||
| apiserver | TCP | 6443 |
|
||||
| ks-console | TCP | 30880 |
|
||||
| http | TCP | 80 |
|
||||
| https | TCP | 443 |
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- Make sure your load balancer at least listens on the port of apiserver.
|
||||
|
||||
- You may need to open ports in your security group to ensure external traffic is not blocked depending on where your cluster is deployed. For more information, see [Port Requirements](../../../installing-on-linux/introduction/port-firewall/).
|
||||
- You can configure both internal and external load balancers on some cloud platforms. After assigning a public IP address to the external load balancer, you can use the IP address to access the cluster.
- For more information about how to configure load balancers on major public cloud platforms, see [Installing on Public Cloud](../../../installing-on-linux/public-cloud/install-kubesphere-on-azure-vms/).

{{</ notice >}}

## Download KubeKey

[KubeKey](https://github.com/kubesphere/kubekey) is the next-gen installer that provides an easy, fast, and flexible way to install Kubernetes and KubeSphere. Follow the steps below to download KubeKey.

{{< tabs >}}

{{< tab "Good network connections to GitHub/Googleapis" >}}

Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
```

{{</ tab >}}

{{< tab "Poor network connections to GitHub/Googleapis" >}}

Run the following command first to make sure you download KubeKey from the correct zone.

```bash
export KKZONE=cn
```

Run the following command to download KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
```

{{< notice note >}}

After you download KubeKey, if you transfer it to a new machine that also has poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.

{{</ notice >}}

{{</ tab >}}

{{</ tabs >}}

{{< notice note >}}

The commands above download the latest release (v2.0.0) of KubeKey. You can change the version number in the command to download a specific version.

{{</ notice >}}

Make `kk` executable:

```bash
chmod +x kk
```

Create an example configuration file with default configurations. Here Kubernetes v1.21.5 is used as an example.

```bash
./kk create config --with-kubesphere v3.2.1 --with-kubernetes v1.21.5
```

{{< notice note >}}

- Recommended Kubernetes versions for KubeSphere 3.2.1: v1.19.x, v1.20.x, v1.21.x or v1.22.x (experimental). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.21.5 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).

- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

{{</ notice >}}

## Deploy KubeSphere and Kubernetes

After you run the commands above, a configuration file `config-sample.yaml` will be created. Edit the file to add machine information, configure the load balancer, and more.

{{< notice note >}}

The file name may be different if you customize it.

{{</ notice >}}

### config-sample.yaml example

```yaml
spec:
  hosts:
  - {name: master1, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
  - {name: master2, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
  - {name: master3, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
  - {name: node1, address: 192.168.0.5, internalAddress: 192.168.0.5, user: ubuntu, password: Testing123}
  - {name: node2, address: 192.168.0.6, internalAddress: 192.168.0.6, user: ubuntu, password: Testing123}
  - {name: node3, address: 192.168.0.7, internalAddress: 192.168.0.7, user: ubuntu, password: Testing123}
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    control-plane:
    - master1
    - master2
    - master3
    worker:
    - node1
    - node2
    - node3
```

For more information about different fields in this configuration file, see [Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/) and [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/#2-edit-the-configuration-file).

### Configure the load balancer

```yaml
spec:
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: "192.168.0.xx"
    port: 6443
```

{{< notice note >}}

- The address and port should be indented by two spaces in `config-sample.yaml`.
- In most cases, you need to provide the **private IP address** of the load balancer for the field `address`. However, different cloud providers may have different configurations for load balancers. For example, if you configure a Server Load Balancer (SLB) on Alibaba Cloud, the platform assigns a public IP address to the SLB, which means you need to specify the public IP address for the field `address`.
- The domain name of the load balancer is `lb.kubesphere.local` by default for internal access.
- To use an internal load balancer, uncomment the field `internalLoadbalancer`.

{{</ notice >}}
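
Once the first control plane node is up during installation, you can quickly confirm that the load balancer actually forwards traffic to the kube-apiserver. The commands below are a minimal sketch: `192.168.0.xx` is the placeholder address used above, the TCP check assumes `nc` (netcat) is installed, and `-k` is used only because certificate verification is not the point of this check.

```bash
# Check that the load balancer accepts connections on the apiserver port
nc -vz 192.168.0.xx 6443

# After the first control plane node is running, the apiserver should answer through the load balancer
curl -k https://192.168.0.xx:6443/healthz
```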

### Persistent storage plugin configurations

For a production environment, you need to prepare persistent storage and configure the storage plugin (for example, CSI) in `config-sample.yaml` to define which storage service you want to use. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
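
For illustration only, the snippet below sketches what a storage plugin entry under `addons` might look like in `config-sample.yaml`, using an NFS-client provisioner as the example. The chart name, repository, and `valuesFile` path are assumptions for this sketch; check the persistent storage guide linked above for the exact fields supported by your KubeKey version and storage backend.

```yaml
spec:
  addons:
  - name: nfs-client
    namespace: kube-system
    sources:
      chart:
        name: nfs-client-provisioner
        repo: https://charts.kubesphere.io/main
        # Hypothetical path on the taskbox to a Helm values file that points at your NFS server
        valuesFile: /home/ubuntu/nfs-client.yaml
```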

### Enable pluggable components (Optional)

KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable them either before or after installation. By default, KubeSphere will be installed with the minimal package if you do not enable them.

You can enable any of them according to your demands. It is highly recommended that you install these pluggable components to discover the full-stack features and capabilities provided by KubeSphere. Make sure your machines have sufficient CPU and memory before enabling them. See [Enable Pluggable Components](../../../pluggable-components/) for details.

### Start installation

After you complete the configuration, you can execute the following command to start the installation:

```bash
./kk create cluster -f config-sample.yaml
```

### Verify installation

1. Run the following command to inspect the installation logs.

   ```bash
   kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
   ```

2. When you see the following message, your HA cluster has been successfully created.

   ```bash
   #####################################################
   ###              Welcome to KubeSphere!           ###
   #####################################################

   Console: http://192.168.0.3:30880
   Account: admin
   Password: P@88w0rd

   NOTES:
     1. After you log into the console, please check the
        monitoring status of service components in
        the "Cluster Management". If any service is not
        ready, please wait patiently until all components
        are up and running.
     2. Please change the default password after login.

   #####################################################
   https://kubesphere.io             2020-xx-xx xx:xx:xx
   #####################################################
   ```
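
Beyond the welcome message, a couple of standard kubectl checks help confirm that every node joined and that the KubeSphere workloads are healthy. This is a generic sketch run from any control plane node; node names and counts depend on your `config-sample.yaml`.

```bash
# All six nodes from the example configuration should be listed and Ready,
# with the three control plane nodes reported as such
kubectl get nodes -o wide

# All Pods should eventually reach the Running or Completed state
kubectl get pods --all-namespaces
```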
@ -0,0 +1,70 @@
---
title: "Installing on Linux — Overview"
keywords: 'Kubernetes, KubeSphere, Linux, Installation'
description: 'Explore the general content in this chapter, including installation preparation, installation tool and method, and storage configurations.'
linkTitle: "Overview"
weight: 3110
---

As an open-source project on [GitHub](https://github.com/kubesphere), KubeSphere is home to a community with thousands of users. Many of them are running KubeSphere for their production workloads. For the installation on Linux, KubeSphere can be deployed both in clouds and in on-premises environments, such as AWS EC2, Azure VM and bare metal.

The installation process is easy and friendly as KubeSphere provides users with [KubeKey](https://github.com/kubesphere/kubekey), a lightweight installer that supports the installation of Kubernetes, KubeSphere and related add-ons. KubeKey not only helps users to create clusters online but also serves as an air-gapped installation solution.

Here is a list of available installation options.

- [All-in-one installation](../../../quick-start/all-in-one-on-linux/): Install KubeSphere on a single node. It is only for users to quickly get familiar with KubeSphere.
- [Multi-node installation](../multioverview/): Install KubeSphere on multiple nodes. It is for testing or development.
- [Air-gapped installation on Linux](../air-gapped-installation/): All images of KubeSphere have been encapsulated into a package. It is convenient for air-gapped installation on Linux machines.
- [High availability installation](../../../installing-on-linux/high-availability-configurations/ha-configuration/): Install a highly available KubeSphere cluster with multiple nodes, which is used for production.
- Minimal Packages: Only install the minimum required system components of KubeSphere. Here are the minimum resource requirements:
  - 2 CPUs
  - 4 GB RAM
  - 40 GB Storage
- [Full Packages](../../../pluggable-components/): Install all available system components of KubeSphere such as DevOps, service mesh, and alerting.

{{< notice note >}}

Not all options are mutually exclusive. For instance, you can deploy KubeSphere with the minimal package on multiple nodes in an air-gapped environment.

{{</ notice >}}

If you have an existing Kubernetes cluster, see [Overview of Installing on Kubernetes](../../../installing-on-kubernetes/introduction/overview/).

## Before Installation

- As images will be pulled from the Internet, your environment must have Internet access. Otherwise, you need to [install KubeSphere in an air-gapped environment](../air-gapped-installation/).
- For all-in-one installation, the single node serves as both the control plane and the worker.
- For multi-node installation, you need to provide host information in a configuration file.
- See [Port Requirements](../port-firewall/) before installation.

## KubeKey

[KubeKey](https://github.com/kubesphere/kubekey) provides an efficient approach to the installation and configuration of your cluster. You can use it to create, scale, and upgrade your Kubernetes cluster. It also allows you to install cloud-native add-ons (YAML or Chart) as you set up your cluster. For more information, see [KubeKey](../kubekey).
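
To give a feel for what this lifecycle looks like in practice, the commands below sketch the typical KubeKey operations. Flags are abbreviated and the file and node names are examples; consult the KubeKey documentation linked above for the exact usage supported by your KubeKey version.

```bash
# Create a cluster from a configuration file
./kk create cluster -f config-sample.yaml

# Add nodes newly declared in the configuration file to an existing cluster
./kk add nodes -f config-sample.yaml

# Remove a node from the cluster
./kk delete node node3 -f config-sample.yaml

# Upgrade Kubernetes and KubeSphere (versions shown are examples)
./kk upgrade --with-kubernetes v1.21.5 --with-kubesphere v3.2.1 -f config-sample.yaml
```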

## Quick Installation for Development and Testing

KubeSphere has decoupled some components since v2.1.0. By default, KubeKey only installs the necessary components, which makes for fast installation and minimal resource consumption. If you want to enable enhanced pluggable functionalities, see [Enable Pluggable Components](../../../pluggable-components/) for details.

The quick installation of KubeSphere is only for development or testing since it uses [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage services by default. If you want a production installation, see [High Availability Configurations](../../../installing-on-linux/high-availability-configurations/ha-configuration/).

## Storage Configurations

KubeSphere allows you to configure persistent storage services both before and after installation. Meanwhile, KubeSphere supports a variety of open-source storage solutions (for example, Ceph and GlusterFS) as well as commercial storage products. Refer to [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/) for detailed instructions on how to configure the storage class before you install KubeSphere.

For more information about how to set different storage classes for your workloads after you install KubeSphere, see [Persistent Volumes and Storage Classes](../../../cluster-administration/persistent-volume-and-storage-class/).

## Cluster Operation and Maintenance

### Add new nodes

With KubeKey, you can increase the number of nodes to meet higher resource needs after the installation, especially in production. For more information, see [Add New Nodes](../../../installing-on-linux/cluster-operation/add-new-nodes/).

### Remove nodes

You need to drain a node before you remove it. For more information, see [Remove Nodes](../../../installing-on-linux/cluster-operation/remove-nodes/).

## Uninstalling

Uninstalling KubeSphere means it will be removed from your machine, which is irreversible. Please be cautious with the operation.

For more information, see [Uninstall KubeSphere and Kubernetes](../../../installing-on-linux/uninstall-kubesphere-and-kubernetes/).
@ -0,0 +1,363 @@
---
title: "Install a Multi-node Kubernetes and KubeSphere Cluster"
keywords: 'Multi-node, Installation, KubeSphere'
description: 'Learn the general steps of installing KubeSphere and Kubernetes on a multi-node cluster.'
linkTitle: "Multi-node Installation"
weight: 3130
---

In a production environment, a single-node cluster cannot satisfy most needs, as it has limited resources and insufficient compute capability. Thus, single-node clusters are not recommended for large-scale data processing. Besides, a cluster of this kind is not highly available as it only has one node. On the other hand, a multi-node architecture is the most common and preferred choice in terms of application deployment and distribution.

This section gives you an overview of a single-master multi-node installation, including the concept, [KubeKey](https://github.com/kubesphere/kubekey/) and steps. For information about HA installation, refer to [High Availability Configurations](../../../installing-on-linux/high-availability-configurations/ha-configuration/), [Installing on Public Cloud](../../public-cloud/install-kubesphere-on-azure-vms/) and [Installing in On-premises Environment](../../on-premises/install-kubesphere-on-bare-metal/).

## Video Demonstration

{{< youtube nYOYk3VTSgo >}}

## Concept

A multi-node cluster is composed of at least one control plane node and one worker node. You can use any node as the **taskbox** to carry out the installation task. You can add additional nodes based on your needs (for example, for high availability) both before and after the installation.

- **Control Plane**. A control plane node generally controls and manages the whole system.
- **Worker**. Worker nodes run the actual applications deployed on them.

## Step 1: Prepare Linux Hosts

Please see the requirements for hardware and operating system shown below. To get started with multi-node installation in this tutorial, you need to prepare at least three hosts according to the following requirements. It is possible to install the [KubeSphere Container Platform](https://kubesphere.io/) on two nodes if they have sufficient resources.

### System requirements

| Systems                                                         | Minimum Requirements (Each node)            |
| --------------------------------------------------------------- | ------------------------------------------- |
| **Ubuntu** *16.04, 18.04*                                        | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
| **Debian** *Buster, Stretch*                                     | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
| **CentOS** *7*.x                                                 | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
| **Red Hat Enterprise Linux** *7*                                 | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
| **SUSE Linux Enterprise Server** *15* **/openSUSE Leap** *15.2*  | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |

{{< notice note >}}

- The path `/var/lib/docker` is mainly used to store the container data, and will gradually increase in size during use and operation. In the case of a production environment, it is recommended that `/var/lib/docker` be mounted on a separate drive.

- Only x86_64 CPUs are supported, and Arm CPUs are not fully supported at present.

{{</ notice >}}

### Node requirements

- All nodes must be accessible through `SSH`.
- Time must be synchronized across all nodes.
- `sudo`/`curl`/`openssl` should be available on all nodes. A quick check is sketched below.
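
As a quick sanity check of these requirements, you can run something like the following on each node. It is only a sketch and assumes a systemd-based distribution where `timedatectl` is available.

```bash
# Confirm the basic tools required by KubeKey are present
for cmd in sudo curl openssl; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done

# Confirm the system clock is being synchronized (systemd-based distributions)
timedatectl status | grep -i 'synchronized'
```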

### Container runtimes

Your cluster must have an available container runtime. If you use KubeKey to set up a cluster, KubeKey will install the latest version of Docker by default. Alternatively, you can install Docker or other container runtimes by yourself before you create a cluster.

| Supported Container Runtime            | Version |
| -------------------------------------- | ------- |
| Docker                                 | 19.3.8+ |
| containerd                             | Latest  |
| CRI-O (experimental, not fully tested) | Latest  |
| iSula (experimental, not fully tested) | Latest  |

{{< notice note >}}

A container runtime must be installed in advance if you want to deploy KubeSphere in an offline environment.

{{</ notice >}}

### Dependency requirements

KubeKey can install Kubernetes and KubeSphere together. The dependencies that need to be installed may differ depending on the Kubernetes version to be installed. You can refer to the list below to see if you need to install relevant dependencies on your nodes in advance.

| Dependency  | Kubernetes Version ≥ 1.18 | Kubernetes Version < 1.18 |
| ----------- | ------------------------- | ------------------------- |
| `socat`     | Required                  | Optional but recommended  |
| `conntrack` | Required                  | Optional but recommended  |
| `ebtables`  | Optional but recommended  | Optional but recommended  |
| `ipset`     | Optional but recommended  | Optional but recommended  |
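
If any of these are missing, they can usually be installed from the distribution's package repositories before running KubeKey. The package names below are the common ones and may differ slightly on your distribution.

```bash
# Debian/Ubuntu
sudo apt-get install -y socat conntrack ebtables ipset

# CentOS/RHEL
sudo yum install -y socat conntrack ebtables ipset
```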

### Network and DNS requirements

- Make sure the DNS address in `/etc/resolv.conf` is available. Otherwise, it may cause DNS issues in the cluster.
- If your network configuration uses firewall rules or security groups, you must ensure infrastructure components can communicate with each other through specific ports. It's recommended that you turn off the firewall or follow the guide [Port Requirements](../port-firewall/).
- Supported CNI plugins: Calico and Flannel. Others (such as Cilium and Kube-OVN) may also work but note that they have not been fully tested.

{{< notice tip >}}

- It's recommended that your OS be clean (without any other software installed). Otherwise, there may be conflicts.
- It is recommended that you prepare a registry mirror (booster) if you have trouble downloading images from `dockerhub.io`. See [Configure a Booster for Installation](../../../faq/installation/configure-booster/) and [Configure registry mirrors for the Docker daemon](https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon).

{{</ notice >}}

This example includes three hosts, as shown below, with the master node serving as the taskbox.

| Host IP     | Host Name | Role         |
| ----------- | --------- | ------------ |
| 192.168.0.2 | master    | master, etcd |
| 192.168.0.3 | node1     | worker       |
| 192.168.0.4 | node2     | worker       |

## Step 2: Download KubeKey

Follow the steps below to download [KubeKey](../kubekey).

{{< tabs >}}

{{< tab "Good network connections to GitHub/Googleapis" >}}

Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
```

{{</ tab >}}

{{< tab "Poor network connections to GitHub/Googleapis" >}}

Run the following command first to make sure you download KubeKey from the correct zone.

```bash
export KKZONE=cn
```

Run the following command to download KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
```

{{< notice note >}}

After you download KubeKey, if you transfer it to a new machine that also has poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.

{{</ notice >}}

{{</ tab >}}

{{</ tabs >}}

{{< notice note >}}

The commands above download the latest release (v2.0.0) of KubeKey. You can change the version number in the command to download a specific version.

{{</ notice >}}

Make `kk` executable:

```bash
chmod +x kk
```

## Step 3: Create a Kubernetes Multi-node Cluster

For multi-node installation, you need to create a cluster by specifying a configuration file.

### 1. Create an example configuration file

Command:

```bash
./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]
```

{{< notice note >}}

- Recommended Kubernetes versions for KubeSphere 3.2.1: v1.19.x, v1.20.x, v1.21.x or v1.22.x (experimental). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.21.5 by default. For more information about supported Kubernetes versions, see [Support Matrix](../kubekey/#support-matrix).

- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

{{</ notice >}}

Here are some examples for your reference:

- You can create an example configuration file with default configurations. You can also specify the file with a different filename, or in a different folder.

  ```bash
  ./kk create config [-f ~/myfolder/abc.yaml]
  ```

- You can specify a KubeSphere version that you want to install (for example, `--with-kubesphere v3.2.1`).

  ```bash
  ./kk create config --with-kubesphere [version]
  ```

### 2. Edit the configuration file of a Kubernetes multi-node cluster

A default file `config-sample.yaml` will be created if you do not change the name. Edit the file; here is an example of the configuration file of a multi-node cluster with one master node.

{{< notice note >}}

To customize Kubernetes related parameters, refer to [Kubernetes Cluster Configurations](../vars/).

{{</ notice >}}

```yaml
spec:
  hosts:
  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
  - {name: node1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
  - {name: node2, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
```

#### Hosts

List all your machines under `hosts` and add their detailed information as above.

`name`: The hostname of the instance.

`address`: The IP address you use for the connection between the taskbox and other instances through SSH. This can be either the public IP address or the private IP address depending on your environment. For example, some cloud platforms provide every instance with a public IP address which you use to access instances through SSH. In this case, you can provide the public IP address for this field.

`internalAddress`: The private IP address of the instance.

At the same time, you must provide the login information used to connect to each instance. Here are some examples:

- For password login:

  ```yaml
  hosts:
  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, port: 8022, user: ubuntu, password: Testing123}
  ```

  {{< notice note >}}

  In this tutorial, port `22` is the default port of SSH so you do not need to add it in the YAML file. Otherwise, you need to add the port number after the IP address as above.

  {{</ notice >}}

- For the default root user:

  ```yaml
  hosts:
  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, password: Testing123}
  ```

- For passwordless login with SSH keys:

  ```yaml
  hosts:
  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, privateKeyPath: "~/.ssh/id_rsa"}
  ```

{{< notice tip >}}

- Before you install KubeSphere, you can use the information provided under `hosts` (for example, IP addresses and passwords) to test the network connection between the taskbox and other instances using SSH, as shown in the sketch after this note.
- Make sure port `6443` is not being used by other services before the installation. Otherwise, it may cause conflicts as the default port of the API server is `6443`.

{{</ notice >}}
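
For example, a connectivity test from the taskbox might look like the following. The user, addresses, and key path are the placeholder values used in the snippets above; replace them with your own.

```bash
# Password or key-based login should succeed and return the remote hostname
ssh ubuntu@192.168.0.3 'hostname'

# When privateKeyPath is used, test with the same key
ssh -i ~/.ssh/id_rsa ubuntu@192.168.0.3 'hostname'

# Confirm nothing is already listening on the apiserver port on the future control plane node
ssh ubuntu@192.168.0.2 "ss -tln | grep ':6443 ' || echo 'port 6443 is free'"
```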

#### roleGroups

- `etcd`: etcd node names
- `control-plane`: Name of the control plane node
- `worker`: Worker node names

#### controlPlaneEndpoint (for HA installation only)

The `controlPlaneEndpoint` is where you provide your external load balancer information for an HA cluster. You need to prepare and configure the external load balancer if and only if you need to install multiple master nodes. Please note that the address and port should be indented by two spaces in `config-sample.yaml`, and `address` should be your load balancer's IP address. See [HA Configurations](../../../installing-on-linux/high-availability-configurations/ha-configuration/) for details.

#### addons

You can customize persistent storage plugins (for example, NFS Client, Ceph RBD, and GlusterFS) by specifying storage under the field `addons` in `config-sample.yaml`. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).

KubeKey will install [OpenEBS](https://openebs.io/) to provision [LocalPV](https://kubernetes.io/docs/concepts/storage/volumes/#local) for development and testing environments by default, which is convenient for new users. In this example of multi-node installation, the default storage class (local volume) is used. For production, you can use Ceph/GlusterFS/CSI or commercial products as persistent storage solutions.

{{< notice tip >}}

- You can enable the multi-cluster feature by editing the configuration file. For more information, see [Multi-cluster Management](../../../multicluster-management/).
- You can also select the components you want to install. For more information, see [Enable Pluggable Components](../../../pluggable-components/). For an example of a complete `config-sample.yaml` file, see [this file](https://github.com/kubesphere/kubekey/blob/release-1.2/docs/config-example.md).

{{</ notice >}}

When you finish editing, save the file.

### 3. Create a cluster using the configuration file

```bash
./kk create cluster -f config-sample.yaml
```

{{< notice note >}}

You need to change `config-sample.yaml` above to your own file if you use a different name.

{{</ notice >}}

The whole installation process may take 10-20 minutes, depending on your machine and network.

### 4. Verify the installation

When the installation finishes, you can see the content as follows:

```bash
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.2:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             20xx-xx-xx xx:xx:xx
#####################################################
```

Now, you will be able to access the web console of KubeSphere at `<NodeIP>:30880` with the default account and password (`admin/P@88w0rd`).

{{< notice note >}}

To access the console, you may need to configure port forwarding rules depending on your environment. Please also make sure port `30880` is opened in your security group.

{{</ notice >}}
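
If the console does not open, a few checks from the taskbox can help narrow things down. This is a sketch that assumes kubectl is already configured on that node.

```bash
# The ks-console Service is exposed as a NodePort (30880 by default)
kubectl get svc -n kubesphere-system ks-console

# All nodes should be Ready and the KubeSphere system Pods should be Running
kubectl get nodes
kubectl get pods -n kubesphere-system
```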

![Multi-node installation](/images/docs/installing-on-linux/introduction/multi-node-installation/login.png)

## Enable kubectl Autocompletion

KubeKey does not enable kubectl autocompletion. Follow the steps below to turn it on:

{{< notice note >}}

Make sure bash-completion is installed and works.

{{</ notice >}}

```bash
# Install bash-completion
apt-get install bash-completion

# Source the completion script in your ~/.bashrc file
echo 'source <(kubectl completion bash)' >>~/.bashrc

# Add the completion script to the /etc/bash_completion.d directory
kubectl completion bash >/etc/bash_completion.d/kubectl
```

Detailed information can be found [here](https://kubernetes.io/docs/tasks/tools/install-kubectl/#enabling-shell-autocompletion).

## Code Demonstration

<script src="https://asciinema.org/a/368752.js" id="asciicast-368752" async></script>
|
||||
|
|
@ -0,0 +1,363 @@
|
|||
---
|
||||
title: "Install a Multi-node Kubernetes and KubeSphere Cluster"
|
||||
keywords: 'Multi-node, Installation, KubeSphere'
|
||||
description: 'Learn the general steps of installing KubeSphere and Kubernetes on a multi-node cluster.'
|
||||
linkTitle: "Multi-node Installation"
|
||||
weight: 3130
|
||||
---
|
||||
|
||||
In a production environment, a single-node cluster cannot satisfy most of the needs as the cluster has limited resources with insufficient compute capabilities. Thus, single-node clusters are not recommended for large-scale data processing. Besides, a cluster of this kind is not available with high availability as it only has one node. On the other hand, a multi-node architecture is the most common and preferred choice in terms of application deployment and distribution.
|
||||
|
||||
This section gives you an overview of a single-master multi-node installation, including the concept, [KubeKey](https://github.com/kubesphere/kubekey/) and steps. For information about HA installation, refer to [High Availability Configurations](../../../installing-on-linux/high-availability-configurations/ha-configuration/), [Installing on Public Cloud](../../public-cloud/install-kubesphere-on-azure-vms/) and [Installing in On-premises Environment](../../on-premises/install-kubesphere-on-bare-metal/).
|
||||
|
||||
## Video Demonstration
|
||||
|
||||
{{< youtube nYOYk3VTSgo >}}
|
||||
|
||||
## Concept
|
||||
|
||||
A multi-node cluster is composed of at least one control plane node and one worker node. You can use any node as the **taskbox** to carry out the installation task. You can add additional nodes based on your needs (for example, for high availability) both before and after the installation.
|
||||
|
||||
- **Control Plane**. A control plane node generally controls and manages the whole system.
|
||||
- **Worker**. Worker nodes run the actual applications deployed on them.
|
||||
|
||||
## Step 1: Prepare Linux Hosts
|
||||
|
||||
Please see the requirements for hardware and operating system shown below. To get started with multi-node installation in this tutorial, you need to prepare at least three hosts according to the following requirements. It is possible to install the [KubeSphere Container Platform](https://kubesphere.io/) on two nodes if they have sufficient resources.
|
||||
|
||||
### System requirements
|
||||
|
||||
| Systems | Minimum Requirements (Each node) |
|
||||
| ------------------------------------------------------ | ------------------------------------------- |
|
||||
| **Ubuntu** *16.04, 18.04* | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
|
||||
| **Debian** *Buster, Stretch* | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
|
||||
| **CentOS** *7*.x | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
|
||||
| **Red Hat Enterprise Linux** *7* | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
|
||||
| **SUSE Linux Enterprise Server** *15* **/openSUSE Leap** *15.2* | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- The path `/var/lib/docker` is mainly used to store the container data, and will gradually increase in size during use and operation. In the case of a production environment, it is recommended that `/var/lib/docker` should mount a drive separately.
|
||||
|
||||
- Only x86_64 CPUs are supported, and Arm CPUs are not fully supported at present.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### Node requirements
|
||||
|
||||
- All nodes must be accessible through `SSH`.
|
||||
- Time synchronization for all nodes.
|
||||
- `sudo`/`curl`/`openssl` should be used in all nodes.
|
||||
|
||||
### Container runtimes
|
||||
|
||||
Your cluster must have an available container runtime. If you use KubeKey to set up a cluster, KubeKey will install the latest version of Docker by default. Alternatively, you can install Docker or other container runtimes by yourself before you create a cluster.
|
||||
|
||||
| Supported Container Runtime | Version |
|
||||
| --------------------------- | ------- |
|
||||
| Docker | 19.3.8+ |
|
||||
| containerd | Latest |
|
||||
| CRI-O (experimental, not fully tested) | Latest |
|
||||
| iSula (experimental, not fully tested) | Latest |
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
A container runtime must be installed in advance if you want to deploy KubeSphere in an offline environment.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### Dependency requirements
|
||||
|
||||
KubeKey can install Kubernetes and KubeSphere together. The dependency that needs to be installed may be different based on the Kubernetes version to be installed. You can refer to the list below to see if you need to install relevant dependencies on your node in advance.
|
||||
|
||||
| Dependency | Kubernetes Version ≥ 1.18 | Kubernetes Version < 1.18 |
| ----------- | ------------------------- | ------------------------- |
| `socat` | Required | Optional but recommended |
| `conntrack` | Required | Optional but recommended |
| `ebtables` | Optional but recommended | Optional but recommended |
| `ipset` | Optional but recommended | Optional but recommended |
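For example, you might pre-install these dependencies with your distribution's package manager (the package names below are the usual ones; adjust them to your OS):

```bash
# Debian/Ubuntu
sudo apt-get update && sudo apt-get install -y socat conntrack ebtables ipset

# CentOS/RHEL (the conntrack CLI is packaged as conntrack-tools)
sudo yum install -y socat conntrack-tools ebtables ipset
```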
|
||||
|
||||
### Network and DNS requirements
|
||||
|
||||
- Make sure the DNS address in `/etc/resolv.conf` is available (you can check it quickly as shown after this list). Otherwise, it may cause DNS issues in the cluster.
|
||||
- If your network configuration uses firewall rules or security groups, you must ensure infrastructure components can communicate with each other through specific ports. It's recommended that you turn off the firewall or follow the guide [Port Requirements](../port-firewall/).
|
||||
- Supported CNI plugins: Calico and Flannel. Others (such as Cilium and Kube-OVN) may also work but note that they have not been fully tested.
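For example, a quick way to confirm DNS resolution works on a host (the domain below is only an illustration; any resolvable name will do):

```bash
cat /etc/resolv.conf
getent hosts kubesphere.io   # prints an IP address if DNS resolution works
```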
|
||||
|
||||
{{< notice tip >}}
|
||||
|
||||
- It's recommended that your OS be clean (without any other software installed). Otherwise, there may be conflicts.
|
||||
- A registry mirror (booster) is recommended to be prepared if you have trouble downloading images from `dockerhub.io`. See [Configure a Booster for Installation](../../../faq/installation/configure-booster/) and [Configure registry mirrors for the Docker daemon](https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon).
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
This example includes three hosts as below with the master node serving as the taskbox.
|
||||
|
||||
| Host IP | Host Name | Role |
| ----------- | --------- | ------------ |
| 192.168.0.2 | master | master, etcd |
| 192.168.0.3 | node1 | worker |
| 192.168.0.4 | node2 | worker |
|
||||
|
||||
## Step 2: Download KubeKey
|
||||
|
||||
Follow the step below to download [KubeKey](../kubekey).
|
||||
|
||||
{{< tabs >}}
|
||||
|
||||
{{< tab "Good network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{< tab "Poor network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Run the following command first to make sure you download KubeKey from the correct zone.
|
||||
|
||||
```bash
|
||||
export KKZONE=cn
|
||||
```
|
||||
|
||||
Run the following command to download KubeKey:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{</ tabs >}}
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The commands above download the latest release (v2.0.0) of KubeKey. You can change the version number in the command to download a specific version.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
Make `kk` executable:
|
||||
|
||||
```bash
|
||||
chmod +x kk
|
||||
```
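Optionally, verify that the binary runs before you proceed (assuming your KubeKey release supports the `version` subcommand):

```bash
./kk version
```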
|
||||
|
||||
## Step 3: Create a Kubernetes Multi-node Cluster
|
||||
|
||||
For multi-node installation, you need to create a cluster by specifying a configuration file.
|
||||
|
||||
### 1. Create an example configuration file
|
||||
|
||||
Command:
|
||||
|
||||
```bash
|
||||
./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- Recommended Kubernetes versions for KubeSphere 3.2.1: v1.19.x, v1.20.x, v1.21.x or v1.22.x (experimental). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.21.5 by default. For more information about supported Kubernetes versions, see [Support Matrix](../kubekey/#support-matrix).
|
||||
|
||||
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
|
||||
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
Here are some examples for your reference:
|
||||
|
||||
- You can create an example configuration file with default configurations. You can also specify the file with a different filename, or in a different folder.
|
||||
|
||||
```bash
|
||||
./kk create config [-f ~/myfolder/abc.yaml]
|
||||
```
|
||||
|
||||
- You can specify a KubeSphere version that you want to install (for example, `--with-kubesphere v3.2.1`).
|
||||
|
||||
```bash
|
||||
./kk create config --with-kubesphere [version]
|
||||
```
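For instance, combining the flags above, you could generate a configuration file for the versions recommended in this guide (adjust the versions and file name to your needs):

```bash
./kk create config --with-kubernetes v1.21.5 --with-kubesphere v3.2.1 -f config-sample.yaml
```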
|
||||
|
||||
### 2. Edit the configuration file of a Kubernetes multi-node cluster
|
||||
|
||||
A default file `config-sample.yaml` will be created if you do not specify a different name. Edit the file as needed. Below is an example configuration for a multi-node cluster with one master node.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
To customize Kubernetes related parameters, refer to [Kubernetes Cluster Configurations](../vars/).
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
```yaml
spec:
  hosts:
  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
  - {name: node1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
  - {name: node2, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
```
|
||||
|
||||
#### Hosts
|
||||
|
||||
List all your machines under `hosts` and add their detailed information as above.
|
||||
|
||||
`name`: The hostname of the instance.
|
||||
|
||||
`address`: The IP address you use for the connection between the taskbox and other instances through SSH. This can be either the public IP address or the private IP address depending on your environment. For example, some cloud platforms provide every instance with a public IP address which you use to access instances through SSH. In this case, you can provide the public IP address for this field.
|
||||
|
||||
`internalAddress`: The private IP address of the instance.
|
||||
|
||||
At the same time, you must provide the login information used to connect to each instance. Here are some examples:
|
||||
|
||||
- For password login:
|
||||
|
||||
```yaml
|
||||
hosts:
|
||||
- {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, port: 8022, user: ubuntu, password: Testing123}
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
Port `22` is the default SSH port, so you do not need to specify it in the YAML file. If a host uses a different SSH port, add the port number after the IP address as shown above.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
- For the default root user:
|
||||
|
||||
```yaml
|
||||
hosts:
|
||||
- {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, password: Testing123}
|
||||
```
|
||||
|
||||
- For passwordless login with SSH keys:
|
||||
|
||||
```yaml
|
||||
hosts:
|
||||
- {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, privateKeyPath: "~/.ssh/id_rsa"}
|
||||
```
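If key-based login is not set up yet, one common way to prepare it from the taskbox is shown below (an illustrative sketch using the `ubuntu` user and addresses from the examples above):

```bash
# Generate a key pair on the taskbox if ~/.ssh/id_rsa does not exist yet
ssh-keygen -t rsa -b 4096

# Copy the public key to every node in the cluster
ssh-copy-id ubuntu@192.168.0.2
ssh-copy-id ubuntu@192.168.0.3
ssh-copy-id ubuntu@192.168.0.4
```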
|
||||
|
||||
{{< notice tip >}}
|
||||
|
||||
- Before you install KubeSphere, you can use the information provided under `hosts` (for example, IP addresses and passwords) to test the SSH connection between the taskbox and the other instances, as shown in the example after this note.
|
||||
- Make sure port `6443` is not being used by other services before the installation. Otherwise, it may cause conflicts as the default port of the API server is `6443`.
|
||||
|
||||
{{</ notice >}}
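For example, with the hosts listed above, a quick connectivity check from the taskbox might look like this (a sketch; adjust user names and addresses to your environment):

```bash
# Each command should print the remote hostname without errors
ssh ubuntu@192.168.0.3 'hostname'   # node1
ssh ubuntu@192.168.0.4 'hostname'   # node2
```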
|
||||
|
||||
#### roleGroups
|
||||
|
||||
- `etcd`: etcd node names
|
||||
- `control-plane`: Control plane node names
|
||||
- `worker`: Worker node names
|
||||
|
||||
#### controlPlaneEndpoint (for HA installation only)
|
||||
|
||||
The `controlPlaneEndpoint` is where you provide your external load balancer information for an HA cluster. You need to prepare and configure the external load balancer if and only if you need to install multiple control plane nodes. Please note that the address and port should be indented by two spaces in `config-sample.yaml`, and `address` should be your load balancer's IP address. See [HA Configurations](../../../installing-on-linux/high-availability-configurations/ha-configuration/) for details.
|
||||
|
||||
#### addons
|
||||
|
||||
You can customize persistent storage plugins (for example, NFS Client, Ceph RBD, and GlusterFS) by specifying storage under the field `addons` in `config-sample.yaml`. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
|
||||
|
||||
KubeKey will install [OpenEBS](https://openebs.io/) to provision [LocalPV](https://kubernetes.io/docs/concepts/storage/volumes/#local) for development and testing environments by default, which is convenient for new users. In this example of multi-node installation, the default storage class (local volume) is used. For production, you can use Ceph, GlusterFS, CSI plugins, or commercial products as persistent storage solutions.
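Once the cluster is installed, you can quickly check which StorageClass is marked as the default (output varies by environment):

```bash
kubectl get storageclass
```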
|
||||
|
||||
{{< notice tip >}}
|
||||
|
||||
- You can enable the multi-cluster feature by editing the configuration file. For more information, see [Multi-cluster Management](../../../multicluster-management/).
|
||||
- You can also select the components you want to install. For more information, see [Enable Pluggable Components](../../../pluggable-components/). For an example of a complete `config-sample.yaml` file, see [this file](https://github.com/kubesphere/kubekey/blob/release-1.2/docs/config-example.md).
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
When you finish editing, save the file.
|
||||
|
||||
### 3. Create a cluster using the configuration file
|
||||
|
||||
```bash
|
||||
./kk create cluster -f config-sample.yaml
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
You need to change `config-sample.yaml` above to your own file if you use a different name.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
The whole installation process may take 10-20 minutes, depending on your machine and network.
|
||||
|
||||
### 4. Verify the installation
|
||||
|
||||
When the installation finishes, you can see the content as follows:
|
||||
|
||||
```bash
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
|
||||
Console: http://192.168.0.2:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
|
||||
NOTES:
|
||||
1. After you log into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are up and running.
|
||||
2. Please change the default password after login.
|
||||
|
||||
#####################################################
|
||||
https://kubesphere.io 20xx-xx-xx xx:xx:xx
|
||||
#####################################################
|
||||
```
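Before opening the console, you may also want to confirm from the taskbox that all nodes have joined and the system Pods are running (it can take a few minutes for everything to become Ready):

```bash
kubectl get nodes
kubectl get pods --all-namespaces
```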
|
||||
|
||||
Now, you will be able to access the web console of KubeSphere at `<NodeIP>:30880` with the default account and password (`admin/P@88w0rd`).
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
To access the console, you may need to configure port forwarding rules depending on your environment. Please also make sure port `30880` is opened in your security group.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||

|
||||
|
||||
## Enable kubectl Autocompletion
|
||||
|
||||
KubeKey does not enable kubectl autocompletion. Follow the steps below to turn it on:
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
Make sure bash-completion is installed and working.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
```bash
|
||||
# Install bash-completion
|
||||
apt-get install bash-completion
|
||||
|
||||
# Source the completion script in your ~/.bashrc file
|
||||
echo 'source <(kubectl completion bash)' >>~/.bashrc
|
||||
|
||||
# Add the completion script to the /etc/bash_completion.d directory
|
||||
kubectl completion bash >/etc/bash_completion.d/kubectl
|
||||
```
|
||||
|
||||
Detailed information can be found [here](https://kubernetes.io/docs/tasks/tools/install-kubectl/#enabling-shell-autocompletion).
|
||||
|
||||
## Code Demonstration
|
||||
<script src="https://asciinema.org/a/368752.js" id="asciicast-368752" async></script>
---
|
||||
title: "Deploy KubeSphere on VMware vSphere"
|
||||
keywords: 'Kubernetes, KubeSphere, VMware-vSphere, installation'
|
||||
description: 'Learn how to create a high-availability cluster on VMware vSphere.'
|
||||
|
||||
|
||||
weight: 3510
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
For a production environment, we need to consider the high availability of the cluster. If the key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAProxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
|
||||
|
||||
This tutorial walks you through an example of how to set up Keepalived and HAProxy, and implement high availability for the master and etcd nodes using load balancers on VMware vSphere.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Please make sure that you already know how to install KubeSphere with a multi-node cluster by following the [guide](../../introduction/multioverview/). This tutorial focuses more on how to configure load balancers.
|
||||
- You need a VMware vSphere account to create VMs.
|
||||
- Considering data persistence, for a production environment, we recommend you to prepare persistent storage and create a **default** StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
|
||||
|
||||
## Architecture
|
||||
|
||||

|
||||
|
||||
## Prepare Linux Hosts
|
||||
|
||||
This tutorial creates 8 virtual machines of **CentOS Linux release 7.6.1810 (Core)** for the default minimal installation. Each machine has 2 CPU cores, 4 GB of memory, and 40 GB of disk space.
|
||||
|
||||
| Host IP | Host Name | Role |
| --- | --- | --- |
| 10.10.71.214 | master1 | master, etcd |
| 10.10.71.73 | master2 | master, etcd |
| 10.10.71.62 | master3 | master, etcd |
| 10.10.71.75 | node1 | worker |
| 10.10.71.76 | node2 | worker |
| 10.10.71.79 | node3 | worker |
| 10.10.71.67 | vip | vip (No need to create a VM) |
| 10.10.71.77 | lb-0 | lb (Keepalived + HAProxy) |
| 10.10.71.66 | lb-1 | lb (Keepalived + HAProxy) |
|
||||
|
||||
{{< notice note >}}
|
||||
You do not need to create a virtual machine for `vip` (i.e. Virtual IP) above, so only 8 virtual machines need to be created.
|
||||
{{</ notice >}}
|
||||
|
||||
You can follow the New Virtual Machine wizard to create a virtual machine to place in the VMware Host Client inventory.
|
||||
|
||||

|
||||
|
||||
1. In the first step **Select a creation type**, you can deploy a virtual machine from an OVF or OVA file, or register an existing virtual machine directly.
|
||||
|
||||

|
||||
|
||||
2. When you create a new virtual machine, provide a unique name for the virtual machine to distinguish it from existing virtual machines on the host you are managing.
|
||||
|
||||

|
||||
|
||||
3. Select a compute resource and storage (datastore) for the configuration and disk files. You can select the datastore that has the most suitable properties, such as size, speed, and availability, for your virtual machine storage.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
4. Select a guest operating system. The wizard will provide the appropriate defaults for the operating system installation.
|
||||
|
||||

|
||||
|
||||
5. Before you finish deploying a new virtual machine, you have the option to set **Virtual Hardware** and **VM Options**. You can refer to the images below for part of the fields.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
6. On the **Ready to complete** page, review the configuration selections you have made for the virtual machine, and then click **Finish** in the bottom-right corner to continue.
|
||||
|
||||

|
||||
|
||||
## Install a Load Balancer using Keepalived and HAProxy
|
||||
|
||||
For a production environment, you have to prepare an external load balancer for your multiple-master cluster. If you do not have a load balancer, you can install it using Keepalived and HAProxy. If you are provisioning a development or testing environment by installing a single-master cluster, please skip this section.
|
||||
|
||||
### Yum Install
|
||||
|
||||
Run the following command on both load balancer hosts, lb-0 (`10.10.71.77`) and lb-1 (`10.10.71.66`), to install Keepalived, HAProxy, and psmisc:
|
||||
|
||||
```bash
|
||||
yum install keepalived haproxy psmisc -y
|
||||
```
|
||||
|
||||
### Configure HAProxy
|
||||
|
||||
On the servers with IP `10.10.71.77` and `10.10.71.66`, configure HAProxy as follows.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The configuration is the same on both lb machines. Make sure the backend server addresses point to your control plane (kube-apiserver) nodes.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
```yaml
|
||||
# HAProxy Configure /etc/haproxy/haproxy.cfg
|
||||
global
|
||||
log 127.0.0.1 local2
|
||||
chroot /var/lib/haproxy
|
||||
pidfile /var/run/haproxy.pid
|
||||
maxconn 4000
|
||||
user haproxy
|
||||
group haproxy
|
||||
daemon
|
||||
# turn on stats unix socket
|
||||
stats socket /var/lib/haproxy/stats
|
||||
#---------------------------------------------------------------------
|
||||
# common defaults that all the 'listen' and 'backend' sections will
|
||||
# use if not designated in their block
|
||||
#---------------------------------------------------------------------
|
||||
defaults
|
||||
log global
|
||||
option httplog
|
||||
option dontlognull
|
||||
timeout connect 5000
|
||||
timeout client 5000
|
||||
timeout server 5000
|
||||
#---------------------------------------------------------------------
|
||||
# main frontend which proxies to the backends
|
||||
#---------------------------------------------------------------------
|
||||
frontend kube-apiserver
|
||||
bind *:6443
|
||||
mode tcp
|
||||
option tcplog
|
||||
default_backend kube-apiserver
|
||||
#---------------------------------------------------------------------
|
||||
# round robin balancing between the various backends
|
||||
#---------------------------------------------------------------------
|
||||
backend kube-apiserver
|
||||
mode tcp
|
||||
option tcplog
|
||||
balance roundrobin
|
||||
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
|
||||
server kube-apiserver-1 10.10.71.214:6443 check
|
||||
server kube-apiserver-2 10.10.71.73:6443 check
|
||||
server kube-apiserver-3 10.10.71.62:6443 check
|
||||
```
|
||||
|
||||
Check the configuration syntax before you start HAProxy:
|
||||
|
||||
```bash
|
||||
haproxy -f /etc/haproxy/haproxy.cfg -c
|
||||
```
|
||||
|
||||
Restart HAProxy and enable it to start on boot:
|
||||
|
||||
```bash
|
||||
systemctl restart haproxy && systemctl enable haproxy
|
||||
```
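You can optionally confirm that HAProxy is active and listening on the kube-apiserver frontend port (`ss` is provided by the iproute package on CentOS 7):

```bash
systemctl status haproxy
ss -lnt | grep 6443
```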
|
||||
|
||||
To stop HAProxy when needed:
|
||||
|
||||
```bash
|
||||
systemctl stop haproxy
|
||||
```
|
||||
|
||||
### Configure Keepalived
|
||||
|
||||
Configure Keepalived on the main load balancer lb-0 (`10.10.71.77`) in `/etc/keepalived/keepalived.conf`:
|
||||
|
||||
```bash
|
||||
global_defs {
|
||||
notification_email {
|
||||
}
|
||||
smtp_connect_timeout 30
|
||||
router_id LVS_DEVEL01
|
||||
vrrp_skip_check_adv_addr
|
||||
vrrp_garp_interval 0
|
||||
vrrp_gna_interval 0
|
||||
}
|
||||
vrrp_script chk_haproxy {
|
||||
script "killall -0 haproxy"
|
||||
interval 2
|
||||
weight 20
|
||||
}
|
||||
vrrp_instance haproxy-vip {
|
||||
state MASTER
|
||||
priority 100
|
||||
interface ens192
|
||||
virtual_router_id 60
|
||||
advert_int 1
|
||||
authentication {
|
||||
auth_type PASS
|
||||
auth_pass 1111
|
||||
}
|
||||
unicast_src_ip 10.10.71.77
|
||||
unicast_peer {
|
||||
10.10.71.66
|
||||
}
|
||||
virtual_ipaddress {
|
||||
#vip
|
||||
10.10.71.67/24
|
||||
}
|
||||
track_script {
|
||||
chk_haproxy
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Configure Keepalived on the backup load balancer lb-1 (`10.10.71.66`) in `/etc/keepalived/keepalived.conf`:
|
||||
|
||||
```bash
|
||||
global_defs {
|
||||
notification_email {
|
||||
}
|
||||
router_id LVS_DEVEL02
|
||||
vrrp_skip_check_adv_addr
|
||||
vrrp_garp_interval 0
|
||||
vrrp_gna_interval 0
|
||||
}
|
||||
vrrp_script chk_haproxy {
|
||||
script "killall -0 haproxy"
|
||||
interval 2
|
||||
weight 20
|
||||
}
|
||||
vrrp_instance haproxy-vip {
|
||||
state BACKUP
|
||||
priority 90
|
||||
interface ens192
|
||||
virtual_router_id 60
|
||||
advert_int 1
|
||||
authentication {
|
||||
auth_type PASS
|
||||
auth_pass 1111
|
||||
}
|
||||
unicast_src_ip 10.10.71.66
|
||||
unicast_peer {
|
||||
10.10.71.77
|
||||
}
|
||||
virtual_ipaddress {
|
||||
10.10.71.67/24
|
||||
}
|
||||
track_script {
|
||||
chk_haproxy
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Restart Keepalived and enable it to start on boot:
|
||||
|
||||
```bash
|
||||
systemctl restart keepalived && systemctl enable keepalived
|
||||
```

To stop Keepalived:

```bash
|
||||
systemctl stop keepalived
|
||||
```

To start Keepalived again:

```bash
|
||||
systemctl start keepalived
|
||||
```
|
||||
|
||||
### Verify Availability
|
||||
|
||||
Run `ip a s` to check which lb node currently holds the VIP:
|
||||
|
||||
```bash
|
||||
ip a s
|
||||
```
|
||||
|
||||
Stop HAProxy on the lb node that currently holds the VIP:
|
||||
|
||||
```bash
|
||||
systemctl stop haproxy
|
||||
```
|
||||
|
||||
Run `ip a s` again to check whether the VIP has drifted to the other lb node:
|
||||
|
||||
```bash
|
||||
ip a s
|
||||
```
|
||||
|
||||
Alternatively, use the command below:
|
||||
|
||||
```bash
|
||||
systemctl status -l keepalived
|
||||
```
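To follow the VRRP state transitions in real time while you stop and start HAProxy, you can also tail the Keepalived logs (assuming logs go to the systemd journal):

```bash
journalctl -u keepalived -f
```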
|
||||
|
||||
## Download KubeKey
|
||||
|
||||
[KubeKey](https://github.com/kubesphere/kubekey) is a brand-new installer that provides an easy, fast, and flexible way to install Kubernetes and KubeSphere 3.2.1.
|
||||
|
||||
Follow the step below to download KubeKey.
|
||||
|
||||
{{< tabs >}}
|
||||
|
||||
{{< tab "Good network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{< tab "Poor network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Run the following command first to make sure you download KubeKey from the correct zone.
|
||||
|
||||
```bash
|
||||
export KKZONE=cn
|
||||
```
|
||||
|
||||
Run the following command to download KubeKey:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{</ tabs >}}
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The commands above download the latest release (v2.0.0) of KubeKey. You can change the version number in the command to download a specific version.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
Make `kk` executable:
|
||||
|
||||
```bash
|
||||
chmod +x kk
|
||||
```
|
||||
|
||||
## Create a High Availability Cluster
|
||||
|
||||
With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file.
|
||||
|
||||
Create an example configuration file for a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.2.1`):
|
||||
|
||||
```bash
|
||||
./kk create config --with-kubernetes v1.21.5 --with-kubesphere v3.2.1
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- Recommended Kubernetes versions for KubeSphere 3.2.1: v1.19.x, v1.20.x, v1.21.x or v1.22.x (experimental). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.21.5 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
|
||||
|
||||
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
|
||||
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
A default file `config-sample.yaml` will be created. Modify it according to your environment.
|
||||
|
||||
```bash
|
||||
vi config-sample.yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: kubekey.kubesphere.io/v1alpha1
|
||||
kind: Cluster
|
||||
metadata:
|
||||
name: config-sample
|
||||
spec:
|
||||
hosts:
|
||||
- {name: master1, address: 10.10.71.214, internalAddress: 10.10.71.214, password: P@ssw0rd!}
|
||||
- {name: master2, address: 10.10.71.73, internalAddress: 10.10.71.73, password: P@ssw0rd!}
|
||||
- {name: master3, address: 10.10.71.62, internalAddress: 10.10.71.62, password: P@ssw0rd!}
|
||||
- {name: node1, address: 10.10.71.75, internalAddress: 10.10.71.75, password: P@ssw0rd!}
|
||||
- {name: node2, address: 10.10.71.76, internalAddress: 10.10.71.76, password: P@ssw0rd!}
|
||||
- {name: node3, address: 10.10.71.79, internalAddress: 10.10.71.79, password: P@ssw0rd!}
|
||||
roleGroups:
|
||||
etcd:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
control-plane:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
worker:
|
||||
- node1
|
||||
- node2
|
||||
- node3
|
||||
controlPlaneEndpoint:
|
||||
domain: lb.kubesphere.local
|
||||
# vip
|
||||
address: "10.10.71.67"
|
||||
port: 6443
|
||||
kubernetes:
|
||||
version: v1.21.5
|
||||
imageRepo: kubesphere
|
||||
clusterName: cluster.local
|
||||
masqueradeAll: false # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false]
|
||||
maxPods: 110 # maxPods is the number of pods that can run on this Kubelet. [Default: 110]
|
||||
nodeCidrMaskSize: 24 # internal network node size allocation. This is the size allocated to each node on your network. [Default: 24]
|
||||
proxyMode: ipvs # mode specifies which proxy mode to use. [Default: ipvs]
|
||||
network:
|
||||
plugin: calico
|
||||
calico:
|
||||
ipipMode: Always # IPIP Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always]
|
||||
vxlanMode: Never # VXLAN Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never]
|
||||
vethMTU: 1440 # The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. [Default: 1440]
|
||||
kubePodsCIDR: 10.233.64.0/18
|
||||
kubeServiceCIDR: 10.233.0.0/18
|
||||
registry:
|
||||
registryMirrors: []
|
||||
insecureRegistries: []
|
||||
privateRegistry: ""
|
||||
storage:
|
||||
defaultStorageClass: localVolume
|
||||
localVolume:
|
||||
storageClassName: local
|
||||
|
||||
---
|
||||
apiVersion: installer.kubesphere.io/v1alpha1
|
||||
kind: ClusterConfiguration
|
||||
metadata:
|
||||
name: ks-installer
|
||||
namespace: kubesphere-system
|
||||
labels:
|
||||
version: v3.2.1
|
||||
spec:
|
||||
local_registry: ""
|
||||
persistence:
|
||||
storageClass: ""
|
||||
authentication:
|
||||
jwtSecret: ""
|
||||
etcd:
|
||||
monitoring: true # Whether to install etcd monitoring dashboard
|
||||
endpointIps: 192.168.0.7,192.168.0.8,192.168.0.9 # etcd cluster endpointIps
|
||||
port: 2379 # etcd port
|
||||
tlsEnable: true
|
||||
common:
|
||||
mysqlVolumeSize: 20Gi # MySQL PVC size
|
||||
minioVolumeSize: 20Gi # Minio PVC size
|
||||
etcdVolumeSize: 20Gi # etcd PVC size
|
||||
openldapVolumeSize: 2Gi # openldap PVC size
|
||||
redisVolumSize: 2Gi # Redis PVC size
|
||||
es: # Storage backend for logging, tracing, events and auditing.
|
||||
elasticsearchMasterReplicas: 1 # total number of master nodes, it's not allowed to use even number
|
||||
elasticsearchDataReplicas: 1 # total number of data nodes
|
||||
elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes
|
||||
elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes
|
||||
logMaxAge: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
|
||||
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
|
||||
# externalElasticsearchUrl:
|
||||
# externalElasticsearchPort:
|
||||
console:
|
||||
enableMultiLogin: false # Enable/disable multiple sign-on. It allows one account to be used by several users at the same time.
|
||||
port: 30880
|
||||
alerting: # Whether to install KubeSphere alerting system. It enables Users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
|
||||
enabled: false
|
||||
auditing: # Whether to install KubeSphere audit log system. It provides a security-relevant chronological set of records,recording the sequence of activities happened in platform, initiated by different tenants.
|
||||
enabled: false
|
||||
devops: # Whether to install KubeSphere DevOps System. It provides out-of-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image
|
||||
enabled: false
|
||||
jenkinsMemoryLim: 2Gi # Jenkins memory limit
|
||||
jenkinsMemoryReq: 1500Mi # Jenkins memory request
|
||||
jenkinsVolumeSize: 8Gi # Jenkins volume size
|
||||
jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters
|
||||
jenkinsJavaOpts_Xmx: 512m
|
||||
jenkinsJavaOpts_MaxRAM: 2g
|
||||
events: # Whether to install KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
|
||||
enabled: false
|
||||
logging: # Whether to install KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
|
||||
enabled: false
|
||||
logsidecarReplicas: 2
|
||||
metrics_server: # Whether to install metrics-server. It enables HPA (Horizontal Pod Autoscaler).
|
||||
enabled: true
|
||||
monitoring: #
|
||||
prometheusReplicas: 1 # Prometheus replicas are responsible for monitoring different segments of data source and provide high availability as well.
|
||||
prometheusMemoryRequest: 400Mi # Prometheus request memory
|
||||
prometheusVolumeSize: 20Gi # Prometheus PVC size
|
||||
alertmanagerReplicas: 1 # AlertManager Replicas
|
||||
multicluster:
|
||||
clusterRole: none # host | member | none # You can install a solo cluster, or specify it as the role of host or member cluster
|
||||
networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
|
||||
enabled: false
|
||||
notification: # It supports notification management in multi-tenant Kubernetes clusters. It allows you to set AlertManager as its sender, and receivers include Email, Wechat Work, and Slack.
|
||||
enabled: false
|
||||
openpitrix: # Whether to install KubeSphere App Store. It provides an application store for Helm-based applications, and offer application lifecycle management
|
||||
enabled: false
|
||||
servicemesh: # Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offer visualization for traffic topology
|
||||
enabled: false
|
||||
```
|
||||
|
||||
Create a cluster using the configuration file you customized above:
|
||||
|
||||
```bash
|
||||
./kk create cluster -f config-sample.yaml
|
||||
```
|
||||
|
||||
## Verify the Multi-node Installation
|
||||
|
||||
Inspect the installation logs by running the command below:
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
If you can see the welcome log return, it means the installation is successful. Your cluster is up and running.
|
||||
|
||||
```yaml
|
||||
**************************************************
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
Console: http://10.10.71.214:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
NOTES:
|
||||
1. After you log into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are up and running.
|
||||
2. Please change the default password after login.
|
||||
#####################################################
|
||||
https://kubesphere.io 2020-08-15 23:32:12
|
||||
#####################################################
|
||||
```
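You can also confirm from a master node that all three master nodes and three worker nodes have registered and are in the `Ready` state:

```bash
kubectl get nodes -o wide
```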
|
||||
|
||||
### Log in to the Console
|
||||
|
||||
You can use the default account and password (`admin/P@88w0rd`) to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after login.
|
||||
|
||||
## Enable Pluggable Components (Optional)
|
||||
|
||||
The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see [Enable Pluggable Components](../../../pluggable-components/) for more details.
|
||||
|
|
@ -0,0 +1,540 @@
|
|||
---
|
||||
title: "Deploy KubeSphere on VMware vSphere"
|
||||
keywords: 'Kubernetes, KubeSphere, VMware-vSphere, installation'
|
||||
description: 'Learn how to create a high-availability cluster on VMware vSphere.'
|
||||
|
||||
|
||||
weight: 3510
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
For a production environment, we need to consider the high availability of the cluster. If the key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same control plane node, Kubernetes and KubeSphere will be unavailable once the control plane node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers with multiple control plane nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAProxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
|
||||
|
||||
This tutorial walks you through an example of how to create Keepalived and HAProxy, and implement high availability of control plane and etcd nodes using the load balancers on VMware vSphere.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Please make sure that you already know how to install KubeSphere with a multi-node cluster by following the [guide](../../introduction/multioverview/). This tutorial focuses more on how to configure load balancers.
|
||||
- You need a VMware vSphere account to create VMs.
|
||||
- Considering data persistence, for a production environment, we recommend you to prepare persistent storage and create a **default** StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
|
||||
|
||||
## Architecture
|
||||
|
||||

|
||||
|
||||
## Prepare Linux Hosts
|
||||
|
||||
This tutorial creates 8 virtual machines of **CentOS Linux release 7.6.1810 (Core)** for the default minimal installation. Every machine has 2 Cores, 4 GB of memory and 40 G disk space.
|
||||
|
||||
| Host IP | Host Name | Role |
|
||||
| --- | --- | --- |
|
||||
|10.10.71.214|master1|master, etcd|
|
||||
|10.10.71.73|master2|master, etcd|
|
||||
|10.10.71.62|master3|master, etcd|
|
||||
|10.10.71.75|node1|worker|
|
||||
|10.10.71.76|node2|worker|
|
||||
|10.10.71.79|node3|worker|
|
||||
|10.10.71.67|vip|vip (No need to create a VM)|
|
||||
|10.10.71.77|lb-0|lb (Keepalived + HAProxy)|
|
||||
|10.10.71.66|lb-1|lb (Keepalived + HAProxy)|
|
||||
|
||||
{{< notice note >}}
|
||||
You do not need to create a virtual machine for `vip` (i.e. Virtual IP) above, so only 8 virtual machines need to be created.
|
||||
{{</ notice >}}
|
||||
|
||||
You can follow the New Virtual Machine wizard to create a virtual machine to place in the VMware Host Client inventory.
|
||||
|
||||

|
||||
|
||||
1. In the first step **Select a creation type**, you can deploy a virtual machine from an OVF or OVA file, or register an existing virtual machine directly.
|
||||
|
||||

|
||||
|
||||
2. When you create a new virtual machine, provide a unique name for the virtual machine to distinguish it from existing virtual machines on the host you are managing.
|
||||
|
||||

|
||||
|
||||
3. Select a compute resource and storage (datastore) for the configuration and disk files. You can select the datastore that has the most suitable properties, such as size, speed, and availability, for your virtual machine storage.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
4. Select a guest operating system. The wizard will provide the appropriate defaults for the operating system installation.
|
||||
|
||||

|
||||
|
||||
5. Before you finish deploying a new virtual machine, you have the option to set **Virtual Hardware** and **VM Options**. You can refer to the images below for part of the fields.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
6. In **Ready to complete** page, you review the configuration selections that you have made for the virtual machine. Click **Finish** at the bottom-right corner to continue.
|
||||
|
||||

|
||||
|
||||
## Install a Load Balancer using Keepalived and HAProxy
|
||||
|
||||
For a production environment, you have to prepare an external load balancer for your cluster with multiple control plane nodes. If you do not have a load balancer, you can install it using Keepalived and HAProxy. If you are provisioning a development or testing environment by installing a single-master cluster, please skip this section.
|
||||
|
||||
### Yum Install
|
||||
|
||||
host lb-0 (`10.10.71.77`) and host lb-1 (`10.10.71.66`).
|
||||
|
||||
```bash
|
||||
yum install keepalived haproxy psmisc -y
|
||||
```
|
||||
|
||||
### Configure HAProxy
|
||||
|
||||
On the servers with IP `10.10.71.77` and `10.10.71.66`, configure HAProxy as follows.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The configuration of the two lb machines is the same. Please pay attention to the backend service address.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
```yaml
|
||||
# HAProxy Configure /etc/haproxy/haproxy.cfg
|
||||
global
|
||||
log 127.0.0.1 local2
|
||||
chroot /var/lib/haproxy
|
||||
pidfile /var/run/haproxy.pid
|
||||
maxconn 4000
|
||||
user haproxy
|
||||
group haproxy
|
||||
daemon
|
||||
# turn on stats unix socket
|
||||
stats socket /var/lib/haproxy/stats
|
||||
#---------------------------------------------------------------------
|
||||
# common defaults that all the 'listen' and 'backend' sections will
|
||||
# use if not designated in their block
|
||||
#---------------------------------------------------------------------
|
||||
defaults
|
||||
log global
|
||||
option httplog
|
||||
option dontlognull
|
||||
timeout connect 5000
|
||||
timeout client 5000
|
||||
timeout server 5000
|
||||
#---------------------------------------------------------------------
|
||||
# main frontend which proxys to the backends
|
||||
#---------------------------------------------------------------------
|
||||
frontend kube-apiserver
|
||||
bind *:6443
|
||||
mode tcp
|
||||
option tcplog
|
||||
default_backend kube-apiserver
|
||||
#---------------------------------------------------------------------
|
||||
# round robin balancing between the various backends
|
||||
#---------------------------------------------------------------------
|
||||
backend kube-apiserver
|
||||
mode tcp
|
||||
option tcplog
|
||||
balance roundrobin
|
||||
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
|
||||
server kube-apiserver-1 10.10.71.214:6443 check
|
||||
server kube-apiserver-2 10.10.71.73:6443 check
|
||||
server kube-apiserver-3 10.10.71.62:6443 check
|
||||
```
|
||||
|
||||
Check grammar first before you start it.
|
||||
|
||||
```bash
|
||||
haproxy -f /etc/haproxy/haproxy.cfg -c
|
||||
```
|
||||
|
||||
Restart HAProxy and execute the command below to enable HAProxy.
|
||||
|
||||
```bash
|
||||
systemctl restart haproxy && systemctl enable haproxy
|
||||
```
|
||||
|
||||
Stop HAProxy.
|
||||
|
||||
```bash
|
||||
systemctl stop haproxy
|
||||
```
|
||||
|
||||
### Configure Keepalived
|
||||
|
||||
Main HAProxy 77 lb-0-10.10.71.77 (/etc/keepalived/keepalived.conf).
|
||||
|
||||
```bash
|
||||
global_defs {
|
||||
notification_email {
|
||||
}
|
||||
smtp_connect_timeout 30
|
||||
router_id LVS_DEVEL01
|
||||
vrrp_skip_check_adv_addr
|
||||
vrrp_garp_interval 0
|
||||
vrrp_gna_interval 0
|
||||
}
|
||||
vrrp_script chk_haproxy {
|
||||
script "killall -0 haproxy"
|
||||
interval 2
|
||||
weight 20
|
||||
}
|
||||
vrrp_instance haproxy-vip {
|
||||
state MASTER
|
||||
priority 100
|
||||
interface ens192
|
||||
virtual_router_id 60
|
||||
advert_int 1
|
||||
authentication {
|
||||
auth_type PASS
|
||||
auth_pass 1111
|
||||
}
|
||||
unicast_src_ip 10.10.71.77
|
||||
unicast_peer {
|
||||
10.10.71.66
|
||||
}
|
||||
virtual_ipaddress {
|
||||
#vip
|
||||
10.10.71.67/24
|
||||
}
|
||||
track_script {
|
||||
chk_haproxy
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Remark HAProxy 66 lb-1-10.10.71.66 (/etc/keepalived/keepalived.conf).
|
||||
|
||||
```bash
|
||||
global_defs {
|
||||
notification_email {
|
||||
}
|
||||
router_id LVS_DEVEL02
|
||||
vrrp_skip_check_adv_addr
|
||||
vrrp_garp_interval 0
|
||||
vrrp_gna_interval 0
|
||||
}
|
||||
vrrp_script chk_haproxy {
|
||||
script "killall -0 haproxy"
|
||||
interval 2
|
||||
weight 20
|
||||
}
|
||||
vrrp_instance haproxy-vip {
|
||||
state BACKUP
|
||||
priority 90
|
||||
interface ens192
|
||||
virtual_router_id 60
|
||||
advert_int 1
|
||||
authentication {
|
||||
auth_type PASS
|
||||
auth_pass 1111
|
||||
}
|
||||
unicast_src_ip 10.10.71.66
|
||||
unicast_peer {
|
||||
10.10.71.77
|
||||
}
|
||||
virtual_ipaddress {
|
||||
10.10.71.67/24
|
||||
}
|
||||
track_script {
|
||||
chk_haproxy
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Start keepalived and enable keepalived.
|
||||
|
||||
```bash
|
||||
systemctl restart keepalived && systemctl enable keepalived
|
||||
```
|
||||
|
||||
```bash
|
||||
systemctl stop keepalived
|
||||
```
|
||||
|
||||
```bash
|
||||
systemctl start keepalived
|
||||
```
|
||||
|
||||
### Verify Availability
|
||||
|
||||
Use `ip a s` to view the vip binding status of each lb node:
|
||||
|
||||
```bash
|
||||
ip a s
|
||||
```
|
||||
|
||||
Pause VIP node HAProxy through the following command:
|
||||
|
||||
```bash
|
||||
systemctl stop haproxy
|
||||
```
|
||||
|
||||
Use `ip a s` again to check the vip binding of each lb node, and check whether vip drifts:
|
||||
|
||||
```bash
|
||||
ip a s
|
||||
```
|
||||
|
||||
Alternatively, use the command below:
|
||||
|
||||
```bash
|
||||
systemctl status -l keepalived
|
||||
```
|
||||
|
||||
## Download KubeKey
|
||||
|
||||
[Kubekey](https://github.com/kubesphere/kubekey) is the brand-new installer which provides an easy, fast and flexible way to install Kubernetes and KubeSphere 3.2.1.
|
||||
|
||||
Follow the step below to download KubeKey.
|
||||
|
||||
{{< tabs >}}
|
||||
|
||||
{{< tab "Good network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{< tab "Poor network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Run the following command first to make sure you download KubeKey from the correct zone.
|
||||
|
||||
```bash
|
||||
export KKZONE=cn
|
||||
```
|
||||
|
||||
Run the following command to download KubeKey:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{</ tabs >}}
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The commands above download the latest release (v2.0.0) of KubeKey. You can change the version number in the command to download a specific version.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
Make `kk` executable:
|
||||
|
||||
```bash
|
||||
chmod +x kk
|
||||
```
|
||||
|
||||
## Create a High Availability Cluster
|
||||
|
||||
With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file.
|
||||
|
||||
Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.2.1`):
|
||||
|
||||
```bash
|
||||
./kk create config --with-kubernetes v1.21.5 --with-kubesphere v3.2.1
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- Recommended Kubernetes versions for KubeSphere 3.2.1: v1.19.x, v1.20.x, v1.21.x or v1.22.x (experimental). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.21.5 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
|
||||
|
||||
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
|
||||
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
A default file `config-sample.yaml` will be created. Modify it according to your environment.
|
||||
|
||||
```bash
|
||||
vi config-sample.yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: kubekey.kubesphere.io/v1alpha1
|
||||
kind: Cluster
|
||||
metadata:
|
||||
name: config-sample
|
||||
spec:
|
||||
hosts:
|
||||
- {name: master1, address: 10.10.71.214, internalAddress: 10.10.71.214, password: P@ssw0rd!}
|
||||
- {name: master2, address: 10.10.71.73, internalAddress: 10.10.71.73, password: P@ssw0rd!}
|
||||
- {name: master3, address: 10.10.71.62, internalAddress: 10.10.71.62, password: P@ssw0rd!}
|
||||
- {name: node1, address: 10.10.71.75, internalAddress: 10.10.71.75, password: P@ssw0rd!}
|
||||
- {name: node2, address: 10.10.71.76, internalAddress: 10.10.71.76, password: P@ssw0rd!}
|
||||
- {name: node3, address: 10.10.71.79, internalAddress: 10.10.71.79, password: P@ssw0rd!}
|
||||
roleGroups:
|
||||
etcd:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
control-plane:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
worker:
|
||||
- node1
|
||||
- node2
|
||||
- node3
|
||||
controlPlaneEndpoint:
|
||||
domain: lb.kubesphere.local
|
||||
# vip
|
||||
address: "10.10.71.67"
|
||||
port: 6443
|
||||
kubernetes:
|
||||
version: v1.21.5
|
||||
imageRepo: kubesphere
|
||||
clusterName: cluster.local
|
||||
masqueradeAll: false # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false]
|
||||
maxPods: 110 # maxPods is the number of pods that can run on this Kubelet. [Default: 110]
|
||||
nodeCidrMaskSize: 24 # internal network node size allocation. This is the size allocated to each node on your network. [Default: 24]
|
||||
proxyMode: ipvs # mode specifies which proxy mode to use. [Default: ipvs]
|
||||
network:
|
||||
plugin: calico
|
||||
calico:
|
||||
ipipMode: Always # IPIP Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always]
|
||||
vxlanMode: Never # VXLAN Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never]
|
||||
vethMTU: 1440 # The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. [Default: 1440]
|
||||
kubePodsCIDR: 10.233.64.0/18
|
||||
kubeServiceCIDR: 10.233.0.0/18
|
||||
registry:
|
||||
registryMirrors: []
|
||||
insecureRegistries: []
|
||||
privateRegistry: ""
|
||||
storage:
|
||||
defaultStorageClass: localVolume
|
||||
localVolume:
|
||||
storageClassName: local
|
||||
|
||||
---
|
||||
apiVersion: installer.kubesphere.io/v1alpha1
|
||||
kind: ClusterConfiguration
|
||||
metadata:
|
||||
name: ks-installer
|
||||
namespace: kubesphere-system
|
||||
labels:
|
||||
version: v3.2.1
|
||||
spec:
|
||||
local_registry: ""
|
||||
persistence:
|
||||
storageClass: ""
|
||||
authentication:
|
||||
jwtSecret: ""
|
||||
etcd:
|
||||
monitoring: true # Whether to install etcd monitoring dashboard
|
||||
endpointIps: 192.168.0.7,192.168.0.8,192.168.0.9 # etcd cluster endpointIps
|
||||
port: 2379 # etcd port
|
||||
tlsEnable: true
|
||||
common:
|
||||
mysqlVolumeSize: 20Gi # MySQL PVC size
|
||||
minioVolumeSize: 20Gi # Minio PVC size
|
||||
etcdVolumeSize: 20Gi # etcd PVC size
|
||||
openldapVolumeSize: 2Gi # openldap PVC size
|
||||
redisVolumSize: 2Gi # Redis PVC size
|
||||
es: # Storage backend for logging, tracing, events and auditing.
|
||||
elasticsearchMasterReplicas: 1 # total number of master nodes, it's not allowed to use even number
|
||||
elasticsearchDataReplicas: 1 # total number of data nodes
|
||||
elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes
|
||||
elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes
|
||||
logMaxAge: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
|
||||
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
|
||||
# externalElasticsearchUrl:
|
||||
# externalElasticsearchPort:
|
||||
console:
|
||||
enableMultiLogin: false # enable/disable multiple sing on, it allows a user can be used by different users at the same time.
|
||||
port: 30880
|
||||
alerting: # Whether to install KubeSphere alerting system. It enables Users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
|
||||
enabled: false
|
||||
auditing: # Whether to install KubeSphere audit log system. It provides a security-relevant chronological set of records,recording the sequence of activities happened in platform, initiated by different tenants.
|
||||
enabled: false
|
||||
devops: # Whether to install KubeSphere DevOps System. It provides out-of-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image
|
||||
enabled: false
|
||||
jenkinsMemoryLim: 2Gi # Jenkins memory limit
|
||||
jenkinsMemoryReq: 1500Mi # Jenkins memory request
|
||||
jenkinsVolumeSize: 8Gi # Jenkins volume size
|
||||
jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters
|
||||
jenkinsJavaOpts_Xmx: 512m
|
||||
jenkinsJavaOpts_MaxRAM: 2g
|
||||
events: # Whether to install KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
|
||||
enabled: false
|
||||
logging: # Whether to install KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
|
||||
enabled: false
|
||||
logsidecarReplicas: 2
|
||||
metrics_server: # Whether to install metrics-server. IT enables HPA (Horizontal Pod Autoscaler).
|
||||
enabled: true
|
||||
monitoring: #
|
||||
prometheusReplicas: 1 # Prometheus replicas are responsible for monitoring different segments of data source and provide high availability as well.
|
||||
prometheusMemoryRequest: 400Mi # Prometheus request memory
|
||||
prometheusVolumeSize: 20Gi # Prometheus PVC size
|
||||
alertmanagerReplicas: 1 # AlertManager Replicas
|
||||
multicluster:
|
||||
clusterRole: none # host | member | none # You can install a solo cluster, or specify it as the role of host or member cluster
|
||||
networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
|
||||
enabled: false
|
||||
notification: # It supports notification management in multi-tenant Kubernetes clusters. It allows you to set AlertManager as its sender, and receivers include Email, Wechat Work, and Slack.
|
||||
enabled: false
|
||||
openpitrix: # Whether to install KubeSphere App Store. It provides an application store for Helm-based applications, and offer application lifecycle management
|
||||
enabled: false
|
||||
servicemesh: # Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offer visualization for traffic topology
|
||||
enabled: false
|
||||
```
|
||||
|
||||
Create a cluster using the configuration file you customized above:
|
||||
|
||||
```bash
|
||||
./kk create cluster -f config-sample.yaml
|
||||
```
|
||||
|
||||
## Verify the Multi-node Installation
|
||||
|
||||
Inspect the logs of installation by executing the command below:
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
If you can see the welcome log return, it means the installation is successful. Your cluster is up and running.
|
||||
|
||||
```yaml
|
||||
**************************************************
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
Console: http://10.10.71.214:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
NOTES:
|
||||
1. After you log into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are up and running.
|
||||
2. Please change the default password after login.
|
||||
#####################################################
|
||||
https://kubesphere.io 2020-08-15 23:32:12
|
||||
#####################################################
|
||||
```
|
||||
|
||||
### Log in to the Console
|
||||
|
||||
You will be able to use default account and password `admin/P@88w0rd` to log in to the console `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after login.
|
||||
|
||||
## Enable Pluggable Components (Optional)
|
||||
|
||||
The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see [Enable Pluggable Components](../../../pluggable-components/) for more details.
|
||||
|
|
@ -0,0 +1,540 @@
|
|||
---
|
||||
title: "Deploy KubeSphere on VMware vSphere"
|
||||
keywords: 'Kubernetes, KubeSphere, VMware-vSphere, installation'
|
||||
description: 'Learn how to create a high-availability cluster on VMware vSphere.'
|
||||
|
||||
|
||||
weight: 3510
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
For a production environment, we need to consider the high availability of the cluster. If the key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) all run on the same control plane node, Kubernetes and KubeSphere will be unavailable once that node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers and multiple control plane nodes. You can use any cloud load balancer or any hardware load balancer (for example, F5). In addition, Keepalived combined with [HAProxy](https://www.haproxy.com/) or Nginx is an alternative for creating high-availability clusters.
|
||||
|
||||
This tutorial walks you through an example of how to set up Keepalived and HAProxy, and how to use these load balancers to implement high availability of control plane and etcd nodes on VMware vSphere.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Please make sure that you already know how to install KubeSphere with a multi-node cluster by following the [guide](../../introduction/multioverview/). This tutorial focuses more on how to configure load balancers.
|
||||
- You need a VMware vSphere account to create VMs.
|
||||
- Considering data persistence, for a production environment, we recommend that you prepare persistent storage and create a **default** StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
|
||||
|
||||
## Architecture
|
||||
|
||||

|
||||
|
||||
## Prepare Linux Hosts
|
||||
|
||||
This tutorial creates 8 virtual machines running **CentOS Linux release 7.6.1810 (Core)** for the default minimal installation. Each machine has 2 CPU cores, 4 GB of memory, and 40 GB of disk space.
|
||||
|
||||
| Host IP | Host Name | Role |
|
||||
| --- | --- | --- |
|
||||
|10.10.71.214|master1|master, etcd|
|
||||
|10.10.71.73|master2|master, etcd|
|
||||
|10.10.71.62|master3|master, etcd|
|
||||
|10.10.71.75|node1|worker|
|
||||
|10.10.71.76|node2|worker|
|
||||
|10.10.71.79|node3|worker|
|
||||
|10.10.71.67|vip|vip (No need to create a VM)|
|
||||
|10.10.71.77|lb-0|lb (Keepalived + HAProxy)|
|
||||
|10.10.71.66|lb-1|lb (Keepalived + HAProxy)|
|
||||
|
||||
{{< notice note >}}
|
||||
You do not need to create a virtual machine for `vip` (i.e. Virtual IP) above, so only 8 virtual machines need to be created.
|
||||
{{</ notice >}}
|
||||
|
||||
You can follow the New Virtual Machine wizard to create a virtual machine and place it in the VMware Host Client inventory.
|
||||
|
||||

|
||||
|
||||
1. In the first step **Select a creation type**, you can deploy a virtual machine from an OVF or OVA file, or register an existing virtual machine directly.
|
||||
|
||||

|
||||
|
||||
2. When you create a new virtual machine, provide a unique name for the virtual machine to distinguish it from existing virtual machines on the host you are managing.
|
||||
|
||||

|
||||
|
||||
3. Select a compute resource and storage (datastore) for the configuration and disk files. You can select the datastore that has the most suitable properties, such as size, speed, and availability, for your virtual machine storage.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
4. Select a guest operating system. The wizard will provide the appropriate defaults for the operating system installation.
|
||||
|
||||

|
||||
|
||||
5. Before you finish deploying a new virtual machine, you have the option to set **Virtual Hardware** and **VM Options**. You can refer to the images below for some of the fields.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
6. On the **Ready to complete** page, review the configuration selections you have made for the virtual machine, and click **Finish** in the bottom-right corner to continue.
|
||||
|
||||

|
||||
|
||||
## Install a Load Balancer using Keepalived and HAProxy
|
||||
|
||||
For a production environment, you have to prepare an external load balancer for your cluster with multiple control plane nodes. If you do not have a load balancer, you can install one using Keepalived and HAProxy. If you are provisioning a development or testing environment by installing a cluster with a single control plane node, skip this section.
|
||||
|
||||
### Yum Install
|
||||
|
||||
Run the following command on host lb-0 (`10.10.71.77`) and host lb-1 (`10.10.71.66`) to install Keepalived, HAProxy, and psmisc (which provides the `killall` command used by the Keepalived health check script):
|
||||
|
||||
```bash
|
||||
yum install keepalived haproxy psmisc -y
|
||||
```
|
||||
|
||||
### Configure HAProxy
|
||||
|
||||
On the servers with IP `10.10.71.77` and `10.10.71.66`, configure HAProxy as follows.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The configuration of the two lb machines is the same. Pay attention to the backend server addresses, which must point to the three control plane nodes.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
```yaml
|
||||
# HAProxy Configure /etc/haproxy/haproxy.cfg
|
||||
global
|
||||
log 127.0.0.1 local2
|
||||
chroot /var/lib/haproxy
|
||||
pidfile /var/run/haproxy.pid
|
||||
maxconn 4000
|
||||
user haproxy
|
||||
group haproxy
|
||||
daemon
|
||||
# turn on stats unix socket
|
||||
stats socket /var/lib/haproxy/stats
|
||||
#---------------------------------------------------------------------
|
||||
# common defaults that all the 'listen' and 'backend' sections will
|
||||
# use if not designated in their block
|
||||
#---------------------------------------------------------------------
|
||||
defaults
|
||||
log global
|
||||
option httplog
|
||||
option dontlognull
|
||||
timeout connect 5000
|
||||
timeout client 5000
|
||||
timeout server 5000
|
||||
#---------------------------------------------------------------------
|
||||
# main frontend which proxys to the backends
|
||||
#---------------------------------------------------------------------
|
||||
frontend kube-apiserver
|
||||
bind *:6443
|
||||
mode tcp
|
||||
option tcplog
|
||||
default_backend kube-apiserver
|
||||
#---------------------------------------------------------------------
|
||||
# round robin balancing between the various backends
|
||||
#---------------------------------------------------------------------
|
||||
backend kube-apiserver
|
||||
mode tcp
|
||||
option tcplog
|
||||
balance roundrobin
|
||||
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
|
||||
server kube-apiserver-1 10.10.71.214:6443 check
|
||||
server kube-apiserver-2 10.10.71.73:6443 check
|
||||
server kube-apiserver-3 10.10.71.62:6443 check
|
||||
```
|
||||
|
||||
Check the configuration syntax before you start HAProxy:
|
||||
|
||||
```bash
|
||||
haproxy -f /etc/haproxy/haproxy.cfg -c
|
||||
```
|
||||
|
||||
Run the command below to restart HAProxy and enable it to start at boot:
|
||||
|
||||
```bash
|
||||
systemctl restart haproxy && systemctl enable haproxy
|
||||
```
|
||||
|
||||
To stop HAProxy (for example, when testing failover later), run:
|
||||
|
||||
```bash
|
||||
systemctl stop haproxy
|
||||
```
|
||||
|
||||
### Configure Keepalived
|
||||
|
||||
Configure Keepalived on the main (MASTER) node lb-0 (`10.10.71.77`) in `/etc/keepalived/keepalived.conf`:
|
||||
|
||||
```bash
|
||||
global_defs {
|
||||
notification_email {
|
||||
}
|
||||
smtp_connect_timeout 30
|
||||
router_id LVS_DEVEL01
|
||||
vrrp_skip_check_adv_addr
|
||||
vrrp_garp_interval 0
|
||||
vrrp_gna_interval 0
|
||||
}
|
||||
vrrp_script chk_haproxy {
|
||||
script "killall -0 haproxy"
|
||||
interval 2
|
||||
weight 20
|
||||
}
|
||||
vrrp_instance haproxy-vip {
|
||||
state MASTER
|
||||
priority 100
|
||||
interface ens192
|
||||
virtual_router_id 60
|
||||
advert_int 1
|
||||
authentication {
|
||||
auth_type PASS
|
||||
auth_pass 1111
|
||||
}
|
||||
unicast_src_ip 10.10.71.77
|
||||
unicast_peer {
|
||||
10.10.71.66
|
||||
}
|
||||
virtual_ipaddress {
|
||||
#vip
|
||||
10.10.71.67/24
|
||||
}
|
||||
track_script {
|
||||
chk_haproxy
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Configure Keepalived on the backup (BACKUP) node lb-1 (`10.10.71.66`) in `/etc/keepalived/keepalived.conf`:
|
||||
|
||||
```bash
|
||||
global_defs {
|
||||
notification_email {
|
||||
}
|
||||
router_id LVS_DEVEL02
|
||||
vrrp_skip_check_adv_addr
|
||||
vrrp_garp_interval 0
|
||||
vrrp_gna_interval 0
|
||||
}
|
||||
vrrp_script chk_haproxy {
|
||||
script "killall -0 haproxy"
|
||||
interval 2
|
||||
weight 20
|
||||
}
|
||||
vrrp_instance haproxy-vip {
|
||||
state BACKUP
|
||||
priority 90
|
||||
interface ens192
|
||||
virtual_router_id 60
|
||||
advert_int 1
|
||||
authentication {
|
||||
auth_type PASS
|
||||
auth_pass 1111
|
||||
}
|
||||
unicast_src_ip 10.10.71.66
|
||||
unicast_peer {
|
||||
10.10.71.77
|
||||
}
|
||||
virtual_ipaddress {
|
||||
10.10.71.67/24
|
||||
}
|
||||
track_script {
|
||||
chk_haproxy
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Restart Keepalived and enable it to start at boot. You can stop and start Keepalived later with the two commands that follow.
|
||||
|
||||
```bash
|
||||
systemctl restart keepalived && systemctl enable keepalived
|
||||
```
|
||||
|
||||
```bash
|
||||
systemctl stop keepalived
|
||||
```
|
||||
|
||||
```bash
|
||||
systemctl start keepalived
|
||||
```
|
||||
|
||||
### Verify Availability
|
||||
|
||||
Run `ip a s` on each lb node to check which one currently holds the VIP:
|
||||
|
||||
```bash
|
||||
ip a s
|
||||
```
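To quickly see which node holds the VIP, you can filter the output for the VIP address. A small sketch, assuming the interface name `ens192` and the VIP `10.10.71.67` from the Keepalived configuration above:

```bash
# The node that currently holds the VIP prints a matching line;
# the other node prints nothing.
ip a s ens192 | grep 10.10.71.67
```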
|
||||
|
||||
Stop HAProxy on the lb node that currently holds the VIP:
|
||||
|
||||
```bash
|
||||
systemctl stop haproxy
|
||||
```
|
||||
|
||||
Run `ip a s` again on each lb node to check whether the VIP has drifted to the other node:
|
||||
|
||||
```bash
|
||||
ip a s
|
||||
```
|
||||
|
||||
Alternatively, check the Keepalived status with the command below:
|
||||
|
||||
```bash
|
||||
systemctl status -l keepalived
|
||||
```
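To complete the failover round trip, you can start HAProxy again on lb-0 and watch whether the VIP moves back; with the priorities and check-script weight in the configuration above, the MASTER node should reclaim the VIP once its health check succeeds. The `tcpdump` command is optional and assumes tcpdump is installed. A hedged sketch:

```bash
# On lb-0: bring HAProxy back and watch the VIP return
# (priority 100 + weight 20 beats the backup's 90 + 20 once the check passes).
systemctl start haproxy
watch -n 1 'ip a s ens192 | grep 10.10.71.67'

# On either lb node: observe the VRRP advertisements on the wire
# (requires tcpdump to be installed).
tcpdump -i ens192 -nn vrrp
```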
|
||||
|
||||
## Download KubeKey
|
||||
|
||||
[KubeKey](https://github.com/kubesphere/kubekey) is a brand-new installer that provides an easy, fast and flexible way to install Kubernetes and KubeSphere 3.2.1.
|
||||
|
||||
Follow the step below to download KubeKey.
|
||||
|
||||
{{< tabs >}}
|
||||
|
||||
{{< tab "Good network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{< tab "Poor network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Run the following command first to make sure you download KubeKey from the correct zone.
|
||||
|
||||
```bash
|
||||
export KKZONE=cn
|
||||
```
|
||||
|
||||
Run the following command to download KubeKey:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{</ tabs >}}
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The commands above download the latest release (v2.0.0) of KubeKey. You can change the version number in the command to download a specific version.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
Make `kk` executable:
|
||||
|
||||
```bash
|
||||
chmod +x kk
|
||||
```
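Optionally, you can confirm that the binary works and, in recent KubeKey releases, list the Kubernetes versions it supports. Skip this if your copy of `kk` does not provide these options.

```bash
# Print the KubeKey version (available in recent releases).
./kk version

# List the Kubernetes versions this KubeKey release can install,
# if the flag is supported by your version.
./kk version --show-supported-k8s
```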
|
||||
|
||||
## Create a High Availability Cluster
|
||||
|
||||
With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file.
|
||||
|
||||
Create a configuration file for a Kubernetes cluster with KubeSphere installed (for example, using `--with-kubesphere v3.2.1`):
|
||||
|
||||
```bash
|
||||
./kk create config --with-kubernetes v1.21.5 --with-kubesphere v3.2.1
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- Recommended Kubernetes versions for KubeSphere 3.2.1: v1.19.x, v1.20.x, v1.21.x or v1.22.x (experimental). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.21.5 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
|
||||
|
||||
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
|
||||
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
A default file `config-sample.yaml` will be created. Modify it according to your environment.
|
||||
|
||||
```bash
|
||||
vi config-sample.yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: kubekey.kubesphere.io/v1alpha1
|
||||
kind: Cluster
|
||||
metadata:
|
||||
name: config-sample
|
||||
spec:
|
||||
hosts:
|
||||
- {name: master1, address: 10.10.71.214, internalAddress: 10.10.71.214, password: P@ssw0rd!}
|
||||
- {name: master2, address: 10.10.71.73, internalAddress: 10.10.71.73, password: P@ssw0rd!}
|
||||
- {name: master3, address: 10.10.71.62, internalAddress: 10.10.71.62, password: P@ssw0rd!}
|
||||
- {name: node1, address: 10.10.71.75, internalAddress: 10.10.71.75, password: P@ssw0rd!}
|
||||
- {name: node2, address: 10.10.71.76, internalAddress: 10.10.71.76, password: P@ssw0rd!}
|
||||
- {name: node3, address: 10.10.71.79, internalAddress: 10.10.71.79, password: P@ssw0rd!}
|
||||
roleGroups:
|
||||
etcd:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
control-plane:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
worker:
|
||||
- node1
|
||||
- node2
|
||||
- node3
|
||||
controlPlaneEndpoint:
|
||||
domain: lb.kubesphere.local
|
||||
# vip
|
||||
address: "10.10.71.67"
|
||||
port: 6443
|
||||
kubernetes:
|
||||
version: v1.21.5
|
||||
imageRepo: kubesphere
|
||||
clusterName: cluster.local
|
||||
masqueradeAll: false # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false]
|
||||
maxPods: 110 # maxPods is the number of pods that can run on this Kubelet. [Default: 110]
|
||||
nodeCidrMaskSize: 24 # internal network node size allocation. This is the size allocated to each node on your network. [Default: 24]
|
||||
proxyMode: ipvs # mode specifies which proxy mode to use. [Default: ipvs]
|
||||
network:
|
||||
plugin: calico
|
||||
calico:
|
||||
ipipMode: Always # IPIP Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always]
|
||||
vxlanMode: Never # VXLAN Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never]
|
||||
vethMTU: 1440 # The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. [Default: 1440]
|
||||
kubePodsCIDR: 10.233.64.0/18
|
||||
kubeServiceCIDR: 10.233.0.0/18
|
||||
registry:
|
||||
registryMirrors: []
|
||||
insecureRegistries: []
|
||||
privateRegistry: ""
|
||||
storage:
|
||||
defaultStorageClass: localVolume
|
||||
localVolume:
|
||||
storageClassName: local
|
||||
|
||||
---
|
||||
apiVersion: installer.kubesphere.io/v1alpha1
|
||||
kind: ClusterConfiguration
|
||||
metadata:
|
||||
name: ks-installer
|
||||
namespace: kubesphere-system
|
||||
labels:
|
||||
version: v3.2.1
|
||||
spec:
|
||||
local_registry: ""
|
||||
persistence:
|
||||
storageClass: ""
|
||||
authentication:
|
||||
jwtSecret: ""
|
||||
etcd:
|
||||
monitoring: true # Whether to install etcd monitoring dashboard
|
||||
endpointIps: 192.168.0.7,192.168.0.8,192.168.0.9 # etcd cluster endpointIps
|
||||
port: 2379 # etcd port
|
||||
tlsEnable: true
|
||||
common:
|
||||
mysqlVolumeSize: 20Gi # MySQL PVC size
|
||||
minioVolumeSize: 20Gi # Minio PVC size
|
||||
etcdVolumeSize: 20Gi # etcd PVC size
|
||||
openldapVolumeSize: 2Gi # openldap PVC size
|
||||
redisVolumSize: 2Gi # Redis PVC size
|
||||
es: # Storage backend for logging, tracing, events and auditing.
|
||||
elasticsearchMasterReplicas: 1 # Total number of Elasticsearch master nodes; even numbers are not allowed
|
||||
elasticsearchDataReplicas: 1 # total number of data nodes
|
||||
elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes
|
||||
elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes
|
||||
logMaxAge: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
|
||||
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
|
||||
# externalElasticsearchUrl:
|
||||
# externalElasticsearchPort:
|
||||
console:
|
||||
enableMultiLogin: false # Enable/disable simultaneous logins; it allows one account to be used by several users at the same time
|
||||
port: 30880
|
||||
alerting: # Whether to install KubeSphere alerting system. It enables users to customize alerting policies to send messages to receivers in time, with different time intervals and alerting levels to choose from.
|
||||
enabled: false
|
||||
auditing: # Whether to install KubeSphere audit log system. It provides a security-relevant chronological set of records, recording the sequence of activities that happened on the platform, initiated by different tenants.
|
||||
enabled: false
|
||||
devops: # Whether to install KubeSphere DevOps system. It provides an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image and Binary-to-Image.
|
||||
enabled: false
|
||||
jenkinsMemoryLim: 2Gi # Jenkins memory limit
|
||||
jenkinsMemoryReq: 1500Mi # Jenkins memory request
|
||||
jenkinsVolumeSize: 8Gi # Jenkins volume size
|
||||
jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters
|
||||
jenkinsJavaOpts_Xmx: 512m
|
||||
jenkinsJavaOpts_MaxRAM: 2g
|
||||
events: # Whether to install KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
|
||||
enabled: false
|
||||
logging: # Whether to install KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
|
||||
enabled: false
|
||||
logsidecarReplicas: 2
|
||||
metrics_server: # Whether to install metrics-server. It enables HPA (Horizontal Pod Autoscaler).
|
||||
enabled: true
|
||||
monitoring:
|
||||
prometheusReplicas: 1 # Prometheus replicas monitor different segments of the data source and also provide high availability.
|
||||
prometheusMemoryRequest: 400Mi # Prometheus request memory
|
||||
prometheusVolumeSize: 20Gi # Prometheus PVC size
|
||||
alertmanagerReplicas: 1 # AlertManager Replicas
|
||||
multicluster:
|
||||
clusterRole: none # host | member | none # You can install a solo cluster, or specify it as the role of host or member cluster
|
||||
networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
|
||||
enabled: false
|
||||
notification: # It supports notification management in multi-tenant Kubernetes clusters. It allows you to set AlertManager as its sender, and receivers include Email, Wechat Work, and Slack.
|
||||
enabled: false
|
||||
openpitrix: # Whether to install KubeSphere App Store. It provides an application store for Helm-based applications and offers application lifecycle management.
|
||||
enabled: false
|
||||
servicemesh: # Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offers visualization of the traffic topology.
|
||||
enabled: false
|
||||
```
|
||||
|
||||
Create a cluster using the configuration file you customized above:
|
||||
|
||||
```bash
|
||||
./kk create cluster -f config-sample.yaml
|
||||
```
|
||||
|
||||
## Verify the Multi-node Installation
|
||||
|
||||
Inspect the logs of installation by executing the command below:
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
If you see the welcome log below, the installation is successful and your cluster is up and running.
|
||||
|
||||
```yaml
|
||||
**************************************************
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
Console: http://10.10.71.214:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
NOTES:
|
||||
1. After you log into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are up and running.
|
||||
2. Please change the default password after login.
|
||||
#####################################################
|
||||
https://kubesphere.io 2020-08-15 23:32:12
|
||||
#####################################################
|
||||
```
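Because this is a high-availability setup, it is also worth confirming that the kube-apiserver is reachable through the VIP managed by Keepalived rather than only through a single control plane node. A minimal sketch, using the VIP and port from this tutorial; the `/version` endpoint of kube-apiserver normally allows anonymous access.

```bash
# A JSON version payload returned through the VIP indicates that
# HAProxy and Keepalived are forwarding API traffic correctly.
curl -k https://10.10.71.67:6443/version

# Confirm that all three control plane nodes and three workers joined the cluster.
kubectl get nodes
```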
|
||||
|
||||
### Log in to the Console
|
||||
|
||||
You can use the default account and password `admin/P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Remember to change the default password after login.
|
||||
|
||||
## Enable Pluggable Components (Optional)
|
||||
|
||||
The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see [Enable Pluggable Components](../../../pluggable-components/) for more details.
|
||||
|
|
@ -0,0 +1,264 @@
|
|||
---
|
||||
title: "Deploy KubeSphere on Azure VM Instances"
|
||||
keywords: "KubeSphere, Installation, HA, high availability, load balancer, Azure"
|
||||
description: "Learn how to create a high-availability cluster on Azure virtual machines."
|
||||
linkTitle: "Deploy KubeSphere on Azure VM Instances"
|
||||
weight: 3410
|
||||
---
|
||||
|
||||
Using the [Azure cloud platform](https://azure.microsoft.com/en-us/overview/what-is-azure/), you can either install and manage Kubernetes by yourself or adopt a managed Kubernetes solution. If you want to use a fully-managed platform solution, see [Deploy KubeSphere on AKS](../../../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks/) for more details.
|
||||
|
||||
Alternatively, you can set up a highly-available cluster on Azure instances. This tutorial demonstrates how to create a production-ready Kubernetes and KubeSphere cluster.
|
||||
|
||||
## Introduction
|
||||
|
||||
This tutorial uses two key features of Azure virtual machines (VMs):
|
||||
|
||||
- [Virtual Machine Scale Sets (VMSS)](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview): Azure VMSS let you create and manage a group of load balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule (Kubernetes Autoscaler is available, but not covered in this tutorial. See [autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/azure) for more details), which perfectly fits Worker nodes.
|
||||
- Availability Sets: An availability set is a logical grouping of VMs within a datacenter that are automatically distributed across fault domains. This approach limits the impact of potential physical hardware failures, network outages, or power interruptions. All the Master and etcd VMs will be placed in an availability set to achieve high availability.
|
||||
|
||||
Besides these VMs, other resources like Load Balancer, Virtual Network and Network Security Group will also be used.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- You need an [Azure](https://portal.azure.com) account to create all the resources.
|
||||
- Basic knowledge of [Azure Resource Manager](https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/) (ARM) templates, which are files that define the infrastructure and configuration for your project.
|
||||
- For a production environment, it is recommended that you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use [OpenEBS](https://openebs.io/), which is installed by KubeKey by default, to provision LocalPV directly.
|
||||
|
||||
## Architecture
|
||||
|
||||
Six machines of **Ubuntu 18.04** will be deployed in an Azure Resource Group. Three of them are grouped into an availability set, serving as both the Master and etcd nodes. The other three VMs will be defined as a VMSS where Worker nodes will be running.
|
||||
|
||||

|
||||
|
||||
These VMs will be attached to a load balancer. There are two predefined rules in the load balancer:
|
||||
|
||||
- **Inbound NAT**: The SSH port will be mapped for each machine so that you can easily manage VMs.
|
||||
- **Load Balancing**: The http and https ports will be mapped to Node pools by default. Other ports can be added on demand.
|
||||
|
||||
| Service | Protocol | Rule | Backend Port | Frontend Port/Ports | Pools |
|
||||
|---|---|---|---|---|---|
|
||||
| ssh | TCP | Inbound NAT | 22 |50200, 50201, 50202, 50100~50199| Master, Node |
|
||||
| apiserver | TCP | Load Balancing | 6443 | 6443 | Master |
|
||||
| ks-console | TCP | Load Balancing | 30880 | 30880 | Master |
|
||||
| http | TCP | Load Balancing | 80 | 80 | Node |
|
||||
| https | TCP | Load Balancing | 443 | 443 | Node |
|
||||
|
||||
## Create HA Cluster Infrastructure
|
||||
|
||||
You don't have to create these resources one by one. Following the **infrastructure as code** best practice on Azure, all resources in the architecture are already defined as ARM templates.
|
||||
|
||||
### Prepare machines
|
||||
|
||||
1. Click the **Deploy** button below, and you will be redirected to Azure and asked to fill in deployment parameters.
|
||||
|
||||
<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FRolandMa1986%2Fazurek8s%2Fmaster%2Fazuredeploy.json" rel="nofollow"><img src="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.svg?sanitize=true" alt="Deploy to Azure" style="max-width:100%;"></a> <a href="http://armviz.io/#/?load=https%3A%2F%2Fraw.githubusercontent.com%2FRolandMa1986%2Fazurek8s%2Fmaster%2Fazuredeploy.json" rel="nofollow"><img src="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/visualizebutton.svg?sanitize=true" alt="Visualize" style="max-width:100%;"></a>
|
||||
|
||||
2. On the page that appears, only a few parameters need to be changed. Click **Create new** under **Resource group** and enter a name such as `KubeSphereVMRG`.
|
||||
|
||||
3. Enter **Admin Username**.
|
||||
|
||||
4. Copy your public SSH key into the **Admin Key** field. Alternatively, create a new key pair with `ssh-keygen` (see the sketch after this list).
|
||||
|
||||

|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
Password authentication is disabled in the Linux configuration. Only SSH key authentication is accepted.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
5. Click **Purchase** at the bottom to continue.
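If you need to generate a new SSH key pair for step 4, the following sketch creates one and prints the public key to paste into the **Admin Key** field. The key type, size, and comment are just examples.

```bash
# Generate a new RSA key pair (accept the default location or choose your own).
ssh-keygen -t rsa -b 4096 -C "kubesphere-azure"

# Print the public key and paste it into the Admin Key field.
cat ~/.ssh/id_rsa.pub
```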
|
||||
|
||||
### Review Azure resources in the Portal
|
||||
|
||||
After the deployment succeeds, all the resources are displayed in the resource group `KubeSphereVMRG`. Record the public IP address of the load balancer and the private IP addresses of the VMs. You will need them later.
|
||||
|
||||

|
||||
|
||||
## Deploy Kubernetes and KubeSphere
|
||||
|
||||
Run the following commands from your own device to copy your private SSH key to master-0 and connect to it through SSH. During the installation, files will be downloaded and distributed to each VM.
|
||||
|
||||
```bash
|
||||
# copy your private ssh key to master-0
|
||||
scp -P 50200 ~/.ssh/id_rsa kubesphere@40.81.5.xx:/home/kubesphere/.ssh/
|
||||
|
||||
# ssh to the master-0
|
||||
ssh -i ~/.ssh/id_rsa -p 50200 kubesphere@40.81.5.xx
|
||||
```
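KubeKey connects to the other nodes from master-0 with this key, so make sure its permissions are strict enough for the SSH client to accept it. A small sketch to run on master-0; the internal address `10.0.1.5` is master-1 in the example configuration below, and the check assumes the default rules allow SSH inside the virtual network.

```bash
# On master-0: SSH clients refuse private keys that other users can read.
chmod 600 ~/.ssh/id_rsa

# Optional: confirm key-based SSH to another node works before running KubeKey.
ssh -i ~/.ssh/id_rsa kubesphere@10.0.1.5 hostname
```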
|
||||
|
||||
### Download KubeKey
|
||||
|
||||
[Kubekey](../../../installing-on-linux/introduction/kubekey/) is a brand-new installation tool which provides an easy, fast and flexible way to install Kubernetes and KubeSphere.
|
||||
|
||||
1. Download it so that you can generate a configuration file in the next step.
|
||||
|
||||
{{< tabs >}}
|
||||
|
||||
{{< tab "Good network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{< tab "Poor network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Run the following command first to make sure you download KubeKey from the correct zone.
|
||||
|
||||
```bash
|
||||
export KKZONE=cn
|
||||
```
|
||||
|
||||
Run the following command to download KubeKey:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{</ tabs >}}
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The commands above download the latest release (v2.0.0) of KubeKey. You can change the version number in the command to download a specific version.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
Make `kk` executable:
|
||||
|
||||
```bash
|
||||
chmod +x kk
|
||||
```
|
||||
|
||||
2. Create an example configuration file with default configurations. Here Kubernetes v1.21.5 is used as an example.
|
||||
|
||||
```bash
|
||||
./kk create config --with-kubesphere v3.2.1 --with-kubernetes v1.21.5
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- Recommended Kubernetes versions for KubeSphere 3.2.1: v1.19.x, v1.20.x, v1.21.x or v1.22.x (experimental). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.21.5 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
|
||||
|
||||
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
|
||||
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### Example configurations
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
hosts:
|
||||
- {name: master-0, address: 40.81.5.xx, port: 50200, internalAddress: 10.0.1.4, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
|
||||
- {name: master-1, address: 40.81.5.xx, port: 50201, internalAddress: 10.0.1.5, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
|
||||
- {name: master-2, address: 40.81.5.xx, port: 50202, internalAddress: 10.0.1.6, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
|
||||
- {name: node000000, address: 40.81.5.xx, port: 50100, internalAddress: 10.0.0.4, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
|
||||
- {name: node000001, address: 40.81.5.xx, port: 50101, internalAddress: 10.0.0.5, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
|
||||
- {name: node000002, address: 40.81.5.xx, port: 50102, internalAddress: 10.0.0.6, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
|
||||
roleGroups:
|
||||
etcd:
|
||||
- master-0
|
||||
- master-1
|
||||
- master-2
|
||||
control-plane:
|
||||
- master-0
|
||||
- master-1
|
||||
- master-2
|
||||
worker:
|
||||
- node000000
|
||||
- node000001
|
||||
- node000002
|
||||
```
|
||||
For more information, see [this file](https://github.com/kubesphere/kubekey/blob/release-1.2/docs/config-example.md).
|
||||
|
||||
### Configure the load balancer
|
||||
|
||||
In addition to node information, you need to configure your load balancer in the same YAML file. You can find the IP address in **Azure > KubeSphereVMRG > PublicLB**. Assuming the IP address and listening port of the load balancer are `40.81.5.xx` and `6443` respectively, you can refer to the following example.
|
||||
|
||||
```yaml
|
||||
## Public LB config example
|
||||
## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
|
||||
controlPlaneEndpoint:
|
||||
domain: lb.kubesphere.local
|
||||
address: "40.81.5.xx"
|
||||
port: 6443
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The public load balancer is used directly instead of an internal load balancer due to Azure [Load Balancer limits](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-troubleshoot#cause-4-accessing-the-internal-load-balancer-frontend-from-the-participating-load-balancer-backend-pool-vm).
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### Persistent storage plugin configurations
|
||||
|
||||
See [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/) for details.
|
||||
|
||||
### Configure the network plugin
|
||||
|
||||
Azure Virtual Network doesn't support the IPIP mode used by [Calico](https://docs.projectcalico.org/reference/public-cloud/azure#about-calico-on-azure). You need to change the network plugin to `flannel`.
|
||||
|
||||
```yaml
|
||||
network:
|
||||
plugin: flannel
|
||||
kubePodsCIDR: 10.233.64.0/18
|
||||
kubeServiceCIDR: 10.233.0.0/18
|
||||
```
|
||||
|
||||
### Create a cluster
|
||||
|
||||
1. After you complete the configuration, you can execute the following command to start the installation:
|
||||
|
||||
```bash
|
||||
./kk create cluster -f config-sample.yaml
|
||||
```
|
||||
|
||||
2. Inspect the logs of installation:
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
3. When the installation finishes, you can see the following message:
|
||||
|
||||
```bash
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
Console: http://10.128.0.44:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
NOTES:
|
||||
1. After you log into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are up and running.
|
||||
2. Please change the default password after login.
|
||||
#####################################################
|
||||
https://kubesphere.io 2020-xx-xx xx:xx:xx
|
||||
```
|
||||
|
||||
4. Access the KubeSphere console using `<NodeIP>:30880` with the default account and password (`admin/P@88w0rd`).
|
||||
|
||||
## Add Additional Ports
|
||||
|
||||
As the Kubernetes cluster is set up on Azure instances directly, the load balancer is not integrated with [Kubernetes Services](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer). However, you can still manually map a NodePort to the load balancer. Two steps are required in the portal; a command-line sketch follows the list below.
|
||||
|
||||
1. Create a new load balancing rule in the load balancer.
|
||||

|
||||
2. Create an inbound security rule in the Network Security Group to allow Internet access to the port.
|
||||

|
||||
|
|
@ -0,0 +1,340 @@
|
|||
---
|
||||
title: "Deploy KubeSphere on QingCloud Instances"
|
||||
keywords: "KubeSphere, Installation, HA, High-availability, LoadBalancer"
|
||||
description: "Learn how to create a high-availability cluster on QingCloud platform."
|
||||
linkTitle: "Deploy KubeSphere on QingCloud Instances"
|
||||
weight: 3420
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
For a production environment, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) all run on the same master node, Kubernetes and KubeSphere will be unavailable once that node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers and multiple master nodes. You can use any cloud load balancer or any hardware load balancer (for example, F5). In addition, Keepalived combined with [HAProxy](https://www.haproxy.com/) or Nginx is an alternative for creating high-availability clusters.
|
||||
|
||||
This tutorial walks you through an example of how to create two [QingCloud load balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal and external load balancers respectively, and how to implement high availability of master and etcd nodes using these load balancers.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Make sure you already know how to install KubeSphere on a multi-node cluster by following the [guide](../../../installing-on-linux/introduction/multioverview/). For detailed information about the configuration file that is used for installation, see [Edit the configuration file](../../../installing-on-linux/introduction/multioverview/#2-edit-the-configuration-file). This tutorial focuses more on how to configure load balancers.
|
||||
- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or follow the guide of any other cloud provider to create load balancers.
|
||||
- For a production environment, it is recommended that you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
|
||||
|
||||
## Architecture
|
||||
|
||||
This example prepares six machines running **Ubuntu 16.04.6**. You will create two load balancers and deploy three master and etcd nodes on three of the machines. You can configure these master and etcd nodes in `config-sample.yaml`, the default configuration file created by KubeKey (you can change the name).
|
||||
|
||||

|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The Kubernetes document [Options for Highly Available topology](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/) describes two options for configuring the topology of a highly available (HA) Kubernetes cluster: stacked etcd topology and external etcd topology. Carefully consider the advantages and disadvantages of each topology before setting up an HA cluster. This tutorial adopts the stacked etcd topology to bootstrap an HA cluster for demonstration purposes.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
## Install an HA Cluster
|
||||
|
||||
### Step 1: Create load balancers
|
||||
|
||||
This step demonstrates how to create load balancers on the QingCloud platform.
|
||||
|
||||
#### Create an internal load balancer
|
||||
|
||||
1. Log in to the [QingCloud console](https://console.qingcloud.com/login). In the menu on the left, under **Network & CDN**, select **Load Balancers**. Click **Create** to create a load balancer.
|
||||
|
||||

|
||||
|
||||
2. In the pop-up window, set a name for the load balancer. From the **Network** drop-down list, choose the VxNet where your machines are created (in this example, `pn`). You can keep the default values for the other fields, as shown below. Click **Submit** to finish.
|
||||
|
||||

|
||||
|
||||
3. Click the load balancer. On the detail page, create a listener that listens on port `6443` with the **Listener Protocol** set to `TCP`.
|
||||
|
||||

|
||||
|
||||
- **Name**: Define a name for this Listener
|
||||
- **Listener Protocol**: Select `TCP` protocol
|
||||
- **Port**: `6443`
|
||||
- **Balance mode**: `Poll`
|
||||
|
||||
Click **Submit** to continue.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you create the listener, check the firewall rules of the load balancer. Make sure that port `6443` has been added to the firewall rules and that external traffic is allowed to port `6443`. Otherwise, the installation will fail. You can find the information in **Security Groups** under **Security** on the QingCloud platform.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
4. Click **Add Backend**, and choose the VxNet you just selected (in this example, `pn`). Click **Advanced Search**, choose the three master nodes, and set the port to `6443`, which is the default secure port of kube-apiserver.
|
||||
|
||||

|
||||
|
||||
Click **Submit** when you finish.
|
||||
|
||||
5. Click **Apply Changes** to use the configurations. At this point, you can find the three masters have been added as the backend servers of the listener that is behind the internal load balancer.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The status of all masters might show **Not Available** after you add them as backends. This is normal because port `6443` of kube-apiserver is not active on the master nodes yet. The status will change to **Active** and the kube-apiserver port will be exposed after the installation finishes, which means the internal load balancer you configured works as expected.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||

|
||||
|
||||
Record the Intranet VIP shown under **Networks**. The IP address will be added later to the configuration file.
|
||||
|
||||
#### Create an external load balancer
|
||||
|
||||
You need to create an EIP in advance. To create an EIP, go to **Elastic IPs** under **Networks & CDN**.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
Two elastic IPs are needed for this tutorial: one for the VPC network and the other for the external load balancer created in this step. You cannot associate the same EIP with the VPC network and the load balancer at the same time.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
1. Similarly, create an external load balancer while don't select VxNet for the **Network** field. Bind the EIP that you created to this load balancer by clicking **Add IPv4**.
|
||||
|
||||

|
||||
|
||||
2. On the load balancer's detail page, create a listener that listens on port `30880` (NodePort of KubeSphere console) with **Listener Protocol** set to `HTTP`.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you create the listener, check the firewall rules of the load balancer. Make sure that port `30880` has been added to the firewall rules and that external traffic is allowed to port `30880`. Otherwise, the installation will fail. You can find the information in **Security Groups** under **Security** on the QingCloud platform.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||

|
||||
|
||||
3. Click **Add Backend**. In **Advanced Search**, choose the `six` machines on which you are going to install KubeSphere within the VxNet `pn`, and set the port to `30880`.
|
||||
|
||||

|
||||
|
||||
Click **Submit** when you finish.
|
||||
|
||||
4. Click **Apply Changes** to use the configurations. At this point, you can find the six machines have been added as the backend servers of the listener that is behind the external load balancer.
|
||||
|
||||
### Step 2: Download KubeKey

[KubeKey](https://github.com/kubesphere/kubekey) is the next-generation installer, which provides an easy, fast, and flexible way to install Kubernetes and KubeSphere.

Follow the steps below to download KubeKey.

{{< tabs >}}

{{< tab "Good network connections to GitHub/Googleapis" >}}

Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
```

{{</ tab >}}

{{< tab "Poor network connections to GitHub/Googleapis" >}}

Run the following command first to make sure you download KubeKey from the correct zone.

```bash
export KKZONE=cn
```

Run the following command to download KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
```

{{< notice note >}}

After you download KubeKey, if you transfer it to a new machine that also has poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.

{{</ notice >}}

{{</ tab >}}

{{</ tabs >}}

{{< notice note >}}

The commands above download the latest release (v2.0.0) of KubeKey. You can change the version number in the command to download a specific version.

{{</ notice >}}
Make `kk` executable:

```bash
chmod +x kk
```

Create an example configuration file with default configurations. Here, Kubernetes v1.21.5 is used as an example.

```bash
./kk create config --with-kubesphere v3.2.1 --with-kubernetes v1.21.5
```

{{< notice note >}}

- Recommended Kubernetes versions for KubeSphere 3.2.1: v1.19.x, v1.20.x, v1.21.x, or v1.22.x (experimental). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.21.5 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).

- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you run `./kk create cluster` later.

- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

{{</ notice >}}
### Step 3: Set cluster nodes

As you adopt the HA topology with stacked control plane nodes, the master nodes and etcd nodes are on the same three machines.

| **Property** | **Description** |
| :----------- | :-------------------------------- |
| `hosts` | Detailed information of all nodes |
| `etcd` | etcd node names |
| `control-plane` | Control plane node names |
| `worker` | Worker node names |

Put the master node names (`master1`, `master2`, and `master3`) under `etcd` and `control-plane` respectively as shown below, which means these three machines serve as both the master and etcd nodes. Note that the number of etcd nodes must be odd. Meanwhile, it is not recommended that you install etcd on worker nodes, as the memory consumption of etcd is high.

#### config-sample.yaml Example
```yaml
spec:
  hosts:
  - {name: master1, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
  - {name: master2, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
  - {name: master3, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
  - {name: node1, address: 192.168.0.5, internalAddress: 192.168.0.5, user: ubuntu, password: Testing123}
  - {name: node2, address: 192.168.0.6, internalAddress: 192.168.0.6, user: ubuntu, password: Testing123}
  - {name: node3, address: 192.168.0.7, internalAddress: 192.168.0.7, user: ubuntu, password: Testing123}
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    control-plane:
    - master1
    - master2
    - master3
    worker:
    - node1
    - node2
    - node3
```

For a complete configuration sample explanation, see [this file](https://github.com/kubesphere/kubekey/blob/release-1.2/docs/config-example.md).
### Step 4: Configure the load balancer

In addition to the node information, you need to provide the load balancer information in the same YAML file. You can find the intranet VIP address in the last part of [creating an internal load balancer](#step-1-create-load-balancers). Assume the VIP address and listening port of the **internal load balancer** are `192.168.0.253` and `6443` respectively; you can refer to the following example.

#### The configuration example in config-sample.yaml
```yaml
## Internal LB config example
## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
controlPlaneEndpoint:
  domain: lb.kubesphere.local
  address: "192.168.0.253"
  port: 6443
```

{{< notice note >}}

- The address and port should be indented by two spaces in `config-sample.yaml`, and the address should be the VIP of the internal load balancer.
- The domain name of the load balancer is `lb.kubesphere.local` by default for internal access. If you need to change the domain name, uncomment the line above and modify it.

{{</ notice >}}
### Step 5: Kubernetes cluster configurations (Optional)

KubeKey provides fields and parameters that allow the cluster administrator to customize the Kubernetes installation, including the Kubernetes version, network plugins, and image registry. Some default values are provided in `config-sample.yaml`. You can modify the Kubernetes-related configurations in the file based on your needs. For more information, see [Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/).
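As a hedged illustration of the kind of settings you can adjust, the sketch below shows a few fields commonly found in the KubeKey configuration file; the field names follow the configuration example linked in Step 3, and the values are placeholders you should adapt to your environment.

```yaml
# Sketch only: these sections sit under spec in config-sample.yaml.
kubernetes:
  version: v1.21.5           # Kubernetes version to install
  clusterName: cluster.local
network:
  plugin: calico             # network plugin, for example calico or flannel
  kubePodsCIDR: 10.233.64.0/18
  kubeServiceCIDR: 10.233.0.0/18
registry:
  registryMirrors: []        # optional image registry mirrors
  insecureRegistries: []
```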
### Step 6: Persistent storage plugin configurations

For data persistence in a production environment, you need to prepare persistent storage and configure the storage plugin (for example, a CSI plugin) in `config-sample.yaml` to define which storage service you want to use.

{{< notice note >}}

For testing or development, you can skip this part. KubeKey will use the integrated OpenEBS to provision LocalPV as the storage service directly.

{{</ notice >}}

**Available storage plugins and clients**

- Ceph RBD & CephFS
- GlusterFS
- QingCloud CSI
- QingStor CSI
- More plugins will be supported in future releases

Make sure you have configured the storage plugin before you get started. KubeKey will create a StorageClass and persistent volumes for related workloads during the installation. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
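The exact configuration depends on the storage backend you choose. Purely as a hedged sketch, the snippet below shows how a storage plugin might be declared through the `addons` field of `config-sample.yaml` mentioned earlier; the add-on name, chart, repository, and values file path are placeholders to be replaced with whatever your chosen plugin requires.

```yaml
# Illustrative only: an addons entry under spec in config-sample.yaml.
addons:
- name: nfs-client                             # placeholder add-on name
  namespace: kube-system
  sources:
    chart:
      name: nfs-client-provisioner             # placeholder Helm chart name
      repo: https://charts.kubesphere.io/main  # placeholder chart repository
      valuesFile: /home/ubuntu/nfs-client.yaml # placeholder values file you prepare
```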
### Step 7: Enable pluggable components (Optional)

KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable them either before or after installation. By default, KubeSphere will be installed with a minimal package if you do not enable them.

You can enable any of them according to your demands. It is highly recommended that you install these pluggable components to discover the full-stack features and capabilities provided by KubeSphere. Make sure your machines have sufficient CPU and memory before you enable them. See [Enable Pluggable Components](../../../pluggable-components/) for details.
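As a hedged example, pluggable components are toggled through `enabled` fields in the ClusterConfiguration section that KubeKey appends to `config-sample.yaml` when you use the `--with-kubesphere` flag; the component names below follow KubeSphere 3.2.x and are shown for illustration only.

```yaml
# Illustrative only: enable two pluggable components before installation.
devops:
  enabled: true   # built-in CI/CD system based on Jenkins
logging:
  enabled: true   # log collection and search
```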
### Step 8: Start to bootstrap a cluster

After you complete the configuration, you can execute the following command to start the installation:

```bash
./kk create cluster -f config-sample.yaml
```

### Step 9: Verify the installation

Inspect the installation logs by running the following command. When you see output as follows, it means KubeSphere has been successfully deployed.

```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```
```bash
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.3:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2020-08-13 10:50:24
#####################################################
```
### Step 10: Verify the HA cluster

Now that you have finished the installation, go back to the detail pages of both the internal and external load balancers to see the status.



Both listeners show that the status is **Active**, meaning the nodes are up and running.



In the web console of KubeSphere, you can also see that all the nodes are functioning well.

To verify whether the cluster is highly available, you can turn off an instance on purpose. For example, the console above is accessed through the address `EIP:30880`, where the EIP is the one bound to the external load balancer. If the cluster is highly available, the console keeps working even if you shut down a master node.
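For instance, after shutting down one master instance, a quick check similar to the sketch below can confirm that the control plane is still reachable; it assumes the internal VIP `192.168.0.253` from Step 4, and `<EIP>` stands for the elastic IP bound to the external load balancer.

```bash
# Rough availability check after one master node is shut down (sketch only).
curl -k https://192.168.0.253:6443/healthz ; echo   # kube-apiserver reachable through the internal VIP
kubectl get nodes -o wide                           # remaining control plane nodes still serve requests
curl -I http://<EIP>:30880                          # KubeSphere console reachable through the external LB
```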
## See Also

[Multi-node Installation](../../../installing-on-linux/introduction/multioverview/)

[Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/)

[Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/)

[Enable Pluggable Components](../../../pluggable-components/)
@ -0,0 +1,340 @@
|
|||
---
|
||||
title: "Deploy KubeSphere on QingCloud Instances"
|
||||
keywords: "KubeSphere, Installation, HA, High-availability, LoadBalancer"
|
||||
description: "Learn how to create a high-availability cluster on QingCloud platform."
|
||||
linkTitle: "Deploy KubeSphere on QingCloud Instances"
|
||||
weight: 3420
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
For a production environment, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same control plane node, Kubernetes and KubeSphere will be unavailable once the control plane node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple control plane nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
|
||||
|
||||
This tutorial walks you through an example of how to create two [QingCloud load balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal load balancer and external load balancer respectively, and of how to implement high availability of control plane and etcd nodes using the load balancers.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Make sure you already know how to install KubeSphere on a multi-node cluster by following the [guide](../../../installing-on-linux/introduction/multioverview/). For detailed information about the configuration file that is used for installation, see [Edit the configuration file](../../../installing-on-linux/introduction/multioverview/#2-edit-the-configuration-file). This tutorial focuses more on how to configure load balancers.
|
||||
- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or follow the guide of any other cloud provider to create load balancers.
|
||||
- For a production environment, it is recommended that you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
|
||||
|
||||
## Architecture
|
||||
|
||||
This example prepares six machines of **Ubuntu 16.04.6**. You will create two load balancers, and deploy three master and etcd nodes on three of the machines. You can configure these master and etcd nodes in `config-sample.yaml` created by KubeKey (Please note that this is the default name, which can be changed by yourself).
|
||||
|
||||

|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The Kubernetes document [Options for Highly Available topology](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/) demonstrates that there are two options for configuring the topology of a highly available (HA) Kubernetes cluster, i.e. stacked etcd topology and external etcd topology. You should carefully consider the advantages and disadvantages of each topology before setting up an HA cluster according to [this document](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/). This tutorial adopts stacked etcd topology to bootstrap an HA cluster for demonstration purposes.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
## Install an HA Cluster
|
||||
|
||||
### Step 1: Create load balancers
|
||||
|
||||
This step demonstrates how to create load balancers on the QingCloud platform.
|
||||
|
||||
#### Create an internal load balancer
|
||||
|
||||
1. Log in to the [QingCloud console](https://console.qingcloud.com/login). In the menu on the left, under **Network & CDN**, select **Load Balancers**. Click **Create** to create a load balancer.
|
||||
|
||||

|
||||
|
||||
2. In the pop-up window, set a name for the load balancer. From the **Network** drop-down list, choose the VxNet where your machines are created, which is `pn` in this example. You can keep the default values for the other fields as shown below. Click **Submit** to finish.
|
||||
|
||||

|
||||
|
||||
3. Click the load balancer. On the detail page, create a listener that listens on port `6443` with the **Listener Protocol** set to `TCP`.
|
||||
|
||||

|
||||
|
||||
- **Name**: Define a name for this Listener
|
||||
- **Listener Protocol**: Select `TCP` protocol
|
||||
- **Port**: `6443`
|
||||
- **Balance mode**: `Poll`
|
||||
|
||||
Click **Submit** to continue.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you create the listener, check the firewall rules of the load balancer. Make sure that port `6443` has been added to the firewall rules and that external traffic is allowed to port `6443`. Otherwise, the installation will fail. You can find the information in **Security Groups** under **Security** on the QingCloud platform.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
4. Click **Add Backend**, and choose the VxNet you just selected (in this example, it is `pn`). Click **Advanced Search**, choose the three control plane nodes, and set the port to `6443` which is the default secure port of api-server.
|
||||
|
||||

|
||||
|
||||
Click **Submit** when you finish.
|
||||
|
||||
5. Click **Apply Changes** to use the configurations. At this point, you can find the three control plane nodes have been added as the backend servers of the listener that is behind the internal load balancer.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The status of all control plane nodes might show **Not Available** after you added them as backends. This is normal since port `6443` of api-server is not active on control plane nodes yet. The status will change to **Active** and the port of api-server will be exposed after the installation finishes, which means the internal load balancer you configured works as expected.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||

|
||||
|
||||
Record the Intranet VIP shown under **Networks**. The IP address will be added later to the configuration file.
|
||||
|
||||
#### Create an external load balancer
|
||||
|
||||
You need to create an EIP in advance. To create an EIP, go to **Elastic IPs** under **Networks & CDN**.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
Two elastic IPs are needed for this tutorial, one for the VPC network and the other for the external load balancer created in this step. You cannot associate the same EIP to the VPC network and the load balancer at the same time.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
1. Similarly, create an external load balancer, but do not select a VxNet for the **Network** field. Bind the EIP that you created to this load balancer by clicking **Add IPv4**.
|
||||
|
||||

|
||||
|
||||
2. On the load balancer's detail page, create a listener that listens on port `30880` (NodePort of KubeSphere console) with **Listener Protocol** set to `HTTP`.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you create the listener, check the firewall rules of the load balancer. Make sure that port `30880` has been added to the firewall rules and that external traffic is allowed to port `30880`. Otherwise, the installation will fail. You can find the information in **Security Groups** under **Security** on the QingCloud platform.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||

|
||||
|
||||
3. Click **Add Backend**. In **Advanced Search**, choose the six machines in the VxNet `pn` on which you are going to install KubeSphere, and set the port to `30880`.
|
||||
|
||||

|
||||
|
||||
Click **Submit** when you finish.
|
||||
|
||||
4. Click **Apply Changes** to use the configurations. At this point, you can find the six machines have been added as the backend servers of the listener that is behind the external load balancer.
|
||||
|
||||
### Step 2: Download KubeKey
|
||||
|
||||
[KubeKey](https://github.com/kubesphere/kubekey) is the next-generation installer, which provides an easy, fast, and flexible way to install Kubernetes and KubeSphere.
|
||||
|
||||
Follow the steps below to download KubeKey.
|
||||
|
||||
{{< tabs >}}
|
||||
|
||||
{{< tab "Good network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{< tab "Poor network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Run the following command first to make sure you download KubeKey from the correct zone.
|
||||
|
||||
```bash
|
||||
export KKZONE=cn
|
||||
```
|
||||
|
||||
Run the following command to download KubeKey:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{</ tabs >}}
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The commands above download the latest release (v2.0.0) of KubeKey. You can change the version number in the command to download a specific version.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
Make `kk` executable:
|
||||
|
||||
```bash
|
||||
chmod +x kk
|
||||
```
|
||||
|
||||
Create an example configuration file with default configurations. Here Kubernetes v1.21.5 is used as an example.
|
||||
|
||||
```bash
|
||||
./kk create config --with-kubesphere v3.2.1 --with-kubernetes v1.21.5
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- Recommended Kubernetes versions for KubeSphere 3.2.1: v1.19.x, v1.20.x, v1.21.x or v1.22.x (experimental). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.21.5 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
|
||||
|
||||
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
|
||||
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### Step 3: Set cluster nodes
|
||||
|
||||
As you adopt the HA topology with stacked control plane nodes, the control plane nodes and etcd nodes are on the same three machines.
|
||||
|
||||
| **Property** | **Description** |
|
||||
| :----------- | :-------------------------------- |
|
||||
| `hosts` | Detailed information of all nodes |
|
||||
| `etcd` | etcd node names |
|
||||
| `control-plane` | Control plane node names |
|
||||
| `worker` | Worker node names |
|
||||
|
||||
Put the control plane node names (`master1`, `master2`, and `master3`) under `etcd` and `control-plane` respectively as shown below, which means these three machines serve as both the control plane and etcd nodes. Note that the number of etcd nodes must be odd. Meanwhile, it is not recommended that you install etcd on worker nodes, as the memory consumption of etcd is high.
|
||||
|
||||
#### config-sample.yaml Example
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
hosts:
|
||||
- {name: master1, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
|
||||
- {name: master2, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
|
||||
- {name: master3, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
|
||||
- {name: node1, address: 192.168.0.5, internalAddress: 192.168.0.5, user: ubuntu, password: Testing123}
|
||||
- {name: node2, address: 192.168.0.6, internalAddress: 192.168.0.6, user: ubuntu, password: Testing123}
|
||||
- {name: node3, address: 192.168.0.7, internalAddress: 192.168.0.7, user: ubuntu, password: Testing123}
|
||||
roleGroups:
|
||||
etcd:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
control-plane:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
worker:
|
||||
- node1
|
||||
- node2
|
||||
- node3
|
||||
```
|
||||
|
||||
For a complete configuration sample explanation, see [this file](https://github.com/kubesphere/kubekey/blob/release-1.2/docs/config-example.md).
|
||||
|
||||
### Step 4: Configure the load balancer
|
||||
|
||||
In addition to the node information, you need to provide the load balancer information in the same YAML file. You can find the intranet VIP address in the last part of [creating an internal load balancer](#step-1-create-load-balancers). Assume the VIP address and listening port of the **internal load balancer** are `192.168.0.253` and `6443` respectively; you can refer to the following example.
|
||||
|
||||
#### The configuration example in config-sample.yaml
|
||||
|
||||
```yaml
|
||||
## Internal LB config example
|
||||
## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
|
||||
controlPlaneEndpoint:
|
||||
domain: lb.kubesphere.local
|
||||
address: "192.168.0.253"
|
||||
port: 6443
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- The address and port should be indented by two spaces in `config-sample.yaml`, and the address should be the VIP of the internal load balancer.
|
||||
- The domain name of the load balancer is `lb.kubesphere.local` by default for internal access. If you need to change the domain name, uncomment and modify it.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### Step 5: Kubernetes cluster configurations (Optional)
|
||||
|
||||
Kubekey provides some fields and parameters to allow the cluster administrator to customize Kubernetes installation, including Kubernetes version, network plugins and image registry. There are some default values provided in `config-sample.yaml`. You can modify Kubernetes-related configurations in the file based on your needs. For more information, see [Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/).
|
||||
|
||||
### Step 6: Persistent storage plugin configurations
|
||||
|
||||
Considering data persistence in a production environment, you need to prepare persistent storage and configure the storage plugin (for example, CSI) in `config-sample.yaml` to define which storage service you want.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
For testing or development, you can skip this part. KubeKey will use the integrated OpenEBS to provision LocalPV as the storage service directly.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
**Available storage plugins and clients**
|
||||
|
||||
- Ceph RBD & CephFS
|
||||
- GlusterFS
|
||||
- QingCloud CSI
|
||||
- QingStor CSI
|
||||
- More plugins will be supported in future releases
|
||||
|
||||
Make sure you have configured the storage plugin before you get started. KubeKey will create a StorageClass and persistent volumes for related workloads during the installation. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
|
||||
|
||||
### Step 7: Enable pluggable components (Optional)
|
||||
|
||||
KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable which means you can enable them either before or after installation. By default, KubeSphere will be installed with the minimal package if you do not enable them.
|
||||
|
||||
You can enable any of them according to your demands. It is highly recommended that you install these pluggable components to discover the full-stack features and capabilities provided by KubeSphere. Make sure your machines have sufficient CPU and memory before you enable them. See [Enable Pluggable Components](../../../pluggable-components/) for details.
|
||||
|
||||
### Step 8: Start to bootstrap a cluster
|
||||
|
||||
After you complete the configuration, you can execute the following command to start the installation:
|
||||
|
||||
```bash
|
||||
./kk create cluster -f config-sample.yaml
|
||||
```
|
||||
|
||||
### Step 9: Verify the installation
|
||||
|
||||
Inspect the logs of installation. When you see output logs as follows, it means KubeSphere has been successfully deployed.
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
```bash
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
|
||||
Console: http://192.168.0.3:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
|
||||
NOTES:
|
||||
1. After you log into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are up and running.
|
||||
2. Please change the default password after login.
|
||||
|
||||
#####################################################
|
||||
https://kubesphere.io 2020-08-13 10:50:24
|
||||
#####################################################
|
||||
```
|
||||
|
||||
### Step 10: Verify the HA cluster
|
||||
|
||||
Now that you have finished the installation, go back to the detail page of both the internal and external load balancers to see the status.
|
||||
|
||||

|
||||
|
||||
Both listeners show that the status is **Active**, meaning nodes are up and running.
|
||||
|
||||

|
||||
|
||||
In the web console of KubeSphere, you can also see that all the nodes are functioning well.
|
||||
|
||||
To verify whether the cluster is highly available, you can turn off an instance on purpose. For example, the console above is accessed through the address `EIP:30880`, where the EIP is the one bound to the external load balancer. If the cluster is highly available, the console keeps working even if you shut down a control plane node.
|
||||
|
||||
## See Also
|
||||
|
||||
[Multi-node Installation](../../../installing-on-linux/introduction/multioverview/)
|
||||
|
||||
[Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/)
|
||||
|
||||
[Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/)
|
||||
|
||||
[Enable Pluggable Components](../../../pluggable-components/)
|
||||
|
|
@ -0,0 +1,340 @@
|
|||
---
|
||||
title: "Deploy KubeSphere on QingCloud Instances"
|
||||
keywords: "KubeSphere, Installation, HA, High-availability, LoadBalancer"
|
||||
description: "Learn how to create a high-availability cluster on QingCloud platform."
|
||||
linkTitle: "Deploy KubeSphere on QingCloud Instances"
|
||||
weight: 3420
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
For a production environment, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same control plane node, Kubernetes and KubeSphere will be unavailable once the control plane node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple control plane nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
|
||||
|
||||
This tutorial walks you through an example of how to create two [QingCloud load balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal load balancer and external load balancer respectively, and of how to implement high availability of control plane and etcd nodes using the load balancers.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Make sure you already know how to install KubeSphere on a multi-node cluster by following the [guide](../../../installing-on-linux/introduction/multioverview/). For detailed information about the configuration file that is used for installation, see [Edit the configuration file](../../../installing-on-linux/introduction/multioverview/#2-edit-the-configuration-file). This tutorial focuses more on how to configure load balancers.
|
||||
- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or follow the guide of any other cloud provider to create load balancers.
|
||||
- For a production environment, it is recommended that you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
|
||||
|
||||
## Architecture
|
||||
|
||||
This example prepares six machines of **Ubuntu 16.04.6**. You will create two load balancers, and deploy three master and etcd nodes on three of the machines. You can configure these master and etcd nodes in `config-sample.yaml` created by KubeKey (Please note that this is the default name, which can be changed by yourself).
|
||||
|
||||

|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The Kubernetes document [Options for Highly Available topology](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/) demonstrates that there are two options for configuring the topology of a highly available (HA) Kubernetes cluster, i.e. stacked etcd topology and external etcd topology. You should carefully consider the advantages and disadvantages of each topology before setting up an HA cluster according to [this document](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/). This tutorial adopts stacked etcd topology to bootstrap an HA cluster for demonstration purposes.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
## Install an HA Cluster
|
||||
|
||||
### Step 1: Create load balancers
|
||||
|
||||
This step demonstrates how to create load balancers on the QingCloud platform.
|
||||
|
||||
#### Create an internal load balancer
|
||||
|
||||
1. Log in to the [QingCloud console](https://console.qingcloud.com/login). In the menu on the left, under **Network & CDN**, select **Load Balancers**. Click **Create** to create a load balancer.
|
||||
|
||||

|
||||
|
||||
2. In the pop-up window, set a name for the load balancer. From the **Network** drop-down list, choose the VxNet where your machines are created, which is `pn` in this example. You can keep the default values for the other fields as shown below. Click **Submit** to finish.
|
||||
|
||||

|
||||
|
||||
3. Click the load balancer. On the detail page, create a listener that listens on port `6443` with the **Listener Protocol** set to `TCP`.
|
||||
|
||||

|
||||
|
||||
- **Name**: Define a name for this Listener
|
||||
- **Listener Protocol**: Select `TCP` protocol
|
||||
- **Port**: `6443`
|
||||
- **Balance mode**: `Poll`
|
||||
|
||||
Click **Submit** to continue.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you create the listener, check the firewall rules of the load balancer. Make sure that port `6443` has been added to the firewall rules and that external traffic is allowed to port `6443`. Otherwise, the installation will fail. You can find the information in **Security Groups** under **Security** on the QingCloud platform.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
4. Click **Add Backend**, and choose the VxNet you just selected (in this example, it is `pn`). Click **Advanced Search**, choose the three control plane nodes, and set the port to `6443` which is the default secure port of api-server.
|
||||
|
||||

|
||||
|
||||
Click **Submit** when you finish.
|
||||
|
||||
5. Click **Apply Changes** to use the configurations. At this point, you can find the three control plane nodes have been added as the backend servers of the listener that is behind the internal load balancer.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The status of all control plane nodes might show **Not Available** after you added them as backends. This is normal since port `6443` of api-server is not active on control plane nodes yet. The status will change to **Active** and the port of api-server will be exposed after the installation finishes, which means the internal load balancer you configured works as expected.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||

|
||||
|
||||
Record the Intranet VIP shown under **Networks**. The IP address will be added later to the configuration file.
|
||||
|
||||
#### Create an external load balancer
|
||||
|
||||
You need to create an EIP in advance. To create an EIP, go to **Elastic IPs** under **Networks & CDN**.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
Two elastic IPs are needed for this tutorial, one for the VPC network and the other for the external load balancer created in this step. You cannot associate the same EIP to the VPC network and the load balancer at the same time.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
1. Similarly, create an external load balancer, but do not select a VxNet for the **Network** field. Bind the EIP that you created to this load balancer by clicking **Add IPv4**.
|
||||
|
||||

|
||||
|
||||
2. On the load balancer's detail page, create a listener that listens on port `30880` (NodePort of KubeSphere console) with **Listener Protocol** set to `HTTP`.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you create the listener, check the firewall rules of the load balancer. Make sure that port `30880` has been added to the firewall rules and that external traffic is allowed to port `30880`. Otherwise, the installation will fail. You can find the information in **Security Groups** under **Security** on the QingCloud platform.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||

|
||||
|
||||
3. Click **Add Backend**. In **Advanced Search**, choose the six machines in the VxNet `pn` on which you are going to install KubeSphere, and set the port to `30880`.
|
||||
|
||||

|
||||
|
||||
Click **Submit** when you finish.
|
||||
|
||||
4. Click **Apply Changes** to use the configurations. At this point, you can find the six machines have been added as the backend servers of the listener that is behind the external load balancer.
|
||||
|
||||
### Step 2: Download KubeKey
|
||||
|
||||
[KubeKey](https://github.com/kubesphere/kubekey) is the next-generation installer, which provides an easy, fast, and flexible way to install Kubernetes and KubeSphere.
|
||||
|
||||
Follow the steps below to download KubeKey.
|
||||
|
||||
{{< tabs >}}
|
||||
|
||||
{{< tab "Good network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{< tab "Poor network connections to GitHub/Googleapis" >}}
|
||||
|
||||
Run the following command first to make sure you download KubeKey from the correct zone.
|
||||
|
||||
```bash
|
||||
export KKZONE=cn
|
||||
```
|
||||
|
||||
Run the following command to download KubeKey:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{</ tabs >}}
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The commands above download the latest release (v2.0.0) of KubeKey. You can change the version number in the command to download a specific version.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
Make `kk` executable:
|
||||
|
||||
```bash
|
||||
chmod +x kk
|
||||
```
|
||||
|
||||
Create an example configuration file with default configurations. Here Kubernetes v1.21.5 is used as an example.
|
||||
|
||||
```bash
|
||||
./kk create config --with-kubesphere v3.2.1 --with-kubernetes v1.21.5
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- Recommended Kubernetes versions for KubeSphere 3.2.1: v1.19.x, v1.20.x, v1.21.x or v1.22.x (experimental). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.21.5 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
|
||||
|
||||
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
|
||||
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### Step 3: Set cluster nodes
|
||||
|
||||
As you adopt the HA topology with stacked control plane nodes, the control plane nodes and etcd nodes are on the same three machines.
|
||||
|
||||
| **Property** | **Description** |
|
||||
| :----------- | :-------------------------------- |
|
||||
| `hosts` | Detailed information of all nodes |
|
||||
| `etcd` | etcd node names |
|
||||
| `control-plane` | Control plane node names |
|
||||
| `worker` | Worker node names |
|
||||
|
||||
Put the control plane node names (`master1`, `master2`, and `master3`) under `etcd` and `control-plane` respectively as shown below, which means these three machines serve as both the control plane and etcd nodes. Note that the number of etcd nodes must be odd. Meanwhile, it is not recommended that you install etcd on worker nodes, as the memory consumption of etcd is high.
|
||||
|
||||
#### config-sample.yaml Example
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
hosts:
|
||||
- {name: master1, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
|
||||
- {name: master2, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
|
||||
- {name: master3, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
|
||||
- {name: node1, address: 192.168.0.5, internalAddress: 192.168.0.5, user: ubuntu, password: Testing123}
|
||||
- {name: node2, address: 192.168.0.6, internalAddress: 192.168.0.6, user: ubuntu, password: Testing123}
|
||||
- {name: node3, address: 192.168.0.7, internalAddress: 192.168.0.7, user: ubuntu, password: Testing123}
|
||||
roleGroups:
|
||||
etcd:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
control-plane:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
worker:
|
||||
- node1
|
||||
- node2
|
||||
- node3
|
||||
```
|
||||
|
||||
For a complete configuration sample explanation, see [this file](https://github.com/kubesphere/kubekey/blob/release-1.2/docs/config-example.md).
|
||||
|
||||
### Step 4: Configure the load balancer
|
||||
|
||||
In addition to the node information, you need to provide the load balancer information in the same YAML file. You can find the intranet VIP address in the last part of [creating an internal load balancer](#step-1-create-load-balancers). Assume the VIP address and listening port of the **internal load balancer** are `192.168.0.253` and `6443` respectively; you can refer to the following example.
|
||||
|
||||
#### The configuration example in config-sample.yaml
|
||||
|
||||
```yaml
|
||||
## Internal LB config example
|
||||
## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
|
||||
controlPlaneEndpoint:
|
||||
domain: lb.kubesphere.local
|
||||
address: "192.168.0.253"
|
||||
port: 6443
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- The address and port should be indented by two spaces in `config-sample.yaml`, and the address should be the VIP of the internal load balancer.
|
||||
- The domain name of the load balancer is `lb.kubesphere.local` by default for internal access. If you need to change the domain name, uncomment and modify it.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### Step 5: Kubernetes cluster configurations (Optional)
|
||||
|
||||
Kubekey provides some fields and parameters to allow the cluster administrator to customize Kubernetes installation, including Kubernetes version, network plugins and image registry. There are some default values provided in `config-sample.yaml`. You can modify Kubernetes-related configurations in the file based on your needs. For more information, see [Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/).
|
||||
|
||||
### Step 6: Persistent storage plugin configurations
|
||||
|
||||
Considering data persistence in a production environment, you need to prepare persistent storage and configure the storage plugin (for example, CSI) in `config-sample.yaml` to define which storage service you want.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
For testing or development, you can skip this part. KubeKey will use the integrated OpenEBS to provision LocalPV as the storage service directly.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
**Available storage plugins and clients**
|
||||
|
||||
- Ceph RBD & CephFS
|
||||
- GlusterFS
|
||||
- QingCloud CSI
|
||||
- QingStor CSI
|
||||
- More plugins will be supported in future releases
|
||||
|
||||
Make sure you have configured the storage plugin before you get started. KubeKey will create a StorageClass and persistent volumes for related workloads during the installation. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
|
||||
|
||||
### Step 7: Enable pluggable components (Optional)
|
||||
|
||||
KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable which means you can enable them either before or after installation. By default, KubeSphere will be installed with the minimal package if you do not enable them.
|
||||
|
||||
You can enable any of them according to your demands. It is highly recommended that you install these pluggable components to discover the full-stack features and capabilities provided by KubeSphere. Make sure your machines have sufficient CPU and memory before you enable them. See [Enable Pluggable Components](../../../pluggable-components/) for details.
|
||||
|
||||
### Step 8: Start to bootstrap a cluster
|
||||
|
||||
After you complete the configuration, you can execute the following command to start the installation:
|
||||
|
||||
```bash
|
||||
./kk create cluster -f config-sample.yaml
|
||||
```
|
||||
|
||||
### Step 9: Verify the installation
|
||||
|
||||
Inspect the logs of installation. When you see output logs as follows, it means KubeSphere has been successfully deployed.
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
```bash
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
|
||||
Console: http://192.168.0.3:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
|
||||
NOTES:
|
||||
1. After you log into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are up and running.
|
||||
2. Please change the default password after login.
|
||||
|
||||
#####################################################
|
||||
https://kubesphere.io 2020-08-13 10:50:24
|
||||
#####################################################
|
||||
```
|
||||
|
||||
### Step 10: Verify the HA cluster
|
||||
|
||||
Now that you have finished the installation, go back to the detail page of both the internal and external load balancers to see the status.
|
||||
|
||||

|
||||
|
||||
Both listeners show that the status is **Active**, meaning nodes are up and running.
|
||||
|
||||

|
||||
|
||||
In the web console of KubeSphere, you can also see that all the nodes are functioning well.
|
||||
|
||||
To verify whether the cluster is highly available, you can turn off an instance on purpose. For example, the console above is accessed through the address `EIP:30880`, where the EIP is the one bound to the external load balancer. If the cluster is highly available, the console keeps working even if you shut down a control plane node.
|
||||
|
||||
## See Also
|
||||
|
||||
[Multi-node Installation](../../../installing-on-linux/introduction/multioverview/)
|
||||
|
||||
[Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/)
|
||||
|
||||
[Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/)
|
||||
|
||||
[Enable Pluggable Components](../../../pluggable-components/)
|
||||
|
|
@ -0,0 +1,340 @@
|
|||
---
|
||||
title: "Deploy KubeSphere on QingCloud Instances"
|
||||
keywords: "KubeSphere, Installation, HA, High-availability, LoadBalancer"
|
||||
description: "Learn how to create a high-availability cluster on QingCloud platform."
|
||||
linkTitle: "Deploy KubeSphere on QingCloud Instances"
|
||||
weight: 3420
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
For a production environment, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same control plane node, Kubernetes and KubeSphere will be unavailable once the control plane node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple control plane nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
|
||||
|
||||
This tutorial walks you through an example of how to create two [QingCloud load balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal load balancer and external load balancer respectively, and of how to implement high availability of control plane and etcd nodes using the load balancers.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Make sure you already know how to install KubeSphere on a multi-node cluster by following the [guide](../../../installing-on-linux/introduction/multioverview/). For detailed information about the configuration file that is used for installation, see [Edit the configuration file](../../../installing-on-linux/introduction/multioverview/#2-edit-the-configuration-file). This tutorial focuses more on how to configure load balancers.
|
||||
- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or follow the guide of any other cloud provider to create load balancers.
|
||||
- For a production environment, it is recommended that you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
|
||||
|
||||
## Architecture
|
||||
|
||||
This example prepares six machines of **Ubuntu 16.04.6**. You will create two load balancers, and deploy three control plane nodes and etcd nodes on three of the machines. You can configure these control plane and etcd nodes in `config-sample.yaml` created by KubeKey (Please note that this is the default name, which can be changed by yourself).
|
||||
|
||||

|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The Kubernetes document [Options for Highly Available topology](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/) demonstrates that there are two options for configuring the topology of a highly available (HA) Kubernetes cluster, i.e. stacked etcd topology and external etcd topology. You should carefully consider the advantages and disadvantages of each topology before setting up an HA cluster according to [this document](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/). This tutorial adopts stacked etcd topology to bootstrap an HA cluster for demonstration purposes.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
## Install an HA Cluster
|
||||
|
||||
### Step 1: Create load balancers
|
||||
|
||||
This step demonstrates how to create load balancers on the QingCloud platform.
|
||||
|
||||
#### Create an internal load balancer
|
||||
|
||||
1. Log in to the [QingCloud console](https://console.qingcloud.com/login). In the menu on the left, under **Network & CDN**, select **Load Balancers**. Click **Create** to create a load balancer.
|
||||
|
||||

|
||||
|
||||
2. In the pop-up window, set a name for the load balancer. From the **Network** drop-down list, choose the VxNet where your machines are created, which is `pn` in this example. You can keep the default values for the other fields as shown below. Click **Submit** to finish.
|
||||
|
||||

|
||||
|
||||
3. Click the load balancer. On the detail page, create a listener that listens on port `6443` with the **Listener Protocol** set to `TCP`.
|
||||
|
||||

|
||||
|
||||
- **Name**: Define a name for this Listener
|
||||
- **Listener Protocol**: Select `TCP` protocol
|
||||
- **Port**: `6443`
|
||||
- **Balance mode**: `Poll`
|
||||
|
||||
Click **Submit** to continue.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you create the listener, check the firewall rules of the load balancer. Make sure that port `6443` has been added to the firewall rules and that external traffic is allowed to port `6443`. Otherwise, the installation will fail. You can find the information in **Security Groups** under **Security** on the QingCloud platform.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
4. Click **Add Backend**, and choose the VxNet you just selected (in this example, it is `pn`). Click **Advanced Search**, choose the three control plane nodes, and set the port to `6443` which is the default secure port of api-server.
|
||||
|
||||

|
||||
|
||||
Click **Submit** when you finish.
|
||||
|
||||
5. Click **Apply Changes** to use the configurations. At this point, you can find the three control plane nodes have been added as the backend servers of the listener that is behind the internal load balancer.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The status of all control plane nodes might show **Not Available** after you added them as backends. This is normal since port `6443` of api-server is not active on control plane nodes yet. The status will change to **Active** and the port of api-server will be exposed after the installation finishes, which means the internal load balancer you configured works as expected.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||

|
||||
|
||||
Record the Intranet VIP shown under **Networks**. The IP address will be added later to the configuration file.
|
||||
|
||||
#### Create an external load balancer
|
||||
|
||||
You need to create an EIP in advance. To create an EIP, go to **Elastic IPs** under **Networks & CDN**.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
Two elastic IPs are needed for this tutorial, one for the VPC network and the other for the external load balancer created in this step. You cannot associate the same EIP to the VPC network and the load balancer at the same time.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
1. Similarly, create an external load balancer, but do not select a VxNet for the **Network** field. Bind the EIP that you created to this load balancer by clicking **Add IPv4**.
|
||||
|
||||

|
||||
|
||||
2. On the load balancer's detail page, create a listener that listens on port `30880` (NodePort of KubeSphere console) with **Listener Protocol** set to `HTTP`.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you create the listener, check the firewall rules of the load balancer. Make sure that port `30880` has been added to the firewall rules and that external traffic is allowed to port `30880`. Otherwise, the installation will fail. You can find the information in **Security Groups** under **Security** on the QingCloud platform.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||

|
||||
|
||||
3. Click **Add Backend**. In **Advanced Search**, choose the `six` machines on which you are going to install KubeSphere within the VxNet `pn`, and set the port to `30880`.
|
||||
|
||||

|
||||
|
||||
Click **Submit** when you finish.
|
||||
|
||||
4. Click **Apply Changes** to use the configurations. At this point, you can find the six machines have been added as the backend servers of the listener that is behind the external load balancer.
|
||||
|
||||
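As with the internal load balancer, you can check the external path once the installation finishes. The sketch below uses `<EIP>` as a placeholder for the elastic IP bound to the external load balancer; replace it with your own address.

```bash
# Check that the KubeSphere console is reachable through the external load balancer.
# <EIP> is a placeholder for the elastic IP bound to this load balancer.
curl -I http://<EIP>:30880
# An HTTP 200 response indicates that the listener forwards traffic to the console NodePort.
```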
### Step 2: Download KubeKey

[KubeKey](https://github.com/kubesphere/kubekey) is the next-gen installer that provides an easy, fast, and flexible way to install Kubernetes and KubeSphere.

Follow the step below to download KubeKey.

{{< tabs >}}

{{< tab "Good network connections to GitHub/Googleapis" >}}

Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
```

{{</ tab >}}

{{< tab "Poor network connections to GitHub/Googleapis" >}}

Run the following command first to make sure you download KubeKey from the correct zone.

```bash
export KKZONE=cn
```

Run the following command to download KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
```

{{< notice note >}}

After you download KubeKey, if you transfer it to a new machine that also has poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.

{{</ notice >}}

{{</ tab >}}

{{</ tabs >}}

{{< notice note >}}

The commands above download the latest release (v2.0.0) of KubeKey. You can change the version number in the command to download a specific version.

{{</ notice >}}

Make `kk` executable:

```bash
chmod +x kk
```
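You can optionally run a quick sanity check to confirm the binary works on this machine; this is just a sketch, and the exact output depends on the KubeKey release you downloaded.

```bash
# Print the KubeKey version to confirm the binary runs.
./kk version
```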
Create an example configuration file with default configurations. Here, Kubernetes v1.21.5 is used as an example.

```bash
./kk create config --with-kubesphere v3.2.1 --with-kubernetes v1.21.5
```

{{< notice note >}}

- Recommended Kubernetes versions for KubeSphere 3.2.1: v1.19.x, v1.20.x, v1.21.x or v1.22.x (experimental). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.21.5 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

{{</ notice >}}

### Step 3: Set cluster nodes

As you adopt the HA topology with stacked control plane nodes, the control plane nodes and etcd nodes are on the same three machines.

| **Property** | **Description** |
| :----------- | :-------------------------------- |
| `hosts` | Detailed information of all nodes |
| `etcd` | etcd node names |
| `control-plane` | Control plane node names |
| `worker` | Worker node names |

Put the control plane nodes (`master1`, `master2`, and `master3`) under both `etcd` and `control-plane` as below, which means these three machines serve as both control plane and etcd nodes. Note that the number of etcd nodes needs to be odd. Meanwhile, it is not recommended that you install etcd on worker nodes since the memory consumption of etcd is very high.

#### config-sample.yaml Example

```yaml
spec:
  hosts:
  - {name: master1, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
  - {name: master2, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
  - {name: master3, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
  - {name: node1, address: 192.168.0.5, internalAddress: 192.168.0.5, user: ubuntu, password: Testing123}
  - {name: node2, address: 192.168.0.6, internalAddress: 192.168.0.6, user: ubuntu, password: Testing123}
  - {name: node3, address: 192.168.0.7, internalAddress: 192.168.0.7, user: ubuntu, password: Testing123}
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    control-plane:
    - master1
    - master2
    - master3
    worker:
    - node1
    - node2
    - node3
```

For a complete configuration sample explanation, see [this file](https://github.com/kubesphere/kubekey/blob/release-1.2/docs/config-example.md).
### Step 4: Configure the load balancer

In addition to the node information, you need to provide the load balancer information in the same YAML file. For the Intranet VIP address, you can find it in the last part of creating [an internal load balancer](#step-1-create-load-balancers). Assume the VIP address and listening port of the **internal load balancer** are `192.168.0.253` and `6443` respectively; you can refer to the following example.

#### The configuration example in config-sample.yaml

```yaml
## Internal LB config example
## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
controlPlaneEndpoint:
  domain: lb.kubesphere.local
  address: "192.168.0.253"
  port: 6443
```

{{< notice note >}}

- The `address` and `port` should be indented by two spaces in `config-sample.yaml`, and the address should be the VIP of the internal load balancer.
- The domain name of the load balancer is `lb.kubesphere.local` by default for internal access. If you need to change the domain name, uncomment and modify it.

{{</ notice >}}
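If you want to confirm after the installation that the control plane endpoint you configured here is actually used, the sketch below can help. It assumes the mechanism commonly used by KubeKey, where the domain is resolved to the VIP on each cluster node during installation; the exact behavior may vary by KubeKey version.

```bash
# On a cluster node, check how the control plane endpoint resolves
# and whether the API server answers through it.
grep lb.kubesphere.local /etc/hosts
curl -k https://lb.kubesphere.local:6443/version
```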
### Step 5: Kubernetes cluster configurations (Optional)

KubeKey provides some fields and parameters that allow the cluster administrator to customize the Kubernetes installation, including the Kubernetes version, network plugins, and image registry. There are some default values provided in `config-sample.yaml`. You can modify Kubernetes-related configurations in the file based on your needs. For more information, see [Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/).

### Step 6: Persistent storage plugin configurations

Considering data persistence in a production environment, you need to prepare persistent storage and configure the storage plugin (for example, CSI) in `config-sample.yaml` to define which storage service you want.

{{< notice note >}}

For testing or development, you can skip this part. KubeKey will use the integrated OpenEBS to provision LocalPV as the storage service directly.

{{</ notice >}}

**Available storage plugins and clients**

- Ceph RBD & CephFS
- GlusterFS
- QingCloud CSI
- QingStor CSI
- More plugins will be supported in future releases

Make sure you have configured the storage plugin before you get started. KubeKey will create a StorageClass and persistent volumes for related workloads during the installation. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
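After the installation, you can check which StorageClass ends up as the default. This quick check is optional; the class name depends on the storage plugin you configured (or on the integrated OpenEBS LocalPV when you skip this step).

```bash
# List StorageClasses and see which one is marked as (default).
kubectl get storageclass
```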
### Step 7: Enable pluggable components (Optional)

KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable them either before or after installation. By default, KubeSphere will be installed with the minimal package if you do not enable them.

You can enable any of them according to your demands. It is highly recommended that you install these pluggable components to discover the full-stack features and capabilities provided by KubeSphere. Make sure your machines have sufficient CPU and memory before you enable them. See [Enable Pluggable Components](../../../pluggable-components/) for details.

### Step 8: Start to bootstrap a cluster

After you complete the configuration, you can execute the following command to start the installation:

```bash
./kk create cluster -f config-sample.yaml
```
### Step 9: Verify the installation

Inspect the installation logs. When you see the following output, it means KubeSphere has been successfully deployed.

```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

```bash
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.3:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2020-08-13 10:50:24
#####################################################
```
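As an additional check, you can make sure that all workloads in the system namespaces are up before moving on. This is optional; the exact set of Pods depends on the pluggable components you enabled.

```bash
# All Pods should eventually reach the Running or Completed state.
kubectl get pod --all-namespaces
```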
### Step 10: Verify the HA cluster

Now that you have finished the installation, go back to the detail pages of both the internal and external load balancers to check the status.



Both listeners show that the status is **Active**, meaning the backend nodes are up and running.



In the web console of KubeSphere, you can also see that all the nodes are functioning well.

To verify that the cluster is highly available, you can turn off an instance on purpose. For example, the console above is accessed through `EIP:30880`, where the EIP is the one bound to the external load balancer. If the cluster is highly available, the console will still work even if you shut down a control plane node.
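The sketch below shows one way to confirm this from the command line after powering off a single control plane node; `<EIP>` is a placeholder for the elastic IP bound to the external load balancer.

```bash
# The remaining control plane nodes should keep serving the API through the internal VIP.
kubectl get nodes
curl -k https://192.168.0.253:6443/healthz

# The console should still answer through the external load balancer.
curl -I http://<EIP>:30880
```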
## See Also

[Multi-node Installation](../../../installing-on-linux/introduction/multioverview/)

[Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/)

[Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/)

[Enable Pluggable Components](../../../pluggable-components/)
---
title: "Features"
keywords: "KubeSphere, Kubernetes, Docker, Jenkins, Istio, Features"
description: "KubeSphere Key Features"

linkTitle: "Features"
weight: 1300
---

## Overview

As an [open source container platform](https://kubesphere.io/), KubeSphere provides enterprises with a robust, secure and feature-rich platform, boasting the most common functionalities needed for enterprises adopting Kubernetes, such as multi-cluster deployment and management, network policy configuration, Service Mesh (Istio-based), DevOps projects (CI/CD), security management, Source-to-Image and Binary-to-Image, multi-tenant management, multi-dimensional monitoring, log query and collection, alerting and notification, auditing, application management, and image registry management.

It also supports various open source storage and network solutions, as well as cloud storage services. For example, KubeSphere presents users with a powerful cloud-native tool, [OpenELB](https://openelb.github.io/), a CNCF-certified load balancer developed for bare-metal Kubernetes clusters.

With an easy-to-use web console in place, KubeSphere eases the learning curve for users and drives the adoption of Kubernetes.



The following modules elaborate on the key features and benefits provided by KubeSphere. For detailed information, see the respective chapters in this guide.

## Provisioning and Maintaining Kubernetes

### Provisioning Kubernetes Clusters

[KubeKey](https://github.com/kubesphere/kubekey) allows you to deploy Kubernetes on your infrastructure out of the box, provisioning Kubernetes clusters with high availability. It is recommended that at least three control plane nodes be configured behind a load balancer for a production environment. A minimal sketch of the workflow is shown below.
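The commands below mirror the KubeKey workflow used in the installation tutorials: generate a configuration file, describe the nodes and the load balancer endpoint in it, and let KubeKey bring the cluster up. The version numbers are examples only.

```bash
# Generate a default configuration file for KubeSphere v3.2.1 on Kubernetes v1.21.5.
./kk create config --with-kubesphere v3.2.1 --with-kubernetes v1.21.5

# Edit config-sample.yaml (hosts, roleGroups, controlPlaneEndpoint), then bootstrap the cluster.
./kk create cluster -f config-sample.yaml
```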
### Kubernetes Resource Management

KubeSphere provides a graphical web console, giving users a clear view of a variety of Kubernetes resources, including Pods and containers, clusters and nodes, workloads, Secrets and ConfigMaps, Services and Ingresses, Jobs and CronJobs, and applications. With wizard user interfaces, users can easily interact with these resources for service discovery, HPA, image management, scheduling, high availability implementation, container health checks and more.

As KubeSphere 3.0 features enhanced observability, users are able to keep track of resources from multi-tenant perspectives, such as custom monitoring, events, auditing logs, alerts and notifications.

### Cluster Upgrade and Scaling

The next-gen installer [KubeKey](https://github.com/kubesphere/kubekey) provides an easy way of installation, management and maintenance. Moreover, it supports rolling upgrades of Kubernetes clusters so that the cluster service remains available while being upgraded. You can also add new nodes to a Kubernetes cluster to accommodate more workloads by using KubeKey, as sketched below.
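A rough sketch of these two operations with KubeKey follows; the exact flags may differ between KubeKey releases, so check `./kk --help` for your version. The version numbers are examples only.

```bash
# Rolling upgrade of an existing cluster described by config-sample.yaml.
./kk upgrade --with-kubernetes v1.21.5 --with-kubesphere v3.2.1 -f config-sample.yaml

# Add the new hosts to config-sample.yaml first, then scale the cluster out.
./kk add nodes -f config-sample.yaml
```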
## Multi-cluster Management and Deployment

As the IT world sees a growing number of cloud-native applications reshaping software portfolios for enterprises, users tend to deploy their clusters across locations, geographies, and clouds. Against this backdrop, KubeSphere has undergone a significant upgrade to address this pressing need with its brand-new multi-cluster feature.

With KubeSphere, users can manage the infrastructure underneath, such as adding or deleting clusters. Heterogeneous clusters deployed on any infrastructure (for example, Amazon EKS and Google Kubernetes Engine) can be managed in a unified way. This is made possible by a central control plane of KubeSphere with two efficient management approaches available.

- **Solo**. Independently deployed Kubernetes clusters can be maintained and managed together in the KubeSphere container platform.
- **Federation**. Multiple Kubernetes clusters can be aggregated together as a Kubernetes resource pool. When users deploy applications, replicas can be deployed on different Kubernetes clusters in the pool. In this regard, high availability is achieved across zones and clusters.

KubeSphere allows users to deploy applications across clusters. More importantly, an application can also be configured to run on a certain cluster. Besides, the multi-cluster feature, paired with [OpenPitrix](https://github.com/openpitrix/openpitrix), an industry-leading application management platform, enables users to manage apps across their whole lifecycle, including release, removal and distribution.

For more information, see [Multi-cluster Management](../../multicluster-management/).
## DevOps Support

KubeSphere provides a pluggable DevOps component based on popular CI/CD tools such as Jenkins. It features automated workflows and tools, including Binary-to-Image (B2I) and Source-to-Image (S2I), to package source code or binary artifacts into ready-to-run container images.



### CI/CD Pipeline

- **Automation**. CI/CD pipelines and build strategies are based on Jenkins, streamlining and automating the development, test and production process. Dependency caches are used to accelerate builds and deployments.
- **Out-of-the-box**. Users can use their Jenkins build strategies and client plugins to create a Jenkins pipeline based on a Git repository or SVN. They can define any step and stage in the built-in Jenkinsfile. Common agent types, such as Maven, Node.js and Go, are embedded, and users can customize the agent type as well.
- **Visualization**. Users can easily interact with a visualized control panel to set conditions and manage CI/CD pipelines.
- **Quality Management**. Static code analysis is supported to detect bugs, code smells and security vulnerabilities.
- **Logs**. The entire running process of CI/CD pipelines is recorded.

### Source-to-Image

Source-to-Image (S2I) is a toolkit and automated workflow for building reproducible container images from source code. S2I produces ready-to-run images by injecting source code into a container image and making the container ready to execute from source code.

S2I allows you to publish your services to Kubernetes without writing a Dockerfile. You just need to provide a source code repository address and specify the target image registry. All configurations will be stored as different resources in Kubernetes. Your service will be automatically published to Kubernetes, and the image will be pushed to the target registry as well.



### Binary-to-Image

Similar to S2I, Binary-to-Image (B2I) is a toolkit and automated workflow for building reproducible container images from binary artifacts (for example, JAR, WAR, and binary packages).

You just need to upload your application's binary package and specify the image registry to which you want to push the image. The rest is exactly the same as S2I.

For more information, see [DevOps User Guide](../../devops-user-guide/).
## Istio-based Service Mesh
|
||||
|
||||
KubeSphere service mesh is composed of a set of ecosystem projects, such as Istio, Envoy and Jaeger. We design a unified user interface to use and manage these tools. Most features are out-of-box and have been designed from the developer's perspective, which means KubeSphere can help you to reduce the learning curve since you do not need to deep dive into those tools individually.
|
||||
|
||||
KubeSphere service mesh provides fine-grained traffic management, observability, tracing, and service identity and security management for a distributed application. Therefore, developers can focus on core business. With service mesh management of KubeSphere, users can better track, route and optimize communications within Kubernetes for cloud-native apps.
|
||||
|
||||
### Traffic Management
|
||||
|
||||
- **Canary release** represents an important deployment strategy for testing new versions. Traffic is split at a pre-configured ratio between the canary release and the production release. If everything goes well, users can change the percentage and gradually replace the old version with the new one (a sketch of the underlying Istio resources follows this list).
|
||||
- **Blue-green deployment** allows users to run two versions of an application at the same time. Blue stands for the current app version and green represents the new version tested for functionality and performance. Once the testing results are successful, application traffic is routed from the in-production version (blue) to the new one (green).
|
||||
- **Traffic mirroring** enables teams to bring changes to production with as little risk as possible. Mirroring sends a copy of live traffic to a mirrored service.
|
||||
- **Circuit breaker** allows users to set limits for calls to individual hosts within a service, such as the number of concurrent connections or how many times calls to this host have failed.
|
||||
|
||||
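Under the hood, these strategies map onto standard Istio objects. The following is a minimal sketch, assuming a service named `reviews` with two labeled versions; it illustrates the mechanism rather than the exact resources the KubeSphere console creates for you.

```yaml
# Canary: send 90% of traffic to v1 and 10% to the new v2 subset.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
---
# Subsets referenced above, plus a simple circuit-breaking policy for the host.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100    # cap concurrent connections to each host
    outlierDetection:
      consecutive5xxErrors: 5  # eject a host after 5 consecutive 5xx responses
      interval: 30s
      baseEjectionTime: 60s
```

Raising the canary weight step by step, or flipping it to 100, is what a gradual rollout or a blue-green cutover amounts to at this level.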
For more information, see [Grayscale Release](../../project-user-guide/grayscale-release/overview/).
|
||||
|
||||
### Visualization
|
||||
|
||||
KubeSphere service mesh has the ability to visualize the connections between microservices and the topology of how they interconnect. In this regard, observability is extremely useful in understanding the interconnection of cloud-native microservices.
|
||||
|
||||
### Distributed Tracing
|
||||
|
||||
Based on Jaeger, KubeSphere service mesh enables users to track how services interact with each other. It helps users gain a deeper understanding of request latency, bottlenecks, serialization and parallelism via visualization.
|
||||
|
||||
## Multi-tenant Management
|
||||
|
||||
In KubeSphere, resources (for example, clusters) can be shared between tenants. First, administrators or managers need to set different account roles with different authorizations. After that, members in the platform can be assigned with these roles to perform specific actions on varied resources. Meanwhile, as KubeSphere completely isolates tenants, they will not affect each other at all.
|
||||
|
||||
- **Multi-tenancy**. It provides role-based fine-grained authentication in a unified way and a three-tier authorization system.
|
||||
- **Unified authentication**. For enterprises, KubeSphere is compatible with central authentication systems based on the LDAP or AD protocol. Single sign-on (SSO) is also supported to achieve unified authentication of tenant identity.
|
||||
- **Authorization system**. It is organized into three levels: cluster, workspace and project. KubeSphere ensures resources can be shared while different roles at multiple levels are completely isolated for resource security.
|
||||
|
||||
For more information, see [Role and Member Management in Workspace](../../workspace-administration/role-and-member-management/).
|
||||
|
||||
## Observability
|
||||
|
||||
### Multi-dimensional Monitoring
|
||||
|
||||
KubeSphere features a self-updating monitoring system with graphical interfaces that streamline the whole process of operation and maintenance. It provides customized monitoring of a variety of resources and includes a set of alerts that can immediately notify users of any occurring issues.
|
||||
|
||||
- **Customized monitoring dashboard**. Users can decide exactly which metrics need to be monitored and in what form. Different templates are available in KubeSphere for users to select, such as Elasticsearch, MySQL, and Redis. Alternatively, they can also create their own monitoring templates, including charts, colors, intervals and units.
|
||||
- **O&M-friendly**. The monitoring system can be operated in a visualized interface with open standard APIs for enterprises to integrate their existing systems. Therefore, they can implement operation and maintenance in a unified way.
|
||||
- **Third-party compatibility**. KubeSphere is compatible with Prometheus, the de facto metrics collection platform for monitoring in Kubernetes environments. Monitoring data can be seamlessly displayed in the web console of KubeSphere (a sketch of a Prometheus ServiceMonitor follows this list).
|
||||
|
||||
- **Multi-dimensional monitoring at second-level precision**.
|
||||
- For infrastructure monitoring, the system provides comprehensive metrics such as CPU utilization, memory utilization, CPU load average, disk usage, inode utilization, disk throughput, IOPS, network outbound/inbound rate, Pod status, etcd service status, and API Server status.
|
||||
  - For application resource monitoring, the system provides five key monitoring metrics: CPU utilization, memory consumption, Pod count, and network outbound and inbound rates. Users can also sort data based on resource consumption and search metrics over a customized time range, so occurring problems can be quickly located and acted on.
|
||||
- **Ranking**. Users can sort data by node, workspace and project, which gives them a graphical view of how their resources are running in a straightforward way.
|
||||
- **Component monitoring**. It allows users to quickly locate any component failures to avoid unnecessary business downtime.
|
||||
|
||||
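Because the stack is Prometheus-compatible, a workload that exposes metrics can be scraped with standard prometheus-operator resources. Below is a minimal sketch of a ServiceMonitor, assuming a Service labeled `app: sample-app` that exposes a port named `metrics`; the name and namespace are placeholders.

```yaml
# Scrape /metrics every 30s from Services labeled app=sample-app in demo-project.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: sample-app
  namespace: demo-project
spec:
  selector:
    matchLabels:
      app: sample-app
  endpoints:
  - port: metrics
    path: /metrics
    interval: 30s
```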
### Alerting, Events, Auditing and Notifications
|
||||
|
||||
- **Customized alerting policies and rules**. The alerting system is based on multi-tenant monitoring of multi-dimensional metrics. The system will send alerts related to a wide spectrum of resources such as pod, network and workload. In this regard, users can customize their own alerting policy by setting specific rules, such as repetition interval and time. The threshold and alerting level can also be defined by users themselves.
|
||||
- **Accurate event tracking**. KubeSphere allows users to know what is happening inside a cluster, such as container running status (successful or failed), node scheduling, and image pulling result. They will be accurately recorded with the specific reason, status and message displayed in the web console. In a production environment, this will help users to respond to any issues in time.
|
||||
- **Enhanced auditing security**. As KubeSphere features fine-grained management of user authorization, resources and network can be completely isolated to ensure data security. The comprehensive auditing feature allows users to search for activities related to any operation or alert.
|
||||
- **Diversified notification methods**. Email is a key channel for users to receive notifications about the activities they care about. Notifications are sent based on rules set by users, who can customize the sender email address and the receiver lists. Other channels, such as Slack and WeChat, are also supported, so users can stay updated on the latest developments in KubeSphere through whichever channel they prefer.
|
||||
|
||||
For more information, please see [Project User Guide](../../project-user-guide/).
|
||||
|
||||
## Log Query and Collection
|
||||
|
||||
- **Multi-tenant log management**. In KubeSphere log search system, different tenants can only see their own log information. Logs can be exported as records for future reference.
|
||||
- **Multi-level log query**. Users can search for logs related to various resources, such as projects, workloads, and pods. Flexible and convenient log collection configuration options are available.
|
||||
- **Multiple log collectors**. Users can choose log collectors such as Elasticsearch, Kafka, and Fluentd.
|
||||
- **On-disk log collection**. For applications whose logs are saved in a Pod sidecar as a file, users can enable Disk Log Collection.
|
||||
|
||||
## Application Management and Orchestration
|
||||
|
||||
- **App Store**. KubeSphere provides an app store based on [OpenPitrix](https://github.com/openpitrix/openpitrix), an industry-leading open source system for app management across the whole lifecycle, including release, removal, and distribution.
|
||||
- **App repository**. In KubeSphere, users can create an app repository hosted either in object storage (such as [QingStor](https://www.qingcloud.com/products/qingstor/) or [AWS S3](https://aws.amazon.com/what-is-cloud-object-storage/)) or in [GitHub](https://github.com/). App packages submitted to the app repository are composed of Helm Chart template files of the app.
|
||||
- **App template**. With app templates, KubeSphere provides a visualized way for app deployment with just one click. Internally, app templates can help different teams in the enterprise to share middleware and business systems. Externally, they can serve as an industry standard for application delivery based on different scenarios and needs.
|
||||
|
||||
## Multiple Storage Solutions
|
||||
|
||||
- Open source storage solutions are available such as GlusterFS, CephRBD, and NFS.
|
||||
- NeonSAN CSI plugin connects to QingStor NeonSAN to meet core business requirements for low latency, high resilience, and high performance.
|
||||
- QingCloud CSI plugin connects to various block storage services in QingCloud platform.
|
||||
|
||||
## Multiple Network Solutions
|
||||
|
||||
- Open source network solutions are available such as Calico and Flannel.
|
||||
|
||||
- [OpenELB](https://github.com/kubesphere/openelb), a load balancer developed for bare-metal Kubernetes clusters, is designed by the KubeSphere development team. This CNCF-certified tool serves as an important solution for developers (a usage sketch follows the feature list below). It mainly features:
|
||||
|
||||
1. ECMP routing load balancing
|
||||
2. BGP dynamic routing configuration
|
||||
3. VIP management
|
||||
4. LoadBalancerIP assignment in Kubernetes services (v0.3.0)
|
||||
5. Installation with Helm Chart (v0.3.0)
|
||||
6. Dynamic BGP server configuration through CRD (v0.3.0)
|
||||
7. Dynamic BGP peer configuration through CRD (v0.3.0)
|
||||
|
||||
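As a usage sketch, exposing a workload through OpenELB in BGP mode typically comes down to a LoadBalancer Service that carries OpenELB annotations and references an EIP pool. The annotation keys and the EIP name below are assumptions based on the Porter/OpenELB v0.x documentation, so verify them against the release you deploy.

```yaml
# A LoadBalancer Service announced by OpenELB over BGP.
# NOTE: the annotation keys are assumptions from the Porter/OpenELB v0.x docs;
# check them against the OpenELB version you install.
kind: Service
apiVersion: v1
metadata:
  name: demo-svc
  namespace: demo-project
  annotations:
    lb.kubesphere.io/v1alpha1: porter
    protocol.porter.kubesphere.io/v1alpha1: bgp
    eip.porter.kubesphere.io/v1alpha2: eip-sample-pool
spec:
  type: LoadBalancer
  selector:
    app: demo-app
  ports:
  - port: 80
    targetPort: 8080
```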
For more information, please see [this article](https://kubesphere.io/conferences/porter/).
|
||||
|
|
@ -0,0 +1,172 @@
|
|||
---
|
||||
title: "Features"
|
||||
keywords: "KubeSphere, Kubernetes, Docker, Jenkins, Istio, Features"
|
||||
description: "KubeSphere Key Features"
|
||||
|
||||
linkTitle: "Features"
|
||||
weight: 1300
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
As an [open source container platform](https://kubesphere.io/), KubeSphere provides enterprises with a robust, secure and feature-rich platform, boasting the most common functionalities needed for enterprises adopting Kubernetes, such as multi-cluster deployment and management, network policy configuration, Service Mesh (Istio-based), DevOps projects (CI/CD), security management, Source-to-Image and Binary-to-Image, multi-tenant management, multi-dimensional monitoring, log query and collection, alerting and notification, auditing, application management, and image registry management.
|
||||
|
||||
It also supports various open source storage and network solutions, as well as cloud storage services. For example, KubeSphere presents users with a powerful cloud-native tool [OpenELB](https://openelb.github.io/), a CNCF-certified load balancer developed for bare metal Kubernetes clusters.
|
||||
|
||||
With an easy-to-use web console in place, KubeSphere eases the learning curve for users and drives the adoption of Kubernetes.
|
||||
|
||||

|
||||
|
||||
The following modules elaborate on the key features and benefits provided by KubeSphere. For detailed information, see the respective chapter in this guide.
|
||||
|
||||
## Provisioning and Maintaining Kubernetes
|
||||
|
||||
### Provisioning Kubernetes Clusters
|
||||
|
||||
[KubeKey](https://github.com/kubesphere/kubekey) allows you to deploy Kubernetes on your infrastructure out of the box, provisioning Kubernetes clusters with high availability. For a production environment, it is recommended to configure at least three control plane nodes behind a load balancer.
|
||||
|
||||
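As an illustration of that layout, the fragment below shows how three control plane nodes behind a load balancer are declared in KubeKey's cluster configuration file; host names and the address are placeholders.

```yaml
# Fragment of a KubeKey config-sample.yaml for a highly available control plane.
# Host names and the load balancer address are placeholders.
spec:
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    control-plane:
    - master1
    - master2
    - master3
    worker:
    - node1
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "192.168.0.100"   # VIP or load balancer in front of the kube-apiservers
    port: 6443
```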
### Kubernetes Resource Management
|
||||
|
||||
KubeSphere provides a graphical web console, giving users a clear view of a variety of Kubernetes resources, including Pods and containers, clusters and nodes, workloads, secrets and ConfigMaps, services and Ingress, jobs and CronJobs, and applications. With wizard user interfaces, users can easily interact with these resources for service discovery, HPA, image management, scheduling, high availability implementation, container health check and more.
|
||||
|
||||
As KubeSphere 3.0 features enhanced observability, users are able to keep track of resources from multi-tenant perspectives, such as custom monitoring, events, auditing logs, alerts and notifications.
|
||||
|
||||
### Cluster Upgrade and Scaling
|
||||
|
||||
The next-gen installer [KubeKey](https://github.com/kubesphere/kubekey) provides an easy way of installation, management and maintenance. Moreover, it supports rolling upgrades of Kubernetes clusters so that the cluster service is always available while being upgraded. Also, you can add new nodes to a Kubernetes cluster to include more workloads by using KubeKey.
|
||||
|
||||
## Multi-cluster Management and Deployment
|
||||
|
||||
As the IT world sees a growing number of cloud-native applications reshaping software portfolios for enterprises, users tend to deploy their clusters across locations, geographies, and clouds. Against this backdrop, KubeSphere has undergone a significant upgrade to address the pressing need of users with its brand-new multi-cluster feature.
|
||||
|
||||
With KubeSphere, users can manage the infrastructure underneath, such as adding or deleting clusters. Heterogeneous clusters deployed on any infrastructure (for example, Amazon EKS and Google Kubernetes Engine) can be managed in a unified way. This is made possible by a central control plane of KubeSphere with two efficient management approaches available.
|
||||
|
||||
- **Solo**. Independently deployed Kubernetes clusters can be maintained and managed together in KubeSphere container platform.
|
||||
- **Federation**. Multiple Kubernetes clusters can be aggregated together as a Kubernetes resource pool. When users deploy applications, replicas can be deployed on different Kubernetes clusters in the pool. In this regard, high availability is achieved across zones and clusters.
|
||||
|
||||
KubeSphere allows users to deploy applications across clusters. More importantly, an application can also be configured to run on a certain cluster. Besides, the multi-cluster feature, paired with [OpenPitrix](https://github.com/openpitrix/openpitrix), an industry-leading application management platform, enables users to manage apps across their whole lifecycle, including release, removal and distribution.
|
||||
|
||||
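As a rough sketch of what cross-cluster placement looks like at the resource level, the example below uses kubefed, which KubeSphere's federation mode builds on and whose images ship with the platform, to place one Deployment on two member clusters with a per-cluster replica override. The cluster names, namespace and image are placeholders rather than values KubeSphere generates for you.

```yaml
# A minimal federated Deployment: one template, placed on two member clusters,
# with a per-cluster override of the replica count. Names are placeholders.
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: demo-app
  namespace: demo-project
spec:
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: demo-app
      template:
        metadata:
          labels:
            app: demo-app
        spec:
          containers:
          - name: demo-app
            image: nginx:1.14-alpine
  placement:
    clusters:
    - name: cluster-beijing
    - name: cluster-shanghai
  overrides:
  - clusterName: cluster-shanghai
    clusterOverrides:
    - path: "/spec/replicas"
      value: 3
```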
For more information, see [Multi-cluster Management](../../multicluster-management/).
|
||||
|
||||
## DevOps Support
|
||||
|
||||
KubeSphere provides a pluggable DevOps component based on popular CI/CD tools such as Jenkins. It features automated workflows and tools including binary-to-image (B2I) and source-to-image (S2I) to package source code or binary artifacts into ready-to-run container images.
|
||||
|
||||

|
||||
|
||||
### CI/CD Pipeline
|
||||
|
||||
- **Automation**. CI/CD pipelines and build strategies are based on Jenkins, streamlining and automating the development, test and production process. Dependency caches are used to accelerate build and deployment.
|
||||
- **Out-of-the-box**. Users can ship their Jenkins build strategy and client plugin to create a Jenkins pipeline based on a Git or SVN repository. They can define any step and stage in the built-in Jenkinsfile. Common agent types, such as Maven, Node.js and Go, are embedded, and users can customize the agent type as well.
|
||||
- **Visualization**. Users can easily interact with a visualized control panel to set conditions and manage CI/CD pipelines.
|
||||
- **Quality Management**. Static code analysis is supported to detect bugs, code smells and security vulnerabilities.
|
||||
- **Logs**. The entire running process of CI/CD pipelines is recorded.
|
||||
|
||||
### Source-to-Image
|
||||
|
||||
Source-to-Image (S2I) is a toolkit and automated workflow for building reproducible container images from source code. S2I produces ready-to-run images by injecting source code into a container image and making the container ready to execute from source code.
|
||||
|
||||
S2I allows you to publish your service to Kubernetes without writing a Dockerfile. You just need to provide a source code repository address, and specify the target image registry. All configurations will be stored as different resources in Kubernetes. Your service will be automatically published to Kubernetes, and the image will be pushed to the target registry as well.
|
||||
|
||||

|
||||
|
||||
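Conceptually, an S2I build needs only three inputs: where the source code lives, which builder image compiles it, and where the resulting image is pushed. The snippet below is purely illustrative, with hypothetical field names rather than the actual KubeSphere resource schema; in practice the console assembles the equivalent Kubernetes resources for you.

```yaml
# Illustrative only: the three inputs an S2I build needs.
# Field names are hypothetical, not the real KubeSphere CRD schema.
sourceUrl: https://github.com/example/devops-sample.git          # where the code lives
builderImage: kubesphere/java-8-centos7:v3.2.0                   # how to build it
outputImage: dockerhub.example.com/demo-project/devops-sample:latest  # where to push the result
```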
### Binary-to-Image
|
||||
|
||||
Similar to S2I, Binary-to-Image (B2I) is a toolkit and automated workflow for building reproducible container images from binary (for example, Jar, War, Binary package).
|
||||
|
||||
You just need to upload your application binary package, and specify the image registry to which you want to push. The rest is exactly the same as S2I.
|
||||
|
||||
For more information, see [DevOps User Guide](../../devops-user-guide/).
|
||||
|
||||
## Istio-based Service Mesh
|
||||
|
||||
KubeSphere service mesh is composed of a set of ecosystem projects, such as Istio, Envoy and Jaeger. We designed a unified user interface to use and manage these tools. Most features work out of the box and are designed from the developer's perspective, which means KubeSphere reduces the learning curve since you do not need to dive deep into each of those tools individually.
|
||||
|
||||
KubeSphere service mesh provides fine-grained traffic management, observability, tracing, and service identity and security management for a distributed application. Therefore, developers can focus on core business. With service mesh management of KubeSphere, users can better track, route and optimize communications within Kubernetes for cloud-native apps.
|
||||
|
||||
### Traffic Management
|
||||
|
||||
- **Canary release** represents an important deployment strategy for testing new versions. Traffic is split at a pre-configured ratio between the canary release and the production release. If everything goes well, users can change the percentage and gradually replace the old version with the new one (a sketch of the underlying Istio resources follows this list).
|
||||
- **Blue-green deployment** allows users to run two versions of an application at the same time. Blue stands for the current app version and green represents the new version tested for functionality and performance. Once the testing results are successful, application traffic is routed from the in-production version (blue) to the new one (green).
|
||||
- **Traffic mirroring** enables teams to bring changes to production with as little risk as possible. Mirroring sends a copy of live traffic to a mirrored service.
|
||||
- **Circuit breaker** allows users to set limits for calls to individual hosts within a service, such as the number of concurrent connections or how many times calls to this host have failed.
|
||||
|
||||
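Under the hood, these strategies map onto standard Istio objects. The following is a minimal sketch, assuming a service named `reviews` with two labeled versions; it illustrates the mechanism rather than the exact resources the KubeSphere console creates for you.

```yaml
# Canary: send 90% of traffic to v1 and 10% to the new v2 subset.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
---
# Subsets referenced above, plus a simple circuit-breaking policy for the host.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100    # cap concurrent connections to each host
    outlierDetection:
      consecutive5xxErrors: 5  # eject a host after 5 consecutive 5xx responses
      interval: 30s
      baseEjectionTime: 60s
```

Raising the canary weight step by step, or flipping it to 100, is what a gradual rollout or a blue-green cutover amounts to at this level.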
For more information, see [Grayscale Release](../../project-user-guide/grayscale-release/overview/).
|
||||
|
||||
### Visualization
|
||||
|
||||
KubeSphere service mesh has the ability to visualize the connections between microservices and the topology of how they interconnect. In this regard, observability is extremely useful in understanding the interconnection of cloud-native microservices.
|
||||
|
||||
### Distributed Tracing
|
||||
|
||||
Based on Jaeger, KubeSphere service mesh enables users to track how services interact with each other. It helps users gain a deeper understanding of request latency, bottlenecks, serialization and parallelism via visualization.
|
||||
|
||||
## Multi-tenant Management
|
||||
|
||||
In KubeSphere, resources (for example, clusters) can be shared between tenants. First, administrators or managers need to set different account roles with different authorizations. After that, members in the platform can be assigned with these roles to perform specific actions on varied resources. Meanwhile, as KubeSphere completely isolates tenants, they will not affect each other at all.
|
||||
|
||||
- **Multi-tenancy**. It provides role-based fine-grained authentication in a unified way and a three-tier authorization system.
|
||||
- **Unified authentication**. For enterprises, KubeSphere is compatible with central authentication systems based on the LDAP or AD protocol. Single sign-on (SSO) is also supported to achieve unified authentication of tenant identity.
|
||||
- **Authorization system**. It is organized into three levels: cluster, workspace and project. KubeSphere ensures resources can be shared while different roles at multiple levels are completely isolated for resource security.
|
||||
|
||||
For more information, see [Role and Member Management in Workspace](../../workspace-administration/role-and-member-management/).
|
||||
|
||||
## Observability
|
||||
|
||||
### Multi-dimensional Monitoring
|
||||
|
||||
KubeSphere features a self-updating monitoring system with graphical interfaces that streamline the whole process of operation and maintenance. It provides customized monitoring of a variety of resources and includes a set of alerts that can immediately notify users of any occurring issues.
|
||||
|
||||
- **Customized monitoring dashboard**. Users can decide exactly which metrics need to be monitored and in what form. Different templates are available in KubeSphere for users to select, such as Elasticsearch, MySQL, and Redis. Alternatively, they can also create their own monitoring templates, including charts, colors, intervals and units.
|
||||
- **O&M-friendly**. The monitoring system can be operated in a visualized interface with open standard APIs for enterprises to integrate their existing systems. Therefore, they can implement operation and maintenance in a unified way.
|
||||
- **Third-party compatibility**. KubeSphere is compatible with Prometheus, the de facto metrics collection platform for monitoring in Kubernetes environments. Monitoring data can be seamlessly displayed in the web console of KubeSphere (a sketch of a Prometheus ServiceMonitor follows this list).
|
||||
|
||||
- **Multi-dimensional monitoring at second-level precision**.
|
||||
- For infrastructure monitoring, the system provides comprehensive metrics such as CPU utilization, memory utilization, CPU load average, disk usage, inode utilization, disk throughput, IOPS, network outbound/inbound rate, Pod status, etcd service status, and API Server status.
|
||||
  - For application resource monitoring, the system provides five key monitoring metrics: CPU utilization, memory consumption, Pod count, and network outbound and inbound rates. Users can also sort data based on resource consumption and search metrics over a customized time range, so occurring problems can be quickly located and acted on.
|
||||
- **Ranking**. Users can sort data by node, workspace and project, which gives them a graphical view of how their resources are running in a straightforward way.
|
||||
- **Component monitoring**. It allows users to quickly locate any component failures to avoid unnecessary business downtime.
|
||||
|
||||
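Because the stack is Prometheus-compatible, a workload that exposes metrics can be scraped with standard prometheus-operator resources. Below is a minimal sketch of a ServiceMonitor, assuming a Service labeled `app: sample-app` that exposes a port named `metrics`; the name and namespace are placeholders.

```yaml
# Scrape /metrics every 30s from Services labeled app=sample-app in demo-project.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: sample-app
  namespace: demo-project
spec:
  selector:
    matchLabels:
      app: sample-app
  endpoints:
  - port: metrics
    path: /metrics
    interval: 30s
```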
### Alerting, Events, Auditing and Notifications
|
||||
|
||||
- **Customized alerting policies and rules**. The alerting system is based on multi-tenant monitoring of multi-dimensional metrics. The system will send alerts related to a wide spectrum of resources such as pod, network and workload. In this regard, users can customize their own alerting policy by setting specific rules, such as repetition interval and time. The threshold and alerting level can also be defined by users themselves.
|
||||
- **Accurate event tracking**. KubeSphere allows users to know what is happening inside a cluster, such as container running status (successful or failed), node scheduling, and image pulling result. They will be accurately recorded with the specific reason, status and message displayed in the web console. In a production environment, this will help users to respond to any issues in time.
|
||||
- **Enhanced auditing security**. As KubeSphere features fine-grained management of user authorization, resources and network can be completely isolated to ensure data security. The comprehensive auditing feature allows users to search for activities related to any operation or alert.
|
||||
- **Diversified notification methods**. Email is a key channel for users to receive notifications about the activities they care about. Notifications are sent based on rules set by users, who can customize the sender email address and the receiver lists. Other channels, such as Slack and WeChat, are also supported, so users can stay updated on the latest developments in KubeSphere through whichever channel they prefer.
|
||||
|
||||
For more information, please see [Project User Guide](../../project-user-guide/).
|
||||
|
||||
## Log Query and Collection
|
||||
|
||||
- **Multi-tenant log management**. In KubeSphere log search system, different tenants can only see their own log information. Logs can be exported as records for future reference.
|
||||
- **Multi-level log query**. Users can search for logs related to various resources, such as projects, workloads, and pods. Flexible and convenient log collection configuration options are available.
|
||||
- **Multiple log collectors**. Users can choose log collectors such as Elasticsearch, Kafka, and Fluentd.
|
||||
- **On-disk log collection**. For applications whose logs are saved in a Pod sidecar as a file, users can enable Disk Log Collection.
|
||||
|
||||
## Application Management and Orchestration
|
||||
|
||||
- **App Store**. KubeSphere provides an app store based on [OpenPitrix](https://github.com/openpitrix/openpitrix), an industry-leading open source system for app management across the whole lifecycle, including release, removal, and distribution.
|
||||
- **App repository**. In KubeSphere, users can create an app repository hosted either in object storage (such as [QingStor](https://www.qingcloud.com/products/qingstor/) or [AWS S3](https://aws.amazon.com/what-is-cloud-object-storage/)) or in [GitHub](https://github.com/). App packages submitted to the app repository are composed of Helm Chart template files of the app.
|
||||
- **App template**. With app templates, KubeSphere provides a visualized way for app deployment with just one click. Internally, app templates can help different teams in the enterprise to share middleware and business systems. Externally, they can serve as an industry standard for application delivery based on different scenarios and needs.
|
||||
|
||||
## Multiple Storage Solutions
|
||||
|
||||
- Open source storage solutions are available such as GlusterFS, CephRBD, and NFS.
|
||||
- NeonSAN CSI plugin connects to QingStor NeonSAN to meet core business requirements for low latency, high resilience, and high performance.
|
||||
- QingCloud CSI plugin connects to various block storage services in QingCloud platform.
|
||||
|
||||
## Multiple Network Solutions
|
||||
|
||||
- Open source network solutions are available such as Calico and Flannel.
|
||||
|
||||
- [OpenELB](https://github.com/kubesphere/openelb), a load balancer developed for bare-metal Kubernetes clusters, is designed by the KubeSphere development team. This CNCF-certified tool serves as an important solution for developers (a usage sketch follows the feature list below). It mainly features:
|
||||
|
||||
1. ECMP routing load balancing
|
||||
2. BGP dynamic routing configuration
|
||||
3. VIP management
|
||||
4. LoadBalancerIP assignment in Kubernetes services (v0.3.0)
|
||||
5. Installation with Helm Chart (v0.3.0)
|
||||
6. Dynamic BGP server configuration through CRD (v0.3.0)
|
||||
7. Dynamic BGP peer configuration through CRD (v0.3.0)
|
||||
|
||||
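As a usage sketch, exposing a workload through OpenELB in BGP mode typically comes down to a LoadBalancer Service that carries OpenELB annotations and references an EIP pool. The annotation keys and the EIP name below are assumptions based on the Porter/OpenELB v0.x documentation, so verify them against the release you deploy.

```yaml
# A LoadBalancer Service announced by OpenELB over BGP.
# NOTE: the annotation keys are assumptions from the Porter/OpenELB v0.x docs;
# check them against the OpenELB version you install.
kind: Service
apiVersion: v1
metadata:
  name: demo-svc
  namespace: demo-project
  annotations:
    lb.kubesphere.io/v1alpha1: porter
    protocol.porter.kubesphere.io/v1alpha1: bgp
    eip.porter.kubesphere.io/v1alpha2: eip-sample-pool
spec:
  type: LoadBalancer
  selector:
    app: demo-app
  ports:
  - port: 80
    targetPort: 8080
```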
For more information, please see [this article](https://kubesphere.io/conferences/porter/).
|
||||
|
|
@ -0,0 +1,124 @@
|
|||
---
|
||||
title: '三步搞定 ARM64 离线部署 Kubernetes + KubeSphere'
|
||||
tag: 'Kubernetes,KubeSphere,arm'
|
||||
keywords: 'Kubernetes, KubeSphere, ARM64, 信创'
|
||||
description: 'KubeSphere 作为一款深受国内外开发者所喜爱的开源容器平台,也将积极参与并探索在 ARM 架构下的应用与创新。本文将主要介绍如何在 ARM64 环境下部署 Kubernetes 和 KubeSphere。'
|
||||
createTime: '2021-03-29'
|
||||
author: '郭峰'
|
||||
snapshot: 'https://pek3b.qingstor.com/kubesphere-community/images/arm.png'
|
||||
---
|
||||
|
||||
### 背景
|
||||
|
||||
由于 ARM 架构具有低功耗和并行好的特点,其应用也将会越来越广泛。KubeSphere 作为一款深受国内外开发者所喜爱的开源容器平台,也将积极参与并探索在 ARM 架构下的应用与创新。本文将主要介绍如何在 ARM64 环境下部署 Kubernetes 和 KubeSphere。
|
||||
|
||||
### 环境准备
|
||||
#### 节点
|
||||
KubeSphere 支持的操作系统包括:
|
||||
- Ubuntu 16.04, 18.04
|
||||
- Debian Buster, Stretch
|
||||
- CentOS/RHEL 7
|
||||
- SUSE Linux Enterprise Server 15
|
||||
- openEuler
|
||||
|
||||
这里以一台 openEuler 20.09 64bit 为例:
|
||||
|name|ip|role|
|---|---|---|
|node1|172.169.102.249|etcd, master, worker|
|
||||
|
||||
确保机器已经安装所需依赖软件(sudo curl openssl ebtables socat ipset conntrack docker)
|
||||
|
||||
[具体环境要求参见](https://github.com/kubesphere/kubekey/tree/release-1.0#requirements-and-recommendations)
|
||||
|
||||
关于多节点安装请参考 [KubeSphere 官方文档](https://kubesphere.com.cn/docs/installing-on-linux/introduction/multioverview/)。
|
||||
|
||||
> 建议:可将安装了所有依赖软件的操作系统制作成系统镜像使用,避免每台机器都安装依赖软件,既可提升交付部署效率,又可避免依赖问题的发生。
|
||||
|
||||
|
||||
> 提示:如使用 centos7.x、ubuntu18.04,则可以选择使用 kk 命令对机器进行初始化。
|
||||
> 解压安装包,并创建好配置文件之后(创建方法请看下文),可执行如下命令对节点进行初始化:
|
||||
> `./kk init os -s ./dependencies -f config-example.yaml`
|
||||
> 如使用该命令遇到依赖问题,可自行安装相关依赖软件。
|
||||
|
||||
#### 镜像仓库
|
||||
可使用 harbor 或其他第三方镜像仓库。
|
||||
|
||||
> 提示:可使用 kk 命令自动创建测试用自签名镜像仓库。注意,请确保当前机器存在`registry:2`,如没有,可从解压包 kubesphere-images-v3.0.0/registry.tar 中导入,导入命令:`docker load < registry.tar`。
|
||||
> 创建测试用自签名镜像仓库:
|
||||
> `./kk init os -f config-example.yaml --add-images-repo`
|
||||
> 注意:由 kk 启动的镜像仓库端口为443,请确保所有机器均可访问当前机器443端口。镜像数据存储到本地/mnt/registry (建议单独挂盘)。
|
||||
|
||||
### 安装包下载:
|
||||
> 提示:该安装包仅包含 Kubernetes + KubeSphere-core 镜像,如需更多组件 arm64 镜像,可自行编译构建。
|
||||
|
||||
```
|
||||
# md5: 3ad57823faf2dfe945e2fe3dcfd4ace9
|
||||
curl -Ok https://kubesphere-installer.pek3b.qingstor.com/offline/v3.0.0/kubesphere-core-v3.0.0-offline-linux-arm64.tar.gz
|
||||
```
|
||||
### 安装步骤:
|
||||
#### 1. 创建集群配置文件
|
||||
安装包解压后进入`kubesphere-core-v3.0.0-offline-linux-arm64`
|
||||
```
|
||||
./kk create config
|
||||
```
|
||||
根据实际环境信息修改生成的配置文件`config-sample.yaml`,也可使用-f参数自定义配置文件路径。kk 详细用法可参考:https://github.com/kubesphere/kubekey
|
||||
|
||||
> 注意填写正确的私有仓库地址`privateRegistry`(如已准备好私有仓库可设置为已有仓库地址,若使用 kk 创建私有仓库,则该参数设置为:dockerhub.kubekey.local)
|
||||
|
||||
```
|
||||
apiVersion: kubekey.kubesphere.io/v1alpha1
|
||||
kind: Cluster
|
||||
metadata:
|
||||
name: sample
|
||||
spec:
|
||||
hosts:
|
||||
# 注意指定节点 arch 为 arm64
|
||||
- {name: node1, address: 172.169.102.249, internalAddress: 172.169.102.249, password: Qcloud@123, arch: arm64}
|
||||
roleGroups:
|
||||
etcd:
|
||||
- node1
|
||||
control-plane:
|
||||
- node1
|
||||
worker:
|
||||
- node1
|
||||
controlPlaneEndpoint:
|
||||
domain: lb.kubesphere.local
|
||||
address: ""
|
||||
port: 6443
|
||||
kubernetes:
|
||||
version: v1.17.9
|
||||
imageRepo: kubesphere
|
||||
clusterName: cluster.local
|
||||
network:
|
||||
plugin: calico
|
||||
kubePodsCIDR: 10.233.64.0/18
|
||||
kubeServiceCIDR: 10.233.0.0/18
|
||||
registry:
|
||||
registryMirrors: []
|
||||
insecureRegistries: []
|
||||
privateRegistry: dockerhub.kubekey.local
|
||||
addons: []
|
||||
|
||||
```
|
||||
#### 2. 导入镜像
|
||||
进入`kubesphere-all-v3.0.0-offline-linux-arm64/kubesphere-images-v3.0.0`
|
||||
使用 offline-installation-tool.sh 将镜像导入之前准备的仓库中:
|
||||
```
|
||||
# 脚本后镜像仓库地址请填写真实仓库地址
|
||||
./offline-installation-tool.sh -l images-list-v3.0.0.txt -d kubesphere-images -r dockerhub.kubekey.local
|
||||
```
|
||||
|
||||
#### 3. 执行安装
|
||||
```
|
||||
# 以上准备工作完成且再次检查配置文件无误后,执行安装。
|
||||
./kk create cluster -f config-sample.yaml --with-kubesphere
|
||||
```
|
||||
|
||||
### 查看结果
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
|
||||
|
||||
|
|
@ -0,0 +1,124 @@
|
|||
---
|
||||
title: '三步搞定 ARM64 离线部署 Kubernetes + KubeSphere'
|
||||
tag: 'Kubernetes,KubeSphere,arm'
|
||||
keywords: 'Kubernetes, KubeSphere, ARM64, 信创'
|
||||
description: 'KubeSphere 作为一款深受国内外开发者所喜爱的开源容器平台,也将积极参与并探索在 ARM 架构下的应用与创新。本文将主要介绍如何在 ARM64 环境下部署 Kubernetes 和 KubeSphere。'
|
||||
createTime: '2021-03-29'
|
||||
author: '郭峰'
|
||||
snapshot: 'https://pek3b.qingstor.com/kubesphere-community/images/arm.png'
|
||||
---
|
||||
|
||||
### 背景
|
||||
|
||||
由于 ARM 架构具有低功耗和并行好的特点,其应用也将会越来越广泛。KubeSphere 作为一款深受国内外开发者所喜爱的开源容器平台,也将积极参与并探索在 ARM 架构下的应用与创新。本文将主要介绍如何在 ARM64 环境下部署 Kubernetes 和 KubeSphere。
|
||||
|
||||
### 环境准备
|
||||
#### 节点
|
||||
KubeSphere 支持的操作系统包括:
|
||||
- Ubuntu 16.04, 18.04
|
||||
- Debian Buster, Stretch
|
||||
- CentOS/RHEL 7
|
||||
- SUSE Linux Enterprise Server 15
|
||||
- openEuler
|
||||
|
||||
这里以一台 openEuler 20.09 64bit 为例:
|
||||
|name|ip|role|
|---|---|---|
|node1|172.169.102.249|etcd, control plane, worker|
|
||||
|
||||
确保机器已经安装所需依赖软件(sudo curl openssl ebtables socat ipset conntrack docker)
|
||||
|
||||
[具体环境要求参见](https://github.com/kubesphere/kubekey/tree/release-1.0#requirements-and-recommendations)
|
||||
|
||||
关于多节点安装请参考 [KubeSphere 官方文档](https://kubesphere.com.cn/docs/installing-on-linux/introduction/multioverview/)。
|
||||
|
||||
> 建议:可将安装了所有依赖软件的操作系统制作成系统镜像使用,避免每台机器都安装依赖软件,既可提升交付部署效率,又可避免依赖问题的发生。
|
||||
|
||||
|
||||
> 提示:如使用 centos7.x、ubuntu18.04,则可以选择使用 kk 命令对机器进行初始化。
|
||||
> 解压安装包,并创建好配置文件之后(创建方法请看下文),可执行如下命令对节点进行初始化:
|
||||
> `./kk init os -s ./dependencies -f config-example.yaml`
|
||||
> 如使用该命令遇到依赖问题,可自行安装相关依赖软件。
|
||||
|
||||
#### 镜像仓库
|
||||
可使用 harbor 或其他第三方镜像仓库。
|
||||
|
||||
> 提示:可使用 kk 命令自动创建测试用自签名镜像仓库。注意,请确保当前机器存在`registry:2`,如没有,可从解压包 kubesphere-images-v3.0.0/registry.tar 中导入,导入命令:`docker load < registry.tar`。
|
||||
> 创建测试用自签名镜像仓库:
|
||||
> `./kk init os -f config-example.yaml --add-images-repo`
|
||||
> 注意:由 kk 启动的镜像仓库端口为443,请确保所有机器均可访问当前机器443端口。镜像数据存储到本地/mnt/registry (建议单独挂盘)。
|
||||
|
||||
### 安装包下载:
|
||||
> 提示:该安装包仅包含 Kubernetes + KubeSphere-core 镜像,如需更多组件 arm64 镜像,可自行编译构建。
|
||||
|
||||
```
|
||||
# md5: 3ad57823faf2dfe945e2fe3dcfd4ace9
|
||||
curl -Ok https://kubesphere-installer.pek3b.qingstor.com/offline/v3.0.0/kubesphere-core-v3.0.0-offline-linux-arm64.tar.gz
|
||||
```
|
||||
### 安装步骤:
|
||||
#### 1. 创建集群配置文件
|
||||
安装包解压后进入`kubesphere-core-v3.0.0-offline-linux-arm64`
|
||||
```
|
||||
./kk create config
|
||||
```
|
||||
根据实际环境信息修改生成的配置文件`config-sample.yaml`,也可使用-f参数自定义配置文件路径。kk 详细用法可参考:https://github.com/kubesphere/kubekey
|
||||
|
||||
> 注意填写正确的私有仓库地址`privateRegistry`(如已准备好私有仓库可设置为已有仓库地址,若使用 kk 创建私有仓库,则该参数设置为:dockerhub.kubekey.local)
|
||||
|
||||
```
|
||||
apiVersion: kubekey.kubesphere.io/v1alpha1
|
||||
kind: Cluster
|
||||
metadata:
|
||||
name: sample
|
||||
spec:
|
||||
hosts:
|
||||
# 注意指定节点 arch 为 arm64
|
||||
- {name: node1, address: 172.169.102.249, internalAddress: 172.169.102.249, password: Qcloud@123, arch: arm64}
|
||||
roleGroups:
|
||||
etcd:
|
||||
- node1
|
||||
control-plane:
|
||||
- node1
|
||||
worker:
|
||||
- node1
|
||||
controlPlaneEndpoint:
|
||||
domain: lb.kubesphere.local
|
||||
address: ""
|
||||
port: 6443
|
||||
kubernetes:
|
||||
version: v1.17.9
|
||||
imageRepo: kubesphere
|
||||
clusterName: cluster.local
|
||||
network:
|
||||
plugin: calico
|
||||
kubePodsCIDR: 10.233.64.0/18
|
||||
kubeServiceCIDR: 10.233.0.0/18
|
||||
registry:
|
||||
registryMirrors: []
|
||||
insecureRegistries: []
|
||||
privateRegistry: dockerhub.kubekey.local
|
||||
addons: []
|
||||
|
||||
```
|
||||
#### 2. 导入镜像
|
||||
进入`kubesphere-all-v3.0.0-offline-linux-arm64/kubesphere-images-v3.0.0`
|
||||
使用 offline-installation-tool.sh 将镜像导入之前准备的仓库中:
|
||||
```
|
||||
# 脚本后镜像仓库地址请填写真实仓库地址
|
||||
./offline-installation-tool.sh -l images-list-v3.0.0.txt -d kubesphere-images -r dockerhub.kubekey.local
|
||||
```
|
||||
|
||||
#### 3. 执行安装
|
||||
```
|
||||
# 以上准备工作完成且再次检查配置文件无误后,执行安装。
|
||||
./kk create cluster -f config-sample.yaml --with-kubesphere
|
||||
```
|
||||
|
||||
### 查看结果
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
|
||||
|
||||
|
|
@ -0,0 +1,501 @@
|
|||
---
|
||||
title: '使用 KubeKey 快速离线部署 KubeSphere 集群'
|
||||
tag: 'KubeSphere, KubeKey'
|
||||
keywords: 'Kubernetes, KubeSphere, KubeKey'
|
||||
description: 'KubeKey 是一个用于部署 Kubernetes 集群的开源轻量级工具。KubeKey v2.0.0 版本新增了清单(manifest)和制品(artifact)的概念,为用户离线部署 Kubernetes 集群提供了一种解决方案。'
|
||||
createTime: '2022-03-11'
|
||||
author: '尹珉'
|
||||
snapshot: 'https://pek3b.qingstor.com/kubesphere-community/images/kubekey-kubesphere-cluster.png'
|
||||
---
|
||||
|
||||
## 一、KubeKey 介绍
|
||||
|
||||
KubeKey(以下简称 KK) 是一个用于部署 Kubernetes 集群的开源轻量级工具。它提供了一种灵活、快速、便捷的方式来仅安装 Kubernetes/K3s,或同时安装 Kubernetes/K3s 和 KubeSphere,以及其他云原生插件。除此之外,它也是扩展和升级集群的有效工具。
|
||||
|
||||
KubeKey v2.0.0 版本新增了清单(manifest)和制品(artifact)的概念,为用户离线部署 Kubernetes 集群提供了一种解决方案。在过去,用户需要准备部署工具,镜像 tar 包和其他相关的二进制文件,每位用户需要部署的 Kubernetes 版本和需要部署的镜像都是不同的。现在使用 KK,用户只需使用清单 manifest 文件来定义将要离线部署的集群环境需要的内容,再通过该 manifest 来导出制品 artifact 文件即可完成准备工作。离线部署时只需要 KK 和 artifact 就可快速、简单的在环境中部署镜像仓库和 Kubernetes 集群。
|
||||
|
||||
## 二、部署准备
|
||||
|
||||
### 1. 资源清单
|
||||
|
||||
| 名称 | 数量 | 用途 |
| ------------ | ---- | ------ |
| KubeSphere 3.2.1 | 1 | 源集群打包使用 |
| 服务器 | 2 | 离线环境部署使用 |
|
||||
|
||||
### 2. 源集群中下载解压 KK 2.0.0-rc-3
|
||||
|
||||
说明:由于 KK 版本不断更新,请以 GitHub 上最新的 Releases 版本为准
|
||||
|
||||
```bash
|
||||
$ wget https://github.com/kubesphere/kubekey/releases/download/v2.0.0-rc.3/kubekey-v2.0.0-rc.3-linux-amd64.tar.gz
|
||||
```
|
||||
```bash
|
||||
$ tar -zxvf kubekey-v2.0.0-rc.3-linux-amd64.tar.gz
|
||||
```
|
||||
|
||||
### 3. 源集群中使用 KK 创建 manifest
|
||||
|
||||
说明:manifest 就是一个描述当前 Kubernetes 集群信息和定义 artifact 制品中需要包含哪些内容的文本文件。目前有两种方式来生成该文件:
|
||||
|
||||
- 根据模板手动创建并编写该文件;
- 使用 KK 命令根据已存在的集群生成该文件。
|
||||
|
||||
```bash
|
||||
$ ./kk create manifest
|
||||
```
|
||||
|
||||
### 4. 源集群中修改 manifest 配置
|
||||
|
||||
说明:
|
||||
|
||||
1. repository 部分需要指定服务器系统的依赖 iso 包,可以直接在 url 中填入对应下载地址,或者提前下载 iso 包到本地,在 localPath 里填写本地存放路径并删除 url 配置项即可
|
||||
|
||||
2. 开启 harbor、docker-compose 配置项,为后面通过 KK 自建 harbor 仓库推送镜像使用
|
||||
|
||||
3. 默认创建的 manifest 里面的镜像列表从 docker.io 获取,建议按照以下示例修改为从青云仓库获取镜像
|
||||
|
||||
4. 可根据实际情况修改 manifest-sample.yaml 文件的内容,用以之后导出期望的 artifact 文件
|
||||
|
||||
```bash
|
||||
$ vim manifest.yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
---
|
||||
apiVersion: kubekey.kubesphere.io/v1alpha2
|
||||
kind: Manifest
|
||||
metadata:
|
||||
name: sample
|
||||
spec:
|
||||
arches:
|
||||
- amd64
|
||||
operatingSystems:
|
||||
- arch: amd64
|
||||
type: linux
|
||||
id: centos
|
||||
version: "7"
|
||||
repository:
|
||||
iso:
|
||||
localPath: /mnt/sdb/kk2.0-rc/kubekey/centos-7-amd64-rpms.iso
|
||||
url: #这里填写下载地址也可以
|
||||
kubernetesDistributions:
|
||||
- type: kubernetes
|
||||
version: v1.21.5
|
||||
components:
|
||||
helm:
|
||||
version: v3.6.3
|
||||
cni:
|
||||
version: v0.9.1
|
||||
etcd:
|
||||
version: v3.4.13
|
||||
## For now, if your cluster container runtime is containerd, KubeKey will add a docker 20.10.8 container runtime in the below list.
|
||||
## The reason is KubeKey creates a cluster with containerd by installing a docker first and making kubelet connect the socket file of containerd which docker contained.
|
||||
containerRuntimes:
|
||||
- type: docker
|
||||
version: 20.10.8
|
||||
crictl:
|
||||
version: v1.22.0
|
||||
##
|
||||
# docker-registry:
|
||||
# version: "2"
|
||||
harbor:
|
||||
version: v2.4.1
|
||||
docker-compose:
|
||||
version: v2.2.2
|
||||
images:
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.21.5
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.21.5
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.21.5
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.21.5
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.20.10
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.20.10
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.10
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.20.10
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.19.9
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.19.9
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.19.9
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.19.9
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.4.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.20.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.20.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.20.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.20.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.20.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:2.10.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:2.10.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.2.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.2.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.2.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.2.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v0.48.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.7.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/edge-watcher:v0.1.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/edge-watcher-agent:v0.1.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.2.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:v3.2.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:v3.2.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:v3.2.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.2.0-2.249.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/jnlp-slave:3.27-1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.0-podman
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.3.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.26.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.43.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.43.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v1.9.7
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v0.18.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-prometheus-adapter-amd64:v0.6.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.21.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.18.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:7.4.3
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.7.0-1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.11.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.3
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.3.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.3.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.3.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.11.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.11.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.27
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.27
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.27
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.27
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.27
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.38.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.38
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/nginx:1.14-alpine
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/wget:1.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/hello:plain-text
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/wordpress:4.8-apache
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/hpa-example:latest
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/java:openjdk-8-jre-alpine
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/fluentd:v1.4.2-2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/perl:latest
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-productpage-v1:1.16.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v1:1.16.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v2:1.16.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-details-v1:1.16.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-ratings-v1:1.16.3
|
||||
registry:
|
||||
auths: {}
|
||||
```
|
||||
|
||||
### 5. 源集群中导出制品 artifact
|
||||
|
||||
说明:
|
||||
|
||||
制品就是一个根据指定的 manifest 文件内容导出的包含镜像 tar 包和相关二进制文件的 tgz 包。在 KK 初始化镜像仓库、创建集群、添加节点和升级集群的命令中均可指定一个 artifact,KK 将自动解包该 artifact 并将在执行命令时直接使用解包出来的文件。
|
||||
|
||||
注意:
|
||||
|
||||
1. 导出命令会从互联网中下载相应的二进制文件,请确保网络连接正常。
|
||||
|
||||
2. 导出命令会根据 manifest 文件中的镜像列表逐个拉取镜像,请确保 KK 的工作节点已安装 containerd 或最低版本为 18.09 的 docker。
|
||||
|
||||
3. KK 会解析镜像列表中的镜像名,若镜像名中的镜像仓库需要鉴权信息,可在 manifest 文件中的 .registry.auths 字段中进行配置。
|
||||
|
||||
4. 若需要导出的 artifact 文件中包含操作系统依赖文件(如:conntrack、chrony 等),可在 operatingSystems 元素中的 .repository.iso.url 中配置相应的 ISO 依赖文件下载地址。
|
||||
|
||||
```bash
|
||||
$ export KKZONE=cn
|
||||
$ ./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz
|
||||
#默认tar包的名字是kubekey-artifact.tar.gz,可通过-o参数自定义包名
|
||||
```
|
||||
|
||||
## 三、离线环境安装集群
|
||||
|
||||
### 1. 离线环境下载 KK
|
||||
|
||||
```bash
|
||||
$ wget https://github.com/kubesphere/kubekey/releases/download/v2.0.0-rc.3/kubekey-v2.0.0-rc.3-linux-amd64.tar.gz
|
||||
```
|
||||
|
||||
### 2. 创建离线集群配置文件
|
||||
|
||||
```bash
|
||||
$./kk create config --with-kubesphere v3.2.1 --with-kubernetes v1.21.5 -f config-sample.yaml
|
||||
```
|
||||
|
||||
### 3. 修改配置文件
|
||||
|
||||
```bash
|
||||
$ vim config-sample.yaml
|
||||
```
|
||||
|
||||
说明:
|
||||
|
||||
1. 按照实际离线环境配置修改节点信息
|
||||
2. 必须指定 registry 仓库部署节点(因为 KK 部署自建 harbor 仓库需要使用)
|
||||
3. registry 里必须指定 type 为 harbor;若不设置为 harbor,将默认安装 docker registry
|
||||
|
||||
```yaml
|
||||
apiVersion: kubekey.kubesphere.io/v1alpha2
|
||||
kind: Cluster
|
||||
metadata:
|
||||
name: sample
|
||||
spec:
|
||||
hosts:
|
||||
- {name: master, address: 192.168.149.133, internalAddress: 192.168.149.133, user: root, password: "Supaur@2022"}
|
||||
- {name: node1, address: 192.168.149.134, internalAddress: 192.168.149.134, user: root, password: "Supaur@2022"}
|
||||
|
||||
roleGroups:
|
||||
etcd:
|
||||
- master
|
||||
control-plane:
|
||||
- master
|
||||
worker:
|
||||
- node1
|
||||
# 如需使用 kk 自动部署镜像仓库,请设置该主机组 (建议仓库与集群分离部署,减少相互影响)
|
||||
registry:
|
||||
- node1
|
||||
controlPlaneEndpoint:
|
||||
## Internal loadbalancer for apiservers
|
||||
# internalLoadbalancer: haproxy
|
||||
|
||||
domain: lb.kubesphere.local
|
||||
address: ""
|
||||
port: 6443
|
||||
kubernetes:
|
||||
version: v1.21.5
|
||||
clusterName: cluster.local
|
||||
network:
|
||||
plugin: calico
|
||||
kubePodsCIDR: 10.233.64.0/18
|
||||
kubeServiceCIDR: 10.233.0.0/18
|
||||
## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
|
||||
multusCNI:
|
||||
enabled: false
|
||||
registry:
|
||||
# 如需使用 kk 部署 harbor, 可将该参数设置为 harbor,不设置该参数且需使用 kk 创建容器镜像仓库,将默认使用docker registry。
|
||||
type: harbor
|
||||
# 如使用 kk 部署的 harbor 或其他需要登录的仓库,可设置对应仓库的auths,如使用 kk 创建的 docker registry 仓库,则无需配置该参数。
|
||||
# 注意:如使用 kk 部署 harbor,该参数请于 harbor 启动后设置。
|
||||
#auths:
|
||||
# "dockerhub.kubekey.local":
|
||||
# username: admin
|
||||
# password: Harbor12345
|
||||
plainHTTP: false
|
||||
# 设置集群部署时使用的私有仓库
|
||||
privateRegistry: "dockerhub.kubekey.local"
|
||||
namespaceOverride: ""
|
||||
registryMirrors: []
|
||||
insecureRegistries: []
|
||||
addons: []
|
||||
```
|
||||
|
||||
### 4. 方式一:执行脚本创建 harbor 项目
|
||||
|
||||
**4.1 下载指定脚本初始化 harbor 仓库**
|
||||
|
||||
```bash
|
||||
$ curl -LO https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/create_project_harbor.sh
|
||||
```
|
||||
|
||||
**4.2 修改脚本配置文件**
|
||||
|
||||
说明:
|
||||
1. 修改 url 的值为 https://dockerhub.kubekey.local
|
||||
2. 需要指定仓库项目名称和镜像列表的项目名称保持一致
|
||||
3. 在脚本的 curl 命令末尾加上 -k
|
||||
```bash
|
||||
$ vim create_project_harbor.sh
|
||||
```
|
||||
```yaml
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Copyright 2018 The KubeSphere Authors.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
url="https://dockerhub.kubekey.local" #修改url的值为https://dockerhub.kubekey.local
|
||||
user="admin"
|
||||
passwd="Harbor12345"
|
||||
|
||||
harbor_projects=(library
|
||||
kubesphereio #需要指定仓库项目名称和镜像列表的项目名称保持一致
|
||||
)
|
||||
|
||||
for project in "${harbor_projects[@]}"; do
|
||||
echo "creating $project"
|
||||
curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k #curl命令末尾加上 -k
|
||||
done
|
||||
```
|
||||
```bash
|
||||
$ chmod +x create_project_harbor.sh
|
||||
```
|
||||
```bash
|
||||
$ ./create_project_harbor.sh
|
||||
```
|
||||
|
||||
**4.3 方式二:登录 harbor 仓库创建项目**
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
### 5. 使用 KK 安装镜像仓库
|
||||
|
||||
说明:
|
||||
1. config-sample.yaml(离线环境集群的配置文件)
|
||||
2. kubesphere.tar.gz(源集群打包出来的 tar 包镜像)
|
||||
3. harbor 安装文件在 /opt/harbor , 如需运维 harbor,可至该目录下。
|
||||
```bash
|
||||
$ ./kk init registry -f config-sample.yaml -a kubesphere.tar.gz
|
||||
```
|
||||
|
||||
### 6. 再次修改集群配置文件
|
||||
|
||||
说明:
|
||||
|
||||
1. 新增 auths 配置增加 dockerhub.kubekey.local、账号密码
|
||||
|
||||
2. privateRegistry 增加 dockerhub.kubekey.local
|
||||
|
||||
3. namespaceOverride 增加 kubesphereio(对应仓库里新建的项目)
|
||||
```bash
|
||||
$ vim config-sample.yaml
|
||||
```
|
||||
|
||||
```yaml
...
  registry:
    type: harbor
    auths:
      "dockerhub.kubekey.local":
        username: admin
        password: Harbor12345
    plainHTTP: false
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []
```

### 7. Install the KubeSphere cluster

Notes:

1. `config-sample.yaml` is the configuration file of the cluster in the air-gapped environment.
2. `kubesphere.tar.gz` is the image artifact packaged from the source cluster.
3. Specify the Kubernetes version and the KubeSphere version.
4. `--with-packages` must be added, otherwise the installation of the ISO dependencies fails.

```bash
$ ./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-kubernetes v1.21.5 --with-kubesphere v3.2.1 --with-packages
```

### 8. Check the cluster status

```bash
$ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```
```bash
**************************************************
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.149.133:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2022-02-28 23:30:06
#####################################################
```

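Once the installer reports success, a quick sanity check from the control plane node confirms that both nodes registered and are Ready; a minimal sketch:

```bash
# Both nodes should report STATUS=Ready once kubelet and the CNI are running
kubectl get nodes -o wide
```
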
### 9. Log in to the KubeSphere console



## 4. Conclusion

This tutorial uses KK 2.0.0 as the deployment tool to install a KubeSphere cluster in an air-gapped environment; KK can of course also deploy plain Kubernetes clusters. We hope KK helps you achieve fast offline delivery. If you have ideas or suggestions, feel free to open an issue in the [Kubekey repository](https://github.com/kubesphere/kubekey).

@ -0,0 +1,501 @@
---
title: 'Quickly Deploy a KubeSphere Cluster Offline with KubeKey'
tag: 'KubeSphere, KubeKey'
keywords: 'Kubernetes, KubeSphere, KubeKey'
description: 'KubeKey is an open-source, lightweight tool for deploying Kubernetes clusters. KubeKey v2.0.0 introduces the concepts of manifest and artifact, which provides a solution for deploying Kubernetes clusters offline.'
createTime: '2022-03-11'
author: '尹珉'
snapshot: 'https://pek3b.qingstor.com/kubesphere-community/images/kubekey-kubesphere-cluster.png'
---

## 1. Introduction to KubeKey

KubeKey (hereafter KK) is an open-source, lightweight tool for deploying Kubernetes clusters. It provides a flexible, fast, and convenient way to install only Kubernetes/K3s, or to install Kubernetes/K3s together with KubeSphere and other cloud-native add-ons. It is also an efficient tool for scaling and upgrading clusters.

KubeKey v2.0.0 introduces the concepts of manifest and artifact, which provides a solution for deploying Kubernetes clusters offline. Previously, users had to prepare the deployment tool, the image tarball, and the other related binaries themselves, and the Kubernetes version and images to be deployed differed from user to user. Now, with KK, a user only needs a manifest file to define what the offline cluster environment requires, and then exports an artifact from that manifest to complete the preparation. For the offline deployment itself, only KK and the artifact are needed to quickly and easily set up an image registry and a Kubernetes cluster.

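The overall workflow boils down to four KK commands, each detailed in the following sections; the sketch below is only an outline of the steps covered later in this article.

```bash
# On the source cluster (Internet access available)
./kk create manifest                                               # describe the existing cluster
./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz  # package images and binaries

# In the air-gapped environment
./kk init registry -f config-sample.yaml -a kubesphere.tar.gz      # set up the image registry
./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz \
  --with-kubernetes v1.21.5 --with-kubesphere v3.2.1 --with-packages
```
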
## 2. Preparations

### 1. Resource list

| Name | Quantity | Purpose |
| ---- | -------- | ------- |
| KubeSphere 3.2.1 | 1 | Source cluster, used for packaging |
| Servers | 2 | Used for the deployment in the air-gapped environment |

### 2. Download and extract KK v2.0.0-rc.3 on the source cluster

Note: KK is updated continuously, so please use the latest release on GitHub.

```bash
$ wget https://github.com/kubesphere/kubekey/releases/download/v2.0.0-rc.3/kubekey-v2.0.0-rc.3-linux-amd64.tar.gz
```
```bash
$ tar -zxvf kubekey-v2.0.0-rc.3-linux-amd64.tar.gz
```

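To confirm the binary was extracted correctly, you can print its version. This assumes the `kk` binary provides a `version` subcommand, which recent releases do:

```bash
# Print the KubeKey version to verify the download and extraction
./kk version
```
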
### 3. Create the manifest on the source cluster with KK

Note: a manifest is a text file that describes the current Kubernetes cluster and defines what the artifact should contain. There are currently two ways to generate it:

- Create and edit the file manually based on the template.
- Use the KK command to generate the file from an existing cluster.

```bash
$ ./kk create manifest
```

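The command writes a `manifest-sample.yaml` file describing the current cluster (the file name used in the rest of this tutorial). A quick look at the generated file before editing can be helpful; a minimal sketch:

```bash
# Skim the parts of the generated manifest that the next step modifies
grep -nE 'version:|repository|harbor|docker-compose' manifest-sample.yaml | head -n 20
```
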
### 4. Modify the manifest on the source cluster

Notes:

1. In the `repository` section, specify the ISO package that contains the OS dependencies. You can either fill in a download address in `url`, or download the ISO in advance, put its local path in `localPath`, and remove the `url` field.

2. Enable the `harbor` and `docker-compose` entries so that KK can later set up a self-hosted Harbor registry and push images to it.

3. The image list in the default manifest pulls from docker.io; it is recommended to change it to pull the images from the registry shown in the example below.

4. Adjust the content of `manifest-sample.yaml` to your actual situation so that the exported artifact contains what you expect.

```bash
$ vim manifest-sample.yaml
```

```yaml
---
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    repository:
      iso:
        localPath: /mnt/sdb/kk2.0-rc/kubekey/centos-7-amd64-rpms.iso
        url:  # a download address can be filled in here instead
  kubernetesDistributions:
  - type: kubernetes
    version: v1.21.5
  components:
    helm:
      version: v3.6.3
    cni:
      version: v0.9.1
    etcd:
      version: v3.4.13
    ## For now, if your cluster container runtime is containerd, KubeKey will add a docker 20.10.8 container runtime in the below list.
    ## The reason is KubeKey creates a cluster with containerd by installing a docker first and making kubelet connect the socket file of containerd which docker contained.
    containerRuntimes:
    - type: docker
      version: 20.10.8
    crictl:
      version: v1.22.0
    ##
    # docker-registry:
    #   version: "2"
    harbor:
      version: v2.4.1
    docker-compose:
      version: v2.2.2
images:
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.21.5
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.21.5
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.21.5
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.21.5
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.20.10
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.20.10
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.10
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.20.10
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.19.9
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.19.9
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.19.9
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.19.9
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.4.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.20.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.20.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.20.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.20.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.20.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:2.10.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:2.10.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.2.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.2.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.2.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.2.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v0.48.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.7.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/edge-watcher:v0.1.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/edge-watcher-agent:v0.1.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.2.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:v3.2.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:v3.2.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:v3.2.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.2.0-2.249.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/jnlp-slave:3.27-1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.0-podman
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.3.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.26.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.43.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.43.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v1.9.7
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v0.18.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-prometheus-adapter-amd64:v0.6.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.21.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.18.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:7.4.3
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.7.0-1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.11.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.3
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.3.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.3.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.3.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.11.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.11.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.27
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.27
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.27
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.27
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.27
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.38.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.38
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/nginx:1.14-alpine
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/wget:1.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/hello:plain-text
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/wordpress:4.8-apache
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/hpa-example:latest
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/java:openjdk-8-jre-alpine
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/fluentd:v1.4.2-2.0
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/perl:latest
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-productpage-v1:1.16.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v1:1.16.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v2:1.16.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-details-v1:1.16.2
|
||||
- registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-ratings-v1:1.16.3
|
||||
  registry:
    auths: {}
```

### 5. Export the artifact from the source cluster

Note:

An artifact is a tgz package, exported according to the specified manifest file, that contains the image tarball and the related binary files. An artifact can be specified in the KK commands that initialize the image registry, create a cluster, add nodes, or upgrade a cluster; KK automatically unpacks the artifact and uses the unpacked files directly while executing the command.

Caution:

1. The export command downloads the corresponding binaries from the Internet, so make sure the network connection works.

2. The export command pulls the images in the manifest image list one by one, so make sure the node running KK has containerd or at least Docker 18.09 installed.

3. KK parses the image names in the image list; if a registry in the image names requires authentication, it can be configured in the `.registry.auths` field of the manifest file.

4. If the exported artifact needs to contain OS dependency files (such as conntrack and chrony), configure the download address of the corresponding ISO file in `.repository.iso.url` of the matching `operatingSystems` element.

```bash
$ export KKZONE=cn
$ ./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz
# The default artifact name is kubekey-artifact.tar.gz; use -o to customize the name
```

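Before moving the artifact into the air-gapped environment, it is worth confirming that the export produced a complete archive; a minimal check:

```bash
# Confirm the artifact exists and is a readable .tar.gz archive
ls -lh kubesphere.tar.gz
tar -tzf kubesphere.tar.gz | head
```
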
## 3. Install the Cluster in the Air-Gapped Environment

### 1. Download KK for the air-gapped environment

```bash
$ wget https://github.com/kubesphere/kubekey/releases/download/v2.0.0-rc.3/kubekey-v2.0.0-rc.3-linux-amd64.tar.gz
```

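Since the target machines have no Internet access, download KK on a connected machine and copy both KK and the artifact over. A sketch, where `192.168.149.133` is the offline control plane node used later in this tutorial (replace the address and path with your own):

```bash
# Copy KubeKey and the artifact to the offline node (address and path are examples)
scp kubekey-v2.0.0-rc.3-linux-amd64.tar.gz kubesphere.tar.gz root@192.168.149.133:/root/
```
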
### 2. Create the configuration file for the offline cluster

```bash
$ ./kk create config --with-kubesphere v3.2.1 --with-kubernetes v1.21.5 -f config-sample.yaml
```

### 3. Modify the configuration file

```bash
$ vim config-sample.yaml
```

Notes:

1. Modify the node information according to your actual offline environment.
2. You must specify a node for the `registry` host group, because KK needs it to deploy the self-hosted Harbor registry.
3. In the `registry` section, `type` must be set to `harbor`; otherwise a Docker registry is installed by default.

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 192.168.149.133, internalAddress: 192.168.149.133, user: root, password: "Supaur@2022"}
  - {name: node1, address: 192.168.149.134, internalAddress: 192.168.149.134, user: root, password: "Supaur@2022"}

  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - node1
    # Set this host group if you want kk to deploy the image registry automatically (deploying the registry separately from the cluster is recommended to reduce mutual interference)
    registry:
    - node1
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.21.5
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    # To deploy Harbor with kk, set this parameter to harbor. If it is left unset and kk is asked to create an image registry, a Docker registry is used by default.
    type: harbor
    # For a Harbor registry deployed by kk, or any other registry that requires login, set the auths of that registry. For a Docker registry created by kk, this parameter is not needed.
    # Note: if Harbor is deployed by kk, set this parameter only after Harbor has started.
    #auths:
    #  "dockerhub.kubekey.local":
    #    username: admin
    #    password: Harbor12345
    plainHTTP: false
    # The private registry used during cluster deployment
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
```

### 4. Method 1: Create the Harbor projects with a script

**4.1 Download the script that initializes the Harbor registry**

```bash
$ curl https://github.com/kubesphere/ks-installer/blob/master/scripts/create_project_harbor.sh
```

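Note that the address above points at the GitHub HTML page rather than the script itself; fetching the raw file usually requires the raw.githubusercontent.com address instead. This is a sketch, assuming the script path in ks-installer is unchanged:

```bash
# Fetch the raw script (assumed path; verify it in the ks-installer repository)
curl -LO https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/create_project_harbor.sh
```
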
**4.2 Modify the script**

Notes:

1. Set the value of `url` to `https://dockerhub.kubekey.local`.
2. The Harbor project names must be kept consistent with the project names used in the image list.
3. Append `-k` to the `curl` command at the end of the script.

```bash
$ vim create_project_harbor.sh
```

```bash
#!/usr/bin/env bash

# Copyright 2018 The KubeSphere Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

url="https://dockerhub.kubekey.local"  # Set the url value to https://dockerhub.kubekey.local
user="admin"
passwd="Harbor12345"

harbor_projects=(library
    kubesphereio  # The project name must match the project name used in the image list
)

for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k  # Append -k to skip certificate verification
done
```

```bash
$ chmod +x create_project_harbor.sh
```
```bash
$ ./create_project_harbor.sh
```

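You can verify that the projects were created through the Harbor API, using the same credentials as the script; `-k` skips verification of the self-signed certificate:

```bash
# List the Harbor projects; "library" and "kubesphereio" should both appear
curl -k -u admin:Harbor12345 "https://dockerhub.kubekey.local/api/v2.0/projects"
```
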
**4.3 Method 2: Create the projects on the Harbor web console**



### 5. Install the image registry with KK

Notes:

1. `config-sample.yaml` is the configuration file of the cluster in the air-gapped environment.
2. `kubesphere.tar.gz` is the image artifact packaged from the source cluster.
3. The Harbor installation files are placed under `/opt/harbor`; go to that directory if you need to operate and maintain Harbor.

```bash
$ ./kk init registry -f config-sample.yaml -a kubesphere.tar.gz
```

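A quick way to confirm that the registry is reachable from the cluster nodes is to log in to it with Docker. This sketch assumes `dockerhub.kubekey.local` already resolves on the node (KK normally writes the entry into `/etc/hosts` when it sets up the registry):

```bash
# Log in to the self-hosted Harbor registry created by kk
docker login dockerhub.kubekey.local -u admin -p Harbor12345
```
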
### 6. Modify the cluster configuration file again

Notes:

1. Add an `auths` entry for `dockerhub.kubekey.local` together with the username and password.
2. Set `privateRegistry` to `dockerhub.kubekey.local`.
3. Set `namespaceOverride` to `kubesphereio` (the project created in the registry).

```bash
$ vim config-sample.yaml
```

```yaml
...
  registry:
    type: harbor
    auths:
      "dockerhub.kubekey.local":
        username: admin
        password: Harbor12345
    plainHTTP: false
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []
```

### 7. Install the KubeSphere cluster

Notes:

1. `config-sample.yaml` is the configuration file of the cluster in the air-gapped environment.
2. `kubesphere.tar.gz` is the image artifact packaged from the source cluster.
3. Specify the Kubernetes version and the KubeSphere version.
4. `--with-packages` must be added, otherwise the installation of the ISO dependencies fails.

```bash
$ ./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-kubernetes v1.21.5 --with-kubesphere v3.2.1 --with-packages
```

### 8. Check the cluster status

```bash
$ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```
```bash
**************************************************
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.149.133:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2022-02-28 23:30:06
#####################################################
```

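Besides following the installer log, you can verify that the workloads themselves are healthy before logging in; a minimal sketch (the `ks-console` Service is what exposes port 30880):

```bash
# Any Pod still not Running will be listed here
kubectl get pods -A | grep -v Running
# The console Service should expose NodePort 30880
kubectl get svc -n kubesphere-system ks-console
```
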
### 9. Log in to the KubeSphere console



## 4. Conclusion

This tutorial uses KK 2.0.0 as the deployment tool to install a KubeSphere cluster in an air-gapped environment; KK can of course also deploy plain Kubernetes clusters. We hope KK helps you achieve fast offline delivery. If you have ideas or suggestions, feel free to open an issue in the [Kubekey repository](https://github.com/kubesphere/kubekey).

@ -27,7 +27,7 @@ Cluster nodes are only accessible to cluster administrators. Some node metrics a
|
|||
|
||||
- **Name**: The node name and subnet IP address.
|
||||
- **Status**: The current status of a node, indicating whether a node is available or not.
|
||||
- **Role**: The role of a node, indicating whether a node is a worker or master.
|
||||
- **Role**: The role of a node, indicating whether a node is a worker or the control plane.
|
||||
- **CPU Usage**: The real-time CPU usage of a node.
|
||||
- **Memory Usage**: The real-time memory usage of a node.
|
||||
- **Pods**: The real-time usage of Pods on a node.
|
||||
|
|
|
|||
|
|
@ -60,7 +60,7 @@ Usually, a cluster can be used after restarting, but the cluster may be unavaila
|
|||
Ensure all cluster dependencies are ready, such as external storage.
|
||||
### Step 2: Power on cluster machines
|
||||
Wait for the cluster to be up and running, which may take about 10 minutes.
|
||||
### Step 3: Check all master nodes' status
|
||||
### Step 3: Check the status of all control plane components
|
||||
Check the status of core components, such as etcd services, and make sure everything is ready.
|
||||
```bash
|
||||
kubectl get nodes -l node-role.kubernetes.io/master
|
||||
|
|
|
|||
|
|
@ -11,7 +11,7 @@ When you use KubeKey to set up a cluster, you create a configuration file which
|
|||
```bash
|
||||
spec:
|
||||
hosts:
|
||||
- {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
|
||||
- {name: control plane, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
|
||||
- {name: node1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
|
||||
- {name: node2, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
|
||||
```
|
||||
|
|
@ -30,7 +30,7 @@ If you see an error message as above, verify that:
|
|||
|
||||
```bash
|
||||
hosts:
|
||||
- {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, port: 8022, user: ubuntu, password: Testing123}
|
||||
- {name: control plane, address: 192.168.0.2, internalAddress: 192.168.0.2, port: 8022, user: ubuntu, password: Testing123}
|
||||
```
|
||||
|
||||
- SSH connections are not restricted in `/etc/ssh/sshd_config`. For example, `PasswordAuthentication` should be set to `true`.
|
||||
|
|
|
|||
|
|
@ -30,7 +30,7 @@ You need to select:
|
|||
|
||||
- To install KubeSphere 3.2.1 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, or v1.22.x (experimental).
|
||||
- 2 nodes are included in this example. You can add more nodes based on your own needs especially in a production environment.
|
||||
- The machine type Standard / 4 GB / 2 vCPUs is for minimal installation. If you plan to enable several pluggable components or use the cluster for production, you can upgrade your nodes to a more powerfull type (such as CPU-Optimized / 8 GB / 4 vCPUs). It seems that DigitalOcean provisions the master nodes based on the type of the worker nodes, and for Standard ones the API server can become unresponsive quite soon.
|
||||
- The machine type Standard / 4 GB / 2 vCPUs is for minimal installation. If you plan to enable several pluggable components or use the cluster for production, you can upgrade your nodes to a more powerful type (such as CPU-Optimized / 8 GB / 4 vCPUs). It seems that DigitalOcean provisions the control plane nodes based on the type of the worker nodes, and for Standard ones the API server can become unresponsive quite soon.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
|
|
|
|||
|
|
@ -78,7 +78,7 @@ Perform the following steps to configure [EdgeMesh](https://kubeedge.io/en/docs/
|
|||
|
||||
## Create Firewall Rules and Port Forwarding Rules
|
||||
|
||||
To make sure edge nodes can successfully talk to your cluster, you must forward ports for outside traffic to get into your network. Specifically, map an external port to the corresponding internal IP address (master node) and port based on the table below. Besides, you also need to create firewall rules to allow traffic to these ports (`10000` to `10004`).
|
||||
To make sure edge nodes can successfully talk to your cluster, you must forward ports for outside traffic to get into your network. Specifically, map an external port to the corresponding internal IP address (control plane node) and port based on the table below. Besides, you also need to create firewall rules to allow traffic to these ports (`10000` to `10004`).
|
||||
|
||||
| Fields | External Ports | Fields | Internal Ports |
|
||||
| ------------------- | -------------- | ----------------------- | -------------- |
|
||||
|
|
@ -125,7 +125,7 @@ To make sure edge nodes can successfully talk to your cluster, you must forward
|
|||
|
||||
{{</ notice >}}
|
||||
|
||||
6. After an edge node joins your cluster, some Pods may be scheduled to it while they remains in the `Pending` state on the edge node. Due to the tolerations some DaemonSets (for example, Calico) have, in the current version (KubeSphere 3.2.1), you need to manually patch some Pods so that they will not be schedule to the edge node.
|
||||
6. After an edge node joins your cluster, some Pods may be scheduled to it while they remain in the `Pending` state on the edge node. Due to the tolerations some DaemonSets (for example, Calico) have, in the current version (KubeSphere 3.2.1), you need to manually patch some Pods so that they will not be scheduled to the edge node.
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
|
|
|
|||
|
|
@ -6,13 +6,13 @@ linkTitle: "Set up an HA Cluster Using a Load Balancer"
|
|||
weight: 3220
|
||||
---
|
||||
|
||||
You can set up a single-master Kubernetes cluster with KubeSphere installed based on the tutorial of [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/). Single-master clusters may be sufficient for development and testing in most cases. For a production environment, however, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
|
||||
You can set up Kubernetes cluster (a control plane node) with KubeSphere installed based on the tutorial of [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/). Clusters with a control plane node may be sufficient for development and testing in most cases. For a production environment, however, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same control plane node, Kubernetes and KubeSphere will be unavailable once the control plane node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple control plane nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
|
||||
|
||||
This tutorial demonstrates the general configurations of a high-availability cluster as you install KubeSphere on Linux.
|
||||
|
||||
## Architecture
|
||||
|
||||
Make sure you have prepared six Linux machines before you begin, with three of them serving as master nodes and the other three as worker nodes. The following image shows details of these machines, including their private IP address and role. For more information about system and network requirements, see [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/#step-1-prepare-linux-hosts).
|
||||
Make sure you have prepared six Linux machines before you begin, with three of them serving as control plane nodes and the other three as worker nodes. The following image shows details of these machines, including their private IP address and role. For more information about system and network requirements, see [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/#step-1-prepare-linux-hosts).
|
||||
|
||||

|
||||
|
||||
|
|
|
|||
|
|
@ -33,7 +33,7 @@ If you have an existing Kubernetes cluster, see [Overview of Installing on Kuber
|
|||
## Before Installation
|
||||
|
||||
- As images will be pulled from the Internet, your environment must have Internet access. Otherwise, you need to [install KubeSphere in an air-gapped environment](../air-gapped-installation/).
|
||||
- For all-in-one installation, the only one node is both the master and the worker.
|
||||
- For all-in-one installation, the only one node is both the control plane and the worker.
|
||||
- For multi-node installation, you need to provide host information in a configuration file.
|
||||
- See [Port Requirements](../port-firewall/) before installation.
|
||||
|
||||
|
|
|
|||
|
|
@ -269,7 +269,7 @@ At the same time, you must provide the login information used to connect to each
|
|||
|
||||
#### controlPlaneEndpoint (for HA installation only)
|
||||
|
||||
The `controlPlaneEndpoint` is where you provide your external load balancer information for an HA cluster. You need to prepare and configure the external load balancer if and only if you need to install multiple master nodes. Please note that the address and port should be indented by two spaces in `config-sample.yaml`, and `address` should be your load balancer's IP address. See [HA Configurations](../../../installing-on-linux/high-availability-configurations/ha-configuration/) for details.
|
||||
The `controlPlaneEndpoint` is where you provide your external load balancer information for an HA cluster. You need to prepare and configure the external load balancer if and only if you need to install multiple control plane nodes. Please note that the address and port should be indented by two spaces in `config-sample.yaml`, and `address` should be your load balancer's IP address. See [HA Configurations](../../../installing-on-linux/high-availability-configurations/ha-configuration/) for details.
|
||||
|
||||
#### addons
|
||||
|
||||
|
|
|
|||
|
|
@ -9,9 +9,9 @@ weight: 3510
|
|||
|
||||
## Introduction
|
||||
|
||||
For a production environment, we need to consider the high availability of the cluster. If the key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAProxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
|
||||
For a production environment, we need to consider the high availability of the cluster. If the key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same control plane node, Kubernetes and KubeSphere will be unavailable once the control plane node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers with multiple control plane nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAProxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
|
||||
|
||||
This tutorial walks you through an example of how to create Keepalived and HAProxy, and implement high availability of master and etcd nodes using the load balancers on VMware vSphere.
|
||||
This tutorial walks you through an example of how to create Keepalived and HAProxy, and implement high availability of control plane and etcd nodes using the load balancers on VMware vSphere.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
|
|
@ -83,7 +83,7 @@ You can follow the New Virtual Machine wizard to create a virtual machine to pla
|
|||
|
||||
## Install a Load Balancer using Keepalived and HAProxy
|
||||
|
||||
For a production environment, you have to prepare an external load balancer for your multiple-master cluster. If you do not have a load balancer, you can install it using Keepalived and HAProxy. If you are provisioning a development or testing environment by installing a single-master cluster, please skip this section.
|
||||
For a production environment, you have to prepare an external load balancer for your cluster with multiple control plane nodes. If you do not have a load balancer, you can install it using Keepalived and HAProxy. If you are provisioning a development or testing environment by installing a cluster with a control plane node, please skip this section.
|
||||
|
||||
### Yum Install
|
||||
|
||||
|
|
|
|||
|
|
@ -27,7 +27,7 @@ Besides these VMs, other resources like Load Balancer, Virtual Network and Netwo
|
|||
|
||||
## Architecture
|
||||
|
||||
Six machines of **Ubuntu 18.04** will be deployed in an Azure Resource Group. Three of them are grouped into an availability set, serving as both the Master and etcd nodes. The other three VMs will be defined as a VMSS where Worker nodes will be running.
|
||||
Six machines of **Ubuntu 18.04** will be deployed in an Azure Resource Group. Three of them are grouped into an availability set, serving as both the control plane and etcd nodes. The other three VMs will be defined as a VMSS where Worker nodes will be running.
|
||||
|
||||

|
||||
|
||||
|
|
|
|||
|
|
@ -8,9 +8,9 @@ Weight: 3420
|
|||
|
||||
## Introduction
|
||||
|
||||
For a production environment, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
|
||||
For a production environment, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same control plane node, Kubernetes and KubeSphere will be unavailable once the control plane node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple control plane nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
|
||||
|
||||
This tutorial walks you through an example of how to create two [QingCloud load balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal load balancer and external load balancer respectively, and of how to implement high availability of master and etcd nodes using the load balancers.
|
||||
This tutorial walks you through an example of how to create two [QingCloud load balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal load balancer and external load balancer respectively, and of how to implement high availability of control plane and etcd nodes using the load balancers.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
|
|
@ -20,7 +20,7 @@ This tutorial walks you through an example of how to create two [QingCloud load
|
|||
|
||||
## Architecture
|
||||
|
||||
This example prepares six machines of **Ubuntu 16.04.6**. You will create two load balancers, and deploy three master and etcd nodes on three of the machines. You can configure these master and etcd nodes in `config-sample.yaml` created by KubeKey (Please note that this is the default name, which can be changed by yourself).
|
||||
This example prepares six machines of **Ubuntu 16.04.6**. You will create two load balancers, and deploy three control plane nodes and etcd nodes on three of the machines. You can configure these control plane and etcd nodes in `config-sample.yaml` created by KubeKey (Please note that this is the default name, which can be changed by yourself).
|
||||
|
||||

|
||||
|
||||
|
|
@ -63,17 +63,17 @@ This step demonstrates how to create load balancers on the QingCloud platform.
|
|||
|
||||
{{</ notice >}}
|
||||
|
||||
4. Click **Add Backend**, and choose the VxNet you just selected (in this example, it is `pn`). Click **Advanced Search**, choose the three master nodes, and set the port to `6443` which is the default secure port of api-server.
|
||||
4. Click **Add Backend**, and choose the VxNet you just selected (in this example, it is `pn`). Click **Advanced Search**, choose the three control plane nodes, and set the port to `6443` which is the default secure port of api-server.
|
||||
|
||||

|
||||
|
||||
Click **Submit** when you finish.
|
||||
|
||||
5. Click **Apply Changes** to use the configurations. At this point, you can find the three masters have been added as the backend servers of the listener that is behind the internal load balancer.
|
||||
5. Click **Apply Changes** to use the configurations. At this point, you can find the three control plane nodes have been added as the backend servers of the listener that is behind the internal load balancer.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The status of all masters might show **Not Available** after you added them as backends. This is normal since port `6443` of api-server is not active on master nodes yet. The status will change to **Active** and the port of api-server will be exposed after the installation finishes, which means the internal load balancer you configured works as expected.
|
||||
The status of all control plane nodes might show **Not Available** after you added them as backends. This is normal since port `6443` of api-server is not active on control plane nodes yet. The status will change to **Active** and the port of api-server will be exposed after the installation finishes, which means the internal load balancer you configured works as expected.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
|
|
@ -184,16 +184,16 @@ Create an example configuration file with default configurations. Here Kubernete
|
|||
|
||||
### Step 3: Set cluster nodes
|
||||
|
||||
As you adopt the HA topology with stacked control plane nodes, the master nodes and etcd nodes are on the same three machines.
|
||||
As you adopt the HA topology with stacked control plane nodes, the control plane nodes and etcd nodes are on the same three machines.
|
||||
|
||||
| **Property** | **Description** |
|
||||
| :----------- | :-------------------------------- |
|
||||
| `hosts` | Detailed information of all nodes |
|
||||
| `etcd` | etcd node names |
|
||||
| `master` | Master node names |
|
||||
| `control-plane` | Control plane node names |
|
||||
| `worker` | Worker node names |
|
||||
|
||||
Put the master node name (`master1`, `master2` and `master3`) under `etcd` and `master` respectively as below, which means these three machines will serve as both the master and etcd nodes. Note that the number of etcd needs to be odd. Meanwhile, it is not recommended that you install etcd on worker nodes since the memory consumption of etcd is very high.
|
||||
Put the control plane nodes (`master1`, `master2` and `master3`) under `etcd` and `master` respectively as below, which means these three machines will serve as both the control plane and etcd nodes. Note that the number of etcd needs to be odd. Meanwhile, it is not recommended that you install etcd on worker nodes since the memory consumption of etcd is very high.
|
||||
|
||||
#### config-sample.yaml Example
|
||||
|
||||
|
|
@ -327,7 +327,7 @@ Both listeners show that the status is **Active**, meaning nodes are up and runn
|
|||
|
||||
In the web console of KubeSphere, you can also see that all the nodes are functioning well.
|
||||
|
||||
To verify if the cluster is highly available, you can turn off an instance on purpose. For example, the above console is accessed through the address `IP: 30880` (the EIP address here is the one bound to the external load balancer). If the cluster is highly available, the console will still work well even if you shut down a master node.
|
||||
To verify if the cluster is highly available, you can turn off an instance on purpose. For example, the above console is accessed through the address `IP: 30880` (the EIP address here is the one bound to the external load balancer). If the cluster is highly available, the console will still work well even if you shut down a control plane node.
|
||||
|
||||
## See Also
|
||||
|
||||
|
|
|
|||
|
|
@ -23,7 +23,7 @@ The following modules elaborate on the key features and benefits provided by Kub
|
|||
|
||||
### Provisioning Kubernetes Clusters
|
||||
|
||||
[KubeKey](https://github.com/kubesphere/kubekey) allows you to deploy Kubernetes on your infrastructure out of box, provisioning Kubernetes clusters with high availability. It is recommended that at least three master nodes are configured behind a load balancer for production environment.
|
||||
[KubeKey](https://github.com/kubesphere/kubekey) allows you to deploy Kubernetes on your infrastructure out of box, provisioning Kubernetes clusters with high availability. It is recommended that at least three control plane nodes are configured behind a load balancer for production environment.
|
||||
|
||||
### Kubernetes Resource Management
|
||||
|
||||
|
|
|
|||
|
|
@ -24,7 +24,7 @@ KubeSphere 支持的操作系统包括:
|
|||
这里以一台 openEuler 20.09 64bit 为例:
|
||||
|name|ip|role|
|
||||
|---|---|---|
|
||||
|node1|172.169.102.249|etcd, master, worker|
|
||||
|node1|172.169.102.249|etcd, control plane, worker|
|
||||
|
||||
确保机器已经安装所需依赖软件(sudo curl openssl ebtables socat ipset conntrack docker)
|
||||
|
||||
|
|
|
|||