update download link for v3.0.0, sync /en to /zh
Signed-off-by: FeynmanZhou <pengfeizhou@yunify.com>
This commit is contained in:
parent 246110d956
commit 6369605927
@@ -71,8 +71,8 @@ For how to set up or cancel a default StorageClass, refer to Kubernetes official

Use [ks-installer](https://github.com/kubesphere/ks-installer) to deploy KubeSphere on an existing Kubernetes cluster. It is suggested that you start with a minimal installation.

```bash
-$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml
-$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml
+$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
+$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
```
@@ -69,11 +69,11 @@ All the other Resources will be placed in MC_KubeSphereRG_KuberSphereCluster_wes

## Deploy KubeSphere on AKS

To start deploying KubeSphere, use the following command.

```bash
-kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
```

Download cluster-configuration.yaml as shown below so that you can customize the configuration. You can also enable pluggable components by setting the `enabled` property to `true` in this file.

```bash
-wget https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml
+wget https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
```

As `metrics-server` is already installed on AKS, you need to disable the component in the cluster-configuration.yaml file by changing `true` to `false` for `enabled`.

```bash
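For reference, the block being edited in cluster-configuration.yaml should look roughly like the sketch below; the exact nesting is an assumption based on the instructions above, so check the downloaded file:

```yaml
# hypothetical excerpt from cluster-configuration.yaml
metrics_server:
  enabled: false   # disabled on AKS because metrics-server is preinstalled
```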
@@ -44,7 +44,7 @@ Now that the cluster is ready, you can install KubeSphere following these steps:

- Install KubeSphere using kubectl. The following command is only for the default minimal installation.

```bash
-kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
```

- Create a local cluster-configuration.yaml.
@@ -53,7 +53,7 @@ Now that the cluster is ready, you can install KubeSphere following these steps:

vi cluster-configuration.yaml
```

-- Copy all the content in this [file](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to your local cluster-configuration.yaml.
+- Copy all the content in this [file](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it into your local cluster-configuration.yaml.

- Save the file when you finish. Execute the following command to start the installation:
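The hunk ends before showing the installation command itself; based on the workflow described above, it is presumably:

```bash
# assumed follow-up step: apply the edited local configuration
kubectl apply -f cluster-configuration.yaml
```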
@@ -1,5 +1,5 @@

---
-title: "Deploy KubeSphere on EKS"
+title: "Deploy KubeSphere on AWS EKS"
keywords: 'Kubernetes, KubeSphere, EKS, Installation'
description: 'How to install KubeSphere on EKS'
@@ -71,14 +71,14 @@ When your cluster provisioning is complete (usually between 10 and 15 minutes),

- Configure the node group



{{< notice note >}}

- Supported Kubernetes versions for KubeSphere 3.0.0: 1.15.x, 1.16.x, 1.17.x, and 1.18.x.
- Ubuntu is used for the operating system here as an example. For more information on supported systems, see Overview.
- 3 nodes are included in this example. You can add more nodes based on your own needs, especially in a production environment.
- The machine type t3.medium (2 vCPU, 4 GB memory) is for minimal installation. If you want to enable pluggable components or use the cluster for production, please select a machine type with more resources.
- For other settings, you can change them as well based on your own needs or use the default values.

{{</ notice >}}

-- When the EKS cluster is ready, you can connect to the cluster with kubectl.
+## Configure kubectl
@@ -111,13 +111,13 @@ For more information, see the help page with the aws eks update-kubeconfig help

- Install KubeSphere using kubectl. The following command is only for the default minimal installation.

```bash
-kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
```



- Create a local cluster-configuration.yaml.

```shell
-kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
```


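The hunk header above references `aws eks update-kubeconfig`; a typical invocation (cluster name and region are placeholders) looks like this:

```bash
# write a kubeconfig entry for the new EKS cluster (values are examples)
aws eks update-kubeconfig --name my-eks-cluster --region us-west-2
```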
@@ -165,9 +165,8 @@ kubectl get svc -nkubesphere-system

- Log in to the console with the default account and password (`admin/P@88w0rd`). On the cluster overview page, you can see the dashboard as shown in the following image.

-
+

## Enable Pluggable Components (Optional)

The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see Enable Pluggable Components for more details.
@@ -48,7 +48,7 @@ This guide walks you through the steps of deploying KubeSphere on [Google Kubern

- Install KubeSphere using kubectl. The following command is only for the default minimal installation.

```bash
-kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
```

- Create a local cluster-configuration.yaml.
@@ -57,7 +57,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/maste

vi cluster-configuration.yaml
```

-- Copy all the content in this [file](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to your local cluster-configuration.yaml. Navigate to `metrics_server`, and change `true` to `false` for `enabled`.
+- Copy all the content in this [file](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it into your local cluster-configuration.yaml. Navigate to `metrics_server`, and change `true` to `false` for `enabled`.


@@ -68,11 +68,11 @@ If you do not copy and execute the command above, you cannot proceed with the st

- Install KubeSphere using kubectl. The following command is only for the default minimal installation.

```bash
-kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
```

```bash
-kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
```

- Inspect the logs of the installation:
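The log-inspection command that follows this bullet in the full page is visible in the `@@ -47,7` hunk header further down; reconstructed with the standard selector from the KubeSphere docs, it is:

```bash
# follow the ks-installer logs until the installation completes
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```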
@@ -26,16 +26,16 @@ After you make sure your existing Kubernetes cluster meets all the requirements,

- Execute the following commands to start the installation:

```bash
-kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
```

```bash
-kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
```

{{< notice note >}}

-If your server has trouble accessing GitHub, you can copy the content in [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml) and [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) respectively and past it to local files. You then can use `kubectl apply -f` for the local files to install KubeSphere.
+If your server has trouble accessing GitHub, you can copy the content of [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml) and [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) respectively and paste it into local files. You can then use `kubectl apply -f` on the local files to install KubeSphere.

{{</ notice >}}
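Concretely, the fallback described in the note amounts to the commands below (the file names assume you saved the copied content under the same names as the originals):

```bash
# apply the locally saved manifests instead of the GitHub URLs
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```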
@@ -47,7 +47,7 @@ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=

{{< notice tip >}}

-In some environments, you may find the installation process stopped by issues related to `metrics_server`, as some cloud providers have already installed metrics server in their platform. In this case, please manually create a local cluster-configuration.yaml file (copy the [content](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) to it). In this file, disable `metrics_server` by changing `true` to `false` for `enabled`, and use `kubectl apply -f cluster-configuration.yaml` to execute it.
+In some environments, you may find the installation process stopped by issues related to `metrics_server`, as some cloud providers have already installed metrics-server on their platform. In this case, manually create a local cluster-configuration.yaml file (copy the [content](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) into it). In this file, disable `metrics_server` by changing `true` to `false` for `enabled`, and run `kubectl apply -f cluster-configuration.yaml` to apply it.

{{</ notice >}}
@@ -7,218 +7,4 @@ description: 'How to install KubeSphere on air-gapped Linux machines'

weight: 2240
---

-The air-gapped installation is almost the same as the online installation except it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes on air-gapped environment.
-
-> Note: The dependencies in different operating systems may cause upexpected problems. If you encounter any installation problems on air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
-
-## Prerequisites
-
-- If your machine is behind a firewall, you need to open the ports by following the document [Ports Requirements](../port-firewall) for more information.
-> - Installer will use `/var/lib/docker` as the default directory where all Docker related files, including the images, are stored. We recommend you to add additional storage to a disk with at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively, use the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
-- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure supported persistent storage service](../storage-configuration) and prepare [high availability configuration](../master-ha) before installation.
-- Since the air-gapped machines cannot connect to apt or yum source, please use clean Linux machine to avoid this problem.
-
-## Step 1: Prepare Linux Hosts
-
-The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
-
-- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
-- Time synchronization is required across all nodes, otherwise the installation may not succeed;
-- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`;
-- If you are using `Ubuntu 18.04`, you need to use the user `root`.
-- Ensure your disk of each node is at least 100G.
-- CPU and memory in total of all machines: 2 cores and 4 GB for minimal installation; 8 cores and 16 GB for complete installation.
-
-The following section describes an example to introduce multi-node installation. This example shows three hosts installation by taking the `master` serving as the taskbox to execute the installation. The following cluster consists of one Master and two Nodes.
-
-> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guide.
-
-| Host IP | Host Name | Role |
-| --- | --- | --- |
-|192.168.0.1|master|master, etcd|
-|192.168.0.2|node1|node|
-|192.168.0.3|node2|node|
-
-### Cluster Architecture
-
-#### Single Master, Single Etcd, Two Nodes
-
-
-
-## Step 2: Download Installer Package
-
-Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
-
-```bash
-curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
-&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
-```
-
-## Step 3: Configure Host Template
-
-> This step is only for multi-node installation, you can skip this step if you choose all-in-one installation.
-
-Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using root user. The following is an example configuration for `CentOS 7.5` using root user. Note do not manually wrap any line in the file.
-
-> Note:
->
-> - If you use non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
-> - If the `root` user of that taskbox machine cannot establish SSH connection with the rest of machines, you need to refer to the `non-root` user example at the top of the `conf/hosts.ini`, but it is recommended to switch `root` user when executing `install.sh`.
-> - master, node1 and node2 are the host names of each node and all host names should be in lowercase.
-
-### hosts.ini
-
-```ini
-[all]
-master ansible_connection=local ip=192.168.0.1
-node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
-node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
-
-[local-registry]
-master
-
-[kube-master]
-master
-
-[kube-node]
-node1
-node2
-
-[etcd]
-master
-
-[k8s-cluster:children]
-kube-node
-kube-master
-```
-
-> Note:
->
-> - You need to replace each node information such as IP, password with real values in the group `[all]`. The master node is the taskbox so you do not need to add password field here.
-> - Installer will use a node as the local registry for docker images, defaults to "master" in the group `[local-registry]`.
-> - The "master" node also takes the role of master and etcd, so "master" is filled under the group`[kube-master]` and the group `[etcd]` respectively.
-> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
->
-> Parameters Specification:
->
-> - `ansible_connection`: Connection type to the host, "local" in the example above means local connection.
-> - `ansible_host`: The name of the host to be connected.
-> - `ip`: The ip of the host to be connected.
-> - `ansible_user`: The default ssh user name to use.
-> - `ansible_become_pass`: Allows you to set the privilege escalation password.
-> - `ansible_ssh_pass`: The password of the host to be connected using root.
-
-## Step 4: Enable All Components
-
-> This is step is complete installation. You can skip this step if you choose a minimal installation.
-
-Edit `conf/common.yaml`, reference the following changes with values being `true` which are `false` by default.
-
-```yaml
-# LOGGING CONFIGURATION
-# logging is an optional component when installing KubeSphere, and
-# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
-# Builtin logging only provides limited functions, so recommend to enable logging.
-logging_enabled: true # Whether to install logging system
-elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number
-elasticsearch_data_replica: 2 # total number of data nodes
-elasticsearch_volume_size: 20Gi # Elasticsearch volume size
-log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
-elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
-kibana_enabled: false # Kibana Whether to install built-in Grafana
-#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address, KubeSphere supports integrate with Elasticsearch outside the cluster, which can reduce the resource consumption.
-#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
-
-#DevOps Configuration
-devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
-jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
-jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
-jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
-jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
-jenkinsJavaOpts_Xmx: 6g
-jenkinsJavaOpts_MaxRAM: 8g
-sonarqube_enabled: true # Whether to install built-in SonarQube
-#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address, KubeSphere supports integrate with SonarQube outside the cluster, which can reduce the resource consumption.
-#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
-
-# Following components are all optional for KubeSphere,
-# Which could be turned on to install it before installation or later by updating its value to true
-openpitrix_enabled: true # KubeSphere application store
-metrics_server_enabled: true # For KubeSphere HPA to use
-servicemesh_enabled: true # KubeSphere service mesh system(Istio-based)
-notification_enabled: true # KubeSphere notification system
-alerting_enabled: true # KubeSphere alerting system
-```
-
-## Step 5: Install KubeSphere to Linux Machines
-
-> Note:
->
-> - Generally, you can install KubeSphere without any modification, it will start with minimal installation by default.
-> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
-> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environment, please [configure supported persistent storage service](../storage-configuration) before installation.
-> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not use the two IP range. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
-
-**1.** Enter `scripts` folder, and execute `install.sh` using `root` user:
-
-```bash
-cd ../cripts
-./install.sh
-```
-
-**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume.
-
-```bash
-################################################
-         KubeSphere Installer Menu
-################################################
-* 1) All-in-one
-* 2) Multi-node
-* 3) Quit
-################################################
-https://kubesphere.io/       2020-02-24
-################################################
-Please input an option: 2
-```
-
-**3.** Verify the multi-node installation:
-
-**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go.
-
-```bash
-successsful!
-#####################################################
-###              Welcome to KubeSphere!           ###
-#####################################################
-
-Console: http://192.168.0.1:30880
-Account: admin
-Password: P@88w0rd
-
-NOTE:Please modify the default password after login.
-#####################################################
-```
-
-> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
-
-**(2).** You will be able to use default account and password `admin / P@88w0rd` to log in the console `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in.
-
-
-
-<font color=red>Note: After log in console, please verify the monitoring status of service components in the "Cluster Status". If any service is not ready, please wait patiently untill all components get running up.</font>
-
-
-
-## Enable Pluggable Components
-
-If you already have set up minimal installation, you still can edit the ConfigMap of ks-installer using the following command. Please make sure there is enough resource in your machines, see [Pluggable Components Overview](/en/installation/pluggable-components/).
-
-```bash
-kubectl edit cm -n kubesphere-system ks-installer
-```
-
-## FAQ
-
-If you have further questions please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
+TBD
@@ -49,7 +49,7 @@ Please see the requirements for hardware and operating system shown below. To ge

The path `/var/lib/docker` is mainly used to store the container data, and will gradually increase in size during use and operation. In the case of a production environment, it is recommended that `/var/lib/docker` be mounted on a separate drive.

{{</ notice >}}

### Node Requirements
@@ -81,49 +81,44 @@ This example includes three hosts as below with the master node serving as the t

## Step 2: Download KubeKey

-As below, you can either download the binary file or build the binary package from source code.
+You can download the binary file as shown below.

Download the Installer for KubeSphere v3.0.0.

{{< tabs >}}

-{{< tab "Download Binary" >}}
+{{< tab "For users with poor network to GitHub" >}}

-Execute the following command:
+For users in China, you can download the installer using this link.

```bash
-curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk
+wget https://kubesphere.io/kubekey/releases/v1.0.0
```
+{{</ tab >}}

+{{< tab "For users with good network to GitHub" >}}

+For users with a good network connection to GitHub, you can download it from the [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly.

+```bash
+wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
+```
+{{</ tab >}}

+{{</ tabs >}}

+Unzip it.

+```bash
+tar -zxvf v1.0.0
+```

Grant the execution right to `kk`:

```bash
chmod +x kk
```

-{{</ tab >}}

-{{< tab "Build Binary from Source Code" >}}

-Execute the following command one by one:

-```bash
-git clone https://github.com/kubesphere/kubekey.git
-```

-```bash
-cd kubekey
-```

-```bash
-./build.sh
-```

-Note:

-- Docker needs to be installed before the building.
-- If you have problems accessing `https://proxy.golang.org/`, execute `build.sh -p` instead.

-{{</ tab >}}

-{{</ tabs >}}

## Step 3: Create a Cluster

For multi-node installation, you need to create a cluster by specifying a configuration file.
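Putting the new download path end to end, a typical session for the GitHub variant would look like the sketch below (the `./kk version` check at the end is an assumed extra smoke test, not part of the original steps):

```bash
# download, unpack, and prepare the KubeKey v1.0.0 binary
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
tar -zxvf kubekey-v1.0.0-linux-amd64.tar.gz
chmod +x kk
./kk version   # assumed smoke test to confirm the binary runs
```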
@@ -133,7 +128,7 @@ For multi-node installation, you need to create a cluster by specifying a config

Command:

```bash
-./kk create config [--with-kubernetes version] [--with-storage plugins] [--with-kubesphere version] [(-f | --file) path]
+./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]
```

{{< notice info >}}
@@ -150,7 +145,7 @@ Here are some examples for your reference:

./kk create config [-f ~/myfolder/abc.yaml]
```

-- You can customize the storage plugins (supported: LocalPV, NFS Client, Ceph RBD, and GlusterFS). You can also specify multiple plugins separated by comma. Please note the first one you add will be the default storage class.
+- You can customize the persistent storage plugins (e.g. NFS Client, Ceph RBD, and GlusterFS) in `sample-config.yaml`.

```bash
-./kk create config --with-storage localVolume
@@ -158,9 +153,9 @@

{{< notice note >}}

-KubeKey will install [OpenEBS](https://openebs.io/) to provision LocalPV for development and testing environment by default, which is convenient for new users. For production, please use NFS/Ceph/GlusterFS or commercial products as persistent storage solutions, and install [relevant clients](https://github.com/kubesphere/kubekey/blob/master/docs/storage-client.md) in all nodes. For this example of multi-cluster installation, we will use the default storage class (local volume). For more information, see HA Cluster Configuration and Storage Class Configuration.
+KubeKey will install [OpenEBS](https://openebs.io/) to provision [LocalPV](https://kubernetes.io/docs/concepts/storage/volumes/#local) for development and testing environments by default, which is convenient for new users. For this example of multi-node installation, we will use the default storage class (local volume). For production, please use NFS/Ceph/GlusterFS/CSI or commercial products as persistent storage solutions; you need to specify them in `addons` of `sample-config.yaml`. See [Persistent Storage Configuration](../storage-configuration).

{{</ notice >}}

- You can specify a KubeSphere version that you want to install (e.g. `--with-kubesphere v3.0.0`).
@@ -223,7 +218,7 @@ hosts:

#### controlPlaneEndpoint (for HA installation only)

-`controlPlaneEndpoint` allows you to define an external load balancer for an HA cluster. You need to prepare and configure an external load balancer if and only if you need to install more than 3 master nodes. Please note that the address and port should be indented by two spaces in `config-sample.yaml`, and the `address` should be VIP. See KubeSphere on QingCloud Instance for more information.
+`controlPlaneEndpoint` allows you to define an external load balancer for an HA cluster. You need to prepare and configure an external load balancer if and only if you need to install more than 3 master nodes. Please note that the address and port should be indented by two spaces in `config-sample.yaml`, and the `address` should be the VIP. See HA Configuration for details.

{{< notice tip >}}
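A sketch of the block being described, following KubeKey's sample layout (the VIP address is a placeholder you must replace with your own):

```yaml
controlPlaneEndpoint:
  domain: lb.kubesphere.local
  address: "10.10.71.67"   # the VIP of your external load balancer (placeholder)
  port: 6443
```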
@@ -244,7 +239,7 @@ When you finish editing, save the file.

You need to change `config-sample.yaml` above to your own file name if you use a different name.

{{</ notice >}}

The whole installation process may take 10-20 minutes, depending on your machine and network.
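For context, the command that starts this 10-20 minute process in the KubeKey workflow is presumably:

```bash
# assumed kick-off command for the installation described above
./kk create cluster -f config-sample.yaml
```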
@@ -265,7 +260,7 @@ NOTES:

1. After logging into the console, please check the
monitoring status of service components in
the "Cluster Management". If any service is not
ready, please wait patiently until all components
are ready.
2. Please modify the default password after login.
@@ -280,7 +275,7 @@ Now, you will be able to access the web console of KubeSphere at `http://{IP}:30

To access the console, you may need to forward the source port to the intranet port of the intranet IP depending on the platform of your cloud providers. Please also make sure port 30880 is opened in the security group.

{{</ notice >}}


@@ -301,4 +296,4 @@ echo 'source <(kubectl completion bash)' >>~/.bashrc

kubectl completion bash >/etc/bash_completion.d/kubectl
```

Detailed information can be found [here](https://kubernetes.io/docs/tasks/tools/install-kubectl/#enabling-shell-autocompletion).
@@ -80,7 +80,10 @@ In the Ready to complete page, you review the configuration selections that you



-## Keepalived+Haproxy
+## Install a Load Balancer using Keepalived and HAProxy (Optional)

+For a production environment, you have to prepare an external load balancer. If you do not have one, you can install it using Keepalived and HAProxy. If you are provisioning a development or testing environment, please skip this section.

### Yum Install

host lb-0 (10.10.71.77) and host lb-1 (10.10.71.66)
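The "Yum Install" step presumably amounts to installing both packages on each load-balancer host; the package names below are the stock CentOS ones, an assumption here:

```bash
# run on lb-0 (10.10.71.77) and lb-1 (10.10.71.66)
yum install -y keepalived haproxy
```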
@@ -159,7 +162,7 @@ global_defs {

notification_email {
}
smtp_connect_timeout 30
router_id LVS_DEVEL01
vrrp_skip_check_adv_addr
vrrp_garp_interval 0
vrrp_gna_interval 0
@@ -173,10 +176,10 @@ vrrp_instance haproxy-vip {

state MASTER
priority 100
interface ens192
virtual_router_id 60
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
unicast_src_ip 10.10.71.77
|
@ -185,7 +188,7 @@ vrrp_instance haproxy-vip {
|
|||
}
|
||||
virtual_ipaddress {
|
||||
#vip
|
||||
10.10.71.67/24
|
||||
10.10.71.67/24
|
||||
}
|
||||
track_script {
|
||||
chk_haproxy
|
||||
|
|
@@ -198,7 +201,7 @@ remarks haproxy 66 lb-1-10.10.71.66 (/etc/keepalived/keepalived.conf)

global_defs {
notification_email {
}
router_id LVS_DEVEL02
vrrp_skip_check_adv_addr
vrrp_garp_interval 0
vrrp_gna_interval 0
@@ -209,7 +212,7 @@ vrrp_script chk_haproxy {

weight 2
}
vrrp_instance haproxy-vip {
state BACKUP
priority 90
interface ens192
virtual_router_id 60
@@ -223,7 +226,7 @@ vrrp_instance haproxy-vip {

10.10.71.77
}
virtual_ipaddress {
10.10.71.67/24
}
track_script {
chk_haproxy
@@ -243,7 +246,7 @@ systemctl start keepalived

Use `ip a s` to view the VIP binding status of each LB node:

```bash
ip a s
```

Pause the haproxy service on the VIP node: `systemctl stop haproxy`
@@ -255,7 +258,7 @@ systemctl stop haproxy

Use `ip a s` again to check the VIP binding of each LB node, and check whether the VIP drifts:

```bash
ip a s
```

Or use the `systemctl status -l keepalived` command to check:
@@ -264,31 +267,67 @@ Or use `systemctl status -l keepalived` command to view

systemctl status -l keepalived
```

## Get the Installer Executable File

-Download Binary
+Download the Installer for KubeSphere v3.0.0.

+{{< tabs >}}

+{{< tab "For users with poor network to GitHub" >}}

+For users in China, you can download the installer using this link.

+```bash
+wget https://kubesphere.io/kubekey/releases/v1.0.0
+```
+{{</ tab >}}

+{{< tab "For users with good network to GitHub" >}}

+For users with a good network connection to GitHub, you can download it from the [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly.

+```bash
+wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
+```
+{{</ tab >}}

+{{</ tabs >}}

+Unzip it.

+```bash
+tar -zxvf v1.0.0
+```

Grant the execution right to `kk`:

```bash
-curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk
chmod +x kk
```

-## Create a Multi-Node Cluster
+## Create a Multi-node Cluster

You have more control to customize parameters or create a multi-node cluster using the advanced installation. Specifically, create a cluster by specifying a configuration file.

-### With KubeKey, you can install Kubernetes and KubeSphere
+With KubeKey, you can install Kubernetes and KubeSphere.

Create a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v3.0.0`):

```bash
-./kk create config --with-kubesphere v3.0.0 -f ~/config-sample.yaml
+./kk create config --with-kubernetes v1.17.9 --with-kubesphere v3.0.0 -f ~/config-sample.yaml
```

-#### Modify the file config-sample.yaml according to your environment
-
-vi ~/config-sample.yaml

+> The following Kubernetes versions have been fully tested with KubeSphere:
+> - v1.15: v1.15.12
+> - v1.16: v1.16.13
+> - v1.17: v1.17.9 (default)
+> - v1.18: v1.18.6

+Modify the file config-sample.yaml according to your environment.

```bash
+vi config-sample.yaml
```

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
@@ -308,7 +347,7 @@ spec:

    - master1
    - master2
    - master3
  master:
    - master1
    - master2
    - master3
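Read in context, this hunk is re-indenting the roleGroups block; the full block in config-sample.yaml typically looks like the sketch below (the worker entries are placeholders, not part of the original hunk):

```yaml
roleGroups:
  etcd:
    - master1
    - master2
    - master3
  master:
    - master1
    - master2
    - master3
  worker:
    - node1   # placeholder worker nodes
    - node2
```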
@@ -446,7 +485,7 @@ NOTES:

1. After logging into the console, please check the
monitoring status of service components in
the "Cluster Management". If any service is not
ready, please wait patiently until all components
are ready.
2. Please modify the default password after login.
#####################################################
@@ -462,4 +501,3 @@ You will be able to use default account and password `admin / P@88w0rd` to log i

#### Enable Pluggable Components (Optional)
The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see [Enable Pluggable Components](https://github.com/kubesphere/ks-installer#enable-pluggable-components) for more details.
@@ -10,7 +10,7 @@ Technically, you can either install, administer, and manage Kubernetes yourself

## Introduction

In this tutorial, we will use two key features of Azure virtual machines (VMs):

- Virtual Machine Scale Sets: Azure VMSS lets you create and manage a group of load-balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule (the Kubernetes Autoscaler is available, but not covered in this tutorial; see [autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/azure) for more details), which perfectly fits the worker nodes.
- Availability sets: An availability set is a logical grouping of VMs within a datacenter that are automatically distributed across fault domains. This approach limits the impact of potential physical hardware failures, network outages, or power interruptions. All the Master and etcd VMs will be placed in an availability set to meet our high-availability goals.
@@ -88,8 +88,38 @@ ssh -i .ssh/id_rsa2 -p50200 kubesphere@40.81.5.xx

1. First, download it and generate a configuration file to customize the installation as follows.

+{{< tabs >}}

+{{< tab "For users with poor network to GitHub" >}}

+For users in China, you can download the installer using this link.

+```bash
+wget https://kubesphere.io/kubekey/releases/v1.0.0
+```
-curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk
+{{</ tab >}}

+{{< tab "For users with good network to GitHub" >}}

+For users with a good network connection to GitHub, you can download it from the [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly.

+```bash
+wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
+```
+{{</ tab >}}

+{{</ tabs >}}

+Unzip it.

+```bash
+tar -zxvf v1.0.0
+```

Grant the execution right to `kk`:

```bash
chmod +x kk
```
@@ -98,7 +128,7 @@ chmod +x kk

```
./kk create config --with-kubesphere v3.0.0 --with-kubernetes v1.17.9
```

+> Kubernetes Versions
+> The following Kubernetes versions have been fully tested with KubeSphere:
+> - v1.15: v1.15.12
+> - v1.16: v1.16.13
+> - v1.17: v1.17.9 (default)
@@ -208,4 +238,3 @@ Since we are using self-hosted Kubernetes solutions on Azure, so the Load Balanc



2. Create an Inbound Security rule to allow Internet access in the Network Security Group.


@@ -28,7 +28,7 @@ This example prepares six machines of **Ubuntu 16.04.6**. We will create two loa

The Kubernetes document [Options for Highly Available topology](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/) demonstrates that there are two options for configuring the topology of a highly available (HA) Kubernetes cluster, i.e. the stacked etcd topology and the external etcd topology. You should carefully consider the advantages and disadvantages of each topology before setting up an HA cluster according to [this document](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/). In this guide, we adopt the stacked etcd topology to bootstrap an HA cluster for convenient demonstration.

{{</ notice >}}

## Install HA Cluster
@@ -61,7 +61,7 @@ Click Submit to continue.

After you create the listener, please check the firewall rules of the load balancer. Make sure that the port `6443` has been added to the firewall rules and the external traffic can pass through `6443`. Otherwise, the installation will fail. If you are using QingCloud platform, you can find the information in **Security Groups** under **Security**.

{{</ notice >}}

4. Click **Add Backend**, and choose the VxNet you just selected (in this example, it is `pn`). Click the button **Advanced Search**, choose the three master nodes, and set the port to `6443`, which is the default secure port of the api-server.
@@ -75,7 +75,7 @@ Click **Submit** when you finish.

The status of all masters might show `Not Available` after you add them as backends. This is normal since the port `6443` of the api-server is not active on the master nodes yet. The status will change to `Active` and the port of the api-server will be exposed after the installation finishes, which means the internal load balancer you configured works as expected.

{{</ notice >}}


@@ -89,7 +89,7 @@ You need to create an EIP in advance. To create an EIP, go to **Elastic IPs** un

Two elastic IPs are needed for this whole tutorial, one for the VPC network and the other for the external load balancer created in this step. You cannot associate the same EIP to the VPC network and the load balancer at the same time.

{{</ notice >}}

6. Similarly, create an external load balancer, but do not select a VxNet for the Network field. Bind the EIP that you created to this load balancer by clicking **Add IPv4**.
@@ -101,7 +101,7 @@ Two elastic IPs are needed for this whole tutorial, one for the VPC network and

After you create the listener, please check the firewall rules of the load balancer. Make sure that the port `30880` has been added to the firewall rules and the external traffic can pass through `30880`. Otherwise, the installation will fail. If you are using QingCloud platform, you can find the information in **Security Groups** under **Security**.

{{</ notice >}}


@@ -117,22 +117,47 @@ Click **Submit** when you finish.

[KubeKey](https://github.com/kubesphere/kubekey) is the next-gen installer, which is used for installing Kubernetes and KubeSphere v3.0.0 quickly, flexibly, and easily.

1. Download KubeKey and generate a configuration file to customize the installation as follows.

+{{< tabs >}}

+{{< tab "For users with poor network to GitHub" >}}

+For users in China, you can download the installer using this link.

```bash
-curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk
+wget https://kubesphere.io/kubekey/releases/v1.0.0
```
+{{</ tab >}}

+{{< tab "For users with good network to GitHub" >}}

+For users with a good network connection to GitHub, you can download it from the [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly.

+```bash
+wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
+```
+{{</ tab >}}

+{{</ tabs >}}

+Unzip it.

+```bash
+tar -zxvf v1.0.0
+```

+Grant the execution right to `kk`:

+```bash
+chmod +x kk
+```

-2. Then create an example configuration file with default configurations. Here we use Kubernetes v1.17.9 as an example.
+Then create an example configuration file with default configurations. Here we use Kubernetes v1.17.9 as an example.

```bash
./kk create config --with-kubesphere v3.0.0 --with-kubernetes v1.17.9
```

> Tip: These Kubernetes versions have been fully tested with KubeSphere: *v1.15.12*, *v1.16.13*, *v1.17.9* (default), *v1.18.6*.

### Cluster Node Planning
@@ -195,7 +220,7 @@ In addition to the node information, you need to provide the load balancer infor

- The address and port should be indented by two spaces in `config-sample.yaml`, and the address should be the VIP.
- The domain name of the load balancer is `lb.kubesphere.local` by default for internal access. If you need to change the domain name, please uncomment and modify it.

{{</ notice >}}

After that, you can enable any components you need by following **Enable Pluggable Components** and start your HA cluster installation.
@@ -211,7 +236,7 @@ As we mentioned in the prerequisites, considering data persistence in a producti

For testing or development, you can skip this part. KubeKey will use the integrated OpenEBS to provision LocalPV as the storage service directly.

{{</ notice >}}

**Available Storage Plugins & Clients**
@@ -12,7 +12,7 @@ weight: 2343

You have already installed at least two KubeSphere clusters; please refer to [Installing on Linux](../../../installing-on-linux) or [Installing on Kubernetes](../../../installing-on-kubernetes) if not yet.

{{< notice note >}}
-Multi-cluster management requires Kubesphere to be installed on the target clusters. If you have an existing cluster, please install a minimal KubeSphere on it as an agent, see [Installing Minimal KubeSphere on Kubernetes](../../../installing-on-kubernetes/minimal-kubesphere-on-k8s) for details.
+Multi-cluster management requires KubeSphere to be installed on the target clusters. If you have an existing cluster, please install a minimal KubeSphere on it as an agent; see [Installing Minimal KubeSphere on Kubernetes](../../installing-on-kubernetes/minimal-kubesphere-on-k8s) for details.
{{</ notice >}}

## Agent Connection
@@ -12,7 +12,7 @@ weight: 2340

You have already installed at least two KubeSphere clusters; please refer to [Installing on Linux](../../../installing-on-linux) or [Installing on Kubernetes](../../../installing-on-kubernetes) if not yet.

{{< notice note >}}
-Multi-cluster management requires Kubesphere to be installed on the target clusters. If you have an existing cluster, please install a minimal KubeSphere on it as an agent, see [Installing Minimal KubeSphere on Kubernetes](../../../installing-on-kubernetes/minimal-kubesphere-on-k8s) for details.
+Multi-cluster management requires KubeSphere to be installed on the target clusters. If you have an existing cluster, please install a minimal KubeSphere on it as an agent; see [Installing Minimal KubeSphere on Kubernetes](../../installing-on-kubernetes/minimal-kubesphere-on-k8s) for details.
{{</ notice >}}

## Direct Connection
@@ -50,15 +50,15 @@ openpitrix:

### **Installing on Kubernetes**

-When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) for cluster setting. If you want to install App Store, do not use `kubectl apply -f` directly for this file.
+When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for the cluster setting. If you want to install the App Store, do not use `kubectl apply -f` directly on this file.

-1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml). After that, to enable App Store, create a local file cluster-configuration.yaml.
+1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable the App Store, create a local file cluster-configuration.yaml.

```bash
vi cluster-configuration.yaml
```

-2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to the local file just created.
+2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it into the local file just created.
3. In this local cluster-configuration.yaml file, navigate to `openpitrix` and enable the App Store by changing `false` to `true` for `enabled`. Save the file after you finish.

```bash
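The toggle in step 3 (and the matching steps in the auditing, DevOps, logging, and service-mesh pages below) edits a block that looks roughly like this sketch; the exact nesting in v3.0.0's cluster-configuration.yaml may differ, so search for the component name in the file:

```yaml
# hypothetical excerpt from the local cluster-configuration.yaml
openpitrix:
  enabled: true   # changed from false to enable the App Store
```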
@@ -64,15 +64,15 @@ es: # Storage backend for logging, tracing, events and auditing.

### **Installing on Kubernetes**

-When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) for cluster setting. If you want to install Auditing, do not use `kubectl apply -f` directly for this file.
+When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for the cluster setting. If you want to install Auditing, do not use `kubectl apply -f` directly on this file.

-1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml). After that, to enable Auditing, create a local file cluster-configuration.yaml.
+1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable Auditing, create a local file cluster-configuration.yaml.

```bash
vi cluster-configuration.yaml
```

-2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to the local file just created.
+2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it into the local file just created.
3. In this local cluster-configuration.yaml file, navigate to `auditing` and enable Auditing by changing `false` to `true` for `enabled`. Save the file after you finish.

```bash
@@ -48,15 +48,15 @@ devops:

### **Installing on Kubernetes**

-When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) for cluster setting. If you want to install DevOps, do not use `kubectl apply -f` directly for this file.
+When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for the cluster setting. If you want to install DevOps, do not use `kubectl apply -f` directly on this file.

-1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml). After that, to enable DevOps, create a local file cluster-configuration.yaml.
+1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable DevOps, create a local file cluster-configuration.yaml.

```bash
vi cluster-configuration.yaml
```

-2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to the local file just created.
+2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it into the local file just created.
3. In this local cluster-configuration.yaml file, navigate to `devops` and enable DevOps by changing `false` to `true` for `enabled`. Save the file after you finish.

```bash
|||
|
|
@ -63,15 +63,15 @@ es: # Storage backend for logging, tracing, events and auditing.
|
|||
|
||||
### **Installing on Kubernetes**
|
||||
|
||||
When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) for cluster setting. If you want to install Logging, do not use `kubectl apply -f` directly for this file.
|
||||
When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for cluster setting. If you want to install Logging, do not use `kubectl apply -f` directly for this file.
|
||||
|
||||
1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml). After that, to enable Logging, create a local file cluster-configuration.yaml.
|
||||
1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable Logging, create a local file cluster-configuration.yaml.
|
||||
|
||||
```bash
|
||||
vi cluster-configuration.yaml
|
||||
```
|
||||
|
||||
2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to the local file just created.
|
||||
2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it to the local file just created.
|
||||
3. In this local cluster-configuration.yaml file, navigate to `logging` and enable Logging by changing `false` to `true` for `enabled`. Save the file after you finish.
|
||||
|
||||
```bash
|
||||
|
|
|
|||
|
|
@ -46,15 +46,15 @@ servicemesh:
|
|||
|
||||
### **Installing on Kubernetes**
|
||||
|
||||
When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) for cluster setting. If you want to install Service Mesh, do not use `kubectl apply -f` directly for this file.
|
||||
When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for cluster setting. If you want to install Service Mesh, do not use `kubectl apply -f` directly for this file.
|
||||
|
||||
1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml). After that, to enable Service Mesh, create a local file cluster-configuration.yaml.
|
||||
1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable Service Mesh, create a local file cluster-configuration.yaml.
|
||||
|
||||
```bash
|
||||
vi cluster-configuration.yaml
|
||||
```
|
||||
|
||||
2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to the local file just created.
|
||||
2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it to the local file just created.
|
||||
3. In this local cluster-configuration.yaml file, navigate to `servicemesh` and enable Service Mesh by changing `false` to `true` for `enabled`. Save the file after you finish.
|
||||
|
||||
```bash
|
||||
|
|
|
|||
|
|
@ -27,11 +27,11 @@ See the requirements for hardware and operating system shown below. To get start
|
|||
| **Red Hat Enterprise Linux 7** | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
|
||||
| **SUSE Linux Enterprise Server 15/openSUSE Leap 15.2** | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
|
||||
|
||||
{{< notice note >}}
|
||||
{{< notice note >}}
|
||||
|
||||
The system requirements above and the instructions below are for the default minimal installation without any optional components enabled. If your machine has at least 8 cores and 16G memory, it is recommended that you enable all components. For more information, see Enable Pluggable Components.
|
||||
|
||||
{{</ notice >}}
|
||||
{{</ notice >}}
|
||||
|
||||
### Node Requirements
|
||||
|
||||
|
|
@ -54,49 +54,40 @@ The system requirements above and the instructions below are for the default min
|
|||
|
||||
## Step 2: Download KubeKey
|
||||
|
||||
You can either download the binary file or build it from source code, as shown below.
|
||||
|
||||
{{< tabs >}}
|
||||
|
||||
{{< tab "Download Binary" >}}
|
||||
{{< tab "For users with poor network to GitHub" >}}
|
||||
|
||||
Execute the following command:
|
||||
For users in China, you can download the installer with the following command:
|
||||
|
||||
```bash
|
||||
curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk
|
||||
wget https://kubesphere.io/kubekey/releases/v1.0.0
|
||||
```
|
||||
{{</ tab >}}
|
||||
|
||||
{{< tab "For users with good network to GitHub" >}}
|
||||
|
||||
If you have a good network connection to GitHub, you can download KubeKey from the [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following command directly.
|
||||
|
||||
```bash
|
||||
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
|
||||
```
|
||||
{{</ tab >}}
|
||||
|
||||
{{</ tabs >}}
|
||||
|
||||
Unpack it:
|
||||
|
||||
```bash
|
||||
tar -zxvf v1.0.0
|
||||
```
|
||||
|
||||
Grant execution permission to `kk`:
|
||||
|
||||
```bash
|
||||
chmod +x kk
|
||||
```
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{< tab "Build Binary from Source Code" >}}
|
||||
|
||||
Execute the following commands one by one:
|
||||
|
||||
```bash
|
||||
git clone https://github.com/kubesphere/kubekey.git
|
||||
```
|
||||
|
||||
```bash
|
||||
cd kubekey
|
||||
```
|
||||
|
||||
```bash
|
||||
./build.sh
|
||||
```
|
||||
|
||||
Note:
|
||||
|
||||
- Docker needs to be installed before the building.
|
||||
- If you have problems accessing `https://proxy.golang.org/`, execute `build.sh -p` instead.
|
||||
|
||||
{{</ tab >}}
|
||||
|
||||
{{</ tabs >}}
|
||||
|
||||
{{< notice info >}}
|
||||
|
||||
Developed in Go, KubeKey is a brand-new installation tool that replaces the Ansible-based installer used before. KubeKey gives users flexible installation choices: they can install KubeSphere and Kubernetes separately or install both at once, which is convenient and efficient.
|
||||
|
|
@ -111,24 +102,11 @@ In this QuickStart tutorial, you only need to execute one command for installati
|
|||
./kk create cluster [--with-kubernetes version] [--with-kubesphere version]
|
||||
```
|
||||
|
||||
Here are some examples for your reference:
|
||||
Create a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v3.0.0`). Here is an example for your reference:
|
||||
|
||||
- Create a Kubernetes cluster with the default version.
|
||||
|
||||
```bash
|
||||
./kk create cluster
|
||||
```
|
||||
|
||||
- Create a Kubernetes cluster with a specified version.
|
||||
|
||||
```bash
|
||||
./kk create cluster --with-kubernetes v1.18.6
|
||||
```
|
||||
|
||||
- Create a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v3.0.0`).
|
||||
|
||||
```bash
|
||||
./kk create cluster --with-kubesphere [version]
|
||||
./kk create cluster --with-kubernetes v1.17.9 --with-kubesphere [version]
|
||||
```
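For instance, substituting the v3.0.0 release mentioned above for `[version]`, the full command would be:

```bash
./kk create cluster --with-kubernetes v1.17.9 --with-kubesphere v3.0.0
```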
|
||||
|
||||
{{< notice note >}}
|
||||
|
|
@ -137,7 +115,7 @@ Here are some examples for your reference:
|
|||
- For all-in-one installation, generally speaking, you do not need to change any configuration.
|
||||
- KubeKey will install [OpenEBS](https://openebs.io/) to provision LocalPV for development and testing environment by default, which is convenient for new users. For other storage classes, see Storage Class Configuration.
|
||||
|
||||
{{</ notice >}}
|
||||
{{</ notice >}}
|
||||
|
||||
After you execute the command, you will see a table as below for environment check.
|
||||
|
||||
|
|
@ -145,11 +123,11 @@ After you execute the command, you will see a table as below for environment che
|
|||
|
||||
Make sure the above components marked with `y` are installed and input `yes` to continue.
|
||||
|
||||
{{< notice note >}}
|
||||
{{< notice note >}}
|
||||
|
||||
If you download the binary file directly in Step 2, you do not need to install `docker` as KubeKey will install it automatically.
|
||||
|
||||
{{</ notice >}}
|
||||
{{</ notice >}}
|
||||
|
||||
## Step 4: Verify the Installation
|
||||
|
||||
|
|
@ -178,7 +156,7 @@ NOTES:
|
|||
1. After logging into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
ready, please wait patiently until all components
|
||||
are ready.
|
||||
2. Please modify the default password after login.
|
||||
|
||||
|
|
@ -191,9 +169,9 @@ https://kubesphere.io 20xx-xx-xx xx:xx:xx
|
|||
|
||||
You may need to bind an EIP and configure port forwarding in your environment for external users to access the console. Besides, make sure port 30880 is open in your security groups.
|
||||
|
||||
{{</ notice >}}
|
||||
{{</ notice >}}
|
||||
|
||||
After logging in the console, you can check the status of different components in **Components**. You may need to wait for some components to be up and running if you want to use related services.
|
||||
After logging in to the console, you can check the status of different components in **Components**. You may need to wait for some components to be up and running if you want to use related services. You can also use `kubectl get pod --all-namespaces` to inspect the running status of KubeSphere workloads.
|
||||
|
||||

|
||||
|
||||
|
|
|
|||
|
|
@ -59,15 +59,15 @@ If you adopt [All-in-one Installation](https://kubesphere-v3.netlify.app/docs/qu
|
|||
|
||||
### Installing on Kubernetes
|
||||
|
||||
When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) for cluster setting. If you want to install pluggable components, do not use `kubectl apply -f` directly for this file.
|
||||
When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for cluster setting. If you want to install pluggable components, do not use `kubectl apply -f` directly for this file.
|
||||
|
||||
1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml). After that, to enable pluggable components, create a local file cluster-configuration.yaml.
|
||||
1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable pluggable components, create a local file cluster-configuration.yaml.
|
||||
|
||||
```bash
|
||||
vi cluster-configuration.yaml
|
||||
```
|
||||
|
||||
2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it to the local file just created.
|
||||
2. Copy all the content in the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it to the local file just created.
|
||||
3. In this local cluster-configuration.yaml file, enable the pluggable components you want to install by changing `false` to `true` for `enabled`. Here is [an example file](https://github.com/kubesphere/ks-installer/blob/master/deploy/cluster-configuration.yaml) for your reference. Save the file after you finish.
|
||||
4. Execute the following command to start installation:
|
||||
|
||||
|
|
|
|||
|
|
@ -17,7 +17,7 @@ In addition to installing KubeSphere on a Linux machine, you can also deploy it
|
|||
- The CSR signing feature is activated in kube-apiserver when it is started with the `--cluster-signing-cert-file` and `--cluster-signing-key-file` parameters. See [RKE installation issue](https://github.com/kubesphere/kubesphere/issues/1925#issuecomment-591698309).
|
||||
- For more information about the prerequisites of installing KubeSphere on Kubernetes, see [Prerequisites](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/prerequisites/).
|
||||
|
||||
{{</ notice >}}
|
||||
{{</ notice >}}
|
||||
|
||||
## Deploy KubeSphere
|
||||
|
||||
|
|
@ -25,19 +25,19 @@ After you make sure your machine meets the prerequisites, you can follow the ste
|
|||
|
||||
- Please read the note below before you execute the commands to start installation:
|
||||
|
||||
{{< notice note >}}
|
||||
{{< notice note >}}
|
||||
|
||||
- If your server has trouble accessing GitHub, you can copy the content in [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml) and [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) respectively and paste it to local files. You can then use `kubectl apply -f` for the local files to install KubeSphere.
|
||||
- If your server has trouble accessing GitHub, you can copy the content in [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml) and [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) respectively and paste it to local files. You can then use `kubectl apply -f` for the local files to install KubeSphere.
|
||||
- In cluster-configuration.yaml, you need to disable `metrics_server` manually by changing `true` to `false` if the component has already been installed in your environment, especially for cloud-hosted Kubernetes clusters.
|
||||
|
||||
{{</ notice >}}
|
||||
{{</ notice >}}
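The `metrics_server` section of the local file would then read as follows (an excerpt; this assumes the default layout of cluster-configuration.yaml):

```yaml
metrics_server:
  enabled: false # Disable if metrics-server is already installed in your cluster.
```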
|
||||
|
||||
```bash
|
||||
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
|
||||
```
|
||||
|
||||
```bash
|
||||
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
|
||||
```
|
||||
|
||||
- Inspect the logs of installation:
|
||||
|
|
@ -59,4 +59,4 @@ kubectl get svc/ks-console -n kubesphere-system
|
|||
|
||||
## Enable Pluggable Components (Optional)
|
||||
|
||||
The guide above is used only for minimal installation by default. To enable other components in KubeSphere, see Enable Pluggable Components for more details.
|
||||
The guide above is used only for minimal installation by default. To enable other components in KubeSphere, see Enable Pluggable Components for more details.
|
||||
|
|
|
|||
|
|
@ -1,10 +1,30 @@
|
|||
---
|
||||
title: "文档"
|
||||
title: "Documentation"
|
||||
css: "scss/docs.scss"
|
||||
|
||||
LinkTitle: "Documentation"
|
||||
|
||||
|
||||
section1:
|
||||
title: KubeSphere Documentation
|
||||
content: Learn how to build and manage cloud native applications using KubeSphere Container Platform. Get documentation, example code, tutorials, and more.
|
||||
image: /images/docs/banner.png
|
||||
---
|
||||
|
||||
section3:
|
||||
title: Run KubeSphere and Kubernetes Stack from the Cloud Service
|
||||
description: Cloud providers offer KubeSphere as a cloud-hosted service, helping you create a highly available cluster within minutes in just a few clicks. These services will be available in September 2020.
|
||||
list:
|
||||
- image: /images/docs/aws.jpg
|
||||
content: AWS Quickstart
|
||||
link:
|
||||
- image: /images/docs/qingcloud.svg
|
||||
content: QingCloud QKE
|
||||
link:
|
||||
- image: /images/docs/radore.jpg
|
||||
content: Radore RCD
|
||||
link:
|
||||
|
||||
titleRight: Want to host KubeSphere on your cloud?
|
||||
btnContent: Partner with us
|
||||
btnLink: /partner/
|
||||
---
|
||||
|
|
|
|||
|
|
@ -1,9 +1,9 @@
|
|||
---
|
||||
title: "Installing on Kubernetes"
|
||||
title: "Installing KubeSphere on Kubernetes"
|
||||
description: "Help you to better understand KubeSphere with detailed graphics and contents"
|
||||
layout: "single"
|
||||
|
||||
linkTitle: "Installing on Kubernetes"
|
||||
linkTitle: "Installing KubeSphere on Kubernetes"
|
||||
weight: 2500
|
||||
|
||||
icon: "/images/docs/docs.svg"
|
||||
|
|
|
|||
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
linkTitle: "Install on Linux"
|
||||
linkTitle: "Installing on Hosted Kubernetes"
|
||||
weight: 2200
|
||||
|
||||
_build:
|
||||
|
|
|
|||
|
|
@ -1,116 +0,0 @@
|
|||
---
|
||||
title: "All-in-One Installation"
|
||||
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
|
||||
description: 'The guide for installing all-in-one KubeSphere for developing or testing'
|
||||
|
||||
linkTitle: "All-in-One"
|
||||
weight: 2210
|
||||
---
|
||||
|
||||
For those who are new to KubeSphere and looking for a quick way to discover the platform, the all-in-one mode is the best choice: a one-click, hassle-free installation that provisions KubeSphere and Kubernetes on your machine.
|
||||
|
||||
- <font color=red>The following instructions are for the default installation without enabling any optional components, as we have made them pluggable since v2.1.0. If you want to enable any of them, please see the section [Enable Pluggable Components](../all-in-one#enable-pluggable-components) below.</font>
|
||||
- <font color=red>If your machine has >= 8 cores and >= 16G memory, we recommend you install the full package of KubeSphere by [enabling optional components](../complete-installation)</font>.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
If your machine is behind a firewall, you need to open the required ports; see the document [Ports Requirement](../port-firewall) for more information.
|
||||
|
||||
## Step 1: Prepare Linux Machine
|
||||
|
||||
The following describes the requirements of hardware and operating system.
|
||||
|
||||
- For `Ubuntu 16.04` OS, it is recommended to select the latest `16.04.5`.
|
||||
- If you are using Ubuntu 18.04, you need to use the root user to install.
|
||||
- If the Debian system does not have the sudo command installed, you need to execute the `apt update && apt install sudo` command using root before installation.
|
||||
|
||||
### Hardware Recommendation
|
||||
|
||||
| System | Minimum Requirements |
|
||||
| ------- | ----------- |
|
||||
| CentOS 7.4 ~ 7.7 (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:100 G |
|
||||
| Ubuntu 16.04/18.04 LTS (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:100 G |
|
||||
| Red Hat Enterprise Linux Server 7.4 (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:100 G |
|
||||
| Debian Stretch 9.5 (64 bit)| CPU:2 Core, Memory:4 G, Disk Space:100 G |
|
||||
|
||||
## Step 2: Download Installer Package
|
||||
|
||||
Execute the following commands to download Installer 2.1.1 and unpack it.
|
||||
|
||||
```bash
|
||||
curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
|
||||
&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/scripts
|
||||
```
|
||||
|
||||
## Step 3: Get Started with Installation
|
||||
|
||||
You do not need to do anything except execute one command as follows. The installer completes everything for you automatically, including installing/updating dependency packages, installing Kubernetes (default version 1.16.7), the storage service, and so on.
|
||||
|
||||
> Note:
|
||||
>
|
||||
> - Generally speaking, do not modify any configuration.
|
||||
> - KubeSphere installs `calico` by default. If you would like to use a different network plugin, you are allowed to change the configuration in `conf/common.yaml`. You are also allowed to modify other configurations such as storage class, pluggable components, etc.
|
||||
> - The default storage class is [OpenEBS](https://openebs.io/) which is a kind of [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) to provision persistence storage service. OpenEBS supports [dynamic provisioning PV](https://docs.openebs.io/docs/next/uglocalpv.html#Provision-OpenEBS-Local-PV-based-on-hostpath). It will be installed automatically for your testing purpose.
|
||||
> - Please refer [storage configurations](../storage-configuration) for supported storage class.
|
||||
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
|
||||
|
||||
**1.** Execute the following command:
|
||||
|
||||
```bash
|
||||
./install.sh
|
||||
```
|
||||
|
||||
**2.** Enter `1` to select `All-in-one` mode and type `yes` if your machine satisfies the requirements to start:
|
||||
|
||||
```bash
|
||||
################################################
|
||||
KubeSphere Installer Menu
|
||||
################################################
|
||||
* 1) All-in-one
|
||||
* 2) Multi-node
|
||||
* 3) Quit
|
||||
################################################
|
||||
https://kubesphere.io/ 2020-02-24
|
||||
################################################
|
||||
Please input an option: 1
|
||||
```
|
||||
|
||||
**3.** Verify if KubeSphere is installed successfully or not:
|
||||
|
||||
**(1).** If you see "Successful" returned after completion, the installation is successful. The console service is exposed through NodePort 30880 by default. You may need to bind an EIP and configure port forwarding in your environment for outside users to access it. Also make sure the related firewall ports are open.
|
||||
|
||||
```bash
|
||||
successsful!
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
|
||||
Console: http://192.168.0.8:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
|
||||
NOTE:Please modify the default password after login.
|
||||
#####################################################
|
||||
```
|
||||
|
||||
> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
|
||||
|
||||
**(2).** You will be able to use the default account and password to log in to the console and take a tour of KubeSphere.
|
||||
|
||||
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components in "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.</font>
|
||||
|
||||

|
||||
|
||||
## Enable Pluggable Components
|
||||
|
||||
The guide above is only used for minimal installation by default. You can execute the following command to open the ConfigMap and enable pluggable components. Make sure your cluster has enough CPU and memory in advance; see [Enable Pluggable Components](../pluggable-components).
|
||||
|
||||
```bash
|
||||
kubectl edit cm -n kubesphere-system ks-installer
|
||||
```
|
||||
|
||||
## FAQ
|
||||
|
||||
The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also please read the [FAQ of installation](../../faq/faq-install).
|
||||
|
||||
If you have any further questions please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
|
||||
|
|
@ -1,76 +0,0 @@
|
|||
---
|
||||
title: "Install All Optional Components"
|
||||
keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix'
|
||||
description: 'Install KubeSphere with all optional components enabled on Linux machine'
|
||||
|
||||
|
||||
weight: 2260
|
||||
---
|
||||
|
||||
The installer only installs required components (i.e. minimal installation) by default since v2.1.0. Other components are designed to be pluggable, which means you can enable any of them before or after installation. If your machine meets the following minimum requirements, we recommend you to **enable all components before installation**. A complete installation gives you an opportunity to comprehensively discover the container platform.
|
||||
|
||||
<font color="red">
|
||||
Minimum Requirements
|
||||
|
||||
- CPU: 8 cores in total of all machines
|
||||
- Memory: 16 GB in total of all machines
|
||||
|
||||
</font>
|
||||
|
||||
> Note:
|
||||
>
|
||||
> - If your machines do not meet the minimum requirements of a complete installation, you can enable any of components at your will. Please refer to [Enable Pluggable Components Installation](../pluggable-components).
|
||||
> - It works for [All-in-One](../all-in-one) and [Multi-Node](../multi-node).
|
||||
|
||||
This tutorial will walk you through how to enable all components of KubeSphere.
|
||||
|
||||
## Download Installer Package
|
||||
|
||||
If you do not have the package yet, please run the following commands to download Installer 2.1.1 and unpack it, then enter the `conf` folder.
|
||||
|
||||
```bash
|
||||
$ curl -L https://kubesphere.io/download/stable/v2.1.1 > installer.tar.gz \
|
||||
&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
|
||||
```
|
||||
|
||||
## Enable All Components
|
||||
|
||||
Edit `conf/common.yaml` and refer to the following changes, setting to `true` the values that are `false` by default.
|
||||
|
||||
```yaml
|
||||
# LOGGING CONFIGURATION
|
||||
# logging is an optional component when installing KubeSphere, and
|
||||
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
|
||||
# Builtin logging only provides limited functions, so recommend to enable logging.
|
||||
logging_enabled: true # Whether to install logging system
|
||||
elasticsearch_master_replica: 1 # Total number of master nodes; an even number is not allowed
|
||||
elasticsearch_data_replica: 2 # total number of data nodes
|
||||
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
|
||||
log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
|
||||
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
|
||||
kibana_enabled: false # Whether to install the built-in Kibana
|
||||
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address, KubeSphere supports integrate with Elasticsearch outside the cluster, which can reduce the resource consumption.
|
||||
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
|
||||
|
||||
#DevOps Configuration
|
||||
devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
|
||||
jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
|
||||
jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
|
||||
jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
|
||||
jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
|
||||
jenkinsJavaOpts_Xmx: 6g
|
||||
jenkinsJavaOpts_MaxRAM: 8g
|
||||
sonarqube_enabled: true # Whether to install built-in SonarQube
|
||||
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address, KubeSphere supports integrate with SonarQube outside the cluster, which can reduce the resource consumption.
|
||||
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
|
||||
|
||||
# Following components are all optional for KubeSphere,
|
||||
# which can be enabled before installation, or afterwards by updating their values to true
|
||||
openpitrix_enabled: true # KubeSphere application store
|
||||
metrics_server_enabled: true # For KubeSphere HPA to use
|
||||
servicemesh_enabled: true # KubeSphere service mesh system(Istio-based)
|
||||
notification_enabled: true # KubeSphere notification system
|
||||
alerting_enabled: true # KubeSphere alerting system
|
||||
```
|
||||
|
||||
Save it, then you can continue the installation process.
|
||||
|
|
@ -1,29 +1,29 @@
|
|||
---
|
||||
title: "在华为云 CCE 安装 KubeSphere"
|
||||
title: "Install KubeSphere on Huawei CCE"
|
||||
keywords: "kubesphere, kubernetes, docker, huawei, cce"
|
||||
description: "介绍如何在华为云 CCE 容器引擎上部署 KubeSphere 3.0"
|
||||
description: "It is to introduce how to install KubeSphere 3.0 on Huaiwei CCE."
|
||||
---
|
||||
|
||||
本指南将介绍如果在[华为云 CCE 容器引擎](https://support.huaweicloud.com/cce/)上部署并使用 KubeSphere 3.0.0 平台。
|
||||
This guide describes how to install KubeSphere 3.0.0 on [Huawei CCE](https://support.huaweicloud.com/en-us/qs-cce/cce_qs_0001.html).
|
||||
|
||||
## 华为云 CCE 环境准备
|
||||
## Preparation for Huawei CCE
|
||||
|
||||
### 创建 Kubernetes 集群
|
||||
### Create Kubernetes Cluster
|
||||
|
||||
首先按使用环境的资源需求创建 Kubernetes 集群,满足以下一些条件即可(如已有环境并满足条件可跳过本节内容):
|
||||
First, create a Kubernetes cluster that meets the requirements below (skip this section if you already have a qualified environment):
|
||||
|
||||
- KubeSphere 3.0.0 默认支持的 Kubernetes 版本为 `1.15.x`, `1.16.x`, `1.17.x`, `1.18.x`,需要选择其中支持的版本进行集群创建(如 `v1.15.11`, `v1.17.9`);
|
||||
- 需要确保 Kubernetes 集群所使用的云主机的网络可以,可以通过在创建集群的同时 “自动创建” 或 “使用已有” 弹性 IP;或者在集群创建后自行配置网络(如配置 [NAT 网关](https://support.huaweicloud.com/natgateway/));
|
||||
- 工作节点规格方面建议选择 `s3.xlarge.2` 的 `4核|8GB` 配置,并按需扩展工作节点数量(通常生产环境需要 3 个及以上工作节点)。
|
||||
- KubeSphere 3.0.0 supports Kubernetes `1.15.x`, `1.16.x`, `1.17.x`, and `1.18.x` by default. Select a version and create the cluster, e.g. `v1.15.11`, `v1.17.9`.
|
||||
- Ensure the cloud hosts used by your Kubernetes cluster have network access. You can bind an elastic IP via "Auto Create" or "Select Existing" when creating the cluster, or configure the network after the cluster is created, e.g. with a [NAT Gateway](https://support.huaweicloud.com/en-us/productdesc-natgateway/en-us_topic_0086739762.html).
|
||||
- Select the `s3.xlarge.2` model (`4 cores | 8 GB`) for worker nodes and add more if necessary (3 or more worker nodes are usually required in a production environment).
|
||||
|
||||
### 创建公网 kubectl 证书
|
||||
### Create a kubectl certificate for public network access
|
||||
|
||||
- 创建完集群后,进入 `资源管理` > `集群管理` 界面,在 `基本信息` > `网络` 面板中,绑定 `公网apiserver地址`;
|
||||
- 而后在右侧面板中,选择 `kubectl` 标签页,并在 `下载kubectl配置文件` 列表项中 `点击此处下载`,即可获取公用可用的 kubectl 证书。
|
||||
- After the cluster is created, go to `Resource Management` > `Cluster Management`, and in the `Basic Information` > `Network` panel, bind a `Public apiserver address`.
|
||||
- Then select the `kubectl` tab in the right panel, go to `Download kubectl configuration file`, and click `Click here to download` to obtain a publicly usable kubectl certificate.
|
||||
|
||||

|
||||

|
||||
|
||||
获取 kubectl 配置文件后,可通过 kubectl 命令行工具来验证集群连接:
|
||||
After you get the kubectl configuration file, you can verify the connection to the cluster with the kubectl command-line tool:
|
||||
|
||||
```bash
|
||||
$ kubectl version
|
||||
|
|
@ -32,13 +32,13 @@ Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.9-r0-CCE2
|
|||
|
||||
```
|
||||
|
||||
## KubeSphere 平台部署
|
||||
## KubeSphere Deployment
|
||||
|
||||
### 创建自定义 StorageClass
|
||||
### Create a custom StorageClass
|
||||
|
||||
> 由于华为 CCE 自带的 Everest CSI 组件所提供的 StorageClass `csi-disk` 默认指定的是 SATA 磁盘(即普通 I/O 磁盘),但实际创建的 Kubernetes 集群所配置的磁盘基本只有 SAS(高 I/O)和 SSD (超高 I/O),因此建议额外创建对应的 StorageClass(并设定为默认)以方便后续部署使用。参见官方文档 - [使用 kubectl 创建云硬盘](https://support.huaweicloud.com/usermanual-cce/cce_01_0044.html#section7)。
|
||||
> The StorageClass `csi-disk` provided by Huawei CCE's built-in Everest CSI component uses SATA (common I/O) disks by default, while the disks actually configured for Kubernetes clusters are usually SAS (high I/O) or SSD (ultra-high I/O). It is therefore suggested that you create an extra StorageClass (and set it as the default) for later use. Refer to the official document - [Use kubectl to create a cloud disk](https://support.huaweicloud.com/en-us/usermanual-cce/cce_01_0044.html).
|
||||
|
||||
以下示例展示如何创建一个 SAS(高 I/O)磁盘对应的 StorageClass:
|
||||
Below is an example of creating a StorageClass for SAS (high I/O) disks:
|
||||
|
||||
```yaml
|
||||
# csi-disk-sas.yaml
|
||||
|
|
@ -54,7 +54,7 @@ metadata:
|
|||
parameters:
|
||||
csi.storage.k8s.io/csi-driver-name: disk.csi.everest.io
|
||||
csi.storage.k8s.io/fstype: ext4
|
||||
# 绑定华为 “高I/O” 磁盘,如需 “超高I/O“ 则此值改为 SSD
|
||||
# Bind Huawei “high I/O storage. If use “extremely high I/O, change it to SSD.
|
||||
everest.io/disk-volume-type: SAS
|
||||
everest.io/passthrough: "true"
|
||||
provisioner: everest-csi-provisioner
|
||||
|
|
@ -64,48 +64,48 @@ volumeBindingMode: Immediate
|
|||
|
||||
```
|
||||
|
||||
关于如何设定/取消默认 StorageClass,可参考 Kubernetes 官方文档 - [改变默认 StorageClass](https://kubernetes.io/zh/docs/tasks/administer-cluster/change-default-storage-class/)。
|
||||
For how to set or cancel a default StorageClass, refer to the Kubernetes official document - [Change the Default StorageClass](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/).
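As a sketch, assuming the StorageClass created above is named `csi-disk-sas`, marking it as the default can be done with the annotation-based command from the Kubernetes documentation:

```bash
kubectl patch storageclass csi-disk-sas -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```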
|
||||
|
||||
### 通过 ks-installer 执行最小化部署
|
||||
### Perform a Minimal Deployment with ks-installer
|
||||
|
||||
接下来就可以使用 [ks-installer](https://github.com/kubesphere/ks-installer) 在已有的 Kubernetes 集群上来执行 KubeSphere 部署,建议首先还是以最小功能集进行安装,可执行以下命令:
|
||||
Next, use [ks-installer](https://github.com/kubesphere/ks-installer) to deploy KubeSphere on the existing Kubernetes cluster. It is suggested that you start with a minimal installation by executing the following commands:
|
||||
|
||||
```bash
|
||||
$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml
|
||||
$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml
|
||||
$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
|
||||
$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
|
||||
|
||||
```
|
||||
|
||||
执行部署命令后,可以通过进入 `工作负载` > `容器组 Pod` 界面,在右侧面板中查询 `kubesphere-system` 命名空间下的 Pod 运行状态了解 KubeSphere 平台最小功能集的部署状态;通过该命名空间下 `ks-console-xxxx` 容器的状态来了解 KubeSphere 控制台应用的可用状态。
|
||||
After executing the commands, go to `Workload` > `Pod` and check the status of the Pods in the `kubesphere-system` namespace to follow the progress of the minimal deployment. The status of the `ks-console-xxxx` Pod in that namespace indicates whether the KubeSphere console is available.
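Equivalently, you can check the same Pods from the command line:

```bash
kubectl get pods -n kubesphere-system
```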
|
||||
|
||||

|
||||

|
||||
|
||||
### 开启 KubeSphere 外网访问
|
||||
### Expose KubeSphere Console
|
||||
|
||||
通过 `kubesphere-system` 命名空间下的 Pod 运行状态确认 KubeSphere 基础组件都已进入运行状态后,我们需要为 KubeSphere 控制台开启外网访问。
|
||||
Check the running status of the Pods in the `kubesphere-system` namespace to make sure the basic components of KubeSphere are running, and then expose the KubeSphere console to the Internet.
|
||||
|
||||
进入 `资源管理` > `网络管理`,在右侧面板中选择 `ks-console` 更改网络访问方式,建议选用 `负载均衡(``LoadBalancer)` 访问方式(需绑定弹性公网 IP),配置完成后如下图:
|
||||
Go to `Resource Management` > `Network`, select the `ks-console` service in the right panel, and change its access method. It is suggested that you choose `LoadBalancer` (an elastic public IP is required). The configuration after completion is shown below.
|
||||
|
||||

|
||||

|
||||
|
||||
服务细节配置基本上选用默认选项即可,当然也可以按需进行调整:
|
||||
The default options are fine for the remaining service settings, though you can adjust them as needed.
|
||||
|
||||

|
||||

|
||||
|
||||
通过负载均衡绑定公网访问后,即可使用给定的访问地址进行访问,进入到 KubeSphere 的登陆界面并使用默认账号(用户名 `admin`,密码 `P@88w0rd`)即可登陆平台:
|
||||
After the LoadBalancer is bound for the KubeSphere console, you can visit it via the given address. Go to the KubeSphere login page and use the default account (username `admin`, password `P@88w0rd`) to log in.
|
||||
|
||||

|
||||

|
||||
|
||||
### 通过 KubeSphere 开启附加组件
|
||||
### Enable Add-ons via KubeSphere
|
||||
|
||||
KubeSphere 平台外网可访问后,接下来的操作即可都在平台内完成。开启附加组件的操作可以参考社区文档 - `KubeSphere 3.0 界面开启可插拔组件安装`。
|
||||
Once KubeSphere is accessible from the Internet, all subsequent actions can be performed on the console. To enable add-ons, refer to the community document - `Enabling Pluggable Components in KubeSphere 3.0`.
|
||||
|
||||
💡 需要留意:在开启 Istio 组件之前,由于自定义资源定义(CRD)冲突的问题,需要先删除华为 CCE 自带的 `applications.app.k8s.io` ,最直接的方式是通过 kubectl 工具来完成:
|
||||
💡 Note: Before enabling the Istio component, you have to delete the `applications.app.k8s.io` CRD built into Huawei CCE because of a custom resource definition (CRD) conflict. The simplest way to do this is with kubectl:
|
||||
|
||||
```bash
|
||||
$ kubectl delete crd applications.app.k8s.io
|
||||
```
|
||||
|
||||
全部附加组件开启并安装成功后,进入集群管理界面,可以得到如下界面呈现效果,特别是在 `服务组件` 部分可以看到已经开启的各个基础和附加组件:
|
||||
After all add-ons are enabled and installed successfully, go to the Cluster Management page and you will see the interface below. In particular, the enabled basic and add-on components are listed under `Service Components`.
|
||||
|
||||

|
||||

|
||||
|
|
|
|||
|
|
@ -1,103 +0,0 @@
|
|||
---
|
||||
title: "在腾讯云 TKE 安装 KubeSphere"
|
||||
keywords: "kubesphere, kubernetes, docker, tencent, tke"
|
||||
description: "介绍如何在腾讯云 TKE 上部署 KubeSphere 3.0"
|
||||
---
|
||||
|
||||
This guide describes how to deploy and use the KubeSphere 3.0.0 platform on [Tencent Cloud TKE](https://cloud.tencent.com/document/product/457/6759).
|
||||
|
||||
## Prepare the Tencent Cloud TKE Environment
|
||||
|
||||
### Create a Kubernetes Cluster
|
||||
First, [create a Kubernetes cluster](https://cloud.tencent.com/document/product/457/32189) according to the resource requirements of your environment. It only needs to meet the following conditions (skip this section if you already have a qualified environment):
|
||||
|
||||
- KubeSphere 3.0.0 supports Kubernetes `1.15.x`, `1.16.x`, `1.17.x`, and `1.18.x` by default; select one of the supported versions when creating the cluster (e.g. `1.16.3`, `1.18.4`).
|
||||
- For worker nodes, the `SA2.LARGE8` model with `4 cores | 8 GB` is sufficient; scale out the number of worker nodes as needed (a production environment usually requires 3 or more).
|
||||
|
||||
|
||||
|
||||
### Create a kubectl certificate for public network access
|
||||
|
||||
- After the cluster is created, go to `Container Service` > `Cluster`, select the newly created cluster, and in the `Basic Information` panel enable `Public Network Access` under `Cluster APIServer Information`.
|
||||
- Then click `Download` in the `kubeconfig` list item below to obtain a publicly usable kubectl certificate.
|
||||
|
||||

|
||||
|
||||
- After you get the kubectl configuration file, verify the connection to the cluster with the kubectl command-line tool:
|
||||
|
||||
|
||||
|
||||
```bash
|
||||
$ kubectl version
|
||||
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-17T11:41:22Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
|
||||
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.4-tke.2", GitCommit:"f6b0517bc6bc426715a9ff86bd6aef39c81fd64a", GitTreeState:"clean", BuildDate:"2020-08-12T02:18:32Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
|
||||
```
|
||||
|
||||
|
||||
## KubeSphere Deployment
|
||||
|
||||
### Perform a Minimal Deployment with ks-installer
|
||||
Next, use [ks-installer](https://github.com/kubesphere/ks-installer) to deploy KubeSphere on the existing Kubernetes cluster. It is suggested that you start with a minimal installation.
|
||||
|
||||
- Install KubeSphere by executing the following command with kubectl:
|
||||
```bash
|
||||
$ kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml
|
||||
```
|
||||
|
||||
- Create a local file named `cluster-configuration.yaml`:
|
||||
```bash
|
||||
$ vim cluster-configuration.yaml
|
||||
```
|
||||
|
||||
- Copy the content of this [file](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) into `cluster-configuration.yaml`, set the `metrics_server.enabled` field to `false`, and then execute the following command:
|
||||
|
||||

|
||||
```bash
|
||||
$ kubectl apply -f cluster-configuration.yaml
|
||||
```
|
||||
|
||||
<font color=red>Note:
|
||||
Tencent Cloud TKE managed clusters deploy `hpa-metrics-server` by default; if it is not disabled in the `cluster-configuration.yaml` file, the KubeSphere deployment will fail.</font>
|
||||
|
||||
|
||||
- Execute the following command to view the deployment logs; the deployment is complete when the log output looks like the image below:
|
||||
```bash
|
||||
$ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||

|
||||
|
||||
### Access the KubeSphere Console
|
||||
After the deployment is complete, you can access the KubeSphere console through the following steps.
|
||||
|
||||
#### Access via NodePort
|
||||
|
||||
- In `Container Service` > `Cluster`, select the created cluster, and in the `Node Management` > `Node` panel check the `Public IP` of any node (by default, a free public IP is bound to each node when the cluster is installed).
|
||||
|
||||

|
||||
|
||||
- Since NodePort 30880 is enabled by default when the service is installed, enter `<Public IP>:30880` in the browser and log in to the console with the default account (username `admin`, password `P@88w0rd`).
|
||||
|
||||

|
||||
|
||||
#### Access via LoadBalancer
|
||||
|
||||
- In `Container Service` > `Cluster`, select the created cluster, and in the `Services and Routes` > `Service` panel, click `Update Access Method` in the `ks-console` row.
|
||||
|
||||

|
||||
|
||||
- For `Service Access Method`, select `Public Network Access`; under `Port Mapping`, fill in the desired `Service Port`; then click `Update Access Method`.
|
||||
|
||||

|
||||
|
||||
- You will then see the public IP of the LoadBalancer on the page:
|
||||
|
||||

|
||||
|
||||
- Enter `<LoadBalancer Public IP>:<Mapped Port>` in the browser and log in to the console with the default account (username `admin`, password `P@88w0rd`).
|
||||
|
||||

|
||||
|
||||
### Enable Add-ons via KubeSphere
|
||||
Once KubeSphere is accessible from the Internet, all subsequent actions can be performed within the platform. To enable add-ons, refer to the community document - `Enabling Pluggable Components in KubeSphere 3.0`.
|
||||
After all add-ons are enabled and installed successfully, go to the Cluster Management page and you will see the interface below; in particular, the enabled basic and add-on components are listed under `Service Components`:
|
||||

|
||||
|
|
@ -0,0 +1,131 @@
|
|||
---
|
||||
title: "Deploy KubeSphere on AKS"
|
||||
keywords: "KubeSphere, Kubernetes, Installation, Azure, AKS"
|
||||
description: "How to deploy KubeSphere on AKS"
|
||||
|
||||
weight: 2270
|
||||
---
|
||||
|
||||
This guide walks you through the steps of deploying KubeSphere on [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/).
|
||||
|
||||
## Prepare an AKS Cluster
|
||||
|
||||
Azure can help you implement infrastructure as code by providing resource deployment automation options. Commonly adopted tools include [ARM templates](https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/overview) and [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/what-is-azure-cli?view=azure-cli-latest). In this guide, we will use Azure CLI to create all the resources that are needed for the installation of KubeSphere.
|
||||
|
||||
### Use Azure Cloud Shell
|
||||
You don't have to install Azure CLI on your machine as Azure provides a web-based terminal. Click the Cloud Shell button on the menu bar at the upper right corner in Azure portal.
|
||||
|
||||

|
||||
|
||||
Select **Bash** Shell.
|
||||
|
||||

|
||||
### Create a Resource Group
|
||||
|
||||
An Azure resource group is a logical group in which Azure resources are deployed and managed. The following example creates a resource group named `KubeSphereRG` in the location `westus`.
|
||||
|
||||
```bash
|
||||
az group create --name KubeSphereRG --location westus
|
||||
```
|
||||
|
||||
### Create an AKS Cluster
|
||||
Use the command `az aks create` to create an AKS cluster. The following example creates a cluster named `KuberSphereCluster` with three nodes. This will take several minutes to complete.
|
||||
|
||||
```bash
|
||||
az aks create --resource-group KubeSphereRG --name KuberSphereCluster --node-count 3 --enable-addons monitoring --generate-ssh-keys
|
||||
```
|
||||
{{< notice note >}}
|
||||
|
||||
You can use `--node-vm-size` or `-s` option to change the size of Kubernetes nodes. Default: Standard_DS2_v2 (2vCPU, 7GB memory). For more options, see [az aks create](https://docs.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-create).
|
||||
|
||||
{{</ notice >}}
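For example, to create the same cluster with larger nodes, you could pass the size explicitly (the VM size below is only an illustration; check availability in your region):

```bash
az aks create --resource-group KubeSphereRG --name KuberSphereCluster --node-count 3 --node-vm-size Standard_D4s_v3 --enable-addons monitoring --generate-ssh-keys
```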
|
||||
|
||||
### Connect to the Cluster
|
||||
|
||||
To configure kubectl to connect to the Kubernetes cluster, use the command `az aks get-credentials`. This command downloads credentials and configures the Kubernetes CLI to use them.
|
||||
|
||||
```bash
|
||||
az aks get-credentials --resource-group KubeSphereRG --name KuberSphereCluster
|
||||
```
|
||||
|
||||
```bash
|
||||
kebesphere@Azure:~$ kubectl get nodes
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
aks-nodepool1-23754246-vmss000000 Ready agent 38m v1.16.13
|
||||
```
|
||||
### Check Azure Resources in the Portal
|
||||
After you execute all the commands above, you can see there are 2 Resource Groups created in Azure Portal.
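You can also list them from the CLI as an optional quick check:

```bash
az group list --output table
```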
|
||||
|
||||

|
||||
|
||||
The Azure Kubernetes Service cluster itself will be placed in KubeSphereRG.
|
||||
|
||||

|
||||
|
||||
All the other Resources will be placed in MC_KubeSphereRG_KuberSphereCluster_westus, such as VMs, Load Balancer and Virtual Network.
|
||||
|
||||

|
||||
|
||||
## Deploy KubeSphere on AKS
|
||||
To start deploying KubeSphere, use the following command.
|
||||
```bash
|
||||
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
|
||||
```
|
||||
Download cluster-configuration.yaml as shown below so that you can customize the configuration. You can also enable pluggable components by setting the `enabled` property to `true` in this file.
|
||||
```bash
|
||||
wget https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
|
||||
```
|
||||
As `metrics-server` is already installed on AKS, you need to disable the component in the cluster-configuration.yaml file by changing `true` to `false` for `enabled`.
|
||||
```bash
|
||||
kebesphere@Azure:~$ vim ./cluster-configuration.yaml
|
||||
---
|
||||
metrics_server: # (CPU: 56 m, Memory: 44.35 MiB) Whether to install metrics-server. It enables HPA (Horizontal Pod Autoscaler).
|
||||
enabled: false
|
||||
---
|
||||
```
|
||||
The installation process will start after the cluster configuration is applied through the following command:
|
||||
```bash
|
||||
kubectl apply -f ./cluster-configuration.yaml
|
||||
```
|
||||
|
||||
You can inspect the logs of installation through the following command:
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
## Access KubeSphere Console
|
||||
|
||||
To access KubeSphere console from a public IP address, you need to change the service type to `LoadBalancer`.
|
||||
```bash
|
||||
kubectl edit service ks-console -n kubesphere-system
|
||||
```
|
||||
Find the following section and change the type to `LoadBalancer`.
|
||||
```bash
|
||||
spec:
|
||||
clusterIP: 10.0.78.113
|
||||
externalTrafficPolicy: Cluster
|
||||
ports:
|
||||
- name: nginx
|
||||
nodePort: 30880
|
||||
port: 80
|
||||
protocol: TCP
|
||||
targetPort: 8000
|
||||
selector:
|
||||
app: ks-console
|
||||
tier: frontend
|
||||
version: v3.0.0
|
||||
sessionAffinity: None
|
||||
type: LoadBalancer # Change NodePort to LoadBalancer
|
||||
status:
|
||||
loadBalancer: {}
|
||||
```
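Alternatively, if you prefer a non-interactive change, a one-line patch such as the following should have the same effect (a sketch; the interactive edit above is the path described in this guide):

```bash
kubectl patch svc ks-console -n kubesphere-system -p '{"spec": {"type": "LoadBalancer"}}'
```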
|
||||
After saving the configuration of the ks-console service, you can use the following command to get the public IP address (under `EXTERNAL-IP`). Use the IP address to access the console with the default account and password (`admin/P@88w0rd`).
|
||||
```bash
|
||||
kebesphere@Azure:~$ kubectl get svc/ks-console -n kubesphere-system
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
ks-console   LoadBalancer   10.0.181.93   13.86.xxx.xxx   80:30194/TCP   13m
|
||||
```
|
||||
## Enable Pluggable Components (Optional)
|
||||
|
||||
The example above demonstrates the process of a default minimal installation. For pluggable components, you can enable them either before or after the installation. See [Enable Pluggable Components](https://github.com/kubesphere/ks-installer#enable-pluggable-components) for details.
|
||||
|
|
@ -0,0 +1,126 @@
|
|||
---
|
||||
title: "Deploy KubeSphere on DigitalOcean"
|
||||
keywords: 'Kubernetes, KubeSphere, DigitalOcean, Installation'
|
||||
description: 'How to install KubeSphere on DigitalOcean'
|
||||
|
||||
weight: 2265
|
||||
---
|
||||
|
||||

|
||||
|
||||
This guide walks you through the steps of deploying KubeSphere on [DigitalOcean Kubernetes](https://www.digitalocean.com/products/kubernetes/).
|
||||
|
||||
## Prepare a DOKS Cluster
|
||||
|
||||
A Kubernetes cluster in DO is a prerequisite for installing KubeSphere. Go to your [DO account](https://cloud.digitalocean.com/) and, in the navigation menu, refer to the image below to create a cluster.
|
||||
|
||||

|
||||
|
||||
You need to select:
|
||||
1. Kubernetes version (e.g. *1.18.6-do.0*)
|
||||
2. Datacenter region (e.g. *Frankfurt*)
|
||||
3. VPC network (e.g. *default-fra1*)
|
||||
4. Cluster capacity (e.g. 2 standard nodes with 2 vCPUs and 4GB of RAM each)
|
||||
5. A name for the cluster (e.g. *kubesphere-3*)
|
||||
|
||||

|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- Supported Kubernetes versions for KubeSphere 3.0.0: 1.15.x, 1.16.x, 1.17.x, 1.18.x.
|
||||
- 2 nodes are included in this example. You can add more nodes based on your own needs especially in a production environment.
|
||||
- The machine type Standard / 4 GB / 2 vCPUs is for minimal installation. If you plan to enable several pluggable components or use the cluster for production, we recommend upgrading your nodes to a more powerful type (such as CPU-Optimized / 8 GB / 4 vCPUs). DigitalOcean appears to provision the master nodes based on the type of the worker nodes, and with Standard nodes the API server can become unresponsive quite quickly.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
When the cluster is ready, you can download the config file for kubectl.
|
||||
|
||||

|
||||
|
||||
## Install KubeSphere on DOKS
|
||||
|
||||
Now that the cluster is ready, you can install KubeSphere following these steps:
|
||||
|
||||
- Install KubeSphere using kubectl. The following command is only for the default minimal installation.
|
||||
|
||||
```bash
|
||||
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
|
||||
```
|
||||
|
||||
- Create a local cluster-configuration.yaml.
|
||||
|
||||
```bash
|
||||
vi cluster-configuration.yaml
|
||||
```
|
||||
|
||||
- Copy all the content in this [file](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it to your local cluster-configuration.yaml.
|
||||
|
||||
- Save the file when you finish. Execute the following command to start installation:
|
||||
|
||||
```bash
|
||||
kubectl apply -f cluster-configuration.yaml
|
||||
```
|
||||
|
||||
- Inspect the logs of installation:
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
When the installation finishes, you can see the following message:
|
||||
|
||||
```bash
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
Console: http://10.XXX.XXX.XXX:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
NOTES:
|
||||
1. After logging into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are ready.
|
||||
2. Please modify the default password after login.
|
||||
#####################################################
|
||||
https://kubesphere.io 2020-xx-xx xx:xx:xx
|
||||
```
|
||||
|
||||
## Access KubeSphere Console
|
||||
|
||||
Now that KubeSphere is installed, you can access the web console of KubeSphere by following the steps below.
|
||||
|
||||
- Go to the Kubernetes Dashboard provided by DigitalOcean.
|
||||
|
||||

|
||||
|
||||
- Select the **kubesphere-system** namespace
|
||||
|
||||

|
||||
|
||||
- In **Service -> Services**, edit the service **ks-console**.
|
||||
|
||||

|
||||
|
||||
- Change the type from `NodePort` to `LoadBalancer`. Save the file when you finish.
|
||||
|
||||

|
||||
|
||||
- Access KubeSphere's web console using the endpoint generated by DO.
|
||||
|
||||

|
||||
|
||||
{{< notice tip >}}
|
||||
|
||||
Instead of changing the service type to `LoadBalancer`, you can also access the KubeSphere console via `NodeIP:NodePort` (with the service type set to `NodePort`). You need to get the public IP of any one of your nodes.
|
||||
|
||||
{{</ notice >}}
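One way to look up your nodes' addresses (the public IP appears in the `EXTERNAL-IP` column):

```bash
kubectl get nodes -o wide
```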
|
||||
|
||||
- Log in to the console with the default account and password (`admin/P@88w0rd`). In the cluster overview page, you can see the dashboard as shown in the following image.
|
||||
|
||||

|
||||
|
||||
## Enable Pluggable Components (Optional)
|
||||
|
||||
The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see Enable Pluggable Components for more details.
|
||||
|
|
@ -0,0 +1,172 @@
|
|||
---
|
||||
title: "Deploy KubeSphere on AWS EKS"
|
||||
keywords: 'Kubernetes, KubeSphere, EKS, Installation'
|
||||
description: 'How to install KubeSphere on EKS'
|
||||
|
||||
weight: 2265
|
||||
---
|
||||
|
||||
This guide walks you through the steps of deploying KubeSphere on [AWS EKS](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html).
|
||||
## Install the AWS CLI
|
||||
AWS EKS does not provide a web terminal like GKE, so you need to install the AWS CLI first. The following example is for macOS; for other operating systems, refer to [Getting Started with Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html).
|
||||
```shell
|
||||
pip3 install awscli --upgrade --user
|
||||
```
|
||||
Check it with `aws --version`
|
||||

|
||||
|
||||
## Prepare an EKS Cluster
|
||||
|
||||
- A standard Kubernetes cluster in AWS is a prerequisite for installing KubeSphere. Go to the navigation menu and refer to the image below to create a cluster.
|
||||
|
||||

|
||||
|
||||
- On the Configure cluster page, fill in the following fields:
|
||||

|
||||
|
||||
- Name – A unique name for your cluster.
|
||||
|
||||
- Kubernetes version – The version of Kubernetes to use for your cluster.
|
||||
|
||||
- Cluster service role – Select the IAM role that you created with [Create your Amazon EKS cluster IAM role](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#role-create).
|
||||
|
||||
- Secrets encryption – (Optional) Choose to enable envelope encryption of Kubernetes secrets using the AWS Key Management Service (AWS KMS). If you enable envelope encryption, the Kubernetes secrets are encrypted using the customer master key (CMK) that you select. The CMK must be symmetric, created in the same region as the cluster, and if the CMK was created in a different account, the user must have access to the CMK. For more information, see [Allowing users in other accounts to use a CMK](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html) in the AWS Key Management Service Developer Guide.
|
||||
|
||||
- Kubernetes secrets encryption with an AWS KMS CMK requires Kubernetes version 1.13 or later. If no keys are listed, you must create one first. For more information, see [Creating keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html).
|
||||
|
||||
- Tags – (Optional) Add any tags to your cluster. For more information, see [Tagging your Amazon EKS resources](https://docs.aws.amazon.com/eks/latest/userguide/eks-using-tags.html).
|
||||
|
||||
- Select Next.
|
||||
|
||||
- On the Specify networking page, select values for the following fields:
|
||||

|
||||
|
||||
- VPC – The VPC that you created previously in [Create your Amazon EKS cluster VPC](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#vpc-create). You can find the name of your VPC in the drop-down list.
|
||||
|
||||
- Subnets – By default, the available subnets in the VPC specified in the previous field are preselected. Deselect any subnet that you don't want to host cluster resources, such as worker nodes or load balancers.
|
||||
|
||||
- Security groups – The SecurityGroups value from the AWS CloudFormation output that you generated with [Create your Amazon EKS cluster VPC](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#vpc-create). This security group has ControlPlaneSecurityGroup in the drop-down name.
|
||||
- For Cluster endpoint access – Choose one of the following options:
|
||||

|
||||
- Public – Enables only public access to your cluster's Kubernetes API server endpoint. Kubernetes API requests that originate from outside of your cluster's VPC use the public endpoint. By default, access is allowed from any source IP address. You can optionally restrict access to one or more CIDR ranges such as 192.168.0.0/16, for example, by selecting Advanced settings and then selecting Add source.
|
||||
|
||||
- Private – Enables only private access to your cluster's Kubernetes API server endpoint. Kubernetes API requests that originate from within your cluster's VPC use the private VPC endpoint.
|
||||
|
||||
> Important
|
||||
If you created a VPC without outbound internet access, then you must enable private access.
|
||||
|
||||
- Public and private – Enables public and private access.
|
||||
- Select Next.
|
||||

|
||||
- On the Configure logging page, you can optionally choose which log types that you want to enable. By default, each log type is Disabled. For more information, see [Amazon EKS control plane logging](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html).
|
||||
|
||||
- Select Next.
|
||||

|
||||
- On the Review and create page, review the information that you entered or selected on the previous pages. Select Edit if you need to make changes to any of your selections. Once you're satisfied with your settings, select Create. The Status field shows CREATING until the cluster provisioning process completes.
|
||||
For more information about the previous options, see Modifying cluster endpoint access.
|
||||
When your cluster provisioning is complete (usually between 10 and 15 minutes), note the API server endpoint and Certificate authority values. These are used in your kubectl configuration.
|
||||

|
||||
- Create **Node Group**, define 2 nodes in this cluster.
|
||||

|
||||
- Config node group
|
||||

|
||||
|
||||
{{< notice note >}}
|
||||
- Supported Kubernetes versions for KubeSphere 3.0.0: 1.15.x, 1.16.x, 1.17.x, 1.18.x.
|
||||
- Ubuntu is used for the operating system here as an example. For more information on supported systems, see Overview.
|
||||
- 3 nodes are included in this example. You can add more nodes based on your own needs especially in a production environment.
|
||||
- The machine type t3.medium (2 vCPU, 4GB memory) is for minimal installation. If you want to enable pluggable components or use the cluster for production, please select a machine type with more resources.
|
||||
- For other settings, you can change them as well based on your own needs or use the default value.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
- When the EKS cluster is ready, you can connect to the cluster with kubectl.
|
||||
## Configure kubectl

We will use the kubectl command-line utility to communicate with the cluster API server. First, retrieve the kubeconfig of the EKS cluster that you just created.

- Configure your AWS CLI credentials.

```shell
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: region-code
Default output format [None]: json
```

- Create your kubeconfig file with the AWS CLI.

```shell
aws eks --region us-west-2 update-kubeconfig --name cluster_name
```

- By default, the resulting configuration file is created at the default kubeconfig path (.kube/config) in your home directory or merged with an existing kubeconfig at that location. You can specify another path with the `--kubeconfig` option.

- You can specify an IAM role ARN with the `--role-arn` option to use for authentication when you issue kubectl commands. Otherwise, the IAM entity in your default AWS CLI or SDK credential chain is used. You can view your default AWS CLI or SDK identity by running the `aws sts get-caller-identity` command.

For more information, see the help page with the `aws eks update-kubeconfig help` command or see update-kubeconfig in the [AWS CLI Command Reference](https://docs.aws.amazon.com/eks/latest/userguide/security_iam_id-based-policy-examples.html).

- Test your configuration (an illustrative result follows the command).

```shell
kubectl get svc
```
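If the kubeconfig is set up correctly, the command returns the cluster's default `kubernetes` service. The exact IP and age will differ; the output below is only illustrative:

```bash
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   10m
```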
## Install KubeSphere on EKS

- Install KubeSphere using kubectl. The following command is only for the default minimal installation.

```bash
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
```



- Apply the default cluster configuration:

```bash
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
```



- Inspect the logs of installation:

```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

- When the installation finishes, you can see the following message:

```bash
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Account: admin
Password: P@88w0rd
NOTES:
  1. After logging into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are ready.
  2. Please modify the default password after login.
#####################################################
https://kubesphere.io             2020-xx-xx xx:xx:xx
```

## Access KubeSphere Console

Now that KubeSphere is installed, you can access the web console of KubeSphere by following the steps below.

- Find the service **ks-console**.

```shell
kubectl get svc -n kubesphere-system
```

- Run `kubectl edit svc ks-console -n kubesphere-system` and change the type from `NodePort` to `LoadBalancer`. Save the file when you finish (a one-line alternative is sketched below).


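If you prefer not to open an editor, the same change can be made with a one-line `kubectl patch`; this is just a convenience sketch, not part of the original procedure:

```bash
# switch the ks-console service from NodePort to LoadBalancer in place
kubectl -n kubesphere-system patch svc ks-console -p '{"spec": {"type": "LoadBalancer"}}'
```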
- Run `kubectl get svc -n kubesphere-system` again and get the external IP.



- Access the web console of KubeSphere using the external IP generated by EKS.

- Log in to the console with the default account and password (`admin/P@88w0rd`). On the cluster overview page, you can see the dashboard as shown in the following image.



## Enable Pluggable Components (Optional)

The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see Enable Pluggable Components for more details.

@@ -0,0 +1,132 @@
---
title: "Deploy KubeSphere on GKE"
keywords: 'Kubernetes, KubeSphere, GKE, Installation'
description: 'How to install KubeSphere on GKE'

weight: 2265
---



This guide walks you through the steps of deploying KubeSphere on [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/).

## Prepare a GKE Cluster

- A standard Kubernetes cluster in GKE is a prerequisite of installing KubeSphere. Go to the navigation menu and refer to the image below to create a cluster.



- In **Cluster basics**, select a Master version. The static version `1.15.12-gke.2` is used here as an example.



- In **default-pool** under **Node Pools**, define 3 nodes in this cluster.



- Go to **Nodes**, select the image type and set the Machine Configuration as below. When you finish, click **Create**.



{{< notice note >}}

- Supported Kubernetes versions for KubeSphere 3.0.0: 1.15.x, 1.16.x, 1.17.x, 1.18.x.
- Ubuntu is used for the operating system here as an example. For more information on supported systems, see Overview.
- 3 nodes are included in this example. You can add more nodes based on your own needs, especially in a production environment.
- The machine type e2-medium (2 vCPU, 4GB memory) is for minimal installation. If you want to enable pluggable components or use the cluster for production, please select a machine type with more resources.
- For other settings, you can change them as well based on your own needs or use the default values.

{{</ notice >}}

- When the GKE cluster is ready, you can connect to the cluster with Cloud Shell (or from a local terminal, as sketched below).


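If you would rather work from a local terminal than Cloud Shell, fetching credentials is a single `gcloud` call; the cluster name and zone below are placeholders for your own values:

```bash
# writes an entry for the GKE cluster into your local kubeconfig
gcloud container clusters get-credentials <cluster-name> --zone <zone>
```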
## Install KubeSphere on GKE

- Install KubeSphere using kubectl. The following command is only for the default minimal installation.

```bash
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
```

- Create a local cluster-configuration.yaml.

```bash
vi cluster-configuration.yaml
```

- Copy all the content in this [file](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it to your local cluster-configuration.yaml. Navigate to `metrics_server`, and change `true` to `false` for `enabled` (see the sketch after the warning below).



{{< notice warning >}}

Metrics Server is already installed on GKE. If you do not disable `metrics_server` in the cluster-configuration.yaml file, it will cause an issue and stop the installation process.

{{</ notice >}}
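After the edit, the relevant fragment of your local cluster-configuration.yaml should look roughly like this (only the `metrics_server` section is shown):

```yaml
metrics_server:
  enabled: false   # GKE ships its own Metrics Server, so the installer must not deploy another one
```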
- Save the file when you finish. Execute the following command to start installation:

```bash
kubectl apply -f cluster-configuration.yaml
```

- Inspect the logs of installation:

```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

- When the installation finishes, you can see the following message:

```bash
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://10.128.0.44:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After logging into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are ready.
  2. Please modify the default password after login.
#####################################################
https://kubesphere.io             2020-xx-xx xx:xx:xx
```

## Access KubeSphere Console

Now that KubeSphere is installed, you can access the web console of KubeSphere by following the steps below.

- In **Services & Ingress**, select the service **ks-console**.



- In **Service details**, click **Edit** and change the type from `NodePort` to `LoadBalancer`. Save the file when you finish.



- Access the web console of KubeSphere using the endpoint generated by GKE.



{{< notice tip >}}

Instead of changing the service type to `LoadBalancer`, you can also access KubeSphere console via `NodeIP:NodePort` (service type set to `NodePort`). You may need to open port `30880` in firewall rules (a `gcloud` sketch follows).

{{</ notice >}}
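On GKE, opening that port usually means adding a VPC firewall rule. A hedged `gcloud` example (the rule name is arbitrary, and you may want to restrict the source ranges rather than leaving the rule open to all):

```bash
# allow external traffic to the KubeSphere console NodePort
gcloud compute firewall-rules create kubesphere-console --allow tcp:30880
```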
- Log in to the console with the default account and password (`admin/P@88w0rd`). On the cluster overview page, you can see the dashboard as shown in the following image.



## Enable Pluggable Components (Optional)

The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see Enable Pluggable Components for more details.

@@ -0,0 +1,9 @@
---
title: "Install KubeSphere on Huaweicloud CCE"
keywords: 'Kubernetes, KubeSphere, CCE, Installation, Huaweicloud'
description: 'How to install KubeSphere on Huaweicloud CCE'

weight: 2268
---

TBD

@@ -0,0 +1,152 @@
---
title: "Deploy KubeSphere on Oracle OKE"
keywords: 'Kubernetes, KubeSphere, OKE, Installation, Oracle-cloud'
description: 'How to install KubeSphere on Oracle OKE'

weight: 2247
---

This guide walks you through the steps of deploying KubeSphere on [Oracle Kubernetes Engine](https://www.oracle.com/cloud/compute/container-engine-kubernetes.html).

## Create a Kubernetes Cluster

- A standard Kubernetes cluster in OKE is a prerequisite of installing KubeSphere. Go to the navigation menu and refer to the image below to create a cluster.



- In the pop-up window, select **Quick Create** and click **Launch Workflow**.



{{< notice note >}}

In this example, **Quick Create** is used for demonstration, which automatically creates all the resources necessary for a cluster in Oracle Cloud. If you select **Custom Create**, you need to create all the resources (such as VCN and LB Subnets) yourself.

{{</ notice >}}

- Next, configure the basic information of the cluster. Here is an example for your reference. When you finish, click **Next**.



{{< notice note >}}

- Supported Kubernetes versions for KubeSphere 3.0.0: 1.15.x, 1.16.x, 1.17.x, 1.18.x.
- It is recommended that you select **Public** for **Visibility Type**, which assigns a public IP address to every node. The IP address can be used later to access the web console of KubeSphere.
- In Oracle Cloud, a Shape is a template that determines the number of CPUs, amount of memory, and other resources that are allocated to an instance. `VM.Standard.E2.2 (2 CPUs and 16G Memory)` is used in this example. For more information, see [Standard Shapes](https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/References/computeshapes.htm#vmshapes__vm-standard).
- 3 nodes are included in this example. You can add more nodes based on your own needs, especially in a production environment.

{{</ notice >}}

- Review cluster information and click **Create Cluster** if no adjustment is needed.



- After the cluster is created, click **Close**.



- Make sure the Cluster Status is **Active** and click **Access Cluster**.



- In the pop-up window, select **Cloud Shell Access** to access the cluster. Click **Launch Cloud Shell** and copy the code provided by Oracle Cloud.



- In Cloud Shell, paste the command so that you can execute the installation commands later.



{{< notice warning >}}

If you do not copy and execute the command above, you cannot proceed with the steps below.

{{</ notice >}}
## Install KubeSphere on OKE

- Install KubeSphere using kubectl. The following commands are only for the default minimal installation.

```bash
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
```

```bash
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
```

- Inspect the logs of installation:

```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

- When the installation finishes, you can see the following message:

```bash
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://10.0.10.2:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After logging into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are ready.
  2. Please modify the default password after login.

#####################################################
https://kubesphere.io             20xx-xx-xx xx:xx:xx
```
## Access KubeSphere Console

Now that KubeSphere is installed, you can access the web console of KubeSphere either through `NodePort` or `LoadBalancer`.

- Check the service of KubeSphere console through the following command:

```bash
kubectl get svc -n kubesphere-system
```

- The output may look as below. You can change the type to `LoadBalancer` so that an external IP address can be exposed.


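For reference, a minimal illustration of what that output might contain; the service names match a default KubeSphere 3.0.0 installation, but the IPs and ports shown here are made up:

```bash
NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
ks-apiserver            ClusterIP   10.96.11.2     <none>        80/TCP         8m
ks-console              NodePort    10.96.140.58   <none>        80:30880/TCP   8m
ks-controller-manager   ClusterIP   10.96.23.124   <none>        443/TCP        8m
```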
{{< notice tip >}}

It can be seen above that the service `ks-console` is exposed through NodePort, which means you can access the console directly via `NodeIP:NodePort` (the public IP address of any node is applicable). You may need to open port `30880` in firewall rules.

{{</ notice >}}

- Execute the following command to edit the service configuration:

```bash
kubectl edit svc ks-console -o yaml -n kubesphere-system
```

- Navigate to `type` and change `NodePort` to `LoadBalancer`. Save the configuration after you finish.



- Execute the following command again and you can see the external IP address displayed as below.

```bash
kubectl get svc -n kubesphere-system
```



- Log in to the console through the external IP address with the default account and password (`admin/P@88w0rd`). On the cluster overview page, you can see the dashboard shown below:



## Enable Pluggable Components (Optional)

The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see Enable Pluggable Components for more details.

@@ -1,152 +0,0 @@
---
title: "High Availability Configuration"
keywords: "kubesphere, kubernetes, docker, installation, HA, high availability"
description: "The guide for installing a high-availability KubeSphere cluster"

weight: 2230
---

## Introduction

[Multi-node installation](../multi-node) can help you to quickly set up a single-master cluster on multiple machines for development and testing. However, for production we need to consider the high availability of the cluster. Since the key components on the master node, i.e. kube-apiserver, kube-scheduler, and kube-controller-manager, are running on a single master node, Kubernetes and KubeSphere will be unavailable while the master is down. Therefore we need to set up a high-availability cluster by provisioning load balancers and multiple masters. You can use any cloud load balancer or any hardware load balancer (e.g. F5). In addition, Keepalived and HAProxy are also an alternative for creating such a high-availability cluster (see the sketch below).
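For reference, if you take the HAProxy route, the API server rule is a plain TCP pass-through. Below is a minimal sketch of the relevant part of `haproxy.cfg`, assuming three masters at the addresses used in the example later in this document:

```
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    server master1 192.168.0.1:6443 check
    server master2 192.168.0.2:6443 check
    server master3 192.168.0.3:6443 check
```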
This document walks you through an example of how to create two [QingCloud Load Balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal and external load balancer respectively, and how to configure high availability of masters and etcd using the load balancers.

## Prerequisites

- Please make sure that you have already read [Multi-Node installation](../multi-node). This document only demonstrates how to configure load balancers.
- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or follow the guide of any other cloud provider to create load balancers.

## Architecture

This example prepares six CentOS 7.5 machines. We will create two load balancers, and deploy three masters and etcd nodes on three of the machines. You can configure these masters and etcd nodes in `conf/hosts.ini`.



## Install HA Cluster

### Step 1: Create Load Balancers

This step briefly shows an example of creating a load balancer on QingCloud platform.

#### Create an Internal Load Balancer

1.1. Log in to the [QingCloud Console](https://console.qingcloud.com/login) and select **Network & CDN → Load Balancers**, then click on the create button and fill in the basic information.

1.2. Choose the VxNet that your machines are created within from the **Network** drop-down list, here `kube`. Other settings can keep the default values as follows. Click **Submit** to complete the creation.



1.3. Drill into the detail page of the load balancer, then create a listener that listens on port `6443` of the `TCP` protocol.

- Name: Define a name for this listener
- Listener Protocol: Select the `TCP` protocol
- Port: `6443`
- Load mode: `Poll`

> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that the port `6443` has been added to the firewall rules and external traffic can pass through `6443`. Otherwise, the installation will fail.



1.4. Click **Add Backend** and choose the VxNet `kube` selected above. Then click on the button **Advanced Search**, choose the three master nodes under the VxNet, and set the port to `6443`, which is the default secure port of the api-server.

Click **Submit** when you are done.



1.5. Click on the button **Apply Changes** to activate the configurations. At this point, you can find the three masters have been added as the backend servers of the listener that is behind the internal load balancer.

> Please note: The status of all masters might show `Not available` after you add them as backends. This is normal since port `6443` of the api-server is not active on the masters yet. The status will change to `Active` and the port of the api-server will be exposed after the installation completes, which means the internal load balancer you configured works as expected.



#### Create an External Load Balancer

You need to create an EIP in advance.

1.6. Similarly, create an external load balancer without joining any network, but associate the EIP that you created to this load balancer.

1.7. Enter the load balancer detail page and create a listener that listens on port `30880` of the `HTTP` protocol, which is the NodePort of KubeSphere console.

> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that the port `30880` has been added to the firewall rules and external traffic can pass through `30880`. Otherwise, the installation will fail.



1.8. Click **Add Backend**, then choose the six machines on which we are going to install KubeSphere within the VxNet `kube`, and set the port to `30880`.

Click **Submit** when you are done.

1.9. Click on the button **Apply Changes** to activate the configurations. At this point, you can find the six machines have been added as the backend servers of the listener that is behind the external load balancer.


### Step 2: Modify hosts.ini

Go to the taskbox where you downloaded the installer by following the [Multi-node Installation](../multi-node) and complete the following configurations.

| **Parameter** | **Description** |
| --- | --- |
| `[all]` | Node information. Use the following syntax if you run installation as the `root` user: <br> - `<node_name> ansible_connection=<host> ip=<ip_address>` <br> - `<node_name> ansible_host=<ip_address> ip=<ip_address> ansible_ssh_pass=<pwd>` <br> If you log in as a non-root user, use the syntax: <br> - `<node_name> ansible_connection=<host> ip=<ip_address> ansible_user=<user> ansible_become_pass=<pwd>` |
| `[kube-master]` | Master node names |
| `[kube-node]` | Worker node names |
| `[etcd]` | etcd node names. The number of `etcd` nodes needs to be odd. |
| `[k8s-cluster:children]` | Group names of `[kube-master]` and `[kube-node]` |

We use **CentOS 7.5** with the `root` user to install an HA cluster. Please see the following configuration as an example:

> Note:
> <br>
> If the _taskbox_ cannot establish an `ssh` connection with the rest of the nodes, try to use the non-root user configuration.

#### hosts.ini example

```ini
[all]
master1 ansible_connection=local ip=192.168.0.1
master2 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
master3 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
node1 ansible_host=192.168.0.4 ip=192.168.0.4 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.5 ip=192.168.0.5 ansible_ssh_pass=PASSWORD
node3 ansible_host=192.168.0.6 ip=192.168.0.6 ansible_ssh_pass=PASSWORD

[kube-master]
master1
master2
master3

[kube-node]
node1
node2
node3

[etcd]
master1
master2
master3

[k8s-cluster:children]
kube-node
kube-master
```
### Step 3: Configure the Load Balancer Parameters

Besides configuring `common.yaml` by following the [Multi-node Installation](../multi-node), you need to modify the load balancer information in `common.yaml`. Assume the **VIP** address and listening port of the **internal load balancer** are `192.168.0.253` and `6443`; then you can refer to the following example.

> - Note that address and port should be indented by two spaces in `common.yaml`, and the address should be the VIP.
> - The domain name of the load balancer is "lb.kubesphere.local" by default for internal access. If you need to change the domain name, please uncomment and modify it.

#### The configuration sample in common.yaml

```yaml
## External LB example config
## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
loadbalancer_apiserver:
  address: 192.168.0.253
  port: 6443
```

Finally, please refer to the [guide](../storage-configuration) to configure the persistent storage service in `common.yaml` and start your HA cluster installation.

Then it is ready to install the high-availability KubeSphere cluster.

@@ -1,176 +0,0 @@
---
title: "Multi-node Installation"
keywords: 'kubesphere, kubernetes, docker, kubesphere installer'
description: 'The guide for installing KubeSphere on multiple nodes in a development or testing environment'

weight: 2220
---

`Multi-Node` installation enables installing KubeSphere on multiple nodes. Typically, you use any one node as the _taskbox_ to run the installation task. Please note that `ssh` communication must be established between the taskbox and the other nodes.

- <font color=red>The following instructions are for the default installation without enabling any optional components, as we have made them pluggable since v2.1.0. If you want to enable any of them, please read [Enable Pluggable Components](../pluggable-components).</font>
- <font color=red>If your machines in total have >= 8 cores and >= 16G memory, we recommend you install the full package of KubeSphere by [Enabling Optional Components](../complete-installation)</font>.
- <font color=red>The installation time depends on your network bandwidth, your computer configuration, the number of nodes, etc.</font>

## Prerequisites

If your machine is behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for more information.

## Step 1: Prepare Linux Hosts

The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.

- Time synchronization is required across all nodes, otherwise the installation may not succeed;
- For the `Ubuntu 16.04` OS, it is recommended to select `16.04.5`;
- If you are using `Ubuntu 18.04`, you need to use the user `root`;
- If a Debian system does not have the sudo command installed, you need to execute the `apt update && apt install sudo` command as root before installation.

### Hardware Recommendation

- KubeSphere can be installed on any cloud platform.
- The installation speed can be accelerated by increasing network bandwidth.
- If you choose air-gapped installation, ensure the disk of each node is at least 100G.

| System | Minimum Requirements (Each node) |
| --- | --- |
| CentOS 7.4 ~ 7.7 (64 bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
| Ubuntu 16.04/18.04 LTS (64 bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
| Red Hat Enterprise Linux Server 7.4 (64 bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
| Debian Stretch 9.5 (64 bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |

The following section describes an example to introduce multi-node installation. This example shows a three-host installation, with the `master` node serving as the taskbox to execute the installation. The following cluster consists of one Master and two Nodes.

> Note: KubeSphere supports the high-availability configuration of the Masters and etcd nodes. Please refer to [Creating a High Availability KubeSphere Cluster](../master-ha) for the guide.

| Host IP | Host Name | Role |
| --- | --- | --- |
| 192.168.0.1 | master | master, etcd |
| 192.168.0.2 | node1 | node |
| 192.168.0.3 | node2 | node |

### Cluster Architecture

#### Single Master, Single Etcd, Two Nodes


## Step 2: Download Installer Package

**1.** Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.

```bash
curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
```

**2.** Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.

> Note:
>
> - If you use a non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, you need to refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of the nodes, and all host names should be in lowercase.

### hosts.ini

```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD

[kube-master]
master

[kube-node]
node1
node2

[etcd]
master

[k8s-cluster:children]
kube-node
kube-master
```

> Note:
>
> - You need to replace each node's information such as IP and password with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
> - The "master" node also takes the roles of master and etcd, so "master" is filled under the group `[kube-master]` and the group `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host; "local" in the example above means local connection.
> - `ansible_host`: The name of the host to be connected.
> - `ip`: The IP of the host to be connected.
> - `ansible_user`: The default ssh user name to use.
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
> - `ansible_ssh_pass`: The password of the host to be connected using root.
>
> A quick SSH connectivity check is sketched below.
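Before running the installer, it is worth verifying that the taskbox can reach every other node over SSH. A quick check using the example IPs above (assumes root SSH access as configured in hosts.ini):

```bash
# run on the taskbox; each command should print the remote host name
for ip in 192.168.0.2 192.168.0.3; do
  ssh root@"$ip" hostname
done
```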
## Step 3: Install KubeSphere to Linux Machines

> Note:
>
> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
> - If you want to enable pluggable feature components, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - The installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two IP ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts (the two lines are sketched below).
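As a sketch, these are the two lines in `conf/common.yaml` to adjust if your node IPs collide with the defaults; the values below simply restate the defaults:

```yaml
kube_service_addresses: 10.233.0.0/18   # subnet for Cluster IPs
kube_pods_subnet: 10.233.64.0/18        # subnet for Pod IPs
```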
**1.** Enter the `scripts` folder, and execute `install.sh` as the `root` user:

```bash
cd ../scripts
./install.sh
```

**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up a persistent storage service or not. Just type `yes` since we are going to use local volume.

```bash
################################################
         KubeSphere Installer Menu
################################################
*   1) All-in-one
*   2) Multi-node
*   3) Quit
################################################
https://kubesphere.io/       2020-02-24
################################################
Please input an option: 2
```

**3.** Verify the multi-node installation:

**(1).** If "successful" is returned after the `install.sh` process completes, then congratulations! You are ready to go.

```bash
successful!
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd

NOTE: Please modify the default password after login.
#####################################################
```

> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).

**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in.



<font color=red>Note: After logging in to the console, please verify the monitoring status of service components in the "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.</font>



## FAQ

The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud, and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also please read the [FAQ of installation](../../faq/faq-install).

If you have any further questions, please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).

@@ -1,157 +0,0 @@
---
title: "StorageClass Configuration"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Instructions for Setting up StorageClass for KubeSphere'

weight: 2250
---

Currently, the installer supports the following [storage classes](https://kubernetes.io/docs/concepts/storage/storage-classes/), providing persistent storage service for KubeSphere (more storage classes will be supported soon).

- NFS
- Ceph RBD
- GlusterFS
- QingCloud Block Storage
- QingStor NeonSAN
- Local Volume (for development and test only)

The versions of storage systems and corresponding CSI plugins listed in the table below have been well tested.

| **Name** | **Version** | **Reference** |
| --- | --- | --- |
| Ceph RBD Server | v0.94.10 | For development and testing, refer to [Install Ceph Storage Server](/zh-CN/appendix/ceph-ks-install/) for details. Please refer to the [Ceph Documentation](http://docs.ceph.com/docs/master/) for production. |
| Ceph RBD Client | v12.2.5 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [Ceph RBD](../storage-configuration/#ceph-rbd) |
| GlusterFS Server | v3.7.6 | For development and testing, refer to [Deploying GlusterFS Storage Server](/zh-CN/appendix/glusterfs-ks-install/) for details. Please refer to the [Gluster Documentation](https://www.gluster.org/install/) or the [Gluster Documentation](http://gluster.readthedocs.io/en/latest/Install-Guide/Install/) for production. Note you need to install [Heketi Manager (V3.0.0)](https://github.com/heketi/heketi/tree/master/docs/admin). |
| GlusterFS Client | v3.12.10 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [GlusterFS](../storage-configuration/#glusterfs) |
| NFS Client | v3.1.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared an NFS storage server. Please see [NFS Client](../storage-configuration/#nfs) |
| QingCloud-CSI | v0.2.0.1 | You need to configure the corresponding parameters in `common.yaml` before installing KubeSphere. Please refer to [QingCloud CSI](../storage-configuration/#qingcloud-csi) for details |
| NeonSAN-CSI | v0.3.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared a QingStor NeonSAN storage server. Please see [NeonSAN-CSI](../storage-configuration/#neonsan-csi) |

> Note: You are only allowed to set ONE default storage class in the cluster. To specify a default storage class, make sure there is no default storage class already existing in the cluster.

## Storage Configuration

After preparing the storage server, you need to refer to the parameter descriptions in the following tables. Then modify the corresponding configurations in `conf/common.yaml` accordingly.

The following describes the storage configuration in `common.yaml`.

> Note: Local Volume is configured as the default storage class in `common.yaml` by default. If you are going to set another storage class as the default, disable Local Volume and modify the configuration for the other storage class.

### Local Volume (For development or testing only)

A [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) represents a mounted local storage device such as a disk, partition or directory. Local volumes can only be used as a statically created PersistentVolume. We recommend you use Local volume in testing or development only, since it is quick and easy to install KubeSphere without the struggle of setting up a persistent storage server. Refer to the following table for the definitions in `conf/common.yaml`.

| **Local volume** | **Description** |
| --- | --- |
| local\_volume\_provisioner\_enabled | Whether to use Local as the persistent storage, defaults to true |
| local\_volume\_provisioner\_storage\_class | Storage class name, default value: local |
| local\_volume\_is\_default\_class | Whether to set Local as the default storage class, defaults to true. |

### NFS

An NFS volume allows an existing NFS (Network File System) share to be mounted into your Pod. NFS can be configured in `conf/common.yaml`. Note you need to prepare an NFS server in advance. A configuration sketch follows the table.

| **NFS** | **Description** |
| --- | --- |
| nfs\_client\_enable | Whether to use NFS as the persistent storage, defaults to false |
| nfs\_client\_is\_default\_class | Whether to set NFS as the default storage class, defaults to false. |
| nfs\_server | The NFS server address, either IP or hostname |
| nfs\_path | NFS shared directory, which is the file directory shared on the server, see [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/volumes/#nfs) |
| nfs\_vers3\_enabled | Specifies which version of the NFS protocol to use, defaults to false, which means v4. True means v3 |
| nfs\_archiveOnDelete | Archive the PVC when deleting. It will automatically remove data from `oldPath` when set to false |
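Putting the table together, a minimal sketch of the NFS block in `conf/common.yaml` might look like the following; the server address and export path are placeholders you must replace:

```yaml
nfs_client_enable: true
nfs_client_is_default_class: true
nfs_server: 192.168.0.100    # placeholder: your NFS server IP or hostname
nfs_path: /mnt/kubesphere    # placeholder: the directory exported by that server
```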
### Ceph RBD

The open source [Ceph RBD](https://ceph.com/) distributed storage system can be configured for use in `conf/common.yaml`. You need to prepare a Ceph storage server in advance. Please refer to the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) for more details.

| **Ceph\_RBD** | **Description** |
| --- | --- |
| ceph\_rbd\_enabled | Whether to use Ceph RBD as the persistent storage, defaults to false |
| ceph\_rbd\_storage\_class | Storage class name |
| ceph\_rbd\_is\_default\_class | Whether to set Ceph RBD as the default storage class, defaults to false |
| ceph\_rbd\_monitors | Ceph monitors, comma delimited. This parameter is required and depends on the Ceph RBD server parameters |
| ceph\_rbd\_admin\_id | Ceph client ID that is capable of creating images in the pool. Defaults to “admin” |
| ceph\_rbd\_admin\_secret | Secret name for "adminId". This parameter is required. The provided secret must have type “kubernetes.io/rbd” |
| ceph\_rbd\_pool | Ceph RBD pool. Default is “rbd” |
| ceph\_rbd\_user\_id | Ceph client ID that is used to map the RBD image. Default is the same as adminId |
| ceph\_rbd\_user\_secret | Secret for userId. It is required to create this secret in the namespace which uses the RBD image |
| ceph\_rbd\_fsType | fsType that is supported by Kubernetes. Default: "ext4" |
| ceph\_rbd\_imageFormat | Ceph RBD image format, “1” or “2”. Default is “1” |
| ceph\_rbd\_imageFeatures | This parameter is optional and should only be used if you set imageFormat to “2”. Currently supported features are layering only. Default is “”, and no features are turned on |

> Note:
>
> The Ceph secret used in the storage class, such as "ceph_rbd_admin_secret" and "ceph_rbd_user_secret", is retrieved using the following command on the Ceph storage server; turning the key into a Kubernetes secret is sketched after the command.

```bash
ceph auth get-key client.admin
```
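For illustration, the key returned above can be wrapped into a secret of the required `kubernetes.io/rbd` type with a command along the following lines; the secret name and namespace are placeholders, and this step is a sketch rather than part of the original guide:

```bash
# create the admin secret referenced by the storage class;
# run where both kubectl and the ceph client are available
kubectl create secret generic ceph-admin-secret \
  --type=kubernetes.io/rbd \
  --from-literal=key="$(ceph auth get-key client.admin)" \
  -n kube-system
```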
### GlusterFS

[GlusterFS](https://docs.gluster.org/en/latest/) is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. You need to prepare a GlusterFS storage server in advance. Please refer to the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs) for further information.

| **GlusterFS (requires a GlusterFS cluster managed by Heketi)** | **Description** |
| --- | --- |
| glusterfs\_provisioner\_enabled | Whether to use GlusterFS as the persistent storage, defaults to false |
| glusterfs\_provisioner\_storage\_class | Storage class name |
| glusterfs\_is\_default\_class | Whether to set GlusterFS as the default storage class, defaults to false |
| glusterfs\_provisioner\_restauthenabled | Gluster REST service authentication boolean that enables authentication to the REST server |
| glusterfs\_provisioner\_resturl | Gluster REST service/Heketi service URL which provisions gluster volumes on demand. The general format should be "IP address:Port" and this is a mandatory parameter for the GlusterFS dynamic provisioner |
| glusterfs\_provisioner\_clusterid | Optional. For example, 630372ccdc720a92c681fb928f27b53f is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of cluster IDs |
| glusterfs\_provisioner\_restuser | Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool |
| glusterfs\_provisioner\_secretName | Optional. Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. The installer will automatically create this secret in kube-system |
| glusterfs\_provisioner\_gidMin | The minimum value of the GID range for the storage class |
| glusterfs\_provisioner\_gidMax | The maximum value of the GID range for the storage class |
| glusterfs\_provisioner\_volumetype | The volume type and its parameters can be configured with this optional value, for example: ‘Replica volume’: volumetype: replicate:3 |
| jwt\_admin\_key | The "jwt.admin.key" field from "/etc/heketi/heketi.json" on the Heketi server |

**Attention:**

> Please note: `"glusterfs_provisioner_clusterid"` can be obtained from the GlusterFS server by running the following commands:

```bash
export HEKETI_CLI_SERVER=http://localhost:8080
heketi-cli cluster list
```

### QingCloud Block Storage

[QingCloud Block Storage](https://docs.qingcloud.com/product/Storage/volume/) is supported in KubeSphere as a persistent storage service. If you would like to experience dynamic provisioning when creating volumes, we recommend you use it as your persistent storage solution. KubeSphere integrates [QingCloud-CSI](https://github.com/yunify/qingcloud-csi/blob/master/README_zh.md) and allows you to use the various block storage services of QingCloud. With simple configuration, you can quickly expand and clone PVCs, view the topology of PVCs, create/delete snapshots, as well as restore volumes from snapshots.

The QingCloud-CSI plugin implements the standard CSI. You can easily create and manage different types of volumes in KubeSphere, which are provided by QingCloud. The corresponding PVCs will be created with the ReadWriteOnce access mode and mounted to running Pods.

QingCloud-CSI supports creating the following five types of volumes in QingCloud:

- High capacity
- Standard
- SSD Enterprise
- Super high performance
- High performance

| **QingCloud-CSI** | **Description** |
| --- | --- |
| qingcloud\_csi\_enabled | Whether to use QingCloud-CSI as the persistent storage volume, defaults to false |
| qingcloud\_csi\_is\_default\_class | Whether to set QingCloud-CSI as the default storage class, defaults to false |
| qingcloud\_access\_key\_id , <br> qingcloud\_secret\_access\_key | Please obtain them from the [QingCloud Console](https://console.qingcloud.com/login) |
| qingcloud\_zone | Zone should be the same as the zone where the Kubernetes cluster is installed, and the CSI plugin will operate on the storage volumes of this zone. For example, zone can be set to values such as sh1a (Shanghai 1-A), sh1b (Shanghai 1-B), pek2 (Beijing 2), pek3a (Beijing 3-A), pek3b (Beijing 3-B), pek3c (Beijing 3-C), gd1 (Guangdong 1), gd2a (Guangdong 2-A), ap1 (Asia Pacific 1), ap2a (Asia Pacific 2-A) |
| type | The type of volume in QingCloud platform. In QingCloud platform, 0 represents a high performance volume, 3 represents a super high performance volume, and 1 or 2 represents a high capacity volume depending on the cluster's zone; see [QingCloud Documentation](https://docs.qingcloud.com/product/api/action/volume/create_volumes.html) |
| maxSize, minSize | Limit the range of volume size in GiB |
| stepSize | Set the increment of volume size in GiB |
| fsType | The file system of the storage volume, which supports ext3, ext4, and xfs. The default is ext4 |

### QingStor NeonSAN

The NeonSAN-CSI plugin supports the enterprise-level distributed storage [QingStor NeonSAN](https://www.qingcloud.com/products/qingstor-neonsan/) as the persistent storage solution. You need to prepare the NeonSAN server, then configure the NeonSAN-CSI plugin to connect to its storage server in `conf/common.yaml`. Please refer to the [NeonSAN-CSI Reference](https://github.com/wnxn/qingstor-csi/blob/master/docs/reference_zh.md#storageclass-%E5%8F%82%E6%95%B0) for further information.

| **NeonSAN** | **Description** |
| --- | --- |
| neonsan\_csi\_enabled | Whether to use NeonSAN as the persistent storage, defaults to false |
| neonsan\_csi\_is\_default\_class | Whether to set NeonSAN-CSI as the default storage class, defaults to false |
| neonsan\_csi\_protocol | Transport protocol; the user must set this option, e.g. TCP or RDMA |
| neonsan\_server\_address | NeonSAN server address |
| neonsan\_cluster\_name | NeonSAN server cluster name |
| neonsan\_server\_pool | A comma-separated list of pools that the plugin manages. The user must set this option; the default value is kube |
| neonsan\_server\_replicas | NeonSAN image replica count. Default: 1 |
| neonsan\_server\_stepSize | Set the increment of volume size in GiB. Default: 1 |
| neonsan\_server\_fsType | The file system to use for the volume. Default: ext4 |

@@ -1,5 +1,5 @@
---
linkTitle: "Installation"
linkTitle: "Introduction"
weight: 2100

_build:

@@ -1,93 +0,0 @@
---
|
||||
title: "Introduction"
|
||||
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
|
||||
description: 'KubeSphere Installation Overview'
|
||||
|
||||
linkTitle: "Introduction"
|
||||
weight: 2110
|
||||
---
|
||||
|
||||
[KubeSphere](https://kubesphere.io/) is an enterprise-grade multi-tenant container platform built on [Kubernetes](https://kubernetes.io). It provides an easy-to-use UI for users to manage application workloads and computing resources with a few clicks, which greatly reduces the learning curve and the complexity of daily work such as development, testing, operation and maintenance. KubeSphere aims to alleviate the pain points of Kubernetes including storage, network, security and ease of use, etc.
|
||||
|
||||
KubeSphere supports installing on cloud-hosted and on-premises Kubernetes cluster, e.g. native K8s, GKE, EKS, RKE, etc. It also supports installing on Linux host including virtual machine and bare metal with provisioning fresh Kubernetes cluster. Both of the two methods are easy and friendly to install KubeSphere. Meanwhile, KubeSphere offers not only online installer, but air-gapped installer for such environment with no access to the internet.
|
||||
|
||||
KubeSphere is open source project on [GitHub](https://github.com/kubesphere). There are thousands of users are using KunbeSphere, and many of them are running KubeSphere for their production workloads.
|
||||
|
||||
In summary, there are several installation options you can choose. Please note not all options are mutually exclusive. For instance, you can deploy KubeSphere with minimal packages on existing K8s cluster on multiple nodes in air-gapped environment. Here is the decision tree shown in the following graph you may reference for your own situation.
|
||||
|
||||
- [All-in-One](../all-in-one): Intall KubeSphere on a singe node. It is only for users to quickly get familar with KubeSphere.
|
||||
- [Multi-Node](../multi-node): Install KubeSphere on multiple nodes. It is for testing or development.
|
||||
- [Install KubeSphere on Air Gapped Linux](../install-ks-on-linux-airgapped): All images of KubeSphere have been encapsulated into a package, it is convenient for air gapped installation on Linux machines.
|
||||
- [High Availability Multi-Node](../master-ha): Install high availability KubeSphere on multiple nodes which is used for production environment.
|
||||
- [KubeSphere on Existing K8s](../install-on-k8s): Deploy KubeSphere on your Kubernetes cluster including cloud-hosted services such as GKE, EKS, etc.
|
||||
- [KubeSphere on Air-Gapped K8s](../install-on-k8s-airgapped): Install KubeSphere on a disconnected Kubernetes cluster.
|
||||
- Minimal Packages: Only install minimal required system components of KubeSphere. The minimum of resource requirement is down to 1 core and 2G memory.
|
||||
- [Full Packages](../complete-installation): Install all available system components of KubeSphere including DevOps, service mesh, application store, etc.
|
||||
|
||||

|
||||
|
||||
## Before Installation
|
||||
|
||||
- As the installation will pull images and update operating system from the internet, your environment must have the internet access. If not, then you need to use the air-gapped installer instead.
|
||||
- For all-in-one installation, the only one node is both the master and the worker.
|
||||
- For multi-node installation, you are asked to specify the node roles in the configuration file before installation.
|
||||
- Your linux host must have OpenSSH Server installed.
|
||||
- Please check the [ports requirements](../port-firewall) before installation.
|
||||
|
||||
## Quick Install For Development and Testing
|
||||
|
||||
KubeSphere has decoupled some components since v2.1.0. The installer only installs required components by default which brings the benefits of fast installation and minimal resource consumption. If you want to install any optional component, please check the following section [Pluggable Components Overview](../intro#pluggable-components-overview) for details.
|
||||
|
||||
The quick install of KubeSphere is only for development or testing since it uses local volume for storage by default. If you want a production install please refer to the section [High Availability Installation for Production Environment](../intro#high-availability-installation-for-production-environment).
|
||||
|
||||
### 1. Install KubeSphere on Linux
|
||||
|
||||
- [All-in-One](../all-in-one): It means a single-node hassle-free configuration installation with one-click.
|
||||
- [Multi-Node](../multi-node): It allows you to install KubeSphere on multiple instances using local volume, which means it is not required to install storage server such as Ceph, GlusterFS.
|
||||
|
||||
> Note:With regard to air-gapped installation please refer to [Install KubeSphere on Air Gapped Linux Machines](../install-ks-on-linux-airgapped).
|
||||
|
||||
### 2. Install KubeSphere on Existing Kubernetes
|
||||
|
||||
You can install KubeSphere on your existing Kubernetes cluster. Please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
|
||||
|
||||
## High Availability Installation for Production Environment
|
||||
|
||||
### 1. Install HA KubeSphere on Linux
|
||||
|
||||
The KubeSphere installer supports installing a highly available cluster for production, provided that a load balancer and a persistent storage service are set up in advance.
|
||||
|
||||
- [Persistent Service Configuration](../storage-configuration): By default, KubeSphere Installer uses [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning in the Kubernetes cluster. It is convenient for the quick installation of a testing environment. A production environment must have a storage server set up. Please refer to [Persistent Service Configuration](../storage-configuration) for details.
|
||||
- [Load Balancer Configuration for HA install](../master-ha): Before you get started with multi-node installation in production environment, you need to configure a load balancer. Either cloud LB or `HAproxy + keepalived` works for the installation.
|
||||
|
||||
### 2. Install HA KubeSphere on Existing Kubernetes
|
||||
|
||||
Before you install KubeSphere on existing Kubernetes, please check the prerequisites of the installation on Linux described above, and verify whether your existing Kubernetes cluster satisfies them, i.e., whether a load balancer and a persistent storage service are in place.
|
||||
|
||||
If your Kubernetes is ready, please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
|
||||
|
||||
> You can also install KubeSphere on hosted Kubernetes services. For example, see [Installing KubeSphere on GKE cluster](../install-on-gke).
|
||||
|
||||
## Pluggable Components Overview
|
||||
|
||||
KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable any of them before or after installation. The installer by default does not install the pluggable components. Please check the guide [Enable Pluggable Components Installation](../pluggable-components) for your requirement.
|
||||
|
||||

|
||||
|
||||
## Storage Configuration Instruction
|
||||
|
||||
The following links explain how to configure different types of persistent storage services. Please refer to [Storage Configuration Instruction](../storage-configuration) for detailed instructions regarding how to configure the storage class in KubeSphere.
|
||||
|
||||
- [NFS](https://kubernetes.io/docs/concepts/storage/volumes/#nfs)
|
||||
- [GlusterFS](https://www.gluster.org/)
|
||||
- [Ceph RBD](https://ceph.com/)
|
||||
- [QingCloud Block Storage](https://docs.qingcloud.com/product/storage/volume/)
|
||||
- [QingStor NeonSAN](https://docs.qingcloud.com/product/storage/volume/super_high_performance_shared_volume/)
|
||||
|
||||
## Add New Nodes
|
||||
|
||||
The KubeSphere installer allows you to scale the number of nodes. See [Add New Nodes](../add-nodes).
|
||||
|
||||
## Uninstall
|
||||
|
||||
Uninstall will remove KubeSphere from the machines. This operation is irreversible and dangerous. Please check [Uninstall](../uninstall).
|
||||
|
|
@ -0,0 +1,76 @@
|
|||
---
|
||||
title: "Overview"
|
||||
keywords: "KubeSphere, Kubernetes, Installation"
|
||||
description: "Overview of KubeSphere Installation on Kubernetes"
|
||||
|
||||
linkTitle: "Overview"
|
||||
weight: 2105
|
||||
---
|
||||
|
||||

|
||||
|
||||
As part of KubeSphere's commitment to providing a plug-and-play architecture for users, it can be easily installed on existing Kubernetes clusters. More specifically, KubeSphere can be deployed on Kubernetes either hosted on clouds (e.g. AWS EKS, QingCloud QKE and Google GKE) or on-premises. This is because KubeSphere does not hack Kubernetes itself. It only interacts with the Kubernetes API to manage Kubernetes cluster resources. In other words, KubeSphere can be installed on any native Kubernetes cluster and Kubernetes distribution.
|
||||
|
||||
This section gives you an overview of the general steps of installing KubeSphere on Kubernetes. For more information about the specific way of installation in different environments, see Installing on Hosted Kubernetes and Installing on On-premises Kubernetes.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
Please read the prerequisites before you install KubeSphere on existing Kubernetes clusters.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
## Deploy KubeSphere
|
||||
|
||||
After you make sure your existing Kubernetes cluster meets all the requirements, you can use kubectl to trigger the default minimal installation of KubeSphere.
|
||||
|
||||
- Execute the following commands to start installation:
|
||||
|
||||
```bash
|
||||
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
|
||||
```
|
||||
|
||||
```bash
|
||||
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
If your server has trouble accessing GitHub, you can copy the content in [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml) and [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) respectively and paste it to local files. You can then use `kubectl apply -f` on the local files to install KubeSphere.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
- Inspect the logs of installation:
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
{{< notice tip >}}
|
||||
|
||||
In some environments, you may find the installation process stopped by issues related to `metrics_server`, as some cloud providers have already installed a metrics server in their platform. In this case, please manually create a local cluster-configuration.yaml file (copy the [content](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) to it). In this file, disable `metrics_server` by changing `true` to `false` for `enabled`, and use `kubectl apply -f cluster-configuration.yaml` to apply it.
|
||||
|
||||
{{</ notice >}}
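For reference, the relevant section of such a local cluster-configuration.yaml would look roughly like the sketch below (the field layout follows the v3.0.0 `ClusterConfiguration` to the best of our knowledge; verify against your copy of the file):

```yaml
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
spec:
  # ...other fields left at their defaults...
  metrics_server:
    enabled: false   # disabled because the platform already ships its own metrics server
```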
|
||||
|
||||
- Use `kubectl get pod --all-namespaces` to see whether all pods are running normally in relevant namespaces of KubeSphere. If they are, check the port (30880 by default) of the console through the following command:
|
||||
|
||||
```bash
|
||||
kubectl get svc/ks-console -n kubesphere-system
|
||||
```
|
||||
|
||||
- Make sure port 30880 is opened in security groups and access the web console through the NodePort (`IP:30880`) with the default account and password (`admin/P@88w0rd`).
|
||||
|
||||

|
||||
|
||||
## Enable Pluggable Components (Optional)
|
||||
|
||||
If you start with a default minimal installation, refer to Enable Pluggable Components to install other components.
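As a hedged sketch, enabling a component after installation boils down to editing the installer's configuration object and flipping the component's `enabled` field to `true`; ks-installer then reconciles the change (the v2.x ConfigMap variant of this command also appears in the Linux installation guide):

```bash
# KubeSphere v3.0.0: the settings live in the ClusterConfiguration object
kubectl edit clusterconfiguration ks-installer -n kubesphere-system

# KubeSphere v2.x: the settings live in a ConfigMap instead
kubectl edit cm -n kubesphere-system ks-installer
```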
|
||||
|
||||
{{< notice tip >}}
|
||||
|
||||
- Pluggable components can be enabled either before or after the installation. Please refer to the example file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/blob/master/deploy/cluster-configuration.yaml) for more details.
|
||||
- Make sure there is enough CPU and memory available in your cluster.
|
||||
- It is highly recommended that you install these pluggable components to discover the full-stack features and capabilities provided by KubeSphere.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
|
||||
|
|
@ -1,33 +0,0 @@
|
|||
---
|
||||
title: "Port Requirements"
|
||||
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
|
||||
description: ''
|
||||
|
||||
linkTitle: "Requirements"
|
||||
weight: 2120
|
||||
---
|
||||
|
||||
|
||||
KubeSphere requires certain ports to communicate among services, so you need to make sure the following ports are open for use (an example of opening them with firewalld follows the table).
|
||||
|
||||
| Service | Protocol | Action | Start Port | End Port | Notes |
|
||||
|---|---|---|---|---|---|
|
||||
| ssh | TCP | allow | 22 | | |
|
||||
| etcd | TCP | allow | 2379 | 2380 | |
|
||||
| apiserver | TCP | allow | 6443 | | |
|
||||
| calico | TCP | allow | 9099 | 9100 | |
|
||||
| bgp | TCP | allow | 179 | | |
|
||||
| nodeport | TCP | allow | 30000 | 32767 | |
|
||||
| master | TCP | allow | 10250 | 10258 | |
|
||||
| dns | TCP | allow | 53 | | |
|
||||
| dns | UDP | allow | 53 | | |
|
||||
| local-registry | TCP | allow | 5000 | | Required for air gapped environment|
|
||||
| local-apt | TCP | allow | 5080 | | Required for air gapped environment|
|
||||
| rpcbind | TCP | allow | 111 | | When using NFS as storage server |
|
||||
| ipip | IPIP | allow | | | Calico network requires ipip protocol |
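As an illustration only (assuming a CentOS node with firewalld; adapt to your own firewall tooling or security groups), opening the API server port and the NodePort range might look like this:

```bash
sudo firewall-cmd --permanent --add-port=6443/tcp          # apiserver
sudo firewall-cmd --permanent --add-port=30000-32767/tcp   # nodeport range
sudo firewall-cmd --reload                                 # apply the new rules
```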
|
||||
|
||||
**Note**
|
||||
|
||||
Please note that when you use the Calico network plugin and run your cluster in a classic network in a cloud environment, you need to open the IPIP protocol for the source IP. For instance, the following sample on QingCloud shows how to open the IPIP protocol.
|
||||
|
||||

|
||||
|
|
@ -0,0 +1,54 @@
|
|||
---
|
||||
title: "Prerequisites"
|
||||
keywords: "KubeSphere, Kubernetes, Installation, Prerequisites"
|
||||
description: "The prerequisites of installing KubeSphere on existing Kubernetes"
|
||||
|
||||
linkTitle: "Prerequisites"
|
||||
weight: 2125
|
||||
---
|
||||
|
||||
|
||||
|
||||
Not only can KubeSphere be installed on virtual machines and bare metal with provisioned Kubernetes, but it can also be installed on existing cloud-hosted and on-premises Kubernetes clusters, as long as your Kubernetes cluster meets the prerequisites below.
|
||||
|
||||
- Kubernetes version: `1.15.x, 1.16.x, 1.17.x, 1.18.x`;
|
||||
- CPU > 1 Core; Memory > 2 G;
|
||||
- A default Storage Class in your Kubernetes cluster is configured; use `kubectl get sc` to verify it.
|
||||
- The CSR signing feature is activated in kube-apiserver when it is started with the `--cluster-signing-cert-file` and `--cluster-signing-key-file` parameters (a quick check is sketched below). See [RKE installation issue](https://github.com/kubesphere/kubesphere/issues/1925#issuecomment-591698309).
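A quick, hedged way to check the CSR signing flags on a kubeadm-based cluster (an assumption; other distributions place the flags elsewhere, often on kube-controller-manager) is to inspect the control-plane static pods:

```bash
# Print each control-plane pod name followed by its command line and
# look for the --cluster-signing-* flags
kubectl -n kube-system get pods -l tier=control-plane \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{.spec.containers[0].command}{"\n\n"}{end}' \
  | grep -B 1 cluster-signing
```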
|
||||
|
||||
## Pre-checks
|
||||
|
||||
1. Make sure your Kubernetes version is compatible by running `kubectl version` in your cluster node. The output may look as below:
|
||||
|
||||
```bash
|
||||
$ kubectl version
|
||||
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
|
||||
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
Pay attention to the `Server Version` line. If `GitVersion` shows an older one, you need to upgrade Kubernetes first. Please refer to [Upgrading kubeadm clusters from v1.14 to v1.15](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/).
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
2. Check if the available resources in your cluster meet the minimum requirements.
|
||||
|
||||
```bash
|
||||
$ free -g
|
||||
total used free shared buff/cache available
|
||||
Mem: 16 4 10 0 3 2
|
||||
Swap: 0 0 0
|
||||
```
|
||||
|
||||
3. Check if there is a default Storage Class in your cluster. An existing Storage Class is a prerequisite for KubeSphere installation.
|
||||
|
||||
```bash
|
||||
$ kubectl get sc
|
||||
NAME PROVISIONER AGE
|
||||
glusterfs (default) kubernetes.io/glusterfs 3d4h
|
||||
```
|
||||
|
||||
If your Kubernetes cluster environment meets all the requirements above, then you are ready to deploy KubeSphere on your existing Kubernetes cluster.
|
||||
|
||||
For more information, see Overview of Installing on Kubernetes.
|
||||
|
|
@ -1,107 +0,0 @@
|
|||
---
|
||||
title: "Common Configurations"
|
||||
keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
|
||||
description: 'Configure cluster parameters before installing'
|
||||
|
||||
linkTitle: "Kubernetes Cluster Configuration"
|
||||
weight: 2130
|
||||
---
|
||||
|
||||
This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
|
||||
|
||||
```yaml
|
||||
######################### Kubernetes #########################
|
||||
# The default k8s version will be installed
|
||||
kube_version: v1.16.7
|
||||
|
||||
# The default etcd version will be installed
|
||||
etcd_version: v3.2.18
|
||||
|
||||
# Configure a cron job to backup etcd data, which is running on etcd machines.
|
||||
# Period of running backup etcd job, the unit is minutes.
|
||||
# The default value 30 means backup etcd every 30 minutes.
|
||||
etcd_backup_period: 30
|
||||
|
||||
# How many backup replicas to keep.
|
||||
# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order.
|
||||
keep_backup_number: 5
|
||||
|
||||
# The location to store etcd backups files on etcd machines.
|
||||
etcd_backup_dir: "/var/backups/kube_etcd"
|
||||
|
||||
# Add other registry. (For users who need to accelerate image download)
|
||||
docker_registry_mirrors:
|
||||
- https://docker.mirrors.ustc.edu.cn
|
||||
- https://registry.docker-cn.com
|
||||
- https://mirror.aliyuncs.com
|
||||
|
||||
# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
|
||||
kube_network_plugin: calico
|
||||
|
||||
# A valid CIDR range for Kubernetes services,
|
||||
# 1. should not overlap with node subnet
|
||||
# 2. should not overlap with Kubernetes pod subnet
|
||||
kube_service_addresses: 10.233.0.0/18
|
||||
|
||||
# A valid CIDR range for Kubernetes pod subnet,
|
||||
# 1. should not overlap with node subnet
|
||||
# 2. should not overlap with Kubernetes services subnet
|
||||
kube_pods_subnet: 10.233.64.0/18
|
||||
|
||||
# Kube-proxy proxyMode configuration, either ipvs, or iptables
|
||||
kube_proxy_mode: ipvs
|
||||
|
||||
# Maximum pods allowed to run on every node.
|
||||
kubelet_max_pods: 110
|
||||
|
||||
# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
|
||||
enable_nodelocaldns: true
|
||||
|
||||
# Highly Available loadbalancer example config
|
||||
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
|
||||
# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install
|
||||
# address: 192.168.0.10 # Loadbalancer apiserver IP address
|
||||
# port: 6443 # apiserver port
|
||||
|
||||
######################### KubeSphere #########################
|
||||
|
||||
# Version of KubeSphere
|
||||
ks_version: v2.1.0
|
||||
|
||||
# KubeSphere console port, range 30000-32767,
|
||||
# but 30180/30280/30380 are reserved for internal service
|
||||
console_port: 30880 # KubeSphere console nodeport
|
||||
|
||||
#CommonComponent
|
||||
mysql_volume_size: 20Gi # MySQL PVC size
|
||||
minio_volume_size: 20Gi # Minio PVC size
|
||||
etcd_volume_size: 20Gi # etcd PVC size
|
||||
openldap_volume_size: 2Gi # openldap PVC size
|
||||
redis_volume_size: 2Gi # Redis PVC size
|
||||
|
||||
|
||||
# Monitoring
|
||||
prometheus_replica: 2 # Prometheus replicas (2 by default), which monitor different segments of the data source and provide high availability as well.
|
||||
prometheus_memory_request: 400Mi # Prometheus request memory
|
||||
prometheus_volume_size: 20Gi # Prometheus PVC size
|
||||
grafana_enabled: true # enable grafana or not
|
||||
|
||||
|
||||
## Container Engine Acceleration
|
||||
## Use nvidia gpu acceleration in containers
|
||||
# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed.
|
||||
# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
|
||||
# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
|
||||
```
|
||||
|
||||
## How to Configure a GPU Node
|
||||
|
||||
You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`; then in the file `common.yaml`, specify the following configuration. Please be aware that `- node2` has a two-space indent.
|
||||
|
||||
```yaml
|
||||
nvidia_accelerator_enabled: true
|
||||
nvidia_gpu_nodes:
|
||||
- node2
|
||||
```
|
||||
|
||||
> Note: The GPU node now only supports Ubuntu 16.04.
|
||||
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
linkTitle: "Install on Linux"
|
||||
weight: 2200
|
||||
linkTitle: "Installing on On-premises Kubernetes"
|
||||
weight: 2300
|
||||
|
||||
_build:
|
||||
render: false
|
||||
---
|
||||
---
|
||||
|
|
|
|||
|
|
@ -7,218 +7,4 @@ description: 'How to install KubeSphere on air-gapped Linux machines'
|
|||
weight: 2240
|
||||
---
|
||||
|
||||
The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.
|
||||
|
||||
> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- If your machine is behind a firewall, you need to open the ports listed in the document [Ports Requirements](../port-firewall).
|
||||
> - Installer will use `/var/lib/docker` as the default directory where all Docker related files, including the images, are stored. We recommend adding additional disks of at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
|
||||
- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure supported persistent storage service](../storage-configuration) and prepare [high availability configuration](../master-ha) before installation.
|
||||
- Since air-gapped machines cannot connect to an apt or yum source, please use clean Linux machines to avoid dependency problems.
|
||||
|
||||
## Step 1: Prepare Linux Hosts
|
||||
|
||||
The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
|
||||
|
||||
- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
|
||||
- Time synchronization is required across all nodes, otherwise the installation may not succeed;
|
||||
- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`;
|
||||
- If you are using `Ubuntu 18.04`, you need to use the user `root`.
|
||||
- Ensure your disk of each node is at least 100G.
|
||||
- Total CPU and memory of all machines: 2 cores and 4 GB for minimal installation; 8 cores and 16 GB for complete installation.
|
||||
|
||||
|
||||
The following section describes an example of multi-node installation. The example installs on three hosts, with `master` serving as the taskbox to execute the installation. The cluster consists of one Master and two Nodes.
|
||||
|
||||
> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for a guide.
|
||||
|
||||
| Host IP | Host Name | Role |
|
||||
| --- | --- | --- |
|
||||
|192.168.0.1|master|master, etcd|
|
||||
|192.168.0.2|node1|node|
|
||||
|192.168.0.3|node2|node|
|
||||
|
||||
### Cluster Architecture
|
||||
|
||||
#### Single Master, Single Etcd, Two Nodes
|
||||
|
||||

|
||||
|
||||
## Step 2: Download Installer Package
|
||||
|
||||
Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
|
||||
|
||||
```bash
|
||||
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
|
||||
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
|
||||
```
|
||||
|
||||
## Step 3: Configure Host Template
|
||||
|
||||
> This step is only for multi-node installation, you can skip this step if you choose all-in-one installation.
|
||||
|
||||
Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
|
||||
|
||||
> Note:
|
||||
>
|
||||
> - If you use a non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini` (a sketch also follows the parameter list below).
|
||||
> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, you need to refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
|
||||
> - master, node1 and node2 are the host names of each node and all host names should be in lowercase.
|
||||
|
||||
### hosts.ini
|
||||
|
||||
```ini
|
||||
[all]
|
||||
master ansible_connection=local ip=192.168.0.1
|
||||
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
|
||||
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
|
||||
|
||||
[local-registry]
|
||||
master
|
||||
|
||||
[kube-master]
|
||||
master
|
||||
|
||||
[kube-node]
|
||||
node1
|
||||
node2
|
||||
|
||||
[etcd]
|
||||
master
|
||||
|
||||
[k8s-cluster:children]
|
||||
kube-node
|
||||
kube-master
|
||||
```
|
||||
|
||||
> Note:
|
||||
>
|
||||
> - You need to replace each node information such as IP, password with real values in the group `[all]`. The master node is the taskbox so you do not need to add password field here.
|
||||
> - The installer will use a node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
|
||||
> - The "master" node also takes the role of master and etcd, so "master" is filled under the group`[kube-master]` and the group `[etcd]` respectively.
|
||||
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
|
||||
>
|
||||
> Parameters Specification:
|
||||
>
|
||||
> - `ansible_connection`: Connection type to the host, "local" in the example above means local connection.
|
||||
> - `ansible_host`: The name of the host to be connected.
|
||||
> - `ip`: The ip of the host to be connected.
|
||||
> - `ansible_user`: The default ssh user name to use.
|
||||
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
|
||||
> - `ansible_ssh_pass`: The password of the host to be connected using root.
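For instance, a hypothetical non-root variant of the `[all]` group would combine the variables above roughly as follows (a sketch only; the authoritative example is the commented block in `conf/hosts.ini`):

```ini
[all]
# hypothetical non-root entries; replace users, IPs and passwords with real values
master ansible_connection=local ip=192.168.0.1 ansible_user=ubuntu ansible_become_pass=PASSWORD
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_user=ubuntu ansible_ssh_pass=PASSWORD ansible_become_pass=PASSWORD
```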
|
||||
|
||||
## Step 4: Enable All Components
|
||||
|
||||
> This step is for the complete installation. You can skip it if you choose a minimal installation.
|
||||
|
||||
Edit `conf/common.yaml` and reference the following changes, setting to `true` the values that are `false` by default.
|
||||
|
||||
```yaml
|
||||
# LOGGING CONFIGURATION
|
||||
# logging is an optional component when installing KubeSphere, and
|
||||
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
|
||||
# Builtin logging only provides limited functions, so recommend to enable logging.
|
||||
logging_enabled: true # Whether to install logging system
|
||||
elasticsearch_master_replica: 1 # total number of master nodes; an even number is not allowed
|
||||
elasticsearch_data_replica: 2 # total number of data nodes
|
||||
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
|
||||
log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
|
||||
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
|
||||
kibana_enabled: false # Whether to install built-in Kibana
|
||||
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address, KubeSphere supports integrate with Elasticsearch outside the cluster, which can reduce the resource consumption.
|
||||
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
|
||||
|
||||
#DevOps Configuration
|
||||
devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
|
||||
jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
|
||||
jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
|
||||
jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
|
||||
jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
|
||||
jenkinsJavaOpts_Xmx: 6g
|
||||
jenkinsJavaOpts_MaxRAM: 8g
|
||||
sonarqube_enabled: true # Whether to install built-in SonarQube
|
||||
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address, KubeSphere supports integrate with SonarQube outside the cluster, which can reduce the resource consumption.
|
||||
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
|
||||
|
||||
# Following components are all optional for KubeSphere,
|
||||
# which can be turned on before installation, or later by updating its value to true
|
||||
openpitrix_enabled: true # KubeSphere application store
|
||||
metrics_server_enabled: true # For KubeSphere HPA to use
|
||||
servicemesh_enabled: true # KubeSphere service mesh system(Istio-based)
|
||||
notification_enabled: true # KubeSphere notification system
|
||||
alerting_enabled: true # KubeSphere alerting system
|
||||
```
|
||||
|
||||
## Step 5: Install KubeSphere to Linux Machines
|
||||
|
||||
> Note:
|
||||
>
|
||||
> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
|
||||
> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
|
||||
> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environment, please [configure supported persistent storage service](../storage-configuration) before installation.
|
||||
> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
|
||||
|
||||
**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
|
||||
|
||||
```bash
|
||||
cd ../scripts
|
||||
./install.sh
|
||||
```
|
||||
|
||||
**2.** Type `2` to select the multi-node mode to start the installation. The installer will ask you whether you have set up a persistent storage service. Just type `yes` since we are going to use local volume.
|
||||
|
||||
```bash
|
||||
################################################
|
||||
KubeSphere Installer Menu
|
||||
################################################
|
||||
* 1) All-in-one
|
||||
* 2) Multi-node
|
||||
* 3) Quit
|
||||
################################################
|
||||
https://kubesphere.io/ 2020-02-24
|
||||
################################################
|
||||
Please input an option: 2
|
||||
|
||||
```
|
||||
|
||||
**3.** Verify the multi-node installation:
|
||||
|
||||
**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go.
|
||||
|
||||
```bash
|
||||
successsful!
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
|
||||
Console: http://192.168.0.1:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
|
||||
NOTE:Please modify the default password after login.
|
||||
#####################################################
|
||||
```
|
||||
|
||||
> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
|
||||
|
||||
**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
|
||||
|
||||

|
||||
|
||||
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components in "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.</font>
|
||||
|
||||

|
||||
|
||||
## Enable Pluggable Components
|
||||
|
||||
If you have already set up a minimal installation, you can still edit the ConfigMap of ks-installer using the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
|
||||
|
||||
```bash
|
||||
kubectl edit cm -n kubesphere-system ks-installer
|
||||
```
|
||||
|
||||
## FAQ
|
||||
|
||||
If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
|
||||
|
||||
|
|
|
|||
|
|
@ -0,0 +1,7 @@
|
|||
---
|
||||
linkTitle: "Uninstalling"
|
||||
weight: 2300
|
||||
|
||||
_build:
|
||||
render: false
|
||||
---
|
||||
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
title: "Air-Gapped Installation"
|
||||
title: "Uninstalling KubeSphere from Kubernetes"
|
||||
keywords: 'kubernetes, kubesphere, air gapped, installation'
|
||||
description: 'How to install KubeSphere on air-gapped Linux machines'
|
||||
description: 'How to uninstall KubeSphere from Kubernetes'
|
||||
|
||||
|
||||
weight: 2240
|
||||
|
|
@ -18,6 +18,6 @@ In this chapter, we will demonstrate how to use KubeKey to provision a new Kuber
|
|||
|
||||
Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first.
|
||||
|
||||
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
|
||||
{{< popularPage icon="/images/docs/qingcloud-2.svg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
|
||||
|
||||
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
|
||||
|
|
|
|||
|
|
@ -0,0 +1,7 @@
|
|||
---
|
||||
linkTitle: "Cluster Operation"
|
||||
weight: 2445
|
||||
|
||||
_build:
|
||||
render: false
|
||||
---
|
||||
|
|
@ -0,0 +1,66 @@
|
|||
---
|
||||
title: "Add New Nodes"
|
||||
keywords: 'kubernetes, kubesphere, scale, add-nodes'
|
||||
description: 'How to add new nodes in an existing cluster'
|
||||
|
||||
|
||||
weight: 2340
|
||||
---
|
||||
|
||||
When you have used KubeSphere for a certain time, you will most likely need to scale out your cluster as workloads increase. In this scenario, KubeSphere provides a script to add new nodes to the cluster. Fundamentally, the operation is based on Kubelet's registration mechanism, i.e., the new nodes will automatically join the existing Kubernetes cluster.
|
||||
|
||||
{{< notice tip >}}
|
||||
From v3.0.0, the brand-new installer [KubeKey](https://github.com/kubesphere/kubekey) supports scaling master and worker nodes from a single-node (all-in-one) cluster.
|
||||
{{</ notice >}}
|
||||
|
||||
### Step 1: Modify the Host Configuration
|
||||
|
||||
KubeSphere supports hybrid environments, which means the newly added host OS can be CentOS or Ubuntu. When the new machines are ready, add their information to `hosts` and `roleGroups` in the file `config-sample.yaml`.
|
||||
|
||||
{{< notice warning >}}
|
||||
Do not modify the host name of the original nodes (e.g. master1) when adding new nodes.
|
||||
{{</ notice >}}
|
||||
|
||||
For example, if you started the installation with [all-in-one](../../quick-start/all-in-one-on-linux) and you want to add new nodes to the single-node cluster, you can create a configuration file using KubeKey.
|
||||
|
||||
```bash
|
||||
# Assume your original Kubernetes cluster is v1.17.9
|
||||
./kk create config --with-kubesphere --with-kubernetes v1.17.9
|
||||
```
|
||||
|
||||
The following section demonstrates how to add two nodes (i.e. `node1` and `node2`) as the `root` user. It assumes the host name of your first machine is `master1` (replace the host names below with yours).
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
hosts:
|
||||
- {name: master1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: root, password: Qcloud@123}
|
||||
- {name: node1, address: 192.168.0.4, internalAddress: 192.168.0.4, user: root, password: Qcloud@123}
|
||||
- {name: node2, address: 192.168.0.5, internalAddress: 192.168.0.5, user: root, password: Qcloud@123}
|
||||
roleGroups:
|
||||
etcd:
|
||||
- master1
|
||||
master:
|
||||
- master1
|
||||
worker:
|
||||
- node1
|
||||
- node2
|
||||
···
|
||||
```
|
||||
|
||||
### Step 2: Execute the Add-node Command
|
||||
|
||||
Execute the following command to apply the changes:
|
||||
|
||||
```bash
|
||||
./kk add nodes -f config-sample.yaml
|
||||
```
|
||||
|
||||
Finally, you will be able to see the new nodes and their information on the KubeSphere console after a successful return. Select **Nodes → Cluster Nodes** from the left menu, or use the `kubectl get node` command to see the changes.
|
||||
|
||||
```bash
|
||||
kubectl get node
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
master1 Ready master,worker 20d v1.17.9
|
||||
node1 Ready worker 31h v1.17.9
|
||||
node2 Ready worker 31h v1.17.9
|
||||
```
|
||||
|
|
@ -0,0 +1,28 @@
|
|||
---
|
||||
title: "Remove Nodes"
|
||||
keywords: 'kubernetes, kubesphere, scale, add-nodes'
|
||||
description: 'How to remove nodes from an existing cluster'
|
||||
|
||||
|
||||
weight: 2345
|
||||
---
|
||||
|
||||
## Cordon a Node
|
||||
|
||||
Marking a node as unschedulable prevents the scheduler from placing new pods onto that Node, but does not affect existing Pods on the Node. This is useful as a preparatory step before a node reboot or other maintenance.
|
||||
|
||||
To mark a Node unschedulable, you can choose **Nodes → Cluster Nodes** from the menu, then find a node you want to remove from the cluster and click the **Cordon** button. It has the same effect as the command `kubectl cordon $NODENAME`; see [Kubernetes Nodes](https://kubernetes.io/docs/concepts/architecture/nodes/) for more details.
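Equivalently, from a terminal with the standard kubectl commands:

```bash
kubectl cordon node1     # mark node1 (a placeholder node name) as unschedulable
kubectl uncordon node1   # make it schedulable again after maintenance
```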
|
||||
|
||||

|
||||
|
||||
{{< notice note >}}
|
||||
Note: Pods that are part of a DaemonSet tolerate being run on an unschedulable Node. DaemonSets typically provide node-local services that should run on the Node even if it is being drained of workload applications.
|
||||
{{</ notice >}}
|
||||
|
||||
## Delete a Node
|
||||
|
||||
You can delete a node with the following command:
|
||||
|
||||
```bash
|
||||
./kk delete node <nodeName> -f config-sample.yaml
|
||||
```
|
||||
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
linkTitle: "Installation"
|
||||
linkTitle: "Introduction"
|
||||
weight: 2100
|
||||
|
||||
_build:
|
||||
render: false
|
||||
---
|
||||
---
|
||||
|
|
|
|||
|
|
@ -1,76 +1,81 @@
|
|||
---
|
||||
title: "Introduction"
|
||||
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
|
||||
description: 'KubeSphere Installation Overview'
|
||||
title: "Overview"
|
||||
keywords: 'Kubernetes, KubeSphere, Linux, Installation'
|
||||
description: 'Overview of Installing KubeSphere on Linux'
|
||||
|
||||
linkTitle: "Introduction"
|
||||
linkTitle: "Overview"
|
||||
weight: 2110
|
||||
---
|
||||
|
||||
[KubeSphere](https://kubesphere.io/) is an enterprise-grade multi-tenant container platform built on [Kubernetes](https://kubernetes.io). It provides an easy-to-use UI for users to manage application workloads and computing resources with a few clicks, which greatly reduces the learning curve and the complexity of daily work such as development, testing, operation and maintenance. KubeSphere aims to alleviate the pain points of Kubernetes including storage, network, security and ease of use, etc.
|
||||
For the installation on Linux, KubeSphere can be installed both in clouds and in on-premises environments, such as AWS EC2, Azure VM and bare metal. Users can install KubeSphere on Linux hosts as they provision fresh Kubernetes clusters. The installation process is easy and friendly. Meanwhile, KubeSphere offers not only the online installer, [KubeKey](https://github.com/kubesphere/kubekey), but also an air-gapped installation solution for environments with no Internet access.
|
||||
|
||||
KubeSphere supports installation on cloud-hosted and on-premises Kubernetes clusters, e.g. native K8s, GKE, EKS, RKE, etc. It also supports installation on Linux hosts, including virtual machines and bare metal, while provisioning a fresh Kubernetes cluster. Both methods make it easy to install KubeSphere. Meanwhile, KubeSphere offers not only an online installer, but also an air-gapped installer for environments with no access to the Internet.
|
||||
As an open-source project on [GitHub](https://github.com/kubesphere), KubeSphere is home to a community with thousands of users. Many of them are running KubeSphere for their production workloads.
|
||||
|
||||
KubeSphere is an open-source project on [GitHub](https://github.com/kubesphere). Thousands of users are using KubeSphere, and many of them are running it for their production workloads.
|
||||
Users are provided with multiple installation options. Please note not all options are mutually exclusive. For instance, you can deploy KubeSphere with minimal packages on multiple nodes in an air-gapped environment.
|
||||
|
||||
In summary, there are several installation options to choose from. Please note that not all options are mutually exclusive. For instance, you can deploy KubeSphere with minimal packages on an existing multi-node Kubernetes cluster in an air-gapped environment. The decision tree shown in the following graph can help you pick the right option for your own situation.
|
||||
|
||||
- [All-in-One](../all-in-one): Install KubeSphere on a single node. It is only for users to quickly get familiar with KubeSphere.
|
||||
- [All-in-One](../all-in-one): Install KubeSphere on a single node. It is only for users to quickly get familiar with KubeSphere.
|
||||
- [Multi-Node](../multi-node): Install KubeSphere on multiple nodes. It is for testing or development.
|
||||
- [Install KubeSphere on Air Gapped Linux](../install-ks-on-linux-airgapped): All images of KubeSphere have been encapsulated into a package, which makes air-gapped installation on Linux machines convenient.
|
||||
- [High Availability Multi-Node](../master-ha): Install a highly available KubeSphere cluster on multiple nodes for the production environment.
|
||||
- [KubeSphere on Existing K8s](../install-on-k8s): Deploy KubeSphere on your existing Kubernetes cluster, including cloud-hosted services such as GKE and EKS.
|
||||
- [KubeSphere on Air-Gapped K8s](../install-on-k8s-airgapped): Install KubeSphere on a disconnected Kubernetes cluster.
|
||||
- Minimal Packages: Only install the minimum required system components of KubeSphere. The resource requirement is as low as 1 CPU core and 2 GB of memory.
|
||||
- [Full Packages](../complete-installation): Install all available system components of KubeSphere including DevOps, service mesh, application store, etc.
|
||||
- [Install KubeSphere on Air-gapped Linux](../install-ks-on-linux-airgapped): All images of KubeSphere have been encapsulated into a package. It is convenient for air-gapped installation on Linux machines.
|
||||
- [High Availability Installation](../master-ha): Install a highly available KubeSphere cluster on multiple nodes for the production environment.
|
||||
- Minimal Packages: Only install the minimum required system components of KubeSphere. Here is the minimum resource requirement:
|
||||
- 2vCPUs
|
||||
- 4GB RAM
|
||||
- 40GB Storage
|
||||
- [Full Packages](../complete-installation): Install all available system components of KubeSphere such as DevOps, service mesh, and alerting.
|
||||
|
||||

|
||||
For the installation on Kubernetes, see Overview of Installing on Kubernetes.
|
||||
|
||||
## Before Installation
|
||||
|
||||
- As the installation pulls images and operating system updates from the Internet, your environment must have Internet access. If not, you need to use the air-gapped installer instead.
|
||||
- As images will be pulled and operating systems will be downloaded from the Internet, your environment must have Internet access. Otherwise, you need to use the air-gapped installer instead.
|
||||
- For all-in-one installation, the single node acts as both the master and the worker.
|
||||
- For multi-node installation, you are asked to specify the node roles in the configuration file before installation.
|
||||
- For multi-node installation, you need to specify the node roles in the configuration file before installation.
|
||||
- Your Linux host must have OpenSSH Server installed.
|
||||
- Please check the [port requirements](../port-firewall) before installation.
|
||||
|
||||
## Quick Install For Development and Testing
|
||||
## KubeKey
|
||||
|
||||
KubeSphere has decoupled some components since v2.1.0. The installer only installs required components by default, which brings the benefits of fast installation and minimal resource consumption. If you want to install any optional component, please check the following section [Pluggable Components Overview](../intro#pluggable-components-overview) for details.
|
||||
Developed in Go, KubeKey is a brand-new installation tool that replaces the ansible-based installer used before. KubeKey provides users with flexible installation choices, as they can install KubeSphere and Kubernetes separately or install them at one time, which is convenient and efficient.
|
||||
|
||||
The quick install of KubeSphere is only for development or testing since it uses local volume for storage by default. If you want a production installation, please refer to the section [High Availability Installation for Production Environment](../intro#high-availability-installation-for-production-environment).
|
||||
Three scenarios to use KubeKey:
|
||||
|
||||
### 1. Install KubeSphere on Linux
|
||||
- Install Kubernetes only;
|
||||
- Install Kubernetes and KubeSphere together in one command (see the sketch below);
|
||||
- Install Kubernetes first, and deploy KubeSphere on it using [ks-installer](https://github.com/kubesphere/ks-installer).
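For instance, the second scenario comes down to a single command; the sketch below assumes KubeKey v1.0.0 and the Kubernetes/KubeSphere versions documented in this chapter:

```bash
# Install Kubernetes v1.17.9 and KubeSphere v3.0.0 together in one step
./kk create cluster --with-kubernetes v1.17.9 --with-kubesphere v3.0.0
```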
|
||||
|
||||
- [All-in-One](../all-in-one): It means a hassle-free single-node installation with one click.
|
||||
- [Multi-Node](../multi-node): It allows you to install KubeSphere on multiple instances using local volume, which means you are not required to install a storage server such as Ceph or GlusterFS.
|
||||
{{< notice note >}}
|
||||
|
||||
> Note: With regard to air-gapped installation, please refer to [Install KubeSphere on Air Gapped Linux Machines](../install-ks-on-linux-airgapped).
|
||||
If you have existing Kubernetes clusters, please refer to [Installing on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/).
|
||||
|
||||
### 2. Install KubeSphere on Existing Kubernetes
|
||||
{{</ notice >}}
|
||||
|
||||
You can install KubeSphere on your existing Kubernetes cluster. Please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
|
||||
## Quick Installation for Development and Testing
|
||||
|
||||
## High Availability Installation for Production Environment
|
||||
KubeSphere has decoupled some components since v2.1.0. KubeKey only installs necessary components by default, which makes for fast installation and minimal resource consumption. If you want to enable enhanced pluggable functionalities, see [Overview of Pluggable Components](../intro#pluggable-components-overview) for details.
|
||||
|
||||
### 1. Install HA KubeSphere on Linux
|
||||
The quick installation of KubeSphere is only for development or testing since it uses local volume for storage by default. If you want a production installation, see HA Cluster Configuration.
|
||||
|
||||
The KubeSphere installer supports installing a highly available cluster for production, provided that a load balancer and a persistent storage service are set up in advance.
|
||||
- **All-in-one**. It means a single-node hassle-free installation with just one command.
|
||||
- **Multi-node**. It allows you to install KubeSphere on multiple instances using the default storage class (local volume), which means you are not required to install a storage server such as Ceph or GlusterFS.
|
||||
|
||||
- [Persistent Service Configuration](../storage-configuration): By default, KubeSphere Installer uses [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning in the Kubernetes cluster. It is convenient for the quick installation of a testing environment. A production environment must have a storage server set up. Please refer to [Persistent Service Configuration](../storage-configuration) for details.
|
||||
- [Load Balancer Configuration for HA install](../master-ha): Before you get started with multi-node installation in production environment, you need to configure a load balancer. Either cloud LB or `HAproxy + keepalived` works for the installation.
|
||||
{{< notice note >}}
|
||||
|
||||
### 2. Install HA KubeSphere on Existing Kubernetes
|
||||
For air-gapped installation, please refer to [Install KubeSphere on Air Gapped Linux Machines](../install-ks-on-linux-airgapped).
|
||||
|
||||
Before you install KubeSphere on existing Kubernetes, please check the prerequisites of the installation on Linux described above, and verify whether your existing Kubernetes cluster satisfies them, i.e., whether a load balancer and a persistent storage service are in place.
|
||||
{{</ notice >}}
|
||||
|
||||
If your Kubernetes is ready, please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
|
||||
## Install HA KubeSphere on Linux
|
||||
|
||||
> You can also install KubeSphere on hosted Kubernetes services. For example, see [Installing KubeSphere on GKE cluster](../install-on-gke).
|
||||
KubeKey allows users to install a highly available cluster for production. Users need to configure load balancers and persistent storage services in advance.
|
||||
|
||||
## Pluggable Components Overview
|
||||
- [Persistent Storage Configuration](../storage-configuration): By default, KubeKey uses [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage services with dynamic provisioning in Kubernetes clusters. It is convenient for the quick installation of a testing environment. A production environment must have a storage server set up. Please refer to [Persistent Storage Configuration](../storage-configuration) for details.
|
||||
- [Load Balancer Configuration for HA installation](../master-ha): Before you get started with multi-node installation in a production environment, you need to configure load balancers. Cloud load balancers, Nginx and `HAproxy + Keepalived` all work for the installation.
|
||||
|
||||
KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable any of them before or after installation. The installer by default does not install the pluggable components. Please check the guide [Enable Pluggable Components Installation](../pluggable-components) for your requirement.
|
||||
For more information, see HA Cluster Configuration. You can also see the specific steps of HA installation across major cloud providers in Installing on Public Cloud.
|
||||
|
||||
## Overview of Pluggable Components
|
||||
|
||||
KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable any of them both before and after the installation. By default, KubeKey does not install these pluggable components. For more information, see Enable Pluggable Components.
|
||||
|
||||

|
||||
|
||||
|
|
@ -84,10 +89,24 @@ The following links explain how to configure different types of persistent stora
|
|||
- [QingCloud Block Storage](https://docs.qingcloud.com/product/storage/volume/)
|
||||
- [QingStor NeonSAN](https://docs.qingcloud.com/product/storage/volume/super_high_performance_shared_volume/)
|
||||
|
||||
## Add New Nodes
|
||||
## Cluster Operation and Maintenance
|
||||
|
||||
KubeSphere Installer allows you to scale the number of nodes, see [Add New Nodes](../add-nodes).
|
||||
### Add New Nodes
|
||||
|
||||
With KubeKey, you can scale the number of nodes to meet higher resource needs after the installation, especially in a production environment. For more information, see [Add New Nodes](../add-nodes).
|
||||
|
||||
### Remove Nodes
|
||||
|
||||
You need to drain a node before you remove it. For more information, see Remove Nodes.
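Draining is done with standard kubectl; here is a sketch (the node name is a placeholder and the flags may vary with your workloads):

```bash
# Evict regular pods from node1, ignoring DaemonSet-managed pods
kubectl drain node1 --ignore-daemonsets --delete-local-data
```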
|
||||
|
||||
### Add New Storage Classes
|
||||
|
||||
KubeKey allows you to set a new storage class after the installation. You can set different storage classes for KubeSphere itself and your workloads.
|
||||
|
||||
For more information, see Add New Storage Classes.
|
||||
|
||||
## Uninstall
|
||||
|
||||
Uninstall will remove KubeSphere from the machines. This operation is irreversible and dangerous. Please check [Uninstall](../uninstall).
|
||||
Uninstalling KubeSphere means it will be removed from the machines, which is irreversible. Please be cautious with the operation.
|
||||
|
||||
For more information, see [Uninstall](../uninstall).
|
||||
|
|
@ -0,0 +1,299 @@
|
|||
---
|
||||
title: "Multi-node Installation"
|
||||
keywords: 'Multi-node, Installation, KubeSphere'
|
||||
description: 'Multi-node Installation Overview'
|
||||
|
||||
linkTitle: "Multi-node Installation"
|
||||
weight: 2112
|
||||
---
|
||||
|
||||
In a production environment, a single-node cluster cannot satisfy most needs, as the cluster has limited resources with insufficient compute capabilities. Thus, single-node clusters are not recommended for large-scale data processing. Besides, a cluster of this kind cannot provide high availability as it only has one node. On the other hand, a multi-node architecture is the most common and preferred choice in terms of application deployment and distribution.
|
||||
|
||||
This section gives you an overview of multi-node installation, including the concept, KubeKey and steps. For information about HA installation, refer to Installing on Public Cloud and Installing in On-premises Environment.
|
||||
|
||||
## Concept
|
||||
|
||||
A multi-node cluster is composed of at least one master node and one worker node. You can use any node as the **taskbox** to carry out the installation task. You can add additional nodes based on your needs (e.g. for high availability) both before and after the installation.
|
||||
|
||||
- **Master**. A master node generally hosts the control plane that controls and manages the whole system.
|
||||
- **Worker**. Worker nodes run the actual applications deployed on them.
|
||||
|
||||
## Why KubeKey
|
||||
|
||||
If you are not familiar with Kubernetes components, you may find it difficult to set up a highly-functional multi-node Kubernetes cluster. Starting from version 3.0.0, KubeSphere uses a brand-new installer called KubeKey to replace the old ansible-based installer. Developed in Go, KubeKey allows users to quickly deploy a multi-node architecture.
|
||||
|
||||
For users who do not have an existing Kubernetes cluster, they only need to create a configuration file with a few commands and add node information (e.g. IP address and node roles) in it after KubeKey is downloaded. With one command, the installation will start and no additional operation is needed.
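The whole flow looks roughly like this sketch (see the steps below for details):

```bash
./kk create config --with-kubesphere v3.0.0   # generate config-sample.yaml
vi config-sample.yaml                         # add node IPs and roles
./kk create cluster -f config-sample.yaml     # one command to start the installation
```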
|
||||
|
||||
### Motivation
|
||||
|
||||
- The previous ansible-based installer has a bunch of software dependencies such as Python. KubeKey is developed in Go to remove such dependency problems in a variety of environments, making sure the installation succeeds.
|
||||
- KubeKey uses Kubeadm to install Kubernetes clusters on nodes in parallel as much as possible in order to reduce installation complexity and improve efficiency. It will greatly save installation time compared to the older installer.
|
||||
- With KubeKey, users can scale clusters from an all-in-one cluster to a multi-node cluster, even an HA cluster.
|
||||
- KubeKey aims to install clusters as an object, i.e., CaaO.
|
||||
|
||||
## Step 1: Prepare Linux Hosts
|
||||
|
||||
Please see the requirements for hardware and operating system shown below. To get started with multi-node installation, you need to prepare at least three hosts according to the following requirements.
|
||||
|
||||
### System Requirements
|
||||
|
||||
| Systems | Minimum Requirements (Each node) |
|
||||
| ------------------------------------------------------ | ------------------------------------------- |
|
||||
| **Ubuntu** *16.04, 18.04* | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
|
||||
| **Debian** *Buster, Stretch* | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
|
||||
| **CentOS** *7*.x | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
|
||||
| **Red Hat Enterprise Linux 7** | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
|
||||
| **SUSE Linux Enterprise Server 15/openSUSE Leap 15.2** | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The path `/var/lib/docker` is mainly used to store container data, and it will gradually grow in size during use and operation. For a production environment, it is recommended to mount a separate drive at `/var/lib/docker`.
|
||||
|
||||
{{</ notice >}}
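For example, mounting a dedicated disk at `/var/lib/docker` before installing Docker could look like the sketch below (`/dev/vdb` is a hypothetical device name; adjust to your environment):

```bash
sudo mkfs.ext4 /dev/vdb         # format the dedicated disk (destroys existing data!)
sudo mkdir -p /var/lib/docker
sudo mount /dev/vdb /var/lib/docker
# persist the mount across reboots
echo '/dev/vdb /var/lib/docker ext4 defaults 0 0' | sudo tee -a /etc/fstab
```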
|
||||
|
||||
### Node Requirements
|
||||
|
||||
- All nodes must be accessible through `SSH`.
|
||||
- Time synchronization for all nodes.
|
||||
- `sudo`/`curl`/`openssl` should be available on all nodes.
|
||||
- `ebtables`/`socat`/`ipset`/`conntrack` should be installed on all nodes (see the quick check below).
|
||||
- `docker` can be installed by yourself or by KubeKey.
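A quick sanity check for these command-line dependencies might look like the following sketch (run it on every node; note that some of the tools live in sbin, so run it as root):

```bash
# Report any required tool that is missing from the PATH
for cmd in sudo curl openssl ebtables socat ipset conntrack; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done
```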
|
||||
|
||||
### Network and DNS Requirements
|
||||
|
||||
- Make sure the DNS address in `/etc/resolv.conf` is available. Otherwise, it may cause DNS issues in the cluster.
|
||||
- If your network configuration uses Firewall or Security Group, you must ensure infrastructure components can communicate with each other through specific ports. It's recommended that you turn off the firewall or follow the guide [Network Access](https://github.com/kubesphere/kubekey/blob/master/docs/network-access.md).
|
||||
|
||||
{{< notice tip >}}
|
||||
|
||||
- It's recommended that your OS be clean (without any other software installed). Otherwise, there may be conflicts.
|
||||
- It is recommended to prepare a container image mirror (accelerator) if you have trouble downloading images from dockerhub.io. See [Configure registry mirrors for the Docker daemon](https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon).
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
This example includes three hosts as below with the master node serving as the taskbox.
|
||||
|
||||
| Host IP | Host Name | Role |
|
||||
| ----------- | --------- | ------------ |
|
||||
| 192.168.0.2 | master | master, etcd |
|
||||
| 192.168.0.3 | node1 | worker |
|
||||
| 192.168.0.4 | node2 | worker |
|
||||
|
||||
## Step 2: Download KubeKey
|
||||
|
||||
As below, you can either download the binary file.
|
||||
|
||||
Download the Installer for KubeSphere v3.0.0.
|
||||
|
||||
{{< tabs >}}
|
||||
|
||||
{{< tab "For users with poor network to GitHub" >}}
|
||||
|
||||
For users in China, you can download the installer using this link.
|
||||
|
||||
```bash
|
||||
wget https://kubesphere.io/kubekey/releases/v1.0.0
|
||||
```
|
||||
{{</ tab >}}
|
||||
|
||||
{{< tab "For users with good network to GitHub" >}}
|
||||
|
||||
For users with good network to GitHub, you can download it from [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly.
|
||||
|
||||
```bash
|
||||
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
|
||||
```
|
||||
{{</ tab >}}
|
||||
|
||||
{{</ tabs >}}
|
||||
|
||||
Unzip it.
|
||||
|
||||
```bash
|
||||
tar -zxvf v1.0.0
|
||||
```
|
||||
|
||||
Grant the execution right to `kk`:
|
||||
|
||||
```bash
|
||||
chmod +x kk
|
||||
```
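
As a quick, optional sanity check, the binary should print its usage and subcommands when run with `--help`. This is only a sketch; the exact subcommand and flag set depends on your KubeKey release.

```bash
# Print KubeKey usage to confirm the binary runs on this machine
./kk --help
```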

## Step 3: Create a Cluster

For multi-node installation, you need to create a cluster by specifying a configuration file.

### 1. Create an example configuration file

Command:

```bash
./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]
```

{{< notice info >}}

Supported Kubernetes versions: *v1.15.12*, *v1.16.13*, *v1.17.9* (default), *v1.18.6*.

{{</ notice >}}

Here are some examples for your reference:

- You can create an example configuration file with default configurations. You can also specify the file with a different filename, or in a different folder.

```bash
./kk create config [-f ~/myfolder/abc.yaml]
```

- You can customize the persistent storage plugins (e.g. NFS Client, Ceph RBD, and GlusterFS) in `config-sample.yaml`.

```bash
./kk create config --with-storage localVolume
```

{{< notice note >}}

By default, KubeKey installs [OpenEBS](https://openebs.io/) to provision [LocalPV](https://kubernetes.io/docs/concepts/storage/volumes/#local) for development and testing environments, which is convenient for new users. In this multi-node installation example, we use the default storage class (local volume). For production, please use NFS/Ceph/GlusterFS/CSI or commercial products as persistent storage solutions; you need to specify them in `addons` of `config-sample.yaml`. See [Persistent Storage Configuration](../storage-configuration).

{{</ notice >}}

- You can specify the KubeSphere version that you want to install (e.g. `--with-kubesphere v3.0.0`).

```bash
./kk create config --with-kubesphere [version]
```
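
For example, to pin both versions and write the file to the default name in one step:

```bash
# Generate config-sample.yaml for Kubernetes v1.17.9 plus KubeSphere v3.0.0
./kk create config --with-kubernetes v1.17.9 --with-kubesphere v3.0.0 -f config-sample.yaml
```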

### 2. Edit the configuration file

A default file **config-sample.yaml** will be created if you do not change the name. Edit the file; here is an example of the configuration file of a multi-node cluster with one master node.

```yaml
spec:
  hosts:
  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
  - {name: node1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
  - {name: node2, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
  roleGroups:
    etcd:
    - master
    master:
    - master
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: "6443"
```

#### Hosts

- List all your machines under `hosts` and add their detailed information as above. Port 22 is the default port for SSH; if a host uses a different port, add the port number after the IP address. For example:

```yaml
hosts:
  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, port: 8022, user: ubuntu, password: Testing123}
```

- For the default root user:

```yaml
hosts:
  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, password: Testing123}
```

- For passwordless login with SSH keys:

```yaml
hosts:
  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, privateKeyPath: "~/.ssh/id_rsa"}
```

#### roleGroups

- `etcd`: etcd node names
- `master`: Master node names
- `worker`: Worker node names

#### controlPlaneEndpoint (for HA installation only)

`controlPlaneEndpoint` allows you to define an external load balancer for an HA cluster. You need to prepare and configure the external load balancer if and only if you install multiple master nodes. Please note that `address` and `port` should be indented by two spaces in `config-sample.yaml`, and `address` should be the VIP. See HA Configuration for details.
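
For illustration, assuming a VIP of `192.168.0.253` on the load balancer (a hypothetical address; substitute your own VIP), the block would look like this:

```yaml
controlPlaneEndpoint:
  domain: lb.kubesphere.local
  address: "192.168.0.253"   # hypothetical VIP of the external load balancer
  port: "6443"
```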

{{< notice tip >}}

- You can enable the multi-cluster feature by editing the configuration file. For more information, see Multi-cluster Management.
- You can also select the components you want to install. For more information, see Enable Pluggable Components. For an example of a complete config-sample.yaml file, see [this file](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md).

{{</ notice >}}

When you finish editing, save the file.

### 3. Create a cluster using the configuration file

```bash
./kk create cluster -f config-sample.yaml
```

{{< notice note >}}

You need to change `config-sample.yaml` above to your own file name if you use a different one.

{{</ notice >}}

The whole installation process may take 10-20 minutes, depending on your machine and network.
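
While you wait, you can follow the installer's progress from another terminal by tailing the logs of the ks-installer pod:

```bash
# Watch the installation progress in real time
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```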

### 4. Verify the installation

When the installation finishes, you will see content as follows:

```bash
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.2:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After logging into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are ready.
  2. Please modify the default password after login.

#####################################################
https://kubesphere.io             20xx-xx-xx xx:xx:xx
#####################################################
```

Now you can access the web console of KubeSphere at `http://{IP}:30880` (e.g. using the EIP) with the account and password `admin/P@88w0rd`.

{{< notice note >}}

To access the console, you may need to forward the source port to the intranet port of the intranet IP, depending on your cloud provider's platform. Please also make sure port 30880 is opened in the security group.

{{</ notice >}}

![login](https://ap3.qingstor.com/kubesphere-website/docs/login.png)

## Enable kubectl Autocompletion

KubeKey doesn't enable kubectl autocompletion. See the content below to turn it on.

**Prerequisite**: make sure bash-completion is installed and works.

```bash
# Install bash-completion
apt-get install bash-completion

# Source the completion script in your ~/.bashrc file
echo 'source <(kubectl completion bash)' >>~/.bashrc

# Add the completion script to the /etc/bash_completion.d directory
kubectl completion bash >/etc/bash_completion.d/kubectl
```

Detailed information can be found [here](https://kubernetes.io/docs/tasks/tools/install-kubectl/#enabling-shell-autocompletion).
---
title: "Port Requirements"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'How to set the port in firewall rules'

linkTitle: "Port Requirements"
weight: 2120
---

KubeSphere requires certain ports to communicate among services. If your network configuration uses a firewall, you need to ensure infrastructure components can communicate with each other through specific ports, which act as communication endpoints for certain processes or services.

| Service | Protocol | Action | Start Port | End Port | Notes |
|---|---|---|---|---|---|
| ssh | TCP | allow | 22 | | |
| etcd | TCP | allow | 2379 | 2380 | |
| apiserver | TCP | allow | 6443 | | |
| calico | TCP | allow | 9099 | 9100 | |
| bgp | TCP | allow | 179 | | |
| nodeport | TCP | allow | 30000 | 32767 | |
| master | TCP | allow | 10250 | 10258 | |
| dns | TCP | allow | 53 | | |
| dns | UDP | allow | 53 | | |
| local-registry | TCP | allow | 5000 | | Required for offline environments |
| local-apt | TCP | allow | 5080 | | Required for offline environments |
| rpcbind | TCP | allow | 111 | | Required when using NFS |
| ipip | IPENCAP / IPIP | allow | | | Calico needs to allow the IPIP protocol |

{{< notice note >}}
Please note that when you use the Calico network plugin and run your cluster on a classic network in a cloud environment, you need to open both the IPENCAP and IPIP protocols for the source IP.
{{</ notice >}}
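
As an illustration, on a node using firewalld the rules above could be translated roughly as follows. This is a sketch, not an exhaustive rule set; protocol names follow `/etc/protocols`, where Calico's IP-in-IP traffic is protocol 4 (`ipencap`).

```bash
# Open a sample of the required ports (repeat for the other rows in the table)
firewall-cmd --permanent --add-port=6443/tcp         # apiserver
firewall-cmd --permanent --add-port=2379-2380/tcp    # etcd
firewall-cmd --permanent --add-port=30000-32767/tcp  # nodeport range
# Allow IP-in-IP encapsulated traffic for Calico
firewall-cmd --permanent --add-rich-rule='rule protocol value="ipencap" accept'
firewall-cmd --reload
```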

---
title: "Persistent Storage Configuration"
keywords: 'kubernetes, docker, kubesphere, storage, volume, PVC'
description: 'Persistent Storage Configuration'

linkTitle: "Persistent Storage Configuration"
weight: 2140
---
# Overview
A persistent volume is a **must** for KubeSphere, so before installing KubeSphere, a **default** [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) and its corresponding storage plugin need to be installed on the Kubernetes cluster.
As different users may choose different storage plugins, [KubeKey](https://github.com/kubesphere/kubekey) supports installing storage plugins as [add-ons](https://github.com/kubesphere/kubekey/blob/v1.0.0/docs/addons.md). This section introduces the add-on configuration for some commonly used storage plugins.

# QingCloud-CSI
The [QingCloud-CSI](https://github.com/yunify/qingcloud-csi) plugin implements an interface between a CSI-enabled Container Orchestrator (CO) and QingCloud disks.
Here is an example of installing its Helm chart as a KubeKey add-on.
```yaml
addons:
- name: csi-qingcloud
  namespace: kube-system
  sources:
    chart:
      name: csi-qingcloud
      repo: https://charts.kubesphere.io/test
      values:
      - config.qy_access_key_id=SHOULD_BE_REPLACED
      - config.qy_secret_access_key=SHOULD_BE_REPLACED
      - config.zone=SHOULD_BE_REPLACED
      - sc.isDefaultClass=true
```
For more information about QingCloud, see [QingCloud](https://www.qingcloud.com/).
For more chart values, see [configuration](https://github.com/kubesphere/helm-charts/tree/master/src/test/csi-qingcloud#configuration).

# NFS-client
The [nfs-client-provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client) is an automatic provisioner for Kubernetes that uses your *already configured* NFS server to dynamically create Persistent Volumes.
Here is an example of installing its Helm chart as a KubeKey add-on.
```yaml
addons:
- name: nfs-client
  namespace: kube-system
  sources:
    chart:
      name: nfs-client-provisioner
      repo: https://charts.kubesphere.io/main
      values:
      - nfs.server=SHOULD_BE_REPLACED
      - nfs.path=SHOULD_BE_REPLACED
      - storageClass.defaultClass=true
```
For more chart values, see [configuration](https://github.com/kubesphere/helm-charts/tree/master/src/main/csi-nfs-provisioner#configuration).
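
Once the provisioner is running and its StorageClass is the default, dynamic provisioning can be sanity-checked with a minimal PVC. This is a sketch: `nfs-client` is the StorageClass name created by the chart by default and may differ in your deployment.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc           # throwaway claim; delete after testing
spec:
  storageClassName: nfs-client # assumed default StorageClass name from the chart
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

If `kubectl get pvc nfs-test-pvc` reports the claim as `Bound`, dynamic provisioning works.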

# Ceph RBD
Ceph RBD is an in-tree storage plugin on Kubernetes. As **hyperkube** images were [deprecated since Kubernetes 1.17](https://github.com/kubernetes/kubernetes/pull/85094), **KubeKey** never uses **hyperkube** images, so the in-tree Ceph RBD plugin may not work on a Kubernetes cluster installed by **KubeKey**.
You can use the [rbd provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/rbd) as a substitute; its StorageClass format is the same as that of the in-tree Ceph RBD plugin.
Here is an example of the rbd-provisioner add-on.
```yaml
addons:
- name: rbd-provisioner
  namespace: kube-system
  sources:
    chart:
      name: rbd-provisioner
      repo: https://charts.kubesphere.io/test
      values:
      - ceph.mon=SHOULD_BE_REPLACED # like 192.168.0.10:6789
      - ceph.adminKey=SHOULD_BE_REPLACED
      - ceph.userKey=SHOULD_BE_REPLACED
      - sc.isDefault=true
```
For more values, see [configuration](https://github.com/kubesphere/helm-charts/tree/master/src/test/rbd-provisioner#configuration).

# GlusterFS
GlusterFS is an in-tree storage plugin on Kubernetes, so only a StorageClass (and the heketi Secret it references) needs to be created.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: kube-system
type: kubernetes.io/glusterfs
data:
  key: SHOULD_BE_REPLACED
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
    storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]'
  name: glusterfs
parameters:
  clusterid: SHOULD_BE_REPLACED
  gidMax: "50000"
  gidMin: "40000"
  restauthenabled: "true"
  resturl: SHOULD_BE_REPLACED # like "http://192.168.0.14:8080"
  restuser: admin
  secretName: heketi-secret
  secretNamespace: kube-system
  volumetype: SHOULD_BE_REPLACED # like replicate:2
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
```
For detailed information, see [configuration](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs).

Save the StorageClass YAML file locally, for example as **/root/glusterfs-sc.yaml**. The add-on configuration can then be set like:
```yaml
addons:
- name: glusterfs
  sources:
    yaml:
      path:
      - /root/glusterfs-sc.yaml
```

# OpenEBS/LocalVolumes
The [OpenEBS](https://github.com/openebs/openebs) Dynamic Local PV provisioner can create Kubernetes Local Persistent Volumes using a unique HostPath (directory) on the node to persist data. It is convenient for trying out KubeSphere when you have no dedicated storage system.
If no default StorageClass is configured through **KubeKey** add-ons, OpenEBS/LocalVolumes will be installed.

# Multi-Storage
If you intend to install more than one storage plugin, remember to set only one of them as the default.
Otherwise, [ks-installer](https://github.com/kubesphere/ks-installer) will not know which StorageClass to use.
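
Before starting the installation, you can verify that exactly one StorageClass is marked as the default:

```bash
# The default StorageClass is flagged with "(default)" next to its name
kubectl get sc
```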

---
title: "Kubernetes Cluster Configuration"
keywords: 'KubeSphere, kubernetes, docker, cluster, jenkins, prometheus'
description: 'Configure cluster parameters before installing'

linkTitle: "Kubernetes Cluster Configuration"
weight: 2130
---

This tutorial explains how to customize the Kubernetes cluster configuration in `config-example.yaml` when you start to use [KubeKey](https://github.com/kubesphere/kubekey) to provision a cluster. You can refer to the following section to understand each parameter.

```yaml
kubernetes:
  version: v1.17.9           # The default Kubernetes version; you can also specify v1.15.12, v1.16.13 or v1.18.6
  imageRepo: kubesphere      # Docker Hub repository
  clusterName: cluster.local # Kubernetes cluster name
  masqueradeAll: false       # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false]
  maxPods: 110               # maxPods is the number of Pods that can run on this kubelet. [Default: 110]
  nodeCidrMaskSize: 24       # The internal network size allocated to each node on your network. [Default: 24]
  proxyMode: ipvs            # The proxy mode to use. [Default: ipvs]
network:
  plugin: calico             # Calico by default; KubeSphere Network Policy is based on Calico. You can also specify Flannel.
  calico:
    ipipMode: Always         # IPIP mode for the IPv4 pool created at startup. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always]
    vxlanMode: Never         # VXLAN mode for the IPv4 pool created at startup. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never]
    vethMTU: 1440            # The maximum transmission unit (MTU) determines the largest packet size that can be transmitted through your network. [Default: 1440]
  kubePodsCIDR: 10.233.64.0/18   # A valid CIDR range for the Kubernetes Pod subnet; it should overlap with neither the node subnet nor the Kubernetes services subnet.
  kubeServiceCIDR: 10.233.0.0/18 # A valid CIDR range for Kubernetes services; it should overlap with neither the node subnet nor the Kubernetes Pod subnet.
registry:
  registryMirrors: []        # For users who need to accelerate the image download speed
  insecureRegistries: []     # The addresses of insecure image registries, see https://docs.docker.com/registry/insecure/
  privateRegistry: ""        # A private image registry for air-gapped installation (e.g. a local Docker registry or Harbor)
addons: []                   # You can specify any add-ons with one or more Helm charts or YAML files in this field, e.g. CSI plugins or cloud provider plugins.
```
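
For example, to route image pulls through a registry mirror, `registryMirrors` could be filled in as below. The mirror URL is only illustrative (one of the public mirrors commonly used for acceleration); use whichever mirror is reachable from your network.

```yaml
registry:
  registryMirrors:
  - https://registry.docker-cn.com   # illustrative mirror; replace with your own
  insecureRegistries: []
  privateRegistry: ""
```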

---
title: "Air-Gapped Installation"
keywords: 'kubernetes, kubesphere, air gapped, installation'
description: 'How to install KubeSphere on air-gapped Linux machines'

weight: 2240
---

The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. This guide demonstrates how to install KubeSphere and Kubernetes in an air-gapped environment.

> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).

## Prerequisites

- If your machine is behind a firewall, you need to open the ports; see the document [Ports Requirements](../port-firewall) for more information.
- The installer uses `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend you add additional disks of at least 100G, mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
- The installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning, which is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare a [high availability configuration](../master-ha) before installation.
- Since air-gapped machines cannot connect to an apt or yum source, please use a clean Linux machine to avoid dependency problems.

## Step 1: Prepare Linux Hosts

The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.

- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
- Time synchronization is required across all nodes, otherwise the installation may not succeed.
- For the `Ubuntu 16.04` OS, it is recommended to select `16.04.5`.
- If you are using `Ubuntu 18.04`, you need to use the `root` user.
- Ensure the disk of each node is at least 100G.
- CPU and memory in total across all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.

The following section describes an example of multi-node installation. This example uses three hosts, with `master` serving as the taskbox to execute the installation. The cluster consists of one master and two nodes.

> Note: KubeSphere supports the high-availability configuration of the master and etcd nodes. Please refer to [Creating a High Availability KubeSphere Cluster](../master-ha) for a guide.

| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|

### Cluster Architecture

#### Single Master, Single Etcd, Two Nodes

![](

## Step 2: Download Installer Package

Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.

```bash
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
```

## Step 3: Configure Host Template

> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.

Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.

> Note:
>
> - If you use a non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, you need to refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of the nodes, and all host names should be in lowercase.

### hosts.ini

```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD

[local-registry]
master

[kube-master]
master

[kube-node]
node1
node2

[etcd]
master

[k8s-cluster:children]
kube-node
kube-master
```

> Note:
>
> - You need to replace each node's information, such as IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
> - The installer will use one node as the local registry for Docker images; this defaults to "master" in the group `[local-registry]`.
> - The "master" node also takes the roles of master and etcd, so "master" is filled in under the groups `[kube-master]` and `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled in under the group `[kube-node]`.
>
> Parameter specification (a non-root entry is sketched after this list):
>
> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
> - `ansible_host`: The name of the host to connect to.
> - `ip`: The IP of the host to connect to.
> - `ansible_user`: The default SSH user name to use.
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
> - `ansible_ssh_pass`: The password of the host to connect to as root.
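
Based on the parameters above, a non-root entry might look like the following sketch. The values are hypothetical; the authoritative template is the commented example at the top of `conf/hosts.ini`.

```ini
# Hypothetical non-root host entry; adapt the commented example in conf/hosts.ini
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_user=ubuntu ansible_become_pass=PASSWORD ansible_ssh_pass=PASSWORD
```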

## Step 4: Enable All Components

> This step is for the complete installation; you can skip it if you choose a minimal installation.

Edit `conf/common.yaml` and set the following values to `true` (they are `false` by default).

```yaml
# LOGGING CONFIGURATION
# Logging is an optional component when installing KubeSphere, and
# the Kubernetes built-in logging APIs will be used if logging_enabled is set to false.
# Built-in logging only provides limited functions, so enabling logging is recommended.
logging_enabled: true             # Whether to install the logging system
elasticsearch_master_replica: 1   # Total number of master nodes; an even number is not allowed
elasticsearch_data_replica: 2     # Total number of data nodes
elasticsearch_volume_size: 20Gi   # Elasticsearch volume size
log_max_age: 7                    # Log retention time in the built-in Elasticsearch, 7 days by default
elk_prefix: logstash              # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false             # Whether to install the built-in Kibana
#external_es_url: SHOULD_BE_REPLACED  # External Elasticsearch address. KubeSphere supports integrating with an Elasticsearch outside the cluster, which can reduce resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port

# DevOps Configuration
devops_enabled: true              # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary to image)
jenkins_memory_lim: 8Gi           # Jenkins memory limit, 8 Gi by default
jenkins_memory_req: 4Gi           # Jenkins memory request, 4 Gi by default
jenkins_volume_size: 8Gi          # Jenkins volume size, 8 Gi by default
jenkinsJavaOpts_Xms: 3g           # The following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true           # Whether to install the built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED   # External SonarQube address. KubeSphere supports integrating with a SonarQube outside the cluster, which can reduce resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token

# The following components are all optional for KubeSphere,
# and can be turned on before installation or later by updating their values to true
openpitrix_enabled: true          # KubeSphere application store
metrics_server_enabled: true      # For KubeSphere HPA to use
servicemesh_enabled: true         # KubeSphere service mesh system (Istio-based)
notification_enabled: true        # KubeSphere notification system
alerting_enabled: true            # KubeSphere alerting system
```

## Step 5: Install KubeSphere to Linux Machines

> Note:
>
> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
> - If you want to enable the pluggable components, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - The installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` (shown below) to avoid conflicts.
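
For orientation, the two keys sit in `conf/common.yaml` with the following defaults; change them only if they overlap your node network.

```yaml
# A valid CIDR range for Kubernetes services;
# it must overlap with neither the node subnet nor the Pod subnet.
kube_service_addresses: 10.233.0.0/18

# A valid CIDR range for the Kubernetes Pod subnet;
# it must overlap with neither the node subnet nor the services subnet.
kube_pods_subnet: 10.233.64.0/18
```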

**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:

```bash
cd ../scripts
./install.sh
```

**2.** Type `2` to select the multi-node mode to start the installation. The installer will ask you whether you have set up a persistent storage service; just type `yes`, since we are going to use the local volume.

```bash
################################################
         KubeSphere Installer Menu
################################################
*   1) All-in-one
*   2) Multi-node
*   3) Quit
################################################
https://kubesphere.io/       2020-02-24
################################################
Please input an option: 2

```

**3.** Verify the multi-node installation:

**(1).** If "Successful" is returned after the `install.sh` process completes, then congratulations! You are ready to go.

```bash
successful!
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd

NOTE: Please modify the default password after login.
#####################################################
```

> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).

**(2).** You can use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.

![](https://pek3b.qingstor.com/kubesphere-docs/png/20200105224443.png)

<font color=red>Note: After logging in to the console, please verify the monitoring status of service components in the "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.</font>

![](https://pek3b.qingstor.com/kubesphere-docs/png/20200106105254.png)

## Enable Pluggable Components

If you have already set up a minimal installation, you can still enable the pluggable components by editing the ConfigMap of ks-installer using the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).

```bash
kubectl edit cm -n kubesphere-system ks-installer
```
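
After saving your changes, you can watch the installer apply them by tailing its logs. The command below assumes the installer pod carries the `app=ks-install` label in the `kubesphere-system` namespace, as in the other installation guides in this document; the label may differ across KubeSphere releases.

```bash
# Follow the ks-installer logs while the newly enabled components are deployed
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```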

## FAQ

If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).

---
linkTitle: "Install on On-premises environment"
weight: 2200

_build:
  render: false
---

In this chapter, we will demonstrate how to use KubeKey or kubeadm to provision a new Kubernetes and KubeSphere cluster in on-premises environments, such as VMware vSphere, OpenStack, and bare metal. You just need to prepare machines with a supported operating system before you start the installation. The air-gapped installation guide is also included in this chapter.
---
title: "VMware vSphere Installation"
keywords: 'kubernetes, kubesphere, VMware vSphere, installation'
description: 'How to install KubeSphere on VMware vSphere Linux machines'

weight: 2260
---

# Introduction

For a production environment, we need to consider the high availability of the cluster. If the key components (e.g. kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers in front of multiple master nodes. You can use any cloud load balancer or any hardware load balancer (e.g. F5). In addition, Keepalived with [HAProxy](https://www.haproxy.com/) or Nginx is also an alternative for creating high-availability clusters.

This tutorial walks you through an example of how to set up Keepalived and HAProxy, and implement high availability of the master and etcd nodes behind the load balancers.

## Prerequisites

- Please make sure that you already know how to install KubeSphere on a multi-node cluster by following the [guide](https://github.com/kubesphere/kubekey). For detailed information about the config YAML file that is used for installation, see Multi-node Installation. This tutorial focuses on how to configure the load balancers.
- You need a VMware vSphere account to create VMs.
- Considering data persistence, for a production environment we recommend you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.

## Architecture

![](

## Prepare Linux Hosts

This tutorial creates 9 virtual machines with **CentOS Linux release 7.6.1810 (Core)**, using the default minimal installation; each is configured with 2 cores, 4 GB of memory and a 40 G disk.

| Host IP | Host Name | Role |
| --- | --- | --- |
|10.10.71.214|master1|master1, etcd|
|10.10.71.73|master2|master2, etcd|
|10.10.71.62|master3|master3, etcd|
|10.10.71.77|lb-0|lb (keepalived + haproxy)|
|10.10.71.66|lb-1|lb (keepalived + haproxy)|

Start the virtual machine creation process in the VMware Host Client. You use the New Virtual Machine wizard to create a virtual machine to place in the VMware Host Client inventory.

![](

On the Select creation type page of the wizard, you can create a new virtual machine, deploy a virtual machine from an OVF or OVA file, or register an existing virtual machine.

![](

When you create a new virtual machine, provide a unique name for the virtual machine to distinguish it from existing virtual machines on the host you are managing.

![](

Select the datastore or datastore cluster in which to store the virtual machine configuration files and all of the virtual disks. You can select the datastore that has the most suitable properties, such as size, speed, and availability, for your virtual machine storage.

![](

![](

When you select a guest operating system, the wizard provides the appropriate defaults for the operating system installation.

![](

![](

Before you deploy the new virtual machine, you have the option to configure the virtual machine hardware and the virtual machine options.

![](

![](

On the Ready to complete page, review the configuration selections that you made for the virtual machine.

![](

## Install a Load Balancer using Keepalived and HAProxy (Optional)

For a production environment, you have to prepare an external load balancer. If you do not have one, you can build it with Keepalived and HAProxy. If you are provisioning a development or testing environment, please skip this section.

### Yum Install

Install `haproxy`, `keepalived` and `psmisc` on host lb-0 (10.10.71.77) and host lb-1 (10.10.71.66):

```bash
yum install keepalived haproxy psmisc -y
```

### Configure HAProxy

On the servers with IP 10.10.71.77 and 10.10.71.66, configure HAProxy (the configuration of the two lb machines is the same; pay attention to the back-end service addresses).

```bash
# HAProxy configuration: /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    # ...

backend kube-apiserver
    # ...
    server kube-apiserver-1 10.10.71.214:6443 check
    server kube-apiserver-2 10.10.71.73:6443 check
    server kube-apiserver-3 10.10.71.62:6443 check
```

Check the configuration syntax before starting:

```bash
haproxy -f /etc/haproxy/haproxy.cfg -c
```

Start HAProxy and enable it to start on boot:

```bash
systemctl restart haproxy && systemctl enable haproxy
```

To stop HAProxy:

```bash
systemctl stop haproxy
```

### Configure Keepalived

Master: lb-0, 10.10.71.77 (`/etc/keepalived/keepalived.conf`):

```bash
global_defs {
   notification_email {
   }
   smtp_connect_timeout 30
   router_id LVS_DEVEL01
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_script chk_haproxy {
   script "killall -0 haproxy"
   interval 2
   weight 2
}
vrrp_instance haproxy-vip {
   state MASTER
   priority 100
   interface ens192
   virtual_router_id 60
   advert_int 1
   authentication {
       auth_type PASS
       auth_pass 1111
   }
   unicast_src_ip 10.10.71.77
   unicast_peer {
       10.10.71.66
   }
   virtual_ipaddress {
       #vip
       10.10.71.67/24
   }
   track_script {
       chk_haproxy
   }
}
```

Backup: lb-1, 10.10.71.66 (`/etc/keepalived/keepalived.conf`):

```bash
global_defs {
   notification_email {
   }
   router_id LVS_DEVEL02
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_script chk_haproxy {
   script "killall -0 haproxy"
   interval 2
   weight 2
}
vrrp_instance haproxy-vip {
   state BACKUP
   priority 90
   interface ens192
   virtual_router_id 60
   advert_int 1
   authentication {
       auth_type PASS
       auth_pass 1111
   }
   unicast_src_ip 10.10.71.66
   unicast_peer {
       10.10.71.77
   }
   virtual_ipaddress {
       10.10.71.67/24
   }
   track_script {
       chk_haproxy
   }
}
```

Start Keepalived and enable it to start on boot:

```bash
systemctl restart keepalived && systemctl enable keepalived
```

### Verify availability

Use `ip a s` to view the VIP binding status of each lb node:

```bash
ip a s
```

Stop HAProxy on the node currently holding the VIP:

```bash
systemctl stop haproxy
```

Use `ip a s` again to check the VIP binding of each lb node, and check whether the VIP has drifted to the other node:

```bash
ip a s
```

Alternatively, use the `systemctl status -l keepalived` command to check:

```bash
systemctl status -l keepalived
```

## Get the Installer Executable File

Download KubeKey, the installer for KubeSphere v3.0.0.

{{< tabs >}}

{{< tab "For users with poor network to GitHub" >}}

For users in China, you can download the installer using this link.

```bash
wget https://kubesphere.io/kubekey/releases/v1.0.0
```
{{</ tab >}}

{{< tab "For users with good network to GitHub" >}}

For users with a good network connection to GitHub, you can download it from the [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly.

```bash
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
```
{{</ tab >}}

{{</ tabs >}}

Unpack it.

```bash
tar -zxvf v1.0.0
```

Grant the execution permission to `kk`:

```bash
chmod +x kk
```
|
||||
|
||||
## 创建多节点群集
|
||||
## Create a Multi-node Cluster
|
||||
|
||||
您可以使用高级安装来控制自定义参数或创建多节点群集。具体来说,通过指定配置文件来创建集群。
|
||||
You have more control to customize parameters or create a multi-node cluster using the advanced installation. Specifically, create a cluster by specifying a configuration file.。
|
||||
|
||||
### kubekey 部署 k8s 集群
|
||||
With KubeKey, you can install Kubernetes and KubeSphere
|
||||
|
||||
创建配置文件(一个示例配置文件)|包含 kubesphere 的配置文件
|
||||
Create a Kubernetes cluster with KubeSphere installed (e.g. --with-kubesphere v3.0.0)
|
||||
|
||||
```bash
|
||||
./kk create config --with-kubesphere v3.0.0 -f ~/config-sample.yaml
|
||||
./kk create config --with-kubernetes v1.17.9 --with-kubesphere v3.0.0 -f ~/config-sample.yaml
|
||||
```
|
||||
|
||||
#### 集群节点配置
|
||||
> The following Kubernetes versions has been fully tested with KubeSphere:
|
||||
> - v1.15: v1.15.12
|
||||
> - v1.16: v1.16.13
|
||||
> - v1.17: v1.17.9 (default)
|
||||
> - v1.18: v1.18.6
|
||||
|
||||
vi ~/config-sample.yaml
|
||||
Modify the file config-sample.yaml according to your environment
|
||||
|
||||
```bash
|
||||
vi config-sample.yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
#vi ~/config-sample.yaml
|
||||
apiVersion: kubekey.kubesphere.io/v1alpha1
|
||||
kind: Cluster
|
||||
metadata:
|
||||
|
|
@ -308,7 +347,7 @@ spec:
|
|||
- master1
|
||||
- master2
|
||||
- master3
|
||||
master:
|
||||
master:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
|
|
@ -418,22 +457,21 @@ spec:
|
|||
servicemesh: # Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offer visualization for traffic topology
|
||||
enabled: false
|
||||
```
|
||||
Create a cluster using the configuration file you customized above:
|
||||
|
||||
使用您在上面自定义的配置文件创建集群:
|
||||
|
||||
```
|
||||
```bash
|
||||
./kk create cluster -f config-sample.yaml
|
||||
```
|
||||
|
||||
#### 验证安装结果
|
||||
#### Verify the multi-node installation
|
||||
|
||||
检查安装日志,然后等待一段时间
|
||||
Inspect the logs of installation, and wait a while:
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
如果在创建集群,最后返回 `Welcome to KubeSphere` ,则表示已安装成功。
|
||||
If you can see the welcome log return, it means the installation is successful. You are ready to go.
|
||||
|
||||
```bash
|
||||
**************************************************
|
||||
|
|
@ -447,7 +485,7 @@ NOTES:
|
|||
1. After logging into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
ready, please wait patiently until all components
|
||||
are ready.
|
||||
2. Please modify the default password after login.
|
||||
#####################################################
|
||||
|
|
@ -455,15 +493,11 @@ https://kubesphere.io 2020-08-15 23:32:12
|
|||
#####################################################
|
||||
```
|
||||
|
||||
#### 登录 console 界面
|
||||
#### Log in the console
|
||||
|
||||
使用给定的访问地址进行访问,进入到 KubeSphere 的登陆界面并使用默认账号(用户名 `admin`,密码 `P@88w0rd`)即可登陆平台。
|
||||
You will be able to use default account and password `admin / P@88w0rd` to log in the console `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
#### 开启可插拔功能组件(可选)
|
||||
|
||||
上面的示例演示了默认最小安装的过程。若要在 KubeSphere 中启用其他组件,请参阅[启用可插拔组件](https://github.com/kubesphere/ks-installer/blob/master/README_zh.md#安装功能组件)了解更多详细信息。
|
||||

|
||||
|
||||
#### Enable Pluggable Components (Optional)
|
||||
The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see [Enable Pluggable Components for more details](https://github.com/kubesphere/ks-installer#enable-pluggable-components).
|
||||
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
linkTitle: "Install on Linux"
|
||||
linkTitle: "Installing on Public Cloud"
|
||||
weight: 2200
|
||||
|
||||
_build:
|
||||
render: false
|
||||
---
|
||||
---
|
||||
|
|
|
|||
|
|
@ -1,116 +0,0 @@
|
|||
---
|
||||
title: "All-in-One Installation"
|
||||
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
|
||||
description: 'The guide for installing all-in-one KubeSphere for developing or testing'
|
||||
|
||||
linkTitle: "All-in-One"
|
||||
weight: 2210
|
||||
---
|
||||
|
||||
For those who are new to KubeSphere and looking for a quick way to discover the platform, the all-in-one mode is your best choice to install it since it is one-click and hassle-free configuration installation with provisioning KubeSphere and Kubernetes on your machine.
|
||||
|
||||
- <font color=red>The following instructions are for the default installation without enabling any optional components as we have made them pluggable since v2.1.0. If you want to enable any one, please see the section [Enable Pluggable Components](../all-in-one#enable-pluggable-components) below.</font>
|
||||
- <font color=red>If your machine has >= 8 cores and >= 16G memory, we recommend you to install the full package of KubeSphere by [enabling optional components](../complete-installation)</font>.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
If your machine is behind a firewall, you need to open the ports by following the document [Ports Requirement](../port-firewall) for more information.
|
||||
|
||||
## Step 1: Prepare Linux Machine
|
||||
|
||||
The following describes the requirements of hardware and operating system.
|
||||
|
||||
- For `Ubuntu 16.04` OS, it is recommended to select the latest `16.04.5`.
|
||||
- If you are using Ubuntu 18.04, you need to use the root user to install.
|
||||
- If the Debian system does not have the sudo command installed, you need to execute the `apt update && apt install sudo` command using root before installation.
|
||||
|
||||
### Hardware Recommendation
|
||||
|
||||
| System | Minimum Requirements |
|
||||
| ------- | ----------- |
|
||||
| CentOS 7.4 ~ 7.7 (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:100 G |
|
||||
| Ubuntu 16.04/18.04 LTS (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:100 G |
|
||||
| Red Hat Enterprise Linux Server 7.4 (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:100 G |
|
||||
| Debian Stretch 9.5 (64 bit)| CPU:2 Core, Memory:4 G, Disk Space:100 G |
|
||||
|
||||
## Step 2: Download Installer Package
|
||||
|
||||
Execute the following commands to download Installer 2.1.1 and unpack it.
|
||||
|
||||
```bash
|
||||
curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
|
||||
&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/scripts
|
||||
```
|
||||
|
||||
## Step 3: Get Started with Installation
|
||||
|
||||
You only need to execute a single command as follows. The installer completes everything for you automatically, including installing/updating dependency packages, installing Kubernetes (v1.16.7 by default), setting up the storage service, and so on.
|
||||
|
||||
> Note:
|
||||
>
|
||||
> - Generally speaking, do not modify any configuration.
|
||||
> - KubeSphere installs `calico` by default. If you would like to use a different network plugin, you are allowed to change the configuration in `conf/common.yaml`. You are also allowed to modify other configurations such as storage class, pluggable components, etc.
|
||||
> - The default storage class is [OpenEBS](https://openebs.io/), a kind of [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) that provisions persistent storage. OpenEBS supports [dynamically provisioning PVs](https://docs.openebs.io/docs/next/uglocalpv.html#Provision-OpenEBS-Local-PV-based-on-hostpath) and is installed automatically for testing purposes.
> - Please refer to [storage configurations](../storage-configuration) for the supported storage classes.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts, as shown in the sketch below.
|
||||
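For example, if your node IPs collide with the defaults, the two subnets might be changed like this (a hypothetical `conf/common.yaml` excerpt; the replacement ranges are illustrative):

```yaml
# conf/common.yaml — hypothetical excerpt; pick ranges that do not overlap your node IPs
kube_service_addresses: 10.96.0.0/18   # subnet for Cluster IPs (default: 10.233.0.0/18)
kube_pods_subnet: 10.96.64.0/18        # subnet for Pod IPs (default: 10.233.64.0/18)
```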
|
||||
**1.** Execute the following command:
|
||||
|
||||
```bash
|
||||
./install.sh
|
||||
```
|
||||
|
||||
**2.** Enter `1` to select `All-in-one` mode and type `yes` if your machine satisfies the requirements to start:
|
||||
|
||||
```bash
|
||||
################################################
|
||||
KubeSphere Installer Menu
|
||||
################################################
|
||||
* 1) All-in-one
|
||||
* 2) Multi-node
|
||||
* 3) Quit
|
||||
################################################
|
||||
https://kubesphere.io/ 2020-02-24
|
||||
################################################
|
||||
Please input an option: 1
|
||||
```
|
||||
|
||||
**3.** Verify if KubeSphere is installed successfully or not:
|
||||
|
||||
**(1).** If "Successful" is returned after the process completes, the installation succeeded. The console service is exposed through NodePort 30880 by default. You may need to bind an EIP and configure port forwarding in your environment so that outside users can access the console; make sure the related firewall does not block the port.
|
||||
|
||||
```bash
|
||||
successful!
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
|
||||
Console: http://192.168.0.8:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
|
||||
NOTE: Please modify the default password after login.
|
||||
#####################################################
|
||||
```
|
||||
|
||||
> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
|
||||
|
||||
**(2).** You will be able to use the default account and password to log in to the console and take a tour of KubeSphere.
|
||||
|
||||
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.</font>
|
||||
|
||||

|
||||
|
||||
## Enable Pluggable Components
|
||||
|
||||
The guide above only performs a minimal installation by default. You can execute the following command to open the ConfigMap and enable pluggable components; a sketch of the relevant keys follows the command. Make sure your cluster has enough CPU and memory in advance, see [Enable Pluggable Components](../pluggable-components).
|
||||
|
||||
```bash
|
||||
kubectl edit cm -n kubesphere-system ks-installer
|
||||
```
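Inside the editor, flip the flag of the component you want to `true` — a sketch, assuming the ConfigMap carries the same keys as the `conf/common.yaml` excerpts shown in this guide:

```yaml
# Assumed keys, mirroring conf/common.yaml — verify against your actual ConfigMap
devops_enabled: true          # enable the built-in DevOps system
metrics_server_enabled: true  # enable metrics-server for HPA
```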
|
||||
|
||||
## FAQ
|
||||
|
||||
The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also please read the [FAQ of installation](../../faq/faq-install).
|
||||
|
||||
If you have any further questions please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
|
||||
|
|
@ -1,76 +0,0 @@
|
|||
---
|
||||
title: "Install All Optional Components"
|
||||
keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix'
|
||||
description: 'Install KubeSphere with all optional components enabled on Linux machine'
|
||||
|
||||
|
||||
weight: 2260
|
||||
---
|
||||
|
||||
Since v2.1.0, the installer only installs required components by default (i.e. a minimal installation). Other components are designed to be pluggable, which means you can enable any of them before or after installation. If your machines meet the following minimum requirements, we recommend that you **enable all components before installation**. A complete installation gives you the opportunity to discover the container platform comprehensively.
|
||||
|
||||
<font color="red">
|
||||
Minimum Requirements
|
||||
|
||||
- CPU: 8 cores in total of all machines
|
||||
- Memory: 16 GB in total of all machines
|
||||
|
||||
</font>
|
||||
|
||||
> Note:
|
||||
>
|
||||
> - If your machines do not meet the minimum requirements for a complete installation, you can still enable any of the components as you wish. Please refer to [Enable Pluggable Components Installation](../pluggable-components).
|
||||
> - It works for [All-in-One](../all-in-one) and [Multi-Node](../multi-node).
|
||||
|
||||
This tutorial will walk you through how to enable all components of KubeSphere.
|
||||
|
||||
## Download Installer Package
|
||||
|
||||
If you do not have the package yet, please run the following commands to download Installer 2.1.1 and unpack it, then enter the `conf` folder.
|
||||
|
||||
```bash
|
||||
$ curl -L https://kubesphere.io/download/stable/v2.1.1 > installer.tar.gz \
|
||||
&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
|
||||
```
|
||||
|
||||
## Enable All Components
|
||||
|
||||
Edit `conf/common.yaml` and apply the following changes, setting the values to `true` (they default to `false`).
|
||||
|
||||
```yaml
|
||||
# LOGGING CONFIGURATION
|
||||
# logging is an optional component when installing KubeSphere, and
|
||||
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
|
||||
# Builtin logging only provides limited functions, so recommend to enable logging.
|
||||
logging_enabled: true # Whether to install logging system
|
||||
elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number
|
||||
elasticsearch_data_replica: 2 # total number of data nodes
|
||||
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
|
||||
log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
|
||||
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
|
||||
kibana_enabled: false # Whether to install built-in Kibana
|
||||
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address, KubeSphere supports integrate with Elasticsearch outside the cluster, which can reduce the resource consumption.
|
||||
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
|
||||
|
||||
#DevOps Configuration
|
||||
devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
|
||||
jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
|
||||
jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
|
||||
jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
|
||||
jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
|
||||
jenkinsJavaOpts_Xmx: 6g
|
||||
jenkinsJavaOpts_MaxRAM: 8g
|
||||
sonarqube_enabled: true # Whether to install built-in SonarQube
|
||||
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address, KubeSphere supports integrate with SonarQube outside the cluster, which can reduce the resource consumption.
|
||||
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
|
||||
|
||||
# Following components are all optional for KubeSphere,
|
||||
# which can be turned on before installation, or enabled later by updating the value to true
|
||||
openpitrix_enabled: true # KubeSphere application store
|
||||
metrics_server_enabled: true # For KubeSphere HPA to use
|
||||
servicemesh_enabled: true # KubeSphere service mesh system(Istio-based)
|
||||
notification_enabled: true # KubeSphere notification system
|
||||
alerting_enabled: true # KubeSphere alerting system
|
||||
```
|
||||
|
||||
Save it, then you can continue the installation process.
|
||||
|
|
@ -0,0 +1,240 @@
|
|||
---
|
||||
title: "Deploy KubeSphere on Azure VM Instance"
|
||||
keywords: "Kubesphere, Installation, HA, high availability, load balancer, Azure"
|
||||
description: "The tutorial is for installing a high-availability cluster on Azure."
|
||||
---
|
||||
|
||||
## Before you begin
|
||||
|
||||
Technically, you can either install, administer, and manage Kubernetes yourself or go for a managed Kubernetes solution. If you are looking for a way to take advantage of Kubernetes with a hands-off approach, a fully managed platform solution is what you're looking for; see [Deploy KubeSphere on AKS](../../../installing-on-kubernetes/hosted-kubernetes/install-ks-on-aks) for more details. But if you want more control over your configuration and want to set up a highly available cluster on Azure, this guide will help you set up a production-ready Kubernetes and KubeSphere.
|
||||
|
||||
## Introduction
|
||||
|
||||
In this tutorial, we will use two key features of Azure virtual machines (VMs):
|
||||
|
||||
- Virtual Machine Scale Sets: Azure VMSS lets you create and manage a group of load-balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule (the Kubernetes Autoscaler is available but not covered in this tutorial; see [autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/azure) for more details), which perfectly fits the worker nodes.
|
||||
- Availability sets: An availability set is a logical grouping of VMs within a datacenter that are automatically distributed across fault domains. This approach limits the impact of potential physical hardware failures, network outages, or power interruptions. All the master and etcd VMs will be placed in an availability set to meet our high-availability goals.
|
||||
|
||||
Besides those VMs, other resources like Load Balancer, Virtual Network and Network Security Group will be involved.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- You need an [Azure](https://portal.azure.com) account to create all the resources.
|
||||
- Basic knowledge of [Azure Resource Manager](https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/)(ARM) templates, which are files that define the Azure infrastructure and configuration.
|
||||
- Considering data persistence, for a production environment, we recommend you to prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
|
||||
|
||||
## Architecture
|
||||
|
||||
Six machines of "Ubuntu 18.04" will be deployed in an Azure resource group. Three of them are grouped into an availability set, playing the role of both master and etcd nodes of the Kubernetes control plane. The other three VMs are defined as a VMSS, on which the worker nodes run.
|
||||
|
||||

|
||||
|
||||
Those VMs will be attached to a load balancer. There are two predefined rules in the LB:

- **Inbound NAT**: the SSH port is mapped for each machine, so we can easily manage the VMs.
- **Load Balancing**: the HTTP and HTTPS ports are mapped to the node pools by default; other ports can be added on demand.
|
||||
|
||||
| Service | Protocol | Rule | Backend Port | Frontend Port/Ports | Pools |
|
||||
|---|---|---|---|---|---|
|
||||
| ssh | TCP | Inbound NAT | 22 |50200, 50201,50202, 50100~50199| Master, Node |
|
||||
| apiserver | TCP | Load Balancing | 6443 | 6443 | Master |
|
||||
| ks-console | TCP | Load Balancing | 30880 | 30880 | Master |
|
||||
| http | TCP | Load Balancing | 80 | 80 | Node |
|
||||
| https | TCP | Load Balancing | 443 | 443 | Node |
|
||||
|
||||
## Deploy HA Cluster Infrastructure
|
||||
|
||||
You don't have to create those resources one by one with Wizards. Following the best practice of **infrastructure as code** on Azure, all resources in the architecture are already defined as ARM templates.
|
||||
|
||||
### Start to deploy with one click
|
||||
|
||||
Click the *Deploy* button below; you will be redirected to Azure and asked to fill in the deployment parameters.
|
||||
|
||||
[](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FRolandMa1986%2Fazurek8s%2Fmaster%2Fazuredeploy.json) [](http://armviz.io/#/?load=https%3A%2F%2Fraw.githubusercontent.com%2FRolandMa1986%2Fazurek8s%2Fmaster%2Fazuredeploy.json)
|
||||
|
||||
### Change template parameters
|
||||
|
||||
Only a few parameters need to be changed.
|
||||
|
||||
- Choose the *Create new* link under Resource group and fill in a name such as "KubeSphereVMRG".
- Fill in the admin's username.
- Copy your public SSH key into the Admin Key field, or create a new key with *ssh-keygen* (see the sketch below).
|
||||
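If you do not have a key pair yet, a minimal sketch for generating one (any key type Azure accepts will do; the file path is the conventional default):

```bash
# Generate an RSA key pair; paste the contents of id_rsa.pub into the Admin Key field
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub
```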
|
||||
> Password authentication is restricted in the Linux configuration; only SSH key authentication is accepted.
|
||||
|
||||
Click the *Purchase* button at the bottom when you are ready to continue.
|
||||
|
||||
### Review Azure Resources in the Portal
|
||||
|
||||
Once the deployment succeeds, you can find all the resources you need in the KubeSphereVMRG resource group. Take your time and check them one by one if you are new to Azure. Then find the public IP of the LB and the private IP addresses of the VMs; you will need them in the next step.
|
||||
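If you prefer the command line, the same addresses can be looked up with the Azure CLI — a sketch, assuming you are logged in with `az` and use the resource group name above; the VMSS name is a placeholder:

```bash
# Public IP of the load balancer
az network public-ip list --resource-group KubeSphereVMRG --output table

# Private IPs of the master VMs
az vm list-ip-addresses --resource-group KubeSphereVMRG --output table

# Private IPs of the VMSS worker instances (replace <your-vmss-name>)
az vmss nic list --resource-group KubeSphereVMRG --vmss-name <your-vmss-name> \
  --query "[].ipConfigurations[].privateIpAddress" --output tsv
```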
|
||||

|
||||
|
||||
## Deploy Kubernetes and KubeSphere
|
||||
|
||||
You can execute the following commands on your laptop or SSH to one of the master VMs. Files will be downloaded locally and distributed to each VM during the installation, so the installation will be much faster when you run **kk** inside the intranet rather than over the Internet.
|
||||
|
||||
```bash
|
||||
# copy your private ssh to master-0
|
||||
scp -P 50200 ~/.ssh/id_rsa kubesphere@40.81.5.xx:/home/kubesphere/.ssh/
|
||||
|
||||
# ssh to the master-0
|
||||
ssh -i ~/.ssh/id_rsa -p50200 kubesphere@40.81.5.xx
|
||||
```
|
||||
|
||||
### Download KubeKey
|
||||
|
||||
[KubeKey](https://github.com/kubesphere/kubekey) is the next-generation installer for installing Kubernetes and KubeSphere v3.0.0 quickly, flexibly, and easily.
|
||||
|
||||
1. First, download it and generate a configuration file to customize the installation as follows.
|
||||
|
||||
|
||||
{{< tabs >}}
|
||||
|
||||
{{< tab "For users with poor network to GitHub" >}}
|
||||
|
||||
For users in China, you can download the installer using this link.
|
||||
|
||||
```bash
|
||||
wget https://kubesphere.io/kubekey/releases/v1.0.0
|
||||
```
|
||||
{{</ tab >}}
|
||||
|
||||
{{< tab "For users with good network to GitHub" >}}
|
||||
|
||||
For users with good network to GitHub, you can download it from [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly.
|
||||
|
||||
```bash
|
||||
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
|
||||
```
|
||||
{{</ tab >}}
|
||||
|
||||
{{</ tabs >}}
|
||||
|
||||
Unpack it (adjust the file name to match the one you downloaded):
|
||||
|
||||
```bash
|
||||
tar -zxvf v1.0.0
|
||||
```
|
||||
|
||||
Grant the execution right to `kk`:
|
||||
|
||||
```bash
|
||||
chmod +x kk
|
||||
```
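Optionally, run a quick sanity check on the binary (assuming the `version` subcommand is available in this KubeKey release):

```bash
./kk version
```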
|
||||
|
||||
2. Then create an example configuration file with default configurations. Here we use Kubernetes v1.17.9 as an example.
|
||||
|
||||
```bash
|
||||
./kk create config --with-kubesphere v3.0.0 --with-kubernetes v1.17.9
|
||||
```
|
||||
> The following Kubernetes versions have been fully tested with KubeSphere:
|
||||
> - v1.15: v1.15.12
|
||||
> - v1.16: v1.16.13
|
||||
> - v1.17: v1.17.9 (default)
|
||||
> - v1.18: v1.18.6 (see the example below for pinning a version)
|
||||
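For instance, to pin one of the other tested versions instead of the default, only the version string changes (same flags as above):

```bash
# Hypothetical example: generate a config for Kubernetes v1.18.6
./kk create config --with-kubesphere v3.0.0 --with-kubernetes v1.18.6
```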
|
||||
### config-sample.yaml Example
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
hosts:
|
||||
- {name: master-0, address: 40.81.5.xx, port: 50200, internalAddress: 10.0.1.4, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
|
||||
- {name: master-1, address: 40.81.5.xx, port: 50201, internalAddress: 10.0.1.5, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
|
||||
- {name: master-2, address: 40.81.5.xx, port: 50202, internalAddress: 10.0.1.6, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
|
||||
- {name: node000000, address: 40.81.5.xx, port: 50100, internalAddress: 10.0.0.4, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
|
||||
- {name: node000001, address: 40.81.5.xx, port: 50101, internalAddress: 10.0.0.5, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
|
||||
- {name: node000002, address: 40.81.5.xx, port: 50102, internalAddress: 10.0.0.6, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
|
||||
roleGroups:
|
||||
etcd:
|
||||
- master-0
|
||||
- master-1
|
||||
- master-2
|
||||
master:
|
||||
- master-0
|
||||
- master-1
|
||||
- master-2
|
||||
worker:
|
||||
- node000000
|
||||
- node000001
|
||||
- node000002
|
||||
```
|
||||
For a complete configuration sample explanation, please see [this file](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md).
|
||||
|
||||
### Configure the Load Balancer
|
||||
|
||||
In addition to the node information, you need to provide the load balancer information in the same yaml file. For the IP address, you can find it in *Azure -> KubeSphereVMRG -> PublicLB*. Assume the IP address and listening port of the **load balancer** are `40.81.5.xx` and `6443` respectively, and you can refer to the following example.
|
||||
|
||||
#### The configuration example in config-sample.yaml
|
||||
|
||||
```yaml
|
||||
## Public LB config example
|
||||
## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
|
||||
controlPlaneEndpoint:
|
||||
domain: lb.kubesphere.local
|
||||
address: "40.81.5.xx"
|
||||
port: "6443"
|
||||
```
|
||||
|
||||
> - Note we are using the public load balancer directly instead of an internal load balancer due to the Azure [Load Balancer limits](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-troubleshoot#cause-4-accessing-the-internal-load-balancer-frontend-from-the-participating-load-balancer-backend-pool-vm).
|
||||
|
||||
### Persistent Storage Plugin Configuration
|
||||
|
||||
See [Storage Configuration](../storage-configuration) for details.
|
||||
|
||||
### Configure the Network Plugin
|
||||
|
||||
Azure Virtual Network doesn't support the IPIP mode used by [calico](https://docs.projectcalico.org/reference/public-cloud/azure#about-calico-on-azure), so let's change the network plugin to flannel.
|
||||
|
||||
```yaml
|
||||
network:
|
||||
plugin: flannel
|
||||
kubePodsCIDR: 10.233.64.0/18
|
||||
kubeServiceCIDR: 10.233.0.0/18
|
||||
```
|
||||
|
||||
### Start to Bootstrap a Cluster
|
||||
|
||||
After you complete the configuration, you can execute the following command to start the installation:
|
||||
|
||||
```bash
|
||||
./kk create cluster -f config-sample.yaml
|
||||
```
|
||||
|
||||
Inspect the logs of installation:
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
When the installation finishes, you can see the following message:
|
||||
|
||||
```bash
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
Console: http://10.128.0.44:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
NOTES:
|
||||
1. After logging into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are ready.
|
||||
2. Please modify the default password after login.
|
||||
#####################################################
|
||||
https://kubesphere.io 2020-xx-xx xx:xx:xx
|
||||
```
|
||||
|
||||
### Access KubeSphere Console
|
||||
|
||||
Congratulations! Now you can access the KubeSphere console at http://10.128.0.44:30880 (replace the IP with yours).
|
||||
|
||||
## Add Additional Ports
|
||||
|
||||
Since we are using a self-hosted Kubernetes solution on Azure, the load balancer is not integrated with [Kubernetes Service](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer). But you can still manually map a NodePort to the PublicLB. Two steps are required (a CLI sketch follows them).
|
||||
|
||||
1. Create a new Load Balance Rule in the Load Balancer.
|
||||

|
||||
2. Create an Inbound Security rule to allow Internet access in the Network Security Group.
|
||||

|
||||
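The same two steps can also be scripted with the Azure CLI — a sketch only: the rule names, the example port 30443, and the NSG name are assumptions, and depending on your LB you may need to pass the frontend IP and backend pool explicitly:

```bash
# 1. Map an example NodePort (30443) through the public load balancer
az network lb rule create --resource-group KubeSphereVMRG --lb-name PublicLB \
  --name nodeport-30443 --protocol Tcp --frontend-port 30443 --backend-port 30443

# 2. Allow the port through the network security group (replace <your-nsg>)
az network nsg rule create --resource-group KubeSphereVMRG --nsg-name <your-nsg> \
  --name allow-30443 --priority 310 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 30443
```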
|
|
@ -1,263 +0,0 @@
|
|||
---
|
||||
title: "KubeSphere 在华为云 ECS 高可用实例"
|
||||
keywords: "Kubesphere 安装, 华为云, ECS, 高可用性, 高可用性, 负载均衡器"
|
||||
description: "本教程用于安装高可用性集群"
|
||||
|
||||
Weight: 2230
|
||||
---
|
||||
|
||||
For a production environment, we need to consider the high availability of the cluster. This tutorial shows you how to quickly deploy a highly available, production-ready cluster on Huawei Cloud ECS instances.
To make the Kubernetes service highly available, the HA of kube-apiserver must be guaranteed; the Huawei Cloud Load Balancer service is recommended for this.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Please follow this [guide](https://github.com/kubesphere/kubekey) and make sure you already know how to install KubeSphere with a multi-node cluster, including the details of the config.yaml file used for installation. This tutorial focuses on configuring a high-availability installation with the Huawei Cloud Load Balancer service.
- Considering data persistence, we do not recommend OpenEBS for a production environment; storage such as NFS or GlusterFS (installed in advance) is recommended instead. For development and testing, this tutorial uses the integrated OpenEBS to provision LocalPV as the storage service directly.
- SSH access to all nodes is available.
- Time is synchronized across all nodes.
- Red Hat includes SELinux in its Linux distributions; it is recommended to disable SELinux or switch it to Permissive mode.
|
||||
|
||||
## Create Hosts
|
||||
|
||||
This example creates six cloud servers of Ubuntu 18.04 Server 64-bit, each configured with 4 cores and 8 GB of memory.
|
||||
|
||||
| Host IP | Host Name | Role |
|
||||
| --- | --- | --- |
|
||||
|192.168.1.10|master1|master1, etcd|
|
||||
|192.168.1.11|master2|master2, etcd|
|
||||
|192.168.1.12|master3|master3, etcd|
|
||||
|192.168.1.13|node1|node|
|
||||
|192.168.1.14|node2|node|
|
||||
|192.168.1.15|node3|node|
|
||||
|
||||
> Note: Due to limited machines, etcd is placed on the master nodes. For a production environment, it is recommended to deploy etcd separately to improve stability.
|
||||
|
||||
## Deploy the Huawei Cloud Load Balancer
### Create a VPC
|
||||
|
||||
Go to the Huawei Cloud console, select 'Virtual Private Cloud' in the left list, then choose 'Create VPC' to create a VPC configured as shown below.
|
||||
|
||||

|
||||
|
||||
### Create a Security Group
|
||||
|
||||
Under `Access Control → Security Groups`, create a security group and set the inbound rules as follows:
|
||||
|
||||

|
||||
> Tip: The security group rules of the backend servers must allow the 100.125.0.0/16 CIDR block, otherwise health checks will fail (see the backend server security group documentation). In addition, 192.168.1.0/24 should be fully allowed, since the network between hosts must be open.
|
||||
|
||||
### Create the Instances
|
||||

|
||||
In the network configuration, select the VPC and subnet created in the first step. For the security group, select the one created in the previous step.
|
||||

|
||||
|
||||
### Create the Load Balancer
Select 'Elastic Load Balancer' in the left sidebar, then choose 'Buy Elastic Load Balancer'.
> Note: The health checks below will only show healthy after deployment; for now their status is abnormal.
|
||||
#### Internal LB Configuration
Add a backend listener on port 6443 for all master nodes.
|
||||
|
||||

|
||||
|
||||

|
||||
#### External LB Configuration
If the cluster needs public access, configure a public IP for the external load balancer and add a backend listener on port 80 for all nodes (port 30880 is used for testing; port 80 must also be opened in the security group).
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
The address allocated by the load balancer needs to be set in the config.yaml file later:
|
||||
```yaml
|
||||
controlPlaneEndpoint:
|
||||
domain: lb.kubesphere.local
|
||||
address: "192.168.1.8"
|
||||
port: "6443"
|
||||
```
|
||||
### Get the Installer Executable
|
||||
|
||||
```bash
|
||||
# Download the kk installer to any machine
|
||||
curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk
|
||||
chmod +x kk
|
||||
```
|
||||
|
||||
{{< notice tip >}}
|
||||
|
||||
You can use the advanced installation to control custom parameters or to create a multi-node cluster; specifically, create the cluster by specifying a configuration file.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### Deploy the Kubernetes Cluster and the KubeSphere Console with KubeKey
|
||||
|
||||
```bash
|
||||
# Create the configuration file master-HA.yaml (including KubeSphere) in the current directory
|
||||
./kk create config --with-kubesphere v3.0.0 -f master-HA.yaml
|
||||
---
|
||||
# Install storage plugins at the same time (supported: localVolume, nfsClient, rbd, glusterfs). You can specify multiple plugins separated by commas. Note that the first one you add will be the default storage class.
|
||||
./kk create config --with-storage localVolume --with-kubesphere v3.0.0 -f master-HA.yaml
|
||||
```
|
||||
|
||||
### Adjust the Cluster Configuration
The configuration below enables the full set of components (a way to customize them is provided at the end of this article); each can also be left as the default false.
|
||||
```yaml
|
||||
apiVersion: kubekey.kubesphere.io/v1alpha1
|
||||
kind: Cluster
|
||||
metadata:
|
||||
name: master-HA
|
||||
spec:
|
||||
hosts:
|
||||
- {name: master1, address: 192.168.1.10, internalAddress: 192.168.1.10, password: yourpassword} # Assume that the default port for SSH is 22, otherwise add the port number after the IP address as above
|
||||
- {name: master2, address: 192.168.1.11, internalAddress: 192.168.1.11, password: yourpassword} # Assume that the default port for SSH is 22, otherwise add the port number after the IP address as above
|
||||
- {name: master3, address: 192.168.1.12, internalAddress: 192.168.1.12, password: yourpassword} # Assume that the default port for SSH is 22, otherwise add the port number after the IP address as above
|
||||
- {name: node1, address: 192.168.1.13, internalAddress: 192.168.1.13, password: yourpassword} # Assume that the default port for SSH is 22, otherwise add the port number after the IP address as above
|
||||
- {name: node2, address: 192.168.1.14, internalAddress: 192.168.1.14, password: yourpassword} # Assume that the default port for SSH is 22, otherwise add the port number after the IP address as above
|
||||
- {name: node3, address: 192.168.1.15, internalAddress: 192.168.1.15, password: yourpassword} # Assume that the default port for SSH is 22, otherwise add the port number after the IP address as above
|
||||
roleGroups:
|
||||
etcd:
|
||||
- master[1:3]
|
||||
master:
|
||||
- master[1:3]
|
||||
worker:
|
||||
- node[1:3]
|
||||
controlPlaneEndpoint:
|
||||
domain: lb.kubesphere.local
|
||||
address: "192.168.1.8"
|
||||
port: "6443"
|
||||
kubernetes:
|
||||
version: v1.17.9
|
||||
imageRepo: kubesphere
|
||||
clusterName: cluster.local
|
||||
masqueradeAll: false # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false]
|
||||
maxPods: 110 # maxPods is the number of pods that can run on this Kubelet. [Default: 110]
|
||||
nodeCidrMaskSize: 24 # internal network node size allocation. This is the size allocated to each node on your network. [Default: 24]
|
||||
proxyMode: ipvs # mode specifies which proxy mode to use. [Default: ipvs]
|
||||
network:
|
||||
plugin: calico
|
||||
calico:
|
||||
ipipMode: Always # IPIP Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always]
|
||||
vxlanMode: Never # VXLAN Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never]
|
||||
vethMTU: 1440 # The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. [Default: 1440]
|
||||
kubePodsCIDR: 10.233.64.0/18
|
||||
kubeServiceCIDR: 10.233.0.0/18
|
||||
registry:
|
||||
registryMirrors: ["https://*.mirror.aliyuncs.com"] # # input your registryMirrors
|
||||
insecureRegistries: []
|
||||
privateRegistry: ""
|
||||
storage:
|
||||
defaultStorageClass: localVolume
|
||||
localVolume:
|
||||
storageClassName: local
|
||||
|
||||
---
|
||||
apiVersion: installer.kubesphere.io/v1alpha1
|
||||
kind: ClusterConfiguration
|
||||
metadata:
|
||||
name: ks-installer
|
||||
namespace: kubesphere-system
|
||||
labels:
|
||||
version: v3.0.0
|
||||
spec:
|
||||
local_registry: ""
|
||||
persistence:
|
||||
storageClass: ""
|
||||
authentication:
|
||||
jwtSecret: ""
|
||||
etcd:
|
||||
monitoring: true # Whether to install etcd monitoring dashboard
|
||||
endpointIps: 192.168.1.10,192.168.1.11,192.168.1.12 # etcd cluster endpointIps
|
||||
port: 2379 # etcd port
|
||||
tlsEnable: true
|
||||
common:
|
||||
mysqlVolumeSize: 20Gi # MySQL PVC size
|
||||
minioVolumeSize: 20Gi # Minio PVC size
|
||||
etcdVolumeSize: 20Gi # etcd PVC size
|
||||
openldapVolumeSize: 2Gi # openldap PVC size
|
||||
redisVolumSize: 2Gi # Redis PVC size
|
||||
es: # Storage backend for logging, tracing, events and auditing.
|
||||
elasticsearchMasterReplicas: 1 # total number of master nodes, it's not allowed to use even number
|
||||
elasticsearchDataReplicas: 1 # total number of data nodes
|
||||
elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes
|
||||
elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes
|
||||
logMaxAge: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
|
||||
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
|
||||
# externalElasticsearchUrl:
|
||||
# externalElasticsearchPort:
|
||||
console:
|
||||
enableMultiLogin: false # enable/disable multiple sing on, it allows an account can be used by different users at the same time.
|
||||
port: 30880
|
||||
alerting: # Whether to install KubeSphere alerting system. It enables Users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
|
||||
enabled: true
|
||||
auditing: # Whether to install KubeSphere audit log system. It provides a security-relevant chronological set of records,recording the sequence of activities happened in platform, initiated by different tenants.
|
||||
enabled: true
|
||||
devops: # Whether to install KubeSphere DevOps System. It provides out-of-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image
|
||||
enabled: true
|
||||
jenkinsMemoryLim: 2Gi # Jenkins memory limit
|
||||
jenkinsMemoryReq: 1500Mi # Jenkins memory request
|
||||
jenkinsVolumeSize: 8Gi # Jenkins volume size
|
||||
jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters
|
||||
jenkinsJavaOpts_Xmx: 512m
|
||||
jenkinsJavaOpts_MaxRAM: 2g
|
||||
events: # Whether to install KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
|
||||
enabled: true
|
||||
logging: # Whether to install KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
|
||||
enabled: true
|
||||
logsidecarReplicas: 2
|
||||
metrics_server: # Whether to install metrics-server. IT enables HPA (Horizontal Pod Autoscaler).
|
||||
enabled: true
|
||||
monitoring: #
|
||||
prometheusReplicas: 1 # Prometheus replicas are responsible for monitoring different segments of data source and provide high availability as well.
|
||||
prometheusMemoryRequest: 400Mi # Prometheus request memory
|
||||
prometheusVolumeSize: 20Gi # Prometheus PVC size
|
||||
alertmanagerReplicas: 1 # AlertManager Replicas
|
||||
multicluster:
|
||||
clusterRole: none # host | member | none # You can install a solo cluster, or specify it as the role of host or member cluster
|
||||
networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
|
||||
enabled: true
|
||||
notification: # It supports notification management in multi-tenant Kubernetes clusters. It allows you to set AlertManager as its sender, and receivers include Email, Wechat Work, and Slack.
|
||||
enabled: true
|
||||
openpitrix: # Whether to install KubeSphere Application Store. It provides an application store for Helm-based applications, and offer application lifecycle management
|
||||
enabled: true
|
||||
servicemesh: # Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offer visualization for traffic topology
|
||||
enabled: true
|
||||
```
|
||||
|
||||
### Run the Command to Create the Cluster
|
||||
```bash
|
||||
# Create the cluster with the specified configuration file
|
||||
./kk create cluster --with-kubesphere v3.0.0 -f master-HA.yaml
|
||||
|
||||
# Watch the KubeSphere installation logs until the console address and login account appear
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
```bash
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
|
||||
Console: http://192.168.1.10:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
|
||||
NOTES:
|
||||
1. After logging into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are ready.
|
||||
2. Please modify the default password after login.
|
||||
|
||||
#####################################################
|
||||
https://kubesphere.io 2020-08-28 01:25:54
|
||||
#####################################################
|
||||
```
|
||||
Access the public IP + port after deployment and log in with the default account and password (`admin/P@88w0rd`). This article enables the full set of components; after logging in, click `Platform Management > Cluster Management` to see the list of installed components and the machine status shown below.
|
||||
|
||||
|
||||
## How to Enable Pluggable Components

Click `Cluster Management` - `Custom Resources (CRD)` and enter `ClusterConfiguration` in the filter box, as shown below.
|
||||

|
||||
Click into the `ClusterConfiguration` details, edit `ks-installer`, then save and exit. For a description of each component, see the [documentation](https://github.com/kubesphere/ks-installer/blob/master/deploy/cluster-configuration.yaml).
|
||||

|
||||
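If you prefer the command line over the console, the same `ks-installer` resource can be edited directly with kubectl (equivalent to the steps above, given the ClusterConfiguration CRD defined in the YAML earlier):

```bash
kubectl -n kubesphere-system edit clusterconfiguration ks-installer
```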
|
|
@ -1,224 +0,0 @@
|
|||
---
|
||||
title: "Air-Gapped Installation"
|
||||
keywords: 'kubernetes, kubesphere, air gapped, installation'
|
||||
description: 'How to install KubeSphere on air-gapped Linux machines'
|
||||
|
||||
|
||||
weight: 2240
|
||||
---
|
||||
|
||||
The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.
|
||||
|
||||
> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- If your machine is behind a firewall, you need to open the ports by following the document [Ports Requirements](../port-firewall) for more information.
|
||||
> - Installer will use `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend you add additional disks of at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively; use the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference (a sketch follows this list).
|
||||
- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure supported persistent storage service](../storage-configuration) and prepare [high availability configuration](../master-ha) before installation.
|
||||
- Since the air-gapped machines cannot connect to an apt or yum source, please use clean Linux machines to avoid dependency problems.
|
||||
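A minimal sketch of preparing such a disk, assuming a new empty data disk appears as `/dev/vdb` (device names vary by platform; repeat with a second disk for `/mnt/registry`):

```bash
# Assumption: /dev/vdb is an empty data disk of at least 100G
mkfs.xfs /dev/vdb
mkdir -p /var/lib/docker
mount /dev/vdb /var/lib/docker
echo '/dev/vdb /var/lib/docker xfs defaults 0 0' >> /etc/fstab
```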
|
||||
## Step 1: Prepare Linux Hosts
|
||||
|
||||
The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
|
||||
|
||||
- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
|
||||
- Time synchronization is required across all nodes, otherwise the installation may not succeed;
|
||||
- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`;
|
||||
- If you are using `Ubuntu 18.04`, you need to use the user `root`.
|
||||
- Ensure your disk of each node is at least 100G.
|
||||
- CPU and memory in total of all machines: 2 cores and 4 GB for minimal installation; 8 cores and 16 GB for complete installation.
|
||||
|
||||
|
||||
The following section describes an example of multi-node installation. This example installs on three hosts, with the `master` node serving as the taskbox to execute the installation. The cluster consists of one master and two nodes.
|
||||
|
||||
> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guide.
|
||||
|
||||
| Host IP | Host Name | Role |
|
||||
| --- | --- | --- |
|
||||
|192.168.0.1|master|master, etcd|
|
||||
|192.168.0.2|node1|node|
|
||||
|192.168.0.3|node2|node|
|
||||
|
||||
### Cluster Architecture
|
||||
|
||||
#### Single Master, Single Etcd, Two Nodes
|
||||
|
||||

|
||||
|
||||
## Step 2: Download Installer Package
|
||||
|
||||
Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
|
||||
|
||||
```bash
|
||||
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
|
||||
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
|
||||
```
|
||||
|
||||
## Step 3: Configure Host Template
|
||||
|
||||
> This step is only for multi-node installation, you can skip this step if you choose all-in-one installation.
|
||||
|
||||
Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
|
||||
|
||||
> Note:
|
||||
>
|
||||
> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini` (a hypothetical sketch also follows the parameter list below).
|
||||
> - If the `root` user of that taskbox machine cannot establish SSH connection with the rest of machines, you need to refer to the `non-root` user example at the top of the `conf/hosts.ini`, but it is recommended to switch `root` user when executing `install.sh`.
|
||||
> - master, node1 and node2 are the host names of each node and all host names should be in lowercase.
|
||||
|
||||
### hosts.ini
|
||||
|
||||
```ini
|
||||
[all]
|
||||
master ansible_connection=local ip=192.168.0.1
|
||||
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
|
||||
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
|
||||
|
||||
[local-registry]
|
||||
master
|
||||
|
||||
[kube-master]
|
||||
master
|
||||
|
||||
[kube-node]
|
||||
node1
|
||||
node2
|
||||
|
||||
[etcd]
|
||||
master
|
||||
|
||||
[k8s-cluster:children]
|
||||
kube-node
|
||||
kube-master
|
||||
```
|
||||
|
||||
> Note:
|
||||
>
|
||||
> - You need to replace each node information such as IP, password with real values in the group `[all]`. The master node is the taskbox so you do not need to add password field here.
|
||||
> - Installer will use a node as the local registry for docker images, defaults to "master" in the group `[local-registry]`.
|
||||
> - The "master" node also takes the role of master and etcd, so "master" is filled under the group`[kube-master]` and the group `[etcd]` respectively.
|
||||
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
|
||||
>
|
||||
> Parameters Specification:
|
||||
>
|
||||
> - `ansible_connection`: Connection type to the host, "local" in the example above means local connection.
|
||||
> - `ansible_host`: The name of the host to be connected.
|
||||
> - `ip`: The ip of the host to be connected.
|
||||
> - `ansible_user`: The default ssh user name to use.
|
||||
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
|
||||
> - `ansible_ssh_pass`: The password of the host to be connected using root.
|
||||
|
||||
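For reference, a non-root entry in the `[all]` group might look like the following — a hypothetical sketch assembled from the parameters above; the commented example in `conf/hosts.ini` is authoritative:

```ini
; hypothetical non-root example — adapt users and passwords to your environment
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_user=ubuntu ansible_ssh_pass=PASSWORD ansible_become_pass=PASSWORD
```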
## Step 4: Enable All Components
|
||||
|
||||
> This step is for a complete installation. You can skip it if you choose a minimal installation.
|
||||
|
||||
Edit `conf/common.yaml` and apply the following changes, setting the values to `true` (they default to `false`).
|
||||
|
||||
```yaml
|
||||
# LOGGING CONFIGURATION
|
||||
# logging is an optional component when installing KubeSphere, and
|
||||
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
|
||||
# Builtin logging only provides limited functions, so recommend to enable logging.
|
||||
logging_enabled: true # Whether to install logging system
|
||||
elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number
|
||||
elasticsearch_data_replica: 2 # total number of data nodes
|
||||
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
|
||||
log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
|
||||
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
|
||||
kibana_enabled: false # Whether to install built-in Kibana
|
||||
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address, KubeSphere supports integrate with Elasticsearch outside the cluster, which can reduce the resource consumption.
|
||||
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
|
||||
|
||||
#DevOps Configuration
|
||||
devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
|
||||
jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
|
||||
jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
|
||||
jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
|
||||
jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
|
||||
jenkinsJavaOpts_Xmx: 6g
|
||||
jenkinsJavaOpts_MaxRAM: 8g
|
||||
sonarqube_enabled: true # Whether to install built-in SonarQube
|
||||
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address, KubeSphere supports integrate with SonarQube outside the cluster, which can reduce the resource consumption.
|
||||
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
|
||||
|
||||
# Following components are all optional for KubeSphere,
|
||||
# which can be turned on before installation, or enabled later by updating the value to true
|
||||
openpitrix_enabled: true # KubeSphere application store
|
||||
metrics_server_enabled: true # For KubeSphere HPA to use
|
||||
servicemesh_enabled: true # KubeSphere service mesh system(Istio-based)
|
||||
notification_enabled: true # KubeSphere notification system
|
||||
alerting_enabled: true # KubeSphere alerting system
|
||||
```
|
||||
|
||||
## Step 5: Install KubeSphere to Linux Machines
|
||||
|
||||
> Note:
|
||||
>
|
||||
> - Generally, you can install KubeSphere without any modification; it will perform a minimal installation by default.
|
||||
> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
|
||||
> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environment, please [configure supported persistent storage service](../storage-configuration) before installation.
|
||||
> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not use the two IP range. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
|
||||
|
||||
**1.** Enter `scripts` folder, and execute `install.sh` using `root` user:
|
||||
|
||||
```bash
|
||||
cd ../scripts
|
||||
./install.sh
|
||||
```
|
||||
|
||||
**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume.
|
||||
|
||||
```bash
|
||||
################################################
|
||||
KubeSphere Installer Menu
|
||||
################################################
|
||||
* 1) All-in-one
|
||||
* 2) Multi-node
|
||||
* 3) Quit
|
||||
################################################
|
||||
https://kubesphere.io/ 2020-02-24
|
||||
################################################
|
||||
Please input an option: 2
|
||||
|
||||
```
|
||||
|
||||
**3.** Verify the multi-node installation:
|
||||
|
||||
**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go.
|
||||
|
||||
```bash
|
||||
successful!
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
|
||||
Console: http://192.168.0.1:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
|
||||
NOTE: Please modify the default password after login.
|
||||
#####################################################
|
||||
```
|
||||
|
||||
> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
|
||||
|
||||
**(2).** You will be able to use default account and password `admin / P@88w0rd` to log in the console `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in.
|
||||
|
||||

|
||||
|
||||
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.</font>
|
||||
|
||||

|
||||
|
||||
## Enable Pluggable Components
|
||||
|
||||
If you have already set up a minimal installation, you can still edit the ConfigMap of ks-installer with the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
|
||||
|
||||
```bash
|
||||
kubectl edit cm -n kubesphere-system ks-installer
|
||||
```
|
||||
|
||||
## FAQ
|
||||
|
||||
If you have further questions please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
|
||||
|
|
@ -1,276 +0,0 @@
|
|||
---
|
||||
title: "KubeSphere 在阿里云 ECS 高可用实例"
|
||||
keywords: "Kubesphere 安装, 阿里云, ECS, 高可用性, 高可用性, 负载均衡器"
|
||||
description: "本教程用于安装高可用性集群"
|
||||
|
||||
Weight: 2230
|
||||
---
|
||||
|
||||
For a production environment, we need to consider the high availability of the cluster. This tutorial shows you how to quickly deploy a highly available, production-ready cluster on Alibaba Cloud ECS instances.
To make the Kubernetes service highly available, the HA of kube-apiserver must be guaranteed. Two approaches are recommended:
1. Alibaba Cloud SLB
2. [keepalived + haproxy](https://kubesphere.com.cn/forum/d/1566-kubernetes-keepalived-haproxy), which load balances kube-apiserver to implement a highly available Kubernetes cluster.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Please follow this [guide](https://github.com/kubesphere/kubekey) and make sure you already know how to install KubeSphere with a multi-node cluster, including the details of the config.yaml file used for installation. This tutorial focuses on configuring a high-availability installation with the Alibaba Cloud load balancer service.
- Considering data persistence, we do not recommend OpenEBS for a production environment; storage such as NFS or GlusterFS (installed in advance) is recommended instead. For development and testing, this tutorial uses the integrated OpenEBS to provision LocalPV as the storage service directly.
- SSH access to all nodes is available.
- Time is synchronized across all nodes.
- Red Hat includes SELinux in its Linux distributions; it is recommended to disable SELinux or switch it to Permissive mode.
|
||||
|
||||
## Deployment Architecture
|
||||
|
||||

|
||||
|
||||
## Create Hosts
|
||||
|
||||
This example creates an SLB plus six **CentOS Linux release 7.6.1810 (Core)** virtual machines, each configured with 2 cores, 4 GB of memory, and a 40 GB disk.
|
||||
|
||||
| Host IP | Host Name | Role |
|
||||
| --- | --- | --- |
|
||||
|39.104.82.170|Eip|slb|
|
||||
|172.24.107.72|master1|master1, etcd|
|
||||
|172.24.107.73|master2|master2, etcd|
|
||||
|172.24.107.74|master3|master3, etcd|
|
||||
|172.24.107.75|node1|node|
|
||||
|172.24.107.76|node2|node|
|
||||
|172.24.107.77|node3|node|
|
||||
|
||||
> Note: Due to limited machines, etcd is placed on the master nodes. For a production environment, it is recommended to deploy etcd separately to improve stability.
|
||||
|
||||
## Deploy with Alibaba Cloud SLB
### Create an SLB
|
||||
|
||||
Go to the Alibaba Cloud console, select 'Server Load Balancer' in the left list, then choose 'Instance Management' to enter the page below and select 'Create Server Load Balancer'.
|
||||
|
||||

|
||||
|
||||
### Configure the SLB
|
||||
|
||||
Choose a specification based on your own traffic scale.
|
||||
|
||||

|
||||
|
||||
The address allocated by the SLB needs to be set in config.yaml later:
|
||||
```yaml
|
||||
controlPlaneEndpoint:
|
||||
domain: lb.kubesphere.local
|
||||
address: "39.104.82.170"
|
||||
port: "6443"
|
||||
```
|
||||
### Configure the SLB Backend Instances
|
||||
|
||||
Add the three master hosts to be load balanced to the server group, then configure a listener for TCP port 6443 (api-server) in the order shown below.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
Then, following the same steps, configure a listener for HTTP port 30880 (ks-console), adding all host nodes.
|
||||
|
||||

|
||||
|
||||
- <font color=red>The health check fails for now because the master services have not been deployed yet, so the ports cannot be reached.</font>
- Then submit it for review.
|
||||
|
||||
### Get the Installer Executable
|
||||
|
||||
```bash
|
||||
# Download the kk installer to any machine
|
||||
curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk
|
||||
chmod +x kk
|
||||
```
|
||||
|
||||
{{< notice tip >}}
|
||||
|
||||
You can use the advanced installation to control custom parameters or to create a multi-node cluster; specifically, create the cluster by specifying a configuration file.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
### Deploy the Kubernetes Cluster and the KubeSphere Console with KubeKey
|
||||
|
||||
```bash
|
||||
# Create the configuration file config-sample.yaml (including KubeSphere) in the current directory
|
||||
./kk create config --with-kubesphere v3.0.0 -f config-sample.yaml
|
||||
---
|
||||
# Install storage plugins at the same time (supported: localVolume, nfsClient, rbd, glusterfs). You can specify multiple plugins separated by commas. Note that the first one you add will be the default storage class.
|
||||
./kk create config --with-storage localVolume --with-kubesphere v3.0.0 -f config-sample.yaml
|
||||
```
|
||||
### Adjust the Cluster Configuration
|
||||
|
||||
```yaml
|
||||
#vi ~/config-sample.yaml
|
||||
apiVersion: kubekey.kubesphere.io/v1alpha1
|
||||
kind: Cluster
|
||||
metadata:
|
||||
name: config-sample
|
||||
spec:
|
||||
hosts:
|
||||
- {name: master1, address: 172.24.107.72, internalAddress: 172.24.107.72, user: root, password: QWEqwe123}
|
||||
- {name: master2, address: 172.24.107.73, internalAddress: 172.24.107.73, user: root, password: QWEqwe123}
|
||||
- {name: master3, address: 172.24.107.74, internalAddress: 172.24.107.74, user: root, password: QWEqwe123}
|
||||
- {name: node1, address: 172.24.107.75, internalAddress: 172.24.107.75, user: root, password: QWEqwe123}
|
||||
- {name: node2, address: 172.24.107.76, internalAddress: 172.24.107.76, user: root, password: QWEqwe123}
|
||||
- {name: node3, address: 172.24.107.77, internalAddress: 172.24.107.77, user: root, password: QWEqwe123}
|
||||
|
||||
roleGroups:
|
||||
etcd:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
master:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
worker:
|
||||
- node1
|
||||
- node2
|
||||
- node3
|
||||
controlPlaneEndpoint:
|
||||
domain: lb.kubesphere.local
|
||||
address: "39.104.82.170"
|
||||
port: "6443"
|
||||
kubernetes:
|
||||
version: v1.17.9
|
||||
imageRepo: kubesphere
|
||||
clusterName: cluster.local
|
||||
network:
|
||||
plugin: calico
|
||||
kubePodsCIDR: 10.233.64.0/18
|
||||
kubeServiceCIDR: 10.233.0.0/18
|
||||
registry:
|
||||
registryMirrors: ["https://*.mirror.aliyuncs.com"] # # input your registryMirrors
|
||||
insecureRegistries: []
|
||||
storage:
|
||||
defaultStorageClass: localVolume
|
||||
localVolume:
|
||||
storageClassName: local
|
||||
|
||||
---
|
||||
apiVersion: installer.kubesphere.io/v1alpha1
|
||||
kind: ClusterConfiguration
|
||||
metadata:
|
||||
name: ks-installer
|
||||
namespace: kubesphere-system
|
||||
labels:
|
||||
version: v3.0.0
|
||||
spec:
|
||||
local_registry: ""
|
||||
persistence:
|
||||
storageClass: ""
|
||||
authentication:
|
||||
jwtSecret: ""
|
||||
etcd:
|
||||
monitoring: true
|
||||
endpointIps: 172.24.107.72,172.24.107.73,172.24.107.74
|
||||
port: 2379
|
||||
tlsEnable: true
|
||||
common:
|
||||
es:
|
||||
elasticsearchDataVolumeSize: 20Gi
|
||||
elasticsearchMasterVolumeSize: 4Gi
|
||||
elkPrefix: logstash
|
||||
logMaxAge: 7
|
||||
mysqlVolumeSize: 20Gi
|
||||
minioVolumeSize: 20Gi
|
||||
etcdVolumeSize: 20Gi
|
||||
openldapVolumeSize: 2Gi
|
||||
redisVolumSize: 2Gi
|
||||
console:
|
||||
enableMultiLogin: false # enable/disable multi login
|
||||
port: 30880
|
||||
alerting:
|
||||
enabled: false
|
||||
auditing:
|
||||
enabled: false
|
||||
devops:
|
||||
enabled: false
|
||||
jenkinsMemoryLim: 2Gi
|
||||
jenkinsMemoryReq: 1500Mi
|
||||
jenkinsVolumeSize: 8Gi
|
||||
jenkinsJavaOpts_Xms: 512m
|
||||
jenkinsJavaOpts_Xmx: 512m
|
||||
jenkinsJavaOpts_MaxRAM: 2g
|
||||
events:
|
||||
enabled: false
|
||||
ruler:
|
||||
enabled: true
|
||||
replicas: 2
|
||||
logging:
|
||||
enabled: false
|
||||
logsidecarReplicas: 2
|
||||
metrics_server:
|
||||
enabled: true
|
||||
monitoring:
|
||||
prometheusMemoryRequest: 400Mi
|
||||
prometheusVolumeSize: 20Gi
|
||||
multicluster:
|
||||
clusterRole: none # host | member | none
|
||||
networkpolicy:
|
||||
enabled: false
|
||||
notification:
|
||||
enabled: false
|
||||
openpitrix:
|
||||
enabled: false
|
||||
servicemesh:
|
||||
enabled: false
|
||||
```
|
||||
|
||||
### Run the Command to Create the Cluster
|
||||
```bash
|
||||
# Create the cluster with the specified configuration file
|
||||
./kk create cluster -f config-sample.yaml
|
||||
|
||||
# Watch the KubeSphere installation logs until the console address and login account appear
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
```bash
|
||||
**************************************************
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
|
||||
Console: http://172.24.107.72:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
|
||||
NOTES:
|
||||
1. After logging into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are ready.
|
||||
2. Please modify the default password after login.
|
||||
|
||||
#####################################################
|
||||
https://kubesphere.io 2020-08-24 23:30:06
|
||||
#####################################################
|
||||
```
|
||||
|
||||
- Access the public IP + port after deployment and log in with the default account and password (`admin/P@88w0rd`). This article uses a minimal installation; after logging in, click `Workbench` to see the list of installed components and the machine status shown below.
|
||||
|
||||

|
||||
|
||||
## How to Enable Pluggable Components
|
||||
|
||||
+ Click `Cluster Management` - `Custom Resources (CRD)` and enter `ClusterConfiguration` in the filter box, as shown below.
|
||||
|
||||

|
||||
|
||||
+ Click into the `ClusterConfiguration` details, edit `ks-installer`, then save and exit. For a description of each component, see the [documentation](https://github.com/kubesphere/ks-installer/blob/master/deploy/cluster-configuration.yaml).
|
||||
|
||||

|
||||
|
||||
## Installation Issues
|
||||
|
||||
> Tip: If you encounter `Failed to add worker to cluster: Failed to exec command...` during the installation, reset the affected node as a workaround:

```bash
|
||||
kubeadm reset
|
||||
```
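Once the reset finishes on the failed node, retry the installation from the machine where kk runs (a sketch; this is the same command used above):

```bash
./kk create cluster -f config-sample.yaml
```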
|
||||
|
|
@ -0,0 +1,310 @@
|
|||
---
|
||||
title: "KubeSphere on QingCloud Instance"
|
||||
keywords: "KubeSphere, Installation, HA, High-availability, LoadBalancer"
|
||||
description: "The tutorial is for installing a high-availability cluster."
|
||||
|
||||
weight: 2229
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
For a production environment, we need to consider the high availability of the cluster. If the key components (e.g. kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
|
||||
|
||||
This tutorial walks you through an example of how to create two [QingCloud Load Balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal load balancer and external load balancer respectively, and of how to implement high availability of master and etcd nodes using the load balancers.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Please make sure that you already know how to install KubeSphere on a multi-node cluster by following the [guide](https://github.com/kubesphere/kubekey). For detailed information about the config yaml file used for installation, see Multi-node Installation. This tutorial focuses on how to configure load balancers.
|
||||
- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or follow the guide of any other cloud provider to create load balancers.
|
||||
- Considering data persistence, for a production environment, we recommend you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
|
||||
|
||||
## Architecture
|
||||
|
||||
This example prepares six machines of **Ubuntu 16.04.6**. We will create two load balancers, and deploy three master and etcd nodes on three of the machines. You can configure these master and etcd nodes in `config-sample.yaml` of KubeKey (note that this is the default file name, which you can change).
|
||||
|
||||

|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The Kubernetes document [Options for Highly Available topology](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/) demonstrates that there are two options for configuring the topology of a highly available (HA) Kubernetes cluster, i.e. stacked etcd topology and external etcd topology. You should carefully consider the advantages and disadvantages of each topology before setting up an HA cluster according to [this document](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/). In this guide, we adopt stacked etcd topology to bootstrap an HA cluster for convenient demonstration.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
## Install HA Cluster
|
||||
|
||||
### Create Load Balancers
|
||||
|
||||
This step demonstrates how to create load balancers on QingCloud platform.
|
||||
|
||||
#### Create an Internal Load Balancer
|
||||
|
||||
1. Log in to the [QingCloud Console](https://console.qingcloud.com/login). In the menu on the left, under **Network & CDN**, select **Load Balancers**. Click **Create** to create a load balancer.
|
||||
|
||||

|
||||
|
||||
2. In the pop-up window, set a name for the load balancer. From the **Network** drop-down list, choose the VxNet where your machines were created (here it is `pn`). Other fields can keep their default values as shown below. Click **Submit** to finish.
|
||||
|
||||

|
||||
|
||||
3. Click the load balancer. In the detailed information page, create a listener that listens on port `6443` with the Listener Protocol set as `TCP`.
|
||||
|
||||

|
||||
|
||||
- Name: Define a name for this Listener
|
||||
- Listener Protocol: Select `TCP` protocol
|
||||
- Port: `6443`
|
||||
- Load mode: `Poll`
|
||||
|
||||
Click **Submit** to continue.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you create the listener, please check the firewall rules of the load balancer. Make sure that the port `6443` has been added to the firewall rules and the external traffic can pass through `6443`. Otherwise, the installation will fail. If you are using QingCloud platform, you can find the information in **Security Groups** under **Security**.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
4. Click **Add Backend**, and choose the VxNet you just selected (in this example, it is `pn`). Click the button **Advanced Search**, choose the three master nodes, and set the port to `6443`, which is the default secure port of api-server.
|
||||
|
||||

|
||||
|
||||
Click **Submit** when you finish.
|
||||
|
||||
5. Click the button **Apply Changes** to activate the configurations. At this point, you can find the three masters have been added as the backend servers of the listener that is behind the internal load balancer.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
The status of all masters might show `Not Available` after you add them as backends. This is normal since the port `6443` of api-server is not active on the master nodes yet. The status will change to `Active` and the port of api-server will be exposed after the installation finishes, which means the internal load balancer you configured works as expected.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||

|
||||
|
||||
Record the Intranet VIP shown under Networks. The IP address will be added later to the config yaml file.
|
||||
|
||||
#### Create an External Load Balancer
|
||||
|
||||
You need to create an EIP in advance. To create an EIP, go to **Elastic IPs** under **Networks & CDN**.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
Two elastic IPs are needed for this whole tutorial, one for the VPC network and the other for the external load balancer created in this step. You cannot associate the same EIP to the VPC network and the load balancer at the same time.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
6. Similarly, create an external load balancer, but do not select a VxNet for the Network field. Bind the EIP that you created to this load balancer by clicking **Add IPv4**.
|
||||
|
||||

|
||||
|
||||
7. In the load balancer detailed information page, create a listener that listens on port `30880` (NodePort of KubeSphere console) with the Listener Protocol set as `HTTP`.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
After you create the listener, please check the firewall rules of the load balancer. Make sure that the port `30880` has been added to the firewall rules and the external traffic can pass through `30880`. Otherwise, the installation will fail. If you are using QingCloud platform, you can find the information in **Security Groups** under **Security**.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||

|
||||
|
||||
8. Click **Add Backend**. In **Advanced Search**, choose the six machines within the VxNet `pn` on which we are going to install KubeSphere, and set the port to `30880`.
|
||||
|
||||

|
||||
|
||||
Click **Submit** when you finish.
|
||||
|
||||
9. Click **Apply Changes** to activate the configurations. At this point, you can find the six machines have been added as the backend servers of the listener that is behind the external load balancer.
|
||||
|
||||
### Download KubeKey
|
||||
|
||||
[KubeKey](https://github.com/kubesphere/kubekey) is the next-gen installer, which installs Kubernetes and KubeSphere v3.0.0 quickly, flexibly and easily.
|
||||
|
||||
{{< tabs >}}
|
||||
|
||||
{{< tab "For users with poor network to GitHub" >}}
|
||||
|
||||
For users in China, you can download the installer using this link.
|
||||
|
||||
```bash
|
||||
wget https://kubesphere.io/kubekey/releases/v1.0.0
|
||||
```
|
||||
{{</ tab >}}
|
||||
|
||||
{{< tab "For users with good network to GitHub" >}}
|
||||
|
||||
If you have a good network connection to GitHub, you can download KubeKey from the [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly.
|
||||
|
||||
```bash
|
||||
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
|
||||
```
|
||||
{{</ tab >}}
|
||||
|
||||
{{</ tabs >}}
|
||||
|
||||
Extract it:
|
||||
|
||||
```bash
|
||||
tar -zxvf v1.0.0
|
||||
```
|
||||
|
||||
Grant the execution right to `kk`:
|
||||
|
||||
```bash
|
||||
chmod +x kk
|
||||
```
|
||||
|
||||
Then create an example configuration file with default configurations. Here we use Kubernetes v1.17.9 as an example.
|
||||
|
||||
```bash
|
||||
./kk create config --with-kubesphere v3.0.0 --with-kubernetes v1.17.9
|
||||
```
|
||||
|
||||
> Tip: These Kubernetes versions have been fully tested with KubeSphere: *v1.15.12*, *v1.16.13*, *v1.17.9* (default), *v1.18.6*.
|
||||
|
||||
### Cluster Node Planning
|
||||
|
||||
As we adopt the HA topology with stacked control plane nodes, where etcd nodes are colocated with master nodes, the master and etcd nodes will be placed on the same three machines.
|
||||
|
||||
| **Property** | **Description** |
|
||||
| :----------- | :-------------------------------- |
|
||||
| `hosts` | Detailed information of all nodes |
|
||||
| `etcd` | etcd node names |
|
||||
| `master` | Master node names |
|
||||
| `worker` | Worker node names |
|
||||
|
||||
- Put the master node names (master1, master2 and master3) under both `etcd` and `master` as below, which means these three machines are assigned both the master and etcd roles. Note that the number of etcd nodes must be odd. Meanwhile, we do not recommend installing etcd on worker nodes since etcd's memory consumption is very high. Edit the configuration file accordingly; we use **Ubuntu 16.04.6** in this example.
|
||||
|
||||
#### config-sample.yaml Example
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
hosts:
|
||||
- {name: master1, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
|
||||
- {name: master2, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
|
||||
- {name: master3, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
|
||||
- {name: node1, address: 192.168.0.5, internalAddress: 192.168.0.5, user: ubuntu, password: Testing123}
|
||||
- {name: node2, address: 192.168.0.6, internalAddress: 192.168.0.6, user: ubuntu, password: Testing123}
|
||||
- {name: node3, address: 192.168.0.7, internalAddress: 192.168.0.7, user: ubuntu, password: Testing123}
|
||||
roleGroups:
|
||||
etcd:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
master:
|
||||
- master1
|
||||
- master2
|
||||
- master3
|
||||
worker:
|
||||
- node1
|
||||
- node2
|
||||
- node3
|
||||
```
|
||||
|
||||
For a complete configuration sample explanation, please see [this file](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md).
|
||||
|
||||
### Configure the Load Balancer
|
||||
|
||||
In addition to the node information, you need to provide the load balancer information in the same yaml file. You can find the Intranet VIP address in step 5 above. Assume the VIP address and listening port of the **internal load balancer** are `192.168.0.253` and `6443` respectively; you can refer to the following example.
|
||||
|
||||
#### The configuration example in config-sample.yaml
|
||||
|
||||
```yaml
|
||||
## Internal LB config example
|
||||
## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
|
||||
controlPlaneEndpoint:
|
||||
domain: lb.kubesphere.local
|
||||
address: "192.168.0.253"
|
||||
port: "6443"
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
- The address and port should be indented by two spaces in `config-sample.yaml`, and the address should be the VIP of the internal load balancer.
|
||||
- The domain name of the load balancer is `lb.kubesphere.local` by default for internal access. If you need to change the domain name, please uncomment and modify it.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
After that, you can enable any components you need by following **Enable Pluggable Components** and start your HA cluster installation.
|
||||
|
||||
### Kubernetes Cluster Configuration (Optional)
|
||||
|
||||
KubeKey provides some fields and parameters that allow the cluster administrator to customize the Kubernetes installation, including the Kubernetes version, network plugins and image registry. There are some default values provided in `config-example.yaml`. Optionally, you can modify the Kubernetes-related configuration in `config-example.yaml` according to your needs. See [config-example.md](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md) for a detailed explanation.
|
||||
|
||||
### Persistent Storage Plugin Configuration
|
||||
|
||||
As we mentioned in the prerequisites, considering data persistence in a production environment, you need to prepare persistent storage and configure the storage plugin (e.g. CSI) in `config-sample.yaml` to define which storage service you want.
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
For testing or development, you can skip this part. KubeKey will use the integrated OpenEBS to provision LocalPV as the storage service directly.
|
||||
|
||||
{{</ notice >}}
|
||||
|
||||
**Available Storage Plugins & Clients**
|
||||
|
||||
- Ceph RBD & CephFS
|
||||
- GlusterFS
|
||||
- NFS
|
||||
- QingCloud CSI
|
||||
- QingStor CSI
|
||||
- More plugins are WIP, which will be added soon
|
||||
|
||||
For each storage plugin configuration, you can refer to [config-example.md](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md) to get detailed explanation. Make sure you have configured the storage plugin before you get started. KubeKey will create a StorageClass and persistent volumes for related workloads during the installation.
|
||||
|
||||
### Enable Pluggable Components (Optional)
|
||||
|
||||
KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable which means you can enable them either before or after installation. By default, KubeSphere will be started with a minimal installation if you do not enable them.
|
||||
|
||||
You can enable any of them according to your demands. It is highly recommended that you install these pluggable components to discover the full-stack features and capabilities provided by KubeSphere. Please ensure your machines have sufficient CPU and memory before enabling them. See [Enable Pluggable Components](https://github.com/kubesphere/ks-installer#enable-pluggable-components) for details.
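As a rough sketch (assuming your `config-sample.yaml` was generated with `--with-kubesphere`, so it contains the same component fields as the cluster configuration shown earlier), enabling a component before installation is a one-line change:

```yaml
# Excerpt from config-sample.yaml -- set "enabled" to true for any
# pluggable component you want, e.g. the DevOps system:
devops:
  enabled: true
  jenkinsMemoryLim: 2Gi      # other fields keep their defaults
```
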
|
||||
|
||||
### Start to Bootstrap a Cluster
|
||||
|
||||
After you complete the configuration, you can execute the following command to start the installation:
|
||||
|
||||
```bash
|
||||
./kk create cluster -f config-sample.yaml
|
||||
```
|
||||
|
||||
### Verify the Installation
|
||||
|
||||
Inspect the logs of installation. When you see the successful logs as follows, congratulations and enjoy it!
|
||||
|
||||
```bash
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
```bash
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
|
||||
Console: http://192.168.0.3:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
|
||||
NOTES:
|
||||
1. After logging into the console, please check the
|
||||
monitoring status of service components in
|
||||
the "Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are ready.
|
||||
2. Please modify the default password after login.
|
||||
|
||||
#####################################################
|
||||
https://kubesphere.io 2020-08-13 10:50:24
|
||||
#####################################################
|
||||
```
|
||||
|
||||
### Verify the HA Cluster
|
||||
|
||||
Now that you have finished the installation, you can go back to the detailed information page of both the internal and external load balancers to see the status.
|
||||
|
||||

|
||||
|
||||
Both listeners show the status `Active`, meaning the backend nodes are up and running.
|
||||
|
||||

|
||||
|
||||
In the web console of KubeSphere, you can also see that all the nodes are functioning well.
|
||||
|
||||

|
||||
|
||||
To verify that the cluster is highly available, you can turn off an instance on purpose. For example, the above dashboard is accessed through `EIP:30880` (the EIP here is the one bound to the external load balancer). If the cluster is highly available, the dashboard will still work even if you shut down a master node.
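
For instance, a quick hedged check from any node that can reach the cluster (the `<EIP>` placeholder stands for the IP bound to the external load balancer):

```bash
# After powering off one master, the API should still answer through
# the internal load balancer; the stopped master shows NotReady:
kubectl get nodes -o wide

# The console should still be reachable through the external load balancer:
curl -I http://<EIP>:30880
```
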
|
||||
|
|
@ -1,152 +0,0 @@
|
|||
---
|
||||
title: "High Availability Configuration"
|
||||
keywords: "kubesphere, kubernetes, docker,installation, HA, high availability"
|
||||
description: "The guide for installing a high availability of KubeSphere cluster"
|
||||
|
||||
weight: 2230
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
[Multi-node installation](../multi-node) can help you to quickly set up a single-master cluster on multiple machines for development and testing. However, we need to consider the high availability of the cluster for production. Since the key components on the master node, i.e. kube-apiserver, kube-scheduler, and kube-controller-manager, are running on a single master node, Kubernetes and KubeSphere will be unavailable while the master is down. Therefore, we need to set up a high-availability cluster by provisioning load balancers and multiple masters. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). In addition, Keepalived and HAProxy are also an alternative for creating such a high-availability cluster.
|
||||
|
||||
This document walks you through an example of how to create two [QingCloud Load Balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal and external load balancers respectively, and how to configure the high availability of masters and etcd using the load balancers.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Please make sure that you already read [Multi-Node installation](../multi-node). This document only demonstrates how to configure load balancers.
|
||||
- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or follow the guide of any other cloud provider to create load balancers.
|
||||
|
||||
## Architecture
|
||||
|
||||
This example prepares six machines of CentOS 7.5. We will create two load balancers, and deploy three master and etcd nodes on three of the machines. You can configure these master and etcd nodes in `conf/hosts.ini`.
|
||||
|
||||

|
||||
|
||||
## Install HA Cluster
|
||||
|
||||
### Step 1: Create Load Balancers
|
||||
|
||||
This step briefly shows an example of creating a load balancer on QingCloud platform.
|
||||
|
||||
#### Create an Internal Load Balancer
|
||||
|
||||
1.1. Log in [QingCloud Console](https://console.qingcloud.com/login) and select **Network & CDN → Load Balancers**, then click on the create button and fill in the basic information.
|
||||
|
||||
1.2. Choose the VxNet that your machines are created within from the **Network** dropdown list. Here is `kube`. Other settings can be default values as follows. Click **Submit** to complete the creation.
|
||||
|
||||

|
||||
|
||||
1.3. Drill into the detail page of the load balancer, then create a listener that listens to the port `6443` of the `TCP` protocol.
|
||||
|
||||
- Name: Define a name for this Listener
|
||||
- Listener Protocol: Select `TCP` protocol
|
||||
- Port: `6443`
|
||||
- Load mode: `Poll`
|
||||
|
||||
> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that the port `6443` has been added to the firewall rules and the external traffic can pass through `6443`. Otherwise, the installation will fail.
|
||||
|
||||

|
||||
|
||||
1.4. Click **Add Backend**, and choose the VxNet `kube` that we chose earlier. Then click on the button **Advanced Search**, choose the three master nodes under the VxNet, and set the port to `6443`, which is the default secure port of api-server.
|
||||
|
||||
Click **Submit** when you are done.
|
||||
|
||||

|
||||
|
||||
1.5. Click on the button **Apply Changes** to activate the configurations. At this point, you can find the three masters have been added as the backend servers of the listener that is behind the internal load balancer.
|
||||
|
||||
> Please note: The status of all masters might show `Not available` after you add them as backends. This is normal since the port `6443` of api-server is not active on the masters yet. The status will change to `Active` and the port of api-server will be exposed after the installation completes, which means the internal load balancer you configured works as expected.
|
||||
|
||||

|
||||
|
||||
#### Create an External Load Balancer
|
||||
|
||||
You need to create an EIP in advance.
|
||||
|
||||
1.6. Similarly, create an external load balancer without joining any network, but associate the EIP that you created to this load balancer.
|
||||
|
||||
1.7. Enter the load balancer detail page, and create a listener that listens to the port `30880` of the `HTTP` protocol, which is the NodePort of the KubeSphere console.
|
||||
|
||||
> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that the port `30880` has been added to the firewall rules and the external traffic can pass through `30880`. Otherwise, the installation will fail.
|
||||
|
||||

|
||||
|
||||
1.8. Click **Add Backend**, then choose the six machines within the VxNet `kube` on which we are going to install KubeSphere, and set the port to `30880`.
|
||||
|
||||
Click **Submit** when you are done.
|
||||
|
||||
1.9. Click on the button **Apply Changes** to activate the configurations. At this point, you can find the six machines have been added as the backend servers of the listener that is behind the external load balancer.
|
||||
|
||||

|
||||
|
||||
### Step 2: Modify hosts.ini
|
||||
|
||||
Go to the taskbox where you downloaded the installer by following the [Multi-node Installation](../multi-node) and complete the following configurations.
|
||||
|
||||
| **Parameter** | **Description** |
|
||||
|--------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `[all]` | node information. Use the following syntax if you run installation as `root` user: <br> - `<node_name> ansible_connection=<host> ip=<ip_address>` <br> - `<node_name> ansible_host=<ip_address> ip=<ip_address> ansible_ssh_pass=<pwd>` <br> If you log in as a non-root user, use the syntax: <br> - `<node_name> ansible_connection=<host> ip=<ip_address> ansible_user=<user> ansible_become_pass=<pwd>` |
|
||||
| `[kube-master]` | master node names |
|
||||
| `[kube-node]` | worker node names |
|
||||
| `[etcd]` | etcd node names. The number of `etcd` nodes needs to be odd. |
|
||||
| `[k8s-cluster:children]` | group names of `[kube-master]` and `[kube-node]` |
|
||||
|
||||
|
||||
We use **CentOS 7.5** with `root` user to install an HA cluster. Please see the following configuration as an example:
|
||||
|
||||
> Note:
|
||||
> <br>
|
||||
> If the _taskbox_ cannot establish an `ssh` connection with the remaining nodes, try to use the non-root user configuration.
|
||||
|
||||
#### hosts.ini example
|
||||
|
||||
```ini
|
||||
[all]
|
||||
master1 ansible_connection=local ip=192.168.0.1
|
||||
master2 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
|
||||
master3 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
|
||||
node1 ansible_host=192.168.0.4 ip=192.168.0.4 ansible_ssh_pass=PASSWORD
|
||||
node2 ansible_host=192.168.0.5 ip=192.168.0.5 ansible_ssh_pass=PASSWORD
|
||||
node3 ansible_host=192.168.0.6 ip=192.168.0.6 ansible_ssh_pass=PASSWORD
|
||||
|
||||
[kube-master]
|
||||
master1
|
||||
master2
|
||||
master3
|
||||
|
||||
[kube-node]
|
||||
node1
|
||||
node2
|
||||
node3
|
||||
|
||||
[etcd]
|
||||
master1
|
||||
master2
|
||||
master3
|
||||
|
||||
[k8s-cluster:children]
|
||||
kube-node
|
||||
kube-master
|
||||
```
|
||||
|
||||
### Step 3: Configure the Load Balancer Parameters
|
||||
|
||||
Besides configuring the `common.yaml` by following the [Multi-node Installation](../multi-node), you need to modify the load balancer information in the `common.yaml`. Assume the **VIP** address and listening port of the **internal load balancer** are `192.168.0.253` and `6443`, then you can refer to the following example.
|
||||
|
||||
> - Note that address and port should be indented by two spaces in `common.yaml`, and the address should be VIP.
|
||||
> - The domain name of the load balancer is "lb.kubesphere.local" by default for internal access. If you need to change the domain name, please uncomment and modify it.
|
||||
|
||||
#### The configuration sample in common.yaml
|
||||
|
||||
```yaml
|
||||
## Internal LB example config
|
||||
## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
|
||||
loadbalancer_apiserver:
|
||||
address: 192.168.0.253
|
||||
port: 6443
|
||||
```
|
||||
|
||||
Finally, please refer to the [guide](../storage-configuration) to configure the persistent storage service in `common.yaml` and start your HA cluster installation.
|
||||
|
||||
You are then ready to install the high-availability KubeSphere cluster.
|
||||
|
|
@ -1,176 +0,0 @@
|
|||
---
|
||||
title: "Multi-node Installation"
|
||||
keywords: 'kubesphere, kubernetes, docker, kubesphere installer'
|
||||
description: 'The guide for installing KubeSphere on Multi-Node in development or testing environment'
|
||||
|
||||
weight: 2220
|
||||
---
|
||||
|
||||
`Multi-Node` installation enables installing KubeSphere on multiple nodes. Typically, any one node is used as the _taskbox_ to run the installation task. Note that `ssh` communication must be established between the taskbox and the other nodes.
|
||||
|
||||
- <font color=red>The following instructions are for the default installation without enabling any optional components as we have made them pluggable since v2.1.0. If you want to enable any one, please read [Enable Pluggable Components](../pluggable-components).</font>
|
||||
- <font color=red>If your machines in total have >= 8 cores and >= 16G memory, we recommend you to install the full package of KubeSphere by [Enabling Optional Components](../complete-installation)</font>.
|
||||
- <font color=red> The installation time depends on your network bandwidth, your computer configuration, the number of nodes, etc. </font>
|
||||
|
||||
## Prerequisites
|
||||
|
||||
If your machine is behind a firewall, you need to open the ports by following the document [Ports Requirements](../port-firewall) for more information.
|
||||
|
||||
## Step 1: Prepare Linux Hosts
|
||||
|
||||
The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
|
||||
|
||||
- Time synchronization is required across all nodes, otherwise the installation may not succeed (see the example after this list);
|
||||
- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`;
|
||||
- If you are using `Ubuntu 18.04`, you need to use the user `root`;
|
||||
- If the Debian system does not have the sudo command installed, you need to execute `apt update && apt install sudo` command using root before installation.
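
For example, one common way to keep clocks synchronized on Ubuntu/Debian nodes is to run an NTP daemon (a sketch; your environment may already use chrony or systemd-timesyncd instead):

```bash
# Install and start an NTP daemon on every node:
sudo apt update && sudo apt install -y ntp
sudo systemctl enable --now ntp

# Verify that the system clock is synchronized:
timedatectl status
```
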
|
||||
|
||||
### Hardware Recommendation
|
||||
|
||||
- KubeSphere can be installed on any cloud platform.
|
||||
- The installation speed can be accelerated by increasing network bandwidth.
|
||||
- If you choose air-gapped installation, ensure your disk of each node is at least 100G.
|
||||
|
||||
| System | Minimum Requirements (Each node) |
|
||||
| --- | --- |
|
||||
| CentOS 7.4 ~ 7.7 (64 bit) | CPU: 2 Cores, Memory: 4 GB, Disk Space: 40 GB |
|
||||
| Ubuntu 16.04/18.04 LTS (64 bit) | CPU: 2 Cores, Memory: 4 GB, Disk Space: 40 GB |
|
||||
| Red Hat Enterprise Linux Server 7.4 (64 bit) | CPU: 2 Cores, Memory: 4 GB, Disk Space: 40 GB |
|
||||
| Debian Stretch 9.5 (64 bit) | CPU: 2 Cores, Memory: 4 GB, Disk Space: 40 GB |
|
||||
|
||||
The following section describes an example of multi-node installation. This example shows a three-host installation, with the `master` node serving as the taskbox to execute the installation. The cluster consists of one Master and two Nodes.
|
||||
|
||||
> Note: KubeSphere supports the high-availability configuration of the master and etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
|
||||
|
||||
| Host IP | Host Name | Role |
|
||||
| --- | --- | --- |
|
||||
|192.168.0.1|master|master, etcd|
|
||||
|192.168.0.2|node1|node|
|
||||
|192.168.0.3|node2|node|
|
||||
|
||||
### Cluster Architecture
|
||||
|
||||
#### Single Master, Single Etcd, Two Nodes
|
||||
|
||||

|
||||
|
||||
## Step 2: Download Installer Package
|
||||
|
||||
**1.** Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
|
||||
|
||||
```bash
|
||||
curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
|
||||
&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
|
||||
```
|
||||
|
||||
**2.** Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
|
||||
|
||||
> Note:
|
||||
>
|
||||
> - If you use non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
|
||||
> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, you need to refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
|
||||
> - master, node1 and node2 are the host names of each node and all host names should be in lowercase.
|
||||
|
||||
### hosts.ini
|
||||
|
||||
```ini
|
||||
[all]
|
||||
master ansible_connection=local ip=192.168.0.1
|
||||
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
|
||||
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
|
||||
|
||||
[kube-master]
|
||||
master
|
||||
|
||||
[kube-node]
|
||||
node1
|
||||
node2
|
||||
|
||||
[etcd]
|
||||
master
|
||||
|
||||
[k8s-cluster:children]
|
||||
kube-node
|
||||
kube-master
|
||||
```
|
||||
|
||||
> Note:
|
||||
>
|
||||
> - You need to replace each node's information, such as the IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
|
||||
> - The "master" node also takes the role of master and etcd, so "master" is filled under the group`[kube-master]` and the group `[etcd]` respectively.
|
||||
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
|
||||
>
|
||||
> Parameters Specification:
|
||||
>
|
||||
> - `ansible_connection`: Connection type to the host, "local" in the example above means local connection.
|
||||
> - `ansible_host`: The name of the host to be connected.
|
||||
> - `ip`: The ip of the host to be connected.
|
||||
> - `ansible_user`: The default ssh user name to use.
|
||||
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
|
||||
> - `ansible_ssh_pass`: The password of the host to be connected using root.
|
||||
|
||||
## Step 3: Install KubeSphere to Linux Machines
|
||||
|
||||
> Note:
|
||||
>
|
||||
> - Generally, you can install KubeSphere without any modification, it will start with minimal installation by default.
|
||||
> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
|
||||
> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environment, please [configure supported persistent storage service](../storage-configuration) before installation.
|
||||
> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts (see the sketch below).
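
The two fields in question look roughly like this in `conf/common.yaml` (the values are the defaults quoted in the note above):

```yaml
# conf/common.yaml -- change these only if they overlap with your node IPs
kube_service_addresses: 10.233.0.0/18   # subnet for Cluster IPs
kube_pods_subnet: 10.233.64.0/18        # subnet for Pod IPs
```
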
|
||||
|
||||
**1.** Enter the `scripts` folder, and execute `install.sh` as the `root` user:
|
||||
|
||||
```bash
|
||||
cd ../scripts
|
||||
./install.sh
|
||||
```
|
||||
|
||||
**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume.
|
||||
|
||||
```bash
|
||||
################################################
|
||||
KubeSphere Installer Menu
|
||||
################################################
|
||||
* 1) All-in-one
|
||||
* 2) Multi-node
|
||||
* 3) Quit
|
||||
################################################
|
||||
https://kubesphere.io/ 2020-02-24
|
||||
################################################
|
||||
Please input an option: 2
|
||||
|
||||
```
|
||||
|
||||
**3.** Verify the multi-node installation:
|
||||
|
||||
**(1).** If "Successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.
|
||||
|
||||
```bash
|
||||
Successful!
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
|
||||
Console: http://192.168.0.1:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
|
||||
NOTE: Please modify the default password after login.
|
||||
#####################################################
|
||||
```
|
||||
|
||||
> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
|
||||
|
||||
**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in.
|
||||
|
||||

|
||||
|
||||
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components in the "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.</font>
|
||||
|
||||

|
||||
|
||||
## FAQ
|
||||
|
||||
The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud, and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also, please read the [installation FAQ](../../faq/faq-install).
|
||||
|
||||
If you have any further questions please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
|
||||
|
|
@ -1,157 +0,0 @@
|
|||
---
|
||||
title: "StorageClass Configuration"
|
||||
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
|
||||
description: 'Instructions for Setting up StorageClass for KubeSphere'
|
||||
|
||||
weight: 2250
|
||||
---
|
||||
|
||||
Currently, the installer supports the following [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/), providing persistent storage services for KubeSphere (more storage classes will be supported soon).
|
||||
|
||||
- NFS
|
||||
- Ceph RBD
|
||||
- GlusterFS
|
||||
- QingCloud Block Storage
|
||||
- QingStor NeonSAN
|
||||
- Local Volume (for development and test only)
|
||||
|
||||
The versions of storage systems and corresponding CSI plugins in the table listed below have been well tested.
|
||||
|
||||
| **Name** | **Version** | **Reference** |
|
||||
| ----------- | --- |---|
|
||||
| Ceph RBD Server | v0.94.10 | For development and testing, refer to [Install Ceph Storage Server](/zh-CN/appendix/ceph-ks-install/) for details. Please refer to the [Ceph Documentation](http://docs.ceph.com/docs/master/) for production. |
|
||||
| Ceph RBD Client | v12.2.5 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [Ceph RBD](../storage-configuration/#ceph-rbd). |
|
||||
| GlusterFS Server | v3.7.6 | For development and testing, refer to [Deploying GlusterFS Storage Server](/zh-CN/appendix/glusterfs-ks-install/) for details. Please refer to the [Gluster Documentation](https://www.gluster.org/install/) or the [install guide](http://gluster.readthedocs.io/en/latest/Install-Guide/Install/) for production. Note you need to install [Heketi Manager (V3.0.0)](https://github.com/heketi/heketi/tree/master/docs/admin). |
|
||||
| GlusterFS Client | v3.12.10 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [GlusterFS](../storage-configuration/#glusterfs). |
|
||||
| NFS Client | v3.1.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared an NFS storage server. Please see [NFS Client](../storage-configuration/#nfs). |
|
||||
| QingCloud-CSI | v0.2.0.1 | You need to configure the corresponding parameters in `common.yaml` before installing KubeSphere. Please refer to [QingCloud CSI](../storage-configuration/#qingcloud-csi) for details. |
|
||||
| NeonSAN-CSI | v0.3.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared a QingStor NeonSAN storage server. Please see [NeonSAN-CSI](../storage-configuration/#neonsan-csi). |
|
||||
|
||||
> Note: You are only allowed to set ONE default storage class in the cluster. To specify a default storage class, make sure there is no default storage class already set in the cluster.
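
To check for and clear an existing default before installation, the standard Kubernetes annotation can be used (a sketch; replace `<name>` with your storage class):

```bash
# The current default storage class is marked "(default)":
kubectl get sc

# Remove the default flag from an existing storage class:
kubectl patch storageclass <name> \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
```
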
|
||||
|
||||
## Storage Configuration
|
||||
|
||||
After preparing the storage server, you need to refer to the parameters description in the following table. Then modify the corresponding configurations in `conf/common.yaml` accordingly.
|
||||
|
||||
The following describes the storage configuration in `common.yaml`.
|
||||
|
||||
> Note: Local Volume is configured as the default storage class in `common.yaml` by default. If you are going to set other storage class as the default, disable the Local Volume and modify the configuration for other storage class.
|
||||
|
||||
### Local Volume (For developing or testing only)
|
||||
|
||||
A [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) represents a mounted local storage device such as a disk, partition or directory. Local volumes can only be used as statically created PersistentVolumes. We recommend Local volumes for testing or development only, since they make it quick and easy to install KubeSphere without the struggle of setting up a persistent storage server. Refer to the following table for the definition in `conf/common.yaml`.
|
||||
|
||||
| **Local volume** | **Description** |
|
||||
| --- | --- |
|
||||
| local\_volume\_provisioner\_enabled | Whether to use Local as the persistent storage, defaults to true |
|
||||
| local\_volume\_provisioner\_storage\_class | Storage class name, default value: local |
|
||||
| local\_volume\_is\_default\_class | Whether to set Local as the default storage class, defaults to true.|
|
||||
|
||||
### NFS
|
||||
|
||||
An NFS volume allows an existing NFS (Network File System) share to be mounted into your Pod. NFS can be configured in `conf/common.yaml` (see the sketch after the table below). Note that you need to prepare an NFS server in advance.
|
||||
|
||||
| **NFS** | **Description** |
|
||||
| --- | --- |
|
||||
| nfs\_client\_enable | Whether to use NFS as the persistent storage, defaults to false |
|
||||
| nfs\_client\_is\_default\_class | Whether to set NFS as default storage class, defaults to false. |
|
||||
| nfs\_server | The NFS server address, either IP or Hostname |
|
||||
| nfs\_path | NFS shared directory, which is the file directory shared on the server, see [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/volumes/#nfs) |
|
||||
|nfs\_vers3\_enabled | Specifies which version of the NFS protocol to use; defaults to false, which means v4. True means v3 |
|
||||
|nfs_archiveOnDelete | Whether to archive the PVC on deletion. Data is automatically removed from `oldPath` when this is set to false |
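
Putting the table together, a minimal sketch of an NFS section in `conf/common.yaml` (the server address and export path are illustrative placeholders):

```yaml
nfs_client_enable: true
nfs_client_is_default_class: true
nfs_server: 192.168.0.100     # IP or hostname of your NFS server
nfs_path: /mnt/kube_nfs       # exported directory on that server
```
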
|
||||
|
||||
### Ceph RBD
|
||||
|
||||
The open source [Ceph RBD](https://ceph.com/) distributed storage system can be configured to use in `conf/common.yaml`. You need to prepare Ceph storage server in advance. Please refer to [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) for more details.
|
||||
|
||||
| **Ceph\_RBD** | **Description** |
|
||||
| --- | --- |
|
||||
| ceph\_rbd\_enabled | Whether to use Ceph RBD as the persistent storage, defaults to false |
|
||||
| ceph\_rbd\_storage\_class | Storage class name |
|
||||
| ceph\_rbd\_is\_default\_class | Whether to set Ceph RBD as default storage class, defaults to false |
|
||||
| ceph\_rbd\_monitors | Ceph monitors, comma delimited. This parameter is required, which depends on Ceph RBD server parameters |
|
||||
| ceph\_rbd\_admin\_id | Ceph client ID that is capable of creating images in the pool. Defaults to “admin” |
|
||||
| ceph\_rbd\_admin\_secret | Admin_id's secret, secret name for "adminId". This parameter is required. The provided secret must have type “kubernetes.io/rbd” |
|
||||
| ceph\_rbd\_pool | Ceph RBD pool. Default is “rbd” |
|
||||
| ceph\_rbd\_user\_id | Ceph client ID that is used to map the RBD image. Default is the same as adminId |
|
||||
| ceph\_rbd\_user\_secret | Secret for userId; this secret must be created in the namespace that uses the RBD image |
|
||||
| ceph\_rbd\_fsType | fsType that is supported by Kubernetes. Default: "ext4"|
|
||||
| ceph\_rbd\_imageFormat | Ceph RBD image format, “1” or “2”. Default is “1” |
|
||||
|ceph\_rbd\_imageFeatures| This parameter is optional and should only be used if you set imageFormat to “2”. Currently supported features are layering only. Default is “”, and no features are turned on|
|
||||
|
||||
> Note:
|
||||
>
|
||||
> The Ceph secrets used in the storage class, such as "ceph_rbd_admin_secret" and "ceph_rbd_user_secret", are retrieved with the following command on the Ceph storage server.
|
||||
|
||||
```bash
|
||||
ceph auth get-key client.admin
|
||||
```
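
For reference, a hedged sketch of the corresponding Ceph RBD section in `conf/common.yaml` (monitor addresses and the secret are illustrative placeholders; parameter names come from the table above):

```yaml
ceph_rbd_enabled: true
ceph_rbd_storage_class: rbd
ceph_rbd_monitors: 192.168.0.10:6789,192.168.0.11:6789   # your Ceph monitors
ceph_rbd_admin_id: admin
ceph_rbd_admin_secret: <key from "ceph auth get-key client.admin">
ceph_rbd_pool: rbd
```
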
|
||||
|
||||
### GlusterFS
|
||||
|
||||
[GlusterFS](https://docs.gluster.org/en/latest/) is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. You need to prepare GlusterFS storage server in advance. Please refer to [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs) for further information.
|
||||
|
||||
| **GlusterFS (requires a GlusterFS cluster managed by Heketi)** | **Description** |
|
||||
| --- | --- |
|
||||
| glusterfs\_provisioner\_enabled | Whether to use GlusterFS as the persistent storage, defaults to false |
|
||||
| glusterfs\_provisioner\_storage\_class | Storage class name |
|
||||
| glusterfs\_is\_default\_class | Whether to set GlusterFS as default storage class, defaults to false |
|
||||
| glusterfs\_provisioner\_restauthenabled | Gluster REST service authentication boolean that enables authentication to the REST server |
|
||||
| glusterfs\_provisioner\_resturl | Gluster REST service/Heketi service URL which provisions Gluster volumes on demand. The general format should be "IP address:Port" and this is a mandatory parameter for the GlusterFS dynamic provisioner |
|
||||
| glusterfs\_provisioner\_clusterid | Optional, for example, 630372ccdc720a92c681fb928f27b53f is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of clusterids |
|
||||
| glusterfs\_provisioner\_restuser | Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool |
|
||||
| glusterfs\_provisioner\_secretName | Optional, identification of Secret instance that contains user password to use when talking to Gluster REST service, Installer will automatically create this secret in kube-system |
|
||||
| glusterfs\_provisioner\_gidMin | The minimum value of GID range for the storage class |
|
||||
| glusterfs\_provisioner\_gidMax |The maximum value of GID range for the storage class |
|
||||
| glusterfs\_provisioner\_volumetype | The volume type and its parameters can be configured with this optional value, for example: ‘Replica volume’: volumetype: replicate:3 |
|
||||
| jwt\_admin\_key | "jwt.admin.key" field is from "/etc/heketi/heketi.json" in Heketi server |
|
||||
|
||||
**Attention:**
|
||||
|
||||
> Please note: `"glusterfs_provisioner_clusterid"` could be returned from glusterfs server by running the following command:
|
||||
|
||||
```bash
|
||||
export HEKETI_CLI_SERVER=http://localhost:8080
|
||||
heketi-cli cluster list
|
||||
```
|
||||
|
||||
### QingCloud Block Storage
|
||||
|
||||
[QingCloud Block Storage](https://docs.qingcloud.com/product/Storage/volume/) is supported in KubeSphere as the persistent storage service. If you would like to experience dynamic provisioning when creating volumes, we recommend it as your persistent storage solution. KubeSphere integrates [QingCloud-CSI](https://github.com/yunify/qingcloud-csi/blob/master/README_zh.md), allowing you to use various block storage services of QingCloud. With simple configuration, you can quickly expand and clone PVCs, view the topology of PVCs, create and delete snapshots, and restore volumes from snapshots.
|
||||
|
||||
The QingCloud-CSI plugin implements the standard CSI. You can easily create and manage different types of volumes in KubeSphere, which are provided by QingCloud. The corresponding PVCs are created with the ReadWriteOnce access mode and mounted to running Pods.
|
||||
|
||||
QingCloud-CSI supports creating the following five types of volumes in QingCloud:
|
||||
|
||||
- High capacity
|
||||
- Standard
|
||||
- SSD Enterprise
|
||||
- Super high performance
|
||||
- High performance
|
||||
|
||||
|**QingCloud-CSI** | **Description**|
|
||||
| --- | ---|
|
||||
| qingcloud\_csi\_enabled|Whether to use QingCloud-CSI as the persistent storage volume, defaults to false |
|
||||
| qingcloud\_csi\_is\_default\_class| Whether to set QingCloud-CSI as default storage class, defaults to false |
|
||||
qingcloud\_access\_key\_id , <br> qingcloud\_secret\_access\_key| Please obtain it from [QingCloud Console](https://console.qingcloud.com/login) |
|
||||
|qingcloud\_zone| Zone should be the same as the zone where the Kubernetes cluster is installed, and the CSI plugin will operate on the storage volumes for this zone. For example: zone can be set to these values, such as sh1a (Shanghai 1-A), sh1b (Shanghai 1-B), pek2 (Beijing 2), pek3a (Beijing 3-A), pek3b (Beijing 3-B), pek3c (Beijing 3-C), gd1 (Guangdong 1), gd2a (Guangdong 2-A), ap1 (Asia Pacific 1), ap2a (Asia Pacific 2-A) |
|
||||
| type | The type of volume in QingCloud platform. In QingCloud platform, 0 represents high performance volume. 3 represents super high performance volume. 1 or 2 represents high capacity volume depending on cluster‘s zone, see [QingCloud Documentation](https://docs.qingcloud.com/product/api/action/volume/create_volumes.html)|
|
||||
| maxSize, minSize | Limit the range of volume size in GiB|
|
||||
| stepSize | Set the increment of volumes size in GiB|
|
||||
| fsType | The file system of the storage volume, which supports ext3, ext4, xfs. The default is ext4|
|
||||
|
||||
### QingStor NeonSAN
|
||||
|
||||
The NeonSAN-CSI plugin supports the enterprise-level distributed storage [QingStor NeonSAN](https://www.qingcloud.com/products/qingstor-neonsan/) as the persistent storage solution. You need prepare the NeonSAN server, then configure the NeonSAN-CSI plugin to connect to its storage server in `conf/common.yaml`. Please refer to [NeonSAN-CSI Reference](https://github.com/wnxn/qingstor-csi/blob/master/docs/reference_zh.md#storageclass-%E5%8F%82%E6%95%B0) for further information.
|
||||
|
||||
| **NeonSAN** | **Description** |
|
||||
| --- | --- |
|
||||
| neonsan\_csi\_enabled | Whether to use NeonSAN as the persistent storage, defaults to false |
|
||||
| neonsan\_csi\_is\_default\_class | Whether to set NeonSAN-CSI as the default storage class, defaults to false|
|
||||
| neonsan\_csi\_protocol | Transport protocol, such as TCP or RDMA; the user must set this option |
|
||||
| neonsan\_server\_address | NeonSAN server address |
|
||||
| neonsan\_cluster\_name| NeonSAN server cluster name|
|
||||
| neonsan\_server\_pool | A comma-separated list of pools that the plugin manages. The user must set this option; the default value is kube |
|
||||
| neonsan\_server\_replicas|NeonSAN image replica count. Default: 1|
|
||||
| neonsan\_server\_stepSize | Set the increment of volume size in GiB. Default: 1 |
|
||||
| neonsan\_server\_fsType|The file system to use for the volume. Default: ext4|
|
||||
|
|
@ -0,0 +1,10 @@
|
|||
---
|
||||
title: "Uninstalling"
|
||||
keywords: 'kubernetes, kubesphere, uninstalling, remove-cluster'
|
||||
description: 'How to uninstall KubeSphere'
|
||||
|
||||
|
||||
weight: 2450
|
||||
---
|
||||
|
||||
Uninstalling will remove KubeSphere and Kubernetes from the machines. This operation is irreversible and no backup is made. Please be cautious with this operation. See [Uninstalling KubeSphere and Kubernetes](../uninstalling-kubesphere-and-kubernetes) for details.
|
||||
|
|
@ -0,0 +1,26 @@
|
|||
---
|
||||
title: "Uninstalling KubeSphere and Kubernetes"
|
||||
keywords: 'kubernetes, kubesphere, uninstalling, remove-cluster'
|
||||
description: 'How to uninstall KubeSphere and kubernetes'
|
||||
|
||||
|
||||
weight: 2451
|
||||
---
|
||||
|
||||
You can delete the cluster with the following commands.
|
||||
|
||||
{{< notice tip >}}
|
||||
Uninstalling will remove KubeSphere and Kubernetes from the machines. This operation is irreversible and no backup is made. Please be cautious with this operation.
|
||||
{{</ notice >}}
|
||||
|
||||
- If you started with the quick start (all-in-one):
|
||||
|
||||
```bash
|
||||
./kk delete cluster
|
||||
```
|
||||
|
||||
- If you started with the advanced mode (created with a configuration file):
|
||||
|
||||
```bash
|
||||
./kk delete cluster [-f config-sample.yaml]
|
||||
```
|
||||
|
|
@ -11,12 +11,29 @@ icon: "/images/docs/docs.svg"
|
|||
|
||||
---
|
||||
|
||||
## Installing KubeSphere and Kubernetes on Linux
|
||||
This chapter gives you an overview of the basic concepts of KubeSphere, its features, advantages, use cases and more.
|
||||
|
||||
In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey can help you quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you easily scale the cluster and install pluggable components on an existing architecture.
|
||||
## [What is KubeSphere](https://kubesphere-v3.netlify.app/docs/introduction/what-is-kubesphere/)
|
||||
|
||||
## Most Popular Pages
|
||||
Develop a basic understanding of KubeSphere and highlighted features of its latest version.
|
||||
|
||||
Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
|
||||
## [Features](https://kubesphere-v3.netlify.app/docs/introduction/features/)
|
||||
|
||||
Get started with KubeSphere by understanding what KubeSphere is capable of and how you can make full use of it.
|
||||
|
||||
## [Architecture](https://kubesphere-v3.netlify.app/docs/introduction/architecture/)
|
||||
|
||||
Explore the structure of KubeSphere to get a clear view of the components both at front end and back end.
|
||||
|
||||
## [Advantages](https://kubesphere-v3.netlify.app/docs/introduction/advantages/)
|
||||
|
||||
Understand the reason why KubeSphere is beneficial to your work.
|
||||
|
||||
## [Use Cases](https://kubesphere-v3.netlify.app/docs/introduction/scenarios/)
|
||||
|
||||
See how KubeSphere can be used in different scenarios, such as multi-cluster deployment, DevOps and service mesh.
|
||||
|
||||
## [Glossary](https://kubesphere-v3.netlify.app/docs/introduction/glossary/)
|
||||
|
||||
Learn terms and phrases that are used in KubeSphere.
|
||||
|
||||
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
|
||||
|
|
|
|||
|
|
@ -1,97 +1,92 @@
|
|||
---
|
||||
title: "Advantages"
|
||||
keywords: "kubesphere, kubernetes, docker, helm, jenkins, istio, prometheus, service mesh, advantages"
|
||||
description: "KubeSphere advantages"
|
||||
keywords: "KubeSphere, Kubernetes, Advantages"
|
||||
description: "KubeSphere Advantages"
|
||||
|
||||
weight: 1400
|
||||
---
|
||||
|
||||
## Vision
|
||||
|
||||
{{< notice note >}}
|
||||
### This is a simple note.
|
||||
{{</ notice >}}
|
||||
Kubernetes has become the de facto standard for deploying containerized applications at scale in private, public and hybrid cloud environments. However, many people can easily get confused when they start to use Kubernetes as it is complicated and has many additional components to manage. Some components need to be installed and deployed by users themselves, such as storage and network services. At present, Kubernetes only provides open-source solutions or projects, which can be difficult to install, maintain and operate to some extent. For users, it is not always easy to quickly get started as they are faced with a steep learning curve.
|
||||
|
||||
{{< notice tip >}}
|
||||
This is a simple tip.
|
||||
{{</ notice >}}
|
||||
KubeSphere is designed to reduce or eliminate many Kubernetes headaches related to building, deployment, management, observability and so on. It provides comprehensive services and automates provisioning, scaling and management of applications so that you can focus on code writing. More specifically, KubeSphere boasts an extensive portfolio of features including multi-cluster management, application lifecycle management, multi-tenant management, CI/CD pipelines, service mesh, and observability (monitoring, logging, alerting, auditing, events and notification).
|
||||
|
||||
{{< notice info >}}
|
||||
This is a simple info.
|
||||
{{</ notice >}}
|
||||
As a comprehensive open-source platform, KubeSphere strives to make the container platform more user-friendly and powerful. With a highly responsive web console, KubeSphere provides a graphic interface for developing, testing and operating, which can be easily accessed in a browser. Users who are accustomed to command-line tools can also quickly get familiar with KubeSphere, as kubectl is integrated into the fully functioning web console. With the responsive UI design, users can create, modify and manage their apps and resources with a minimal learning curve.
|
||||
|
||||
{{< notice warning >}}
|
||||
This is a simple warning.
|
||||
{{</ notice >}}
|
||||
|
||||
{{< tabs >}}
|
||||
|
||||
{{< tab "first" >}}
|
||||
### Why KubeSphere
|
||||
{{</ tab >}}
|
||||
|
||||
{{< tab "second" >}}
|
||||
```
|
||||
console.log('test')
|
||||
```
|
||||
{{</ tab >}}
|
||||
|
||||
{{< tab "third" >}}
|
||||
this is third tab
|
||||
{{</ tab >}}
|
||||
|
||||
{{</ tabs >}}
|
||||
|
||||
KubeSphere is a distributed operating system that provides full stack system services and a pluggable framework for third-party software integration for enterprise-critical containerized workloads running in data center.
|
||||
|
||||
Kubernetes has now become the de facto standard for deploying containerized applications at scale in private, public and hybrid cloud environments. However, many people easily get confused when they start to use Kubernetes as it is complicated and has many additional components to manage, some of which need to be installed and deployed by users themselves, such as storage and network services. At present, Kubernetes only provides open-source solutions or projects, which may be difficult to install, maintain and operate to some extent. For users, the learning costs and the barrier to entry are both high. In a word, it is not easy to get started quickly.
|
||||
|
||||
If you want to deploy your cloud-native applications on the cloud, it is a good practice to adopt KubeSphere to help you run Kubernetes, since KubeSphere already provides the rich services required to run your applications successfully so that you can focus on your core business. More specifically, KubeSphere provides application lifecycle management, infrastructure management, CI/CD pipelines, service mesh, and observability such as monitoring, logging, alerting, events and notification. In other words, Kubernetes is a wonderful open-source platform, and KubeSphere steps further to make the container platform more user-friendly and powerful, not only to ease the learning curve and drive the adoption of Kubernetes, but also to help users deliver cloud-native applications faster and more easily.
|
||||
In addition, KubeSphere offers excellent solutions to storage and network. Apart from the major open-source storage solutions such as Ceph RBD and GlusterFS, users are also provided with [QingCloud Block Storage](https://docs.qingcloud.com/product/storage/volume/) and [QingStor NeonSAN](https://docs.qingcloud.com/product/storage/volume/super_high_performance_shared_volume/), developed by QingCloud for persistent storage. With the integrated QingCloud CSI and NeonSAN CSI plugins, enterprises can enjoy more stable and secure services for their apps and data.
|
||||
|
||||
## Why KubeSphere
|
||||
|
||||
KubeSphere provides high-performance and scalable container service management for enterprise users. It aims to help enterprises accomplish the digital transformation driven by the new generation of Internet technology, and accelerate the speed of iteration and delivery of business to meet the ever-changing business needs of enterprises.
|
||||
KubeSphere provides high-performance and scalable container service management for enterprises. It aims to help them accomplish digital transformation driven by cutting-edge technologies, and accelerate app iteration and business delivery to meet the ever-changing needs of enterprises.
|
||||
|
||||
## Awesome User Experience and Wizard UI
|
||||
Here are the six major advantages that make KubeSphere stand out among its counterparts.
|
||||
|
||||
- KubeSphere provides a user-friendly web console for developing, testing and operating. With the wizard UI, users can greatly reduce the learning and operating costs of Kubernetes.
|
||||
- Users can deploy an enterprise application with one click from a template, and use the application lifecycle management service to deliver their products in the console.
|
||||
### Unified Management of Clusters across Cloud Providers
|
||||
|
||||
## High Reliability and Availability
|
||||
As container usage ramps up, enterprises are faced with increased complexity of cluster management as they deploy clusters across cloud and on-premises environments. To address the urgent need of users for a uniform platform to manage heterogeneous clusters, KubeSphere sees a major feature enhancement with substantial benefits. Users can leverage KubeSphere to manage, monitor, import, operate and retire clusters across regions, clouds and environments.
|
||||
|
||||
- Automatic elastic scaling: Deployments can scale the number of Pods horizontally, and Pods can scale vertically, based on observed metrics such as CPU utilization as user requests change, which keeps applications running without crashing under resource pressure (see the sketch after this list).
|
||||
- Health check service: supports visually setting health check probes for containers to ensure service reliability.
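The elastic scaling described above maps onto the standard Kubernetes HorizontalPodAutoscaler. A minimal sketch, assuming a Deployment named `demo-app` (the name and thresholds are illustrative):

```bash
# Keep demo-app between 2 and 10 replicas, targeting 80% average CPU utilization
kubectl autoscale deployment demo-app --cpu-percent=80 --min=2 --max=10

# Inspect the resulting HorizontalPodAutoscaler
kubectl get hpa demo-app
```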
|
||||
The feature can be enabled both before and after the installation, giving users great flexibility as they make their own decisions to use KubeSphere for their specific issues. In particular, it features:
|
||||
|
||||
## Containerized DevOps Delivery
|
||||
**Unified Management**. Users can import Kubernetes clusters either through direct connection or with an agent. With simple configurations, the process can be done within minutes in the interactive console. Once clusters are imported, users are able to monitor the status and operate on cluster resources in a unified way.
|
||||
|
||||
- Easy-to-use pipeline: CI/CD pipeline management is visualized without manual configuration, and the system ships with many built-in pipeline templates.
|
||||
- Source to Image (S2I): Through S2I, users do not need to write a Dockerfile. The system pulls source code from a code repository, builds the image automatically, deploys the workload into the Kubernetes environment, and pushes the image to the image registry automatically as well.
|
||||
- Binary to Image (B2I): exactly the same as S2I except that the input is a binary artifact instead of source code, which is useful for developers without Docker skills or for dockerizing legacy applications.
|
||||
- End-to-end pipeline configuration: supports end-to-end pipeline configuration, from pulling source code from repositories such as GitHub, SVN and Git, to compiling code, packaging images, scanning images for security, pushing images to a registry, and releasing the application.
|
||||
- Source code quality management: supports static code analysis for applications in DevOps projects.
|
||||
- Logging: logs all steps of the CI/CD pipeline.
|
||||
**High Availability**. This is extremely useful when it comes to disaster recovery. A cluster can run major services with another one serving as the backup. When the major one goes down, services can be quickly taken over by another cluster. The logic is quite similar to the case when clusters are deployed in different regions, as requests can be sent to the closest one for low latency. In short, high availability is achieved across zones and clusters.
|
||||
|
||||
## Out-of-Box Microservice Governance
|
||||
For more information, see Multi-cluster Management.
|
||||
|
||||
- Flexible microservices framework: provides visual microservices governance capabilities based on the Istio framework, dividing Kubernetes services into finer-grained services to support non-intrusive microservices governance.
|
||||
- Comprehensive governance services: offers excellent microservices governance such as grayscale release, circuit breaking, traffic monitoring, traffic control, rate limiting, tracing and intelligent routing.
|
||||
### Powerful Observability
|
||||
|
||||
## Multiple Persistent Storage Support
|
||||
The observability feature of KubeSphere has been greatly improved with key building blocks enhanced, including monitoring, logging, auditing, events, alerting and notification. The highly functional system allows users to observe virtually everything that happens in the platform. It has much to offer for users, with distinct advantages listed below:
|
||||
|
||||
- Supports open-source storage solutions such as GlusterFS, Ceph RBD and NFS.
|
||||
- Provides the NeonSAN CSI plugin to connect to the commercial QingStor NeonSAN service, meeting core business requirements for low latency, strong resilience and high performance.
|
||||
- Provides the QingCloud CSI plugin that connects to commercial QingCloud block storage services.
|
||||
**Customized**. Users are allowed to customize their own monitoring dashboard with multiple display forms available. They can set their own templates based on their needs, add the metric they want to monitor and even choose the display color they prefer. Alerting policies and rules can all be customized as well, including repetition interval, time and threshold.
|
||||
|
||||
## Flexible Network Solution Support
|
||||
**Diversified**. Ops teams are freed from the complicated work of recording massive data as KubeSphere monitors resources from virtually all dimensions. It also features an efficient notification system with diversified channels for users to choose from.
|
||||
|
||||
- Supports open-source network solutions such as Calico and Flannel.
|
||||
- A bare-metal load balancer plugin, [Porter](https://github.com/kubesphere/porter), for Kubernetes clusters installed on physical machines.
|
||||
**Visualized and Interactive**. KubeSphere presents users with a graphic web console, especially for the monitoring of different resources. They are displayed in highly interactive graphs that give users a clear view of what is happening inside a cluster. Resources at different levels can also be sorted based on their usage, which is convenient for users to compare for further data analysis.
|
||||
|
||||
## Multi-tenant and Multi-dimensional Monitoring and Logging
|
||||
**Accurate**. The entire monitoring system functions at second-level precision, allowing users to quickly locate any component failures. In terms of events and auditing, all activities are accurately recorded for future reference.
|
||||
|
||||
- The monitoring system is fully visualized and provides open standard APIs, so enterprises can integrate their existing operating platforms for alerting, monitoring and logging into a unified system for their daily operating work.
|
||||
- Multi-dimensional and second-level precision monitoring metrics.
|
||||
- Provide resource usage ranking by node, workspace and project.
|
||||
- Provide service component monitoring for users to quickly locate component failures.
|
||||
- Provide rich alerting rules based on multi-tenancy and multi-dimensional monitoring metrics. Currently, the system supports two types of alerting: infrastructure alerting for cluster administrators, and workload alerting for tenants.
|
||||
- Provide multi-tenant log management. In the KubeSphere log search system, different tenants can only see their own log information.
|
||||
For more information, see Project Administration and Usage.
|
||||
|
||||
### Automated DevOps
|
||||
|
||||
Automation represents a key part of implementing DevOps. With automatic, streamlined pipelines in place, users are better positioned to distribute apps in terms of continuous delivery and integration.
|
||||
|
||||
**Jenkins-powered**. KubeSphere DevOps system is built with Jenkins as the engine, which is abundant in plugins. On top of that, Jenkins provides an enabling environment for extension development, making it possible for the DevOps team to work smoothly across the whole process (developing, testing, building, deploying, monitoring, logging, notifying, etc.) in a unified platform. The KubeSphere account can also be used for the built-in Jenkins, meeting the demand of enterprises for multi-tenant isolation of CI/CD pipelines and unified authentication.
|
||||
|
||||
**Convenient built-in tools**. Users can easily take advantage of automation tools (e.g. Binary-to-Image and Source-to-Image) even without a thorough understanding of how Docker or Kubernetes works. They only need to submit a source code repository address or upload binary files (e.g. JAR/WAR/Binary). Ultimately, services will be released to Kubernetes automatically without any coding in a Dockerfile.
|
||||
|
||||
For more information, see DevOps Administration.
|
||||
|
||||
### Fine-grained Access Control
|
||||
|
||||
KubeSphere users are allowed to implement fine-grained access control across different levels, including clusters, workspaces and projects. Users with specific roles can operate on different resources if they are authorized to do so.
|
||||
|
||||
**Self-defined**. Apart from system roles, KubeSphere empowers users to define their roles with a spectrum of operations that they can assign to tenants. This meets the need of enterprises for detailed task allocation as they can decide who should be responsible for what while not being affected by irrelevant resources.
|
||||
|
||||
**Secure**. As tenants at different levels are completely isolated from each other, they can share resources while not affecting one another. The network can also be completely isolated to ensure data security.
|
||||
|
||||
For more information, see Role and Member Management in Workspace.
|
||||
|
||||
### Out-of-Box Microservices Governance
|
||||
|
||||
On the back of Istio, KubeSphere features major grayscale strategies. All these features are out of the box, which means consistent user experiences without any code hacking. Traffic control, for example, plays an essential role in microservices governance. In this connection, Ops teams, in particular, are able to implement operational patterns (e.g. circuit breaking) to compensate for poorly behaving services. Here are two major reasons why you use microservices governance, or service mesh in KubeSphere:
|
||||
|
||||
- **Comprehensive**. KubeSphere provides users with a well-diversified portfolio of solutions to traffic management, including canary release, blue-green deployment, traffic mirroring and circuit breaking. In addition, the distributed tracing feature also helps users monitor apps, locate failures, and improve performance.
|
||||
- **Visualized**. With a highly responsive web console, KubeSphere allows users to view how microservices interconnect with each other in a straightforward way.
|
||||
|
||||
KubeSphere aims to make service-to-service calls within the microservices architecture reliable and fast. For more information, see Project Administration and Usage.
|
||||
|
||||
### Vibrant Open Source Community
|
||||
|
||||
As an open-source project, KubeSphere represents more than just a container platform for app deployment and distribution. We believe that a true open-source model focuses more on sharing, discussions and problem solving with everyone involved. Together with partners, ambassadors and contributors, and other community members, we file issues, submit pull requests, participate in meetups, and exchange ideas of innovation.
|
||||
|
||||
At KubeSphere, we have the capabilities and technical know-how to help you share the benefits that the open-source model can offer. More importantly, we have community members from around the world who make everything here possible.
|
||||
|
||||
**Partners**. KubeSphere partners play a critical role in KubeSphere's go-to-market strategy. They can be app developers, technology companies, cloud providers or go-to-market partners, all of whom drive the community ahead in their respective aspects.
|
||||
|
||||
**Ambassadors**. As community representatives, ambassadors promote KubeSphere in a variety of ways (e.g. activities, blogs and user cases) so that more people can join us.
|
||||
|
||||
**Contributors**. KubeSphere contributors help the whole community by contributing to code or documentation. You don't need to be an expert to make a difference, even if it is a minor code fix or language improvement.
|
||||
|
||||
For more information, see [Partner Program](https://kubesphere.io/partner/) and [Community Governance](https://kubesphere.io/contribution/).
|
||||
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
title: "Features and Benefits"
|
||||
keywords: "kubesphere, kubernetes, docker, helm, jenkins, istio, prometheus"
|
||||
description: "The document describes the features and benefits of KubeSphere"
|
||||
title: "Features"
|
||||
keywords: "KubeSphere, Kubernetes, Docker, Jenkins, Istio, Features"
|
||||
description: "KubeSphere Key Features"
|
||||
|
||||
linkTitle: "Features"
|
||||
weight: 1200
|
||||
|
|
@ -9,120 +9,164 @@ weight: 1200
|
|||
|
||||
## Overview
|
||||
|
||||
As an open source container platform, KubeSphere provides enterprises with a robust, secure and feature-rich platform, including the most common functionalities needed for enterprises adopting Kubernetes, such as workload management, Service Mesh (Istio-based), DevOps projects (CI/CD), Source-to-Image and Binary-to-Image, multi-tenant management, multi-dimensional monitoring, log query and collection, alerting and notification, service and network management, application management, infrastructure management, and image registry management. It also supports various open source storage and network solutions, as well as cloud storage services. Meanwhile, KubeSphere provides an easy-to-use web console to ease the learning curve and drive the adoption of Kubernetes.
|
||||
As an open source container platform, KubeSphere provides enterprises with a robust, secure and feature-rich platform, boasting the most common functionalities needed for enterprises adopting Kubernetes, such as multi-cluster deployment and management, network policy configuration, Service Mesh (Istio-based), DevOps projects (CI/CD), security management, Source-to-Image and Binary-to-Image, multi-tenant management, multi-dimensional monitoring, log query and collection, alerting and notification, auditing, application management, and image registry management.
|
||||
|
||||
It also supports various open source storage and network solutions, as well as cloud storage services. For example, KubeSphere presents users with a powerful cloud-native tool [Porter](https://porterlb.io/), a CNCF-certified load balancer developed for bare metal Kubernetes clusters.
|
||||
|
||||
With an easy-to-use web console in place, KubeSphere eases the learning curve for users and drives the adoption of Kubernetes.
|
||||
|
||||

|
||||
|
||||
The following modules elaborate the key features and benefits provided by KubeSphere container platform.
|
||||
The following modules elaborate on the key features and benefits provided by KubeSphere. For detailed information, see the respective chapter in this guide.
|
||||
|
||||
## Provisioning and Maintaining Kubernetes
|
||||
|
||||
### Provisioning Kubernetes Cluster
|
||||
### Provisioning Kubernetes Clusters
|
||||
|
||||
KubeSphere Installer allows you to deploy Kubernetes on your infrastructure out of box, provisioning Kubernetes cluster with high availability. It is recommended that at least three master nodes are configured behind a load balancer for production environment.
|
||||
[KubeKey](https://github.com/kubesphere/kubekey) allows you to deploy Kubernetes on your infrastructure out of the box, provisioning Kubernetes clusters with high availability. It is recommended that at least three master nodes be configured behind a load balancer for production environments.
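As a sketch, an all-in-one cluster with KubeSphere on top can be provisioned in two commands; the KubeKey and Kubernetes versions below are examples, so check the KubeKey releases page for current ones:

```bash
# Download the kk binary (version is an example)
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -

# Provision Kubernetes and install KubeSphere v3.0.0 in one step
./kk create cluster --with-kubernetes v1.17.9 --with-kubesphere v3.0.0
```

For a production setup with multiple master nodes, `kk create config` generates a cluster definition file that can be edited before running `kk create cluster -f` against it.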
|
||||
|
||||
### Kubernetes Resource Management
|
||||
|
||||
KubeSphere provides a graphical interface for creating and managing Kubernetes resources, including Pods and containers, workloads, Secrets and ConfigMaps, Services and Ingress, Jobs and CronJobs, HPA, etc., as well as powerful observability including resource monitoring, events, logging, alerting and notification.
|
||||
KubeSphere provides a graphical web console, giving users a clear view of a variety of Kubernetes resources, including Pods and containers, clusters and nodes, workloads, secrets and ConfigMaps, services and Ingress, jobs and CronJobs, and applications. With wizard user interfaces, users can easily interact with these resources for service discovery, HPA, image management, scheduling, high availability implementation, container health check and more.
|
||||
|
||||
As KubeSphere 3.0 features enhanced observability, users are able to keep track of resources from multi-tenant perspectives, such as custom monitoring, events, auditing logs, alerts and notifications.
|
||||
|
||||
### Cluster Upgrade and Scaling
|
||||
|
||||
KubeSphere Installer provides ease of setup, installation, management and maintenance. Moreover, it supports rolling upgrades of Kubernetes clusters so that the cluster service is always available while being upgraded. Additionally, it provides the ability to roll back to previous stable version in case of failure. Also, you can add new nodes to a Kubernetes cluster in order to support more workloads by using KubeSphere Installer.
|
||||
The next-gen installer [KubeKey](https://github.com/kubesphere/kubekey) provides an easy way of installation, management and maintenance. Moreover, it supports rolling upgrades of Kubernetes clusters so that the cluster service is always available while being upgraded. Also, you can add new nodes to a Kubernetes cluster to include more workloads by using KubeKey.
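Hedged sketches of both operations (the config file name and target versions are illustrative):

```bash
# Add the new worker nodes declared in the cluster definition file
./kk add nodes -f config-sample.yaml

# Rolling-upgrade Kubernetes and KubeSphere; the cluster keeps serving during the upgrade
./kk upgrade --with-kubernetes v1.18.6 --with-kubesphere v3.0.0
```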
|
||||
|
||||
## Multi-cluster Management and Deployment
|
||||
|
||||
As the IT world sees a growing number of cloud-native applications reshaping software portfolios for enterprises, users tend to deploy their clusters across locations, geographies, and clouds. Against this backdrop, KubeSphere has undergone a significant upgrade to address the pressing need of users with its brand-new multi-cluster feature.
|
||||
|
||||
With KubeSphere, users can manage the infrastructure underneath, such as adding or deleting clusters. Heterogeneous clusters deployed on any infrastructure (e.g. Amazon EKS and Google Kubernetes Engine) can be managed in a unified way. This is made possible by a central control plane of KubeSphere with two efficient management approaches available.
|
||||
|
||||
- **Solo**. Independently deployed Kubernetes clusters can be maintained and managed together in the KubeSphere container platform (see the import sketch after this list).
|
||||
- **Federation**. Multiple Kubernetes clusters can be aggregated together as a Kubernetes resource pool. When users deploy applications, replicas can be deployed on different Kubernetes clusters in the pool. In this regard, high availability is achieved across zones and clusters.
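As a sketch of the import step, a member cluster can be declared on the host cluster with a `Cluster` resource. The kind and fields below follow KubeSphere's multi-cluster API but may vary between releases, so treat them as assumptions:

```bash
# Import a member cluster over a direct connection (kubeconfig content elided)
cat <<EOF | kubectl apply -f -
apiVersion: cluster.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: member-a
spec:
  joinFederation: true            # aggregate this cluster into the federation resource pool
  connection:
    type: direct                  # the host cluster reaches the member's API server directly
    kubeconfig: <base64-encoded kubeconfig>
EOF
```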
|
||||
|
||||
KubeSphere allows users to deploy applications across clusters. More importantly, an application can also be configured to run on a certain cluster. Besides, the multi-cluster feature, paired with [OpenPitrix](https://github.com/openpitrix/openpitrix), an industry-leading application management platform, enables users to manage apps across their whole lifecycle, including release, removal and distribution.
|
||||
|
||||
For more information, see Multi-cluster Management.
|
||||
|
||||
## DevOps Support
|
||||
|
||||
KubeSphere provides pluggable DevOps component based on popular CI/CD tools such as Jenkins, and offers automated workflow and tools including binary-to-image (B2I) and source-to-image (S2I) to get source code or binary artifacts into ready-to-run container images. The following are the detailed description of CI/CD pipeline, S2I and B2I.
|
||||
KubeSphere provides a pluggable DevOps component based on popular CI/CD tools such as Jenkins. It features automated workflows and tools including binary-to-image (B2I) and source-to-image (S2I) to package source code or binary artifacts into ready-to-run container images.
|
||||
|
||||

|
||||
|
||||
### CI/CD Pipeline
|
||||
|
||||
- CI/CD pipelines and build strategies are based on Jenkins, which streamlines the creation and automation of development, test and production process, and supports dependency cache to accelerate build and deployment.
|
||||
- Ship out-of-box Jenkins build strategy and client plugin to create a Jenkins pipeline based on Git repository/SVN. You can define any step and stage in your built-in Jenkinsfile.
|
||||
- Design a visualized control panel to create CI/CD pipelines, and deliver complete visibility to simplify user interaction.
|
||||
- Integrate source code quality analysis, also support output and collect logs of each step.
|
||||
- **Automation**. CI/CD pipelines and build strategies are based on Jenkins, streamlining and automating the development, test and production process. Dependency caches are used to accelerate build and deployment.
|
||||
- **Out-of-box**. Users can ship their Jenkins build strategy and client plugin to create a Jenkins pipeline based on Git repository/SVN. They can define any step and stage in the built-in Jenkinsfile. Common agent types are embedded, such as Maven, Node.js and Go. Users can customize the agent type as well.
|
||||
- **Visualization**. Users can easily interact with a visualized control panel to set conditions and manage CI/CD pipelines.
|
||||
- **Quality Management**. Static code analysis is supported to detect bugs, code smells and security vulnerabilities.
|
||||
- **Logs**. The entire running process of CI/CD pipelines is recorded.
|
||||
|
||||
### Source to Image
|
||||
### Source-to-Image
|
||||
|
||||
Source-to-Image (S2I) is a toolkit and automated workflow for building reproducible container images from source code. S2I produces ready-to-run images by injecting source code into a container image and making the container ready to execute from source code.
|
||||
|
||||
S2I allows you to publish your service to Kubernetes without writing Dockerfile. You just need to provide source code repository address, and specify the target image registry. All configurations will be stored as different resources in Kubernetes. Your service will be automatically published to Kubernetes, and the image will be pushed to target registry as well.
|
||||
S2I allows you to publish your service to Kubernetes without writing a Dockerfile. You just need to provide a source code repository address, and specify the target image registry. All configurations will be stored as different resources in Kubernetes. Your service will be automatically published to Kubernetes, and the image will be pushed to the target registry as well.
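For reference, the upstream S2I toolkit performs the same workflow from the command line. A sketch, where the repository, builder image and image names are examples:

```bash
# Build a runnable image directly from source using a Python builder image
s2i build https://github.com/sclorg/django-ex centos/python-35-centos7 hello-python

# Tag and push it to the target registry (registry address is illustrative)
docker tag hello-python registry.example.com/demo/hello-python:latest
docker push registry.example.com/demo/hello-python:latest
```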
|
||||
|
||||

|
||||
|
||||
### Binary to Image
|
||||
### Binary-to-Image
|
||||
|
||||
As similar as S2I, Binary to Image (B2I) is a toolkit and automated workflow for building reproducible container images from binary (e.g. Jar, War, Binary package).
|
||||
Similar to S2I, Binary-to-Image (B2I) is a toolkit and automated workflow for building reproducible container images from binary (e.g. Jar, War, Binary package).
|
||||
|
||||
You just need to upload your application binary package, and specify the image registry to which you want to push. The rest is exactly same as S2I.
|
||||
You just need to upload your application binary package, and specify the image registry to which you want to push. The rest is exactly the same as S2I.
|
||||
|
||||
For more information, see DevOps Administration.
|
||||
|
||||
## Istio-based Service Mesh
|
||||
|
||||
KubeSphere service mesh is composed of a set of ecosystem projects, including Istio, Envoy and Jaeger, etc. We design a unified user interface to use and manage these tools. Most features are out-of-box and have been designed from developer's perspective, which means KubeSphere can help you to reduce the learning curve since you do not need to deep dive into those tools individually.
|
||||
KubeSphere service mesh is composed of a set of ecosystem projects, such as Istio, Envoy and Jaeger. We design a unified user interface to use and manage these tools. Most features are out-of-box and have been designed from the developer's perspective, which means KubeSphere can help you to reduce the learning curve since you do not need to deep dive into those tools individually.
|
||||
|
||||
KubeSphere service mesh provides fine-grained traffic management, observability, tracing, and service identity and security for a distributed microservice application, so the developer can focus on core business. With a service mesh management on KubeSphere, users can better track, route and optimize communications within Kubernetes for cloud native apps.
|
||||
KubeSphere service mesh provides fine-grained traffic management, observability, tracing, and service identity and security management for a distributed application. Therefore, developers can focus on core business. With service mesh management of KubeSphere, users can better track, route and optimize communications within Kubernetes for cloud-native apps.
|
||||
|
||||
### Traffic Management
|
||||
|
||||
- **Canary release** provides canary rollouts, and staged rollouts with percentage-based traffic splits.
|
||||
- **Blue-green deployment** allows the new version of the application to be deployed in the green environment and tested for functionality and performance. Once the testing results are successful, application traffic is routed from blue to green. Green then becomes the new production.
|
||||
- **Canary release** represents an important deployment strategy of new versions for testing purposes. Traffic is separated with a pre-configured ratio into a canary release and a production release respectively. If everything goes well, users can change the percentage and gradually replace the old version with the new one (see the sketch after this list).
|
||||
- **Blue-green deployment** allows users to run two versions of an application at the same time. Blue stands for the current app version and green represents the new version tested for functionality and performance. Once the testing results are successful, application traffic is routed from the in-production version (blue) to the new one (green).
|
||||
- **Traffic mirroring** enables teams to bring changes to production with as little risk as possible. Mirroring sends a copy of live traffic to a mirrored service.
|
||||
- **Circuit breakers** allows users to set limits for calls to individual hosts within a service, such as the number of concurrent connections or how many times calls to this host have failed.
|
||||
- **Circuit breaker** allows users to set limits for calls to individual hosts within a service, such as the number of concurrent connections or how many times calls to this host have failed.
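Under the hood these strategies boil down to weighted routing rules. A minimal sketch of the percentage-based split behind a canary release, expressed as an Istio VirtualService (service and subset names are illustrative; KubeSphere generates the equivalent configuration from the console):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo-service
spec:
  hosts:
  - demo-service
  http:
  - route:
    - destination:            # current production version
        host: demo-service
        subset: v1
      weight: 90
    - destination:            # canary version receiving 10% of traffic
        host: demo-service
        subset: v2
      weight: 10
EOF
```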
|
||||
|
||||
For more information, see Grayscale Release.
|
||||
|
||||
### Visualization
|
||||
|
||||
KubeSphere service mesh has the ability to visualize the connections between microservices and the topology of how they interconnect. As we know, observability is extremely useful in understanding cloud-native microservice interconnections.
|
||||
KubeSphere service mesh has the ability to visualize the connections between microservices and the topology of how they interconnect. In this regard, observability is extremely useful in understanding the interconnection of cloud-native microservices.
|
||||
|
||||
### Distributed Tracing
|
||||
|
||||
Based on Jaeger, KubeSphere service mesh enables users to track how each service interacts with other services. It brings a deeper understanding about request latency, bottlenecks, serialization and parallelism via visualization.
|
||||
Based on Jaeger, KubeSphere service mesh enables users to track how services interact with each other. It helps users gain a deeper understanding of request latency, bottlenecks, serialization and parallelism via visualization.
|
||||
|
||||
## Multi-tenant Management
|
||||
|
||||
- Multi-tenancy: provides unified authentication with fine-grained roles and three-tier authorization system.
|
||||
- Unified authentication: supports docking to a central enterprise authentication system that is LDAP/AD based protocol. And supports single sign-on (SSO) to achieve unified authentication of tenant identity.
|
||||
- Authorization system: It is organized into three levels, namely, cluster, workspace and project. We ensure the resource sharing as well as isolation among different roles at multiple levels to fully guarantee resource security.
|
||||
In KubeSphere, resources (e.g. clusters) can be shared between tenants. First, administrators or managers need to set different account roles with different authorizations. After that, members in the platform can be assigned these roles to perform specific actions on varied resources. Meanwhile, as KubeSphere completely isolates tenants, they will not affect each other at all.
|
||||
|
||||
## Multi-dimensional Monitoring
|
||||
- **Multi-tenancy**. It provides role-based fine-grained authentication in a unified way and a three-tier authorization system.
|
||||
- **Unified authentication**. For enterprises, KubeSphere is compatible with their central authentication system based on the LDAP or AD protocol. Single sign-on (SSO) is also supported to achieve unified authentication of tenant identity.
|
||||
- **Authorization system**. It is organized into three levels: cluster, workspace and project. KubeSphere ensures resources can be shared while different roles at multiple levels are completely isolated for resource security.
|
||||
|
||||
- Monitoring system is fully visualized, and provides open standard APIs for enterprises to integrate their existing operating platforms such as alerting, monitoring, logging etc. in order to have a unified system for their daily operating work.
|
||||
- Comprehensive and second-level precision monitoring metrics.
|
||||
- In the aspect of infrastructure monitoring, the system provides many metrics including CPU utilization, memory utilization, CPU load average, disk usage, inode utilization, disk throughput, IOPS, network interface outbound/inbound rate, Pod status, ETCD service status, API Server status, etc.
|
||||
- In the aspect of application resources, the system provides five monitoring metrics, i.e., CPU utilization, memory consumption, the number of Pods of applications, network outbound/inbound rate of an application. Besides, it supports sorting according to resource consumption, user-defined time range query and quickly locating the place where exception happens.
|
||||
- Provide resource usage ranking by node, workspace and project.
|
||||
- Provide service component monitoring for user to quickly locate component failures.
|
||||
For more information, see Role and Member Management in Workspace.
|
||||
|
||||
## Alerting and Notification System
|
||||
## Observability
|
||||
|
||||
- Provide rich alerting rules based on multi-tenancy and multi-dimensional monitoring metrics. Currently, the system supports two types of alerting. One is infrastructure alerting for cluster administrator. The other one is workload alerting for tenants.
|
||||
- Flexible alerting policy: You can customize an alerting policy that contains multiple alerting rules, and you can specify notification rules and repeat alerting rules.
|
||||
- Rich monitoring metrics for alerting: Provide alerting for infrastructure and workloads.
|
||||
- Flexible alerting rules: You can customize the detection period, duration and alerting level of monitoring metrics.
|
||||
- Flexible notification rules: You can customize the notification delivery period and receiver list. Mail notification is currently supported.
|
||||
- Custom repeat alerting rules: supports setting the repeat alerting cycle, maximum number of repeats, and the alerting level.
|
||||
### Multi-dimensional Monitoring
|
||||
|
||||
KubeSphere features a self-updating monitoring system with graphical interfaces that streamline the whole process of operation and maintenance. It provides customized monitoring of a variety of resources and includes a set of alerts that can immediately notify users of any occurring issues.
|
||||
|
||||
- **Customized monitoring dashboard**. Users can decide exactly what metrics need to be monitored and in what form. Different templates are available in KubeSphere for users to select, such as Elasticsearch, MySQL, and Redis. Alternatively, they can also create their own monitoring templates, including charts, colors, intervals and units.
|
||||
- **O&M-friendly**. The monitoring system can be operated in a visualized interface with open standard APIs for enterprises to integrate their existing systems. Therefore, they can implement operation and maintenance in a unified way.
|
||||
- **Third-party compatibility**. KubeSphere is compatible with Prometheus, which is the de facto metrics collection platform for monitoring in Kubernetes environments. Monitoring data can be seamlessly displayed in the web console of KubeSphere (see the query sketch after this list).
|
||||
|
||||
- **Multi-dimensional monitoring at second-level precision**.
|
||||
- For infrastructure monitoring, the system provides comprehensive metrics such as CPU utilization, memory utilization, CPU load average, disk usage, inode utilization, disk throughput, IOPS, network outbound/inbound rate, Pod status, ETCD service status, and API Server status.
|
||||
- For application resource monitoring, the system provides five key monitoring metrics: CPU utilization, memory consumption, Pod number, and network outbound and inbound rates. Besides, users can sort data based on resource consumption and search metrics by customizing the time range. In this way, occurring problems can be quickly located so that users can take necessary action.
|
||||
- **Ranking**. Users can sort data by node, workspace and project, which gives them a graphical view of how their resources are running in a straightforward way.
|
||||
- **Component monitoring**. It allows users to quickly locate any component failures to avoid unnecessary business downtime.
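As a sketch, monitoring data can also be pulled programmatically. The endpoint path and metric name below follow the conventions of KubeSphere's monitoring API and may differ between releases; host, port and credentials are assumptions:

```bash
# Query cluster-wide node CPU utilization through the KubeSphere API
# (30880 is the default console NodePort; adjust to your environment)
curl -u admin:<password> \
  "http://<node-ip>:30880/kapis/monitoring.kubesphere.io/v1alpha3/nodes?metrics_filter=node_cpu_utilisation"
```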
|
||||
|
||||
### Alerting, Events, Auditing and Notifications
|
||||
|
||||
- **Customized alerting policies and rules**. The alerting system is based on multi-tenant monitoring of multi-dimensional metrics. The system will send alerts related to a wide spectrum of resources such as pod, network and workload. In this regard, users can customize their own alerting policy by setting specific rules, such as repetition interval and time. The threshold and alerting level can also be defined by users themselves.
|
||||
- **Accurate event tracking**. KubeSphere allows users to know what is happening inside a cluster, such as container running status (successful or failed), node scheduling, and image pulling result. They will be accurately recorded with the specific reason, status and message displayed in the web console. In a production environment, this will help users to respond to any issues in time.
|
||||
- **Enhanced auditing security**. As KubeSphere features fine-grained management of user authorization, resources and network can be completely isolated to ensure data security. The comprehensive auditing feature allows users to search for activities related to any operation or alert.
|
||||
- **Diversified notification methods**. Emails represent a key approach for users to receive notifications of relevant activities they want to know. They can be sent based on the rule set by users themselves, who are able to customize the sender email address and their receiver lists. Besides, other channels, such as Slack and WeChat, are also supported to meet the need of our users. In this connection, KubeSphere provides users with more notification preferences as they are updated on the latest development in KubeSphere no matter what channel they select.
|
||||
|
||||
For more information, please see Project Administration and Usage.
|
||||
|
||||
## Log Query and Collection
|
||||
|
||||
- Provide multi-tenant log management. In KubeSphere log search system, different tenants can only see their own log information.
|
||||
- Contain multi-level log queries (project/workload/container group/container and keywords) as well as flexible and convenient log collection configuration options.
|
||||
- Support multiple log collection platforms such as Elasticsearch, Kafka, Fluentd.
|
||||
- **Multi-tenant log management**. In KubeSphere log search system, different tenants can only see their own log information. Logs can be exported as records for future reference.
|
||||
- **Multi-level log query**. Users can search for logs related to various resources, such as projects, workloads, and pods. Flexible and convenient log collection configuration options are available.
|
||||
- **Multiple log collectors**. Users can choose log collectors such as Elasticsearch, Kafka, and Fluentd.
|
||||
- **On-disk log collection**. For applications whose logs are saved in a Pod sidecar as a file, users can enable Disk Log Collection.
|
||||
|
||||
## Application Management and Orchestration
|
||||
|
||||
- Use open source [OpenPitrix](https://github.com/openpitrix/openpitrix) to set up app store and app repository services which provides full lifecycle of application management.
|
||||
- Users can easily deploy an application from templates with one click.
|
||||
- **App Store**. KubeSphere provides an app store based on [OpenPitrix](https://github.com/openpitrix/openpitrix), an industry-leading open source system for app management across the whole lifecycle, including release, removal, and distribution.
|
||||
- **App repository**. In KubeSphere, users can create an app repository hosted either in object storage (such as [QingStor](https://www.qingcloud.com/products/qingstor/) or [AWS S3](https://aws.amazon.com/what-is-cloud-object-storage/)) or in [GitHub](https://github.com/). App packages submitted to the app repository are composed of Helm Chart template files of the app (see the packaging sketch after this list).
|
||||
- **App template**. With app templates, KubeSphere provides a visualized way for app deployment with just one click. Internally, app templates can help different teams in the enterprise to share middleware and business systems. Externally, they can serve as an industry standard for application delivery based on different scenarios and needs.
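Since app packages are standard Helm charts, preparing a repository only requires plain Helm tooling. A sketch, where the chart name and bucket URL are illustrative:

```bash
# Package the chart into a versioned .tgz archive
helm package ./demo-chart

# Generate or refresh the repository index for the hosting bucket
helm repo index . --url https://demo-bucket.s3.amazonaws.com/charts

# Upload demo-chart-*.tgz and index.yaml to the bucket,
# then register the bucket URL as an app repository in KubeSphere
```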
|
||||
|
||||
## Infrastructure Management
|
||||
## Multiple Storage Solutions
|
||||
|
||||
Support storage management, host management and monitoring, resource quota management, image registry management, authorization management.
|
||||
- Open source storage solutions are available such as GlusterFS, CephRBD, and NFS.
|
||||
- NeonSAN CSI plugin connects to QingStor NeonSAN to meet core business requirements for low latency, high resilience, and high performance.
|
||||
- QingCloud CSI plugin connects to various block storage services in QingCloud platform.
|
||||
|
||||
## Multiple Storage Solutions Support
|
||||
## Multiple Network Solutions
|
||||
|
||||
- Support GlusterFS, CephRBD, NFS, etc., open source storage solutions.
|
||||
- Provide NeonSAN CSI plug-in to connect QingStor NeonSAN service to meet core business requirements, i.e., low latency, strong resilient, high performance.
|
||||
- Provide QingCloud CSI plug-in that accesses QingCloud block storage services.
|
||||
- Open source network solutions are available such as Calico and Flannel.
|
||||
|
||||
## Multiple Network Solutions Support
|
||||
- [Porter](https://github.com/kubesphere/porter), a load balancer developed for bare metal Kubernetes clusters, is designed by the KubeSphere development team. This CNCF-certified tool serves as an important solution for developers. It mainly features the following (see the sketch after the list):
|
||||
|
||||
- Support Calico, Flannel, etc., open source network solutions.
|
||||
- A bare metal load balancer plug-in [Porter](https://github.com/kubesphere/porter) for Kubernetes installed on physical machines.
|
||||
1. ECMP routing load balancing
|
||||
2. BGP dynamic routing configuration
|
||||
3. VIP management
|
||||
4. LoadBalancerIP assignment in Kubernetes services (v0.3.0)
|
||||
5. Installation with Helm Chart (v0.3.0)
|
||||
6. Dynamic BGP server configuration through CRD (v0.3.0)
|
||||
7. Dynamic BGP peer configuration through CRD (v0.3.0)
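As a sketch, Porter hands out service addresses from a pool declared through a CRD. The `Eip` resource below is illustrative only; the API group and field names may differ between Porter releases:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: network.kubesphere.io/v1alpha1
kind: Eip
metadata:
  name: eip-pool
spec:
  # address range handed out to LoadBalancer Services (example subnet)
  address: 192.168.0.100-192.168.0.110
EOF
```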
|
||||
|
||||
For more information, please see [this article](https://kubesphere.io/conferences/porter/).
|
||||
|
|
|
|||
|
|
@ -0,0 +1,105 @@
|
|||
---
|
||||
title: "Use Cases"
|
||||
keywords: 'KubeSphere, Kubernetes, Multi-cluster, Observability, DevOps'
|
||||
description: 'Applicable in a variety of scenarios, KubeSphere provides enterprises with containerized environments with a complete set of features for management and operation.'
|
||||
|
||||
weight: 1498
|
||||
---
|
||||
|
||||
KubeSphere is applicable in a variety of scenarios. For enterprises that deploy their business system on bare metal, their business modules are tightly coupled with each other. That means it is extremely difficult for resources to be horizontally scaled. In this connection, KubeSphere provides enterprises with containerized environments with a complete set of features for management and operation. It empowers enterprises to rise to the challenges in the middle of their digital transformation, including agile software development, automated operation and maintenance, microservices governance, traffic management, autoscaling, high availability, as well as DevOps and CI/CD.
|
||||
|
||||
At the same time, with the strong support for network and storage offered by QingCloud, KubeSphere is highly compatible with the existing monitoring and O&M system of enterprises. This is how they can upgrade their system for IT containerization.
|
||||
|
||||
## Multi-cluster Deployment
|
||||
|
||||
It is generally believed that using as few clusters as possible can reduce costs with less pressure for O&M. That said, both individuals and organizations tend to deploy multiple clusters for various reasons. For instance, the majority of enterprises may deploy their services across clusters as they need to be tested in non-production environments. Another typical example is that enterprises may separate their services based on regions, departments, and infrastructure providers by adopting multiple clusters.
|
||||
|
||||
The main reasons for employing this method fall into the following four categories:
|
||||
|
||||
### High Availability
|
||||
|
||||
Users can deploy workloads on multiple clusters by using a global VIP or DNS to send requests to corresponding backend clusters. When a cluster malfunctions or fails to handle requests, the VIP or DNS records can be transferred to a healthy cluster.
|
||||
|
||||

|
||||
|
||||
### Low Latency
|
||||
|
||||
When clusters are deployed in various regions, user requests can be forwarded to the nearest cluster, greatly reducing network latency. For example, we have three Kubernetes clusters deployed in New York, Houston and Los Angeles respectively. For users in California, their requests can be forwarded to Los Angeles. This will reduce the network latency due to geographical distance, providing the best user experience possible for users in different areas.
|
||||
|
||||
### Isolation
|
||||
|
||||
**Failure Isolation**. Generally, it is much easier for multiple small clusters to isolate failures than a large cluster. In case of outages, network failures, insufficient resources or other possible resulting issues, the failure can be isolated within a certain cluster without spreading to others.
|
||||
|
||||
**Business Isolation**. Although Kubernetes provides namespaces as a solution to app isolation, this method only represents the isolation in logic. This is because different namespaces are connected through the network, which means the issue of resource preemption still exists. To achieve further isolation, users need to create additional network isolation policies or set resource quotas. Using multiple clusters can achieve complete physical isolation that is more secure and reliable than the isolation through namespaces. For example, this is extremely effective when different departments within an enterprise use multiple clusters for the deployment of development, testing or production environments.
|
||||
|
||||

|
||||
|
||||
### Avoid Vendor Lock-in
|
||||
|
||||
Kubernetes has become the de facto standard in container orchestration. Against this backdrop, many enterprises avoid putting all eggs in one basket as they deploy clusters by using services of different cloud providers. That means they can transfer and scale their business anytime between clusters. However, it is not that easy for them to transfer their business in terms of costs, as different cloud providers feature varied Kubernetes services, including storage and network interface.
|
||||
|
||||
KubeSphere provides its unique feature as a solution to the above four cases. Based on the Federation pattern of KubeSphere's multi-cluster feature, multiple heterogeneous Kubernetes clusters can be aggregated within a unified Kubernetes resource pool. When users deploy applications, they can decide to which Kubernetes cluster they want app replicas to be scheduled in the pool. The whole process is managed and maintained through KubeSphere. This is how KubeSphere helps users achieve multi-site high availability (across zones and clusters).
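Since the Federation pattern builds on Kubernetes cluster federation, placement can be expressed declaratively. A sketch, assuming member clusters named `cluster-beijing` and `cluster-shanghai` (the Deployment template body is elided):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: demo-app
spec:
  template: {}                 # the regular Deployment spec goes here (elided)
  placement:
    clusters:                  # schedule replicas onto these member clusters
    - name: cluster-beijing
    - name: cluster-shanghai
  overrides:                   # per-cluster tweaks, e.g. more replicas in one region
  - clusterName: cluster-shanghai
    clusterOverrides:
    - path: "/spec/replicas"
      value: 5
EOF
```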
|
||||
|
||||
For more information, see Multi-cluster Management.
|
||||
|
||||
## Full-stack Observability with Streamlined O&M
|
||||
|
||||
Observability represents an important part in the work of Ops teams. In this regard, enterprises see increasing pressure on their Ops teams as they deploy their business on Kubernetes directly or on the platform of other cloud providers. This poses considerable challenges to Ops teams since they need to cope with extensive data.
|
||||
|
||||
### Multi-dimensional Cluster Monitoring
|
||||
|
||||
Again, the adoption of multi-cluster deployment across clouds is on the rise both among individuals and enterprises. However, because they run different services, users need to learn, deploy and especially, monitor across different cloud environments. After all, the tool provided by one cloud vendor for observability may not be applicable to another. In short, Ops teams are in desperate need of a unified view across different clouds for cluster monitoring covering metrics across the board.
|
||||
|
||||
### Log Query
|
||||
|
||||
A comprehensive monitoring feature is meaningless without a flexible log query system. This is because users need to be able to track all the information related to their resources, such as alerting messages, node scheduling status, app deployment success, or network policy modification. All these records play an important role in making sure users can keep up with the latest development, which will inform policy decisions of their business.
|
||||
|
||||
### Customization
|
||||
|
||||
Even for resource monitoring on the same platform, the tool provided by the cloud vendor may not be a panacea. In some cases, users need to create their own standard of observability, such as the specific monitoring metrics and display form. Moreover, they need to integrate common tools to the cloud for special use, such as Prometheus, which is the de facto standard for Kubernetes monitoring. In other words, customization has become a necessity in the industry as cloud-powered applications drive business on the one hand while requiring fine-grained monitoring on the other just in case of any failure.
|
||||
|
||||
KubeSphere features a unified platform for the management of clusters deployed across cloud providers. Apps can be deployed automatically, streamlining the process of operation and maintenance. At the same time, KubeSphere boasts powerful observability features (alerting, events, auditing, logging and notifications) with a comprehensive customized monitoring system for a wide range of resources. Users themselves can decide what resources they want to monitor in what kind of forms.
|
||||
|
||||
With KubeSphere, enterprises can focus more on business innovation as they are freed from the complicated process of data collection and analysis.
|
||||
|
||||
## Implement DevOps Practices
|
||||
|
||||
DevOps represents an important set of practices or methods that engage both development and Ops teams for more coordinated and efficient cooperation between them. Therefore, development, test and release can be faster, more efficient and more reliable. CI/CD pipelines in KubeSphere provide enterprises with agile development and automated O&M. Besides, the microservices feature (service mesh) in KubeSphere enables enterprises to develop, test and release services in a fine-grained way, creating an enabling environment for their implementation of DevOps. With KubeSphere, enterprises can make full use of DevOps by:
|
||||
|
||||
- Testing service robustness through fault injection without code hacking (see the sketch after this list).
|
||||
- Decoupling Kubernetes services with credential management and access control.
|
||||
- Visualizing end-to-end monitoring process.
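A sketch of code-free fault injection through an Istio VirtualService, as used for robustness testing; the service name, delay and percentage are examples:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo-service
spec:
  hosts:
  - demo-service
  http:
  - fault:
      delay:
        percentage:
          value: 50          # inject the delay into half of the requests
        fixedDelay: 5s       # each affected request waits five seconds
    route:
    - destination:
        host: demo-service
EOF
```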
|
||||
|
||||
## Service Mesh and Cloud-native Architecture
|
||||
|
||||
Enterprises are now under increasing pressure to accelerate innovation amid their digital transformation. Specifically, they need to speed up in terms of development cycle, delivery time and deployment frequency. As application architectures evolve from monolithic to microservices, enterprises are faced with a multitude of resulting challenges. For example, microservices communicate with each other frequently, which entails smooth and stable network connectivity. Among others, latency represents a key factor that affects the entire architecture and user experience. In case of any failure, a troubleshooting and identifying system also needs to be in place to respond in time. Besides, deploying distributed applications is never an easy job without highly-functional tools and infrastructure.
|
||||
|
||||
KubeSphere service mesh addresses a series of microservices use cases.
|
||||
|
||||
### Multi-cloud App Distribution
|
||||
|
||||
As mentioned above, it is not uncommon for individuals or organizations to deploy apps across Kubernetes clusters, whether on premises, public or hybrid. This may bring out significant challenges in unified traffic management, application and service scalability, DevOps pipeline automation, monitoring and so on.
|
||||
|
||||
### Visualization
|
||||
|
||||
As users deploy microservices which will communicate among themselves considerably, it will help users gain a better understanding of topological relations between microservices if the connection is highly visualized. Besides, distributed tracing is also essential for each service, providing operators with a detailed understanding of call flows and service dependencies within a mesh.
|
||||
|
||||
### Rolling Updates
|
||||
|
||||
When enterprises introduce a new version of a service, they may adopt a canary upgrade or blue-green deployment. The new one runs side by side with the old one and a set percentage of traffic is moved to the new service for error detection and latency monitoring. If everything works fine, the traffic to the new one will gradually increase until 100% of customers are using the new version. For this type of update, KubeSphere provides three categories of grayscale release:
|
||||
|
||||
**Blue-green Deployment**. The blue-green release provides a zero downtime deployment, which means the new version can be deployed with the old one preserved. It enables both versions to run at the same time. If there is a problem with running, you can quickly roll back to the old version.
|
||||
|
||||
**Canary Release**. This method brings part of the actual traffic into a new version to test its performance and reliability. It can help detect potential problems in the actual environment while not affecting the overall system stability.
|
||||
|
||||
**Traffic Mirroring**. Traffic mirroring provides a more accurate way to test new versions as problems can be detected in advance while not affecting the production environment.
|
||||
|
||||
With a lightweight, highly scalable microservices architecture offered by KubeSphere, enterprises are well-positioned to build their own cloud-native applications for the above scenarios. Based on Istio, a major solution to microservices, KubeSphere provides a platform for microservices governance without any hacking into code. Spring Cloud is also integrated for enterprises to build Java apps. KubeSphere also offers microservices upgrade consultations and technical support services, helping enterprises implement microservices architectures for their cloud-native transformation.
|
||||
|
||||
## Bare Metal Deployment
|
||||
|
||||
Sometimes, the cloud is not necessarily the ideal place for the deployment of resources. For example, physical, dedicated servers tend to function better when it comes to the cases that require considerable compute resources and high disk I/O. Besides, for some specialized workloads that are difficult to migrate to a cloud environment, certified hardware and complicated licensing and support agreements may be required.
|
||||
|
||||
KubeSphere can help enterprises deploy a containerized architecture on bare metal, load balancing traffic with a physical switch. In this connection, [Porter](https://github.com/kubesphere/porter), a CNCF-certified cloud-native tool, was born for this purpose. At the same time, KubeSphere, together with QingCloud VPC and QingStor NeonSAN, provides users with a complete set of features ranging from load balancing and container platform building to network management and storage. This means virtually all aspects of the containerized architecture can be fully controlled and uniformly managed, without the performance penalty of virtualization.
|
||||
|
||||
For detailed information about how KubeSphere drives the development of numerous industries, please see [Case Studies](https://kubesphere.io/case/).
|
||||
|
|
@ -1,35 +1,50 @@
|
|||
---
|
||||
title: "What is KubeSphere"
|
||||
keywords: 'Kubernetes, docker, jenkins, devops, istio, service mesh, devops, microservice'
|
||||
keywords: 'Kubernetes, KubeSphere, Introduction'
|
||||
description: 'What is KubeSphere'
|
||||
|
||||
linkTitle: "Introduction"
|
||||
weight: 1100
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
[KubeSphere](https://kubesphere.io) is a **distributed operating system providing a cloud-native stack** with [Kubernetes](https://kubernetes.io) as its kernel, and aims to be a plug-and-play architecture for the seamless integration of third-party applications to boost its ecosystem. KubeSphere is also a multi-tenant enterprise-grade container platform with full-stack automated IT operations and streamlined DevOps workflows. It provides a developer-friendly wizard web UI, helping enterprises build a more robust and feature-rich platform that includes the most common functionalities needed for an enterprise Kubernetes strategy, such as Kubernetes resource management, DevOps (CI/CD), application lifecycle management, monitoring, logging, service mesh, multi-tenancy, alerting and notification, storage and networking, autoscaling, access control, GPU support, etc., as well as multi-cluster management, network policy, registry management, and more security enhancements in upcoming releases.

[KubeSphere](https://kubesphere.io) is a **distributed operating system managing cloud-native applications** with [Kubernetes](https://kubernetes.io) as its kernel, providing a plug-and-play architecture for the seamless integration of third-party applications to boost its ecosystem.

KubeSphere delivers **consolidated views while integrating a wide breadth of ecosystem tools** around Kubernetes, offers a consistent user experience to reduce complexity, and develops new features and capabilities not yet available in upstream Kubernetes in order to alleviate its pain points, including storage, network, security and ease of use. Not only does KubeSphere allow developers and DevOps teams to use their favorite tools in a unified console, but, most importantly, these functionalities are loosely coupled with the platform since they are pluggable and optional.

KubeSphere also represents a multi-tenant enterprise-grade container platform with full-stack automated IT operations and streamlined DevOps workflows. It provides a developer-friendly wizard web UI, helping enterprises build a more robust and feature-rich platform. It boasts the most common functionalities needed for enterprise Kubernetes strategies, such as Kubernetes resource management, DevOps (CI/CD), application lifecycle management, monitoring, logging, service mesh, multi-tenancy, alerting and notification, auditing, storage and networking, autoscaling, access control, GPU support, multi-cluster deployment and management, network policy, registry management, and security management.

Last but not least, KubeSphere does not change Kubernetes itself at all. In other words, KubeSphere can be deployed **on any existing version-compatible Kubernetes cluster across any infrastructure**, including virtual machine, bare metal, on-premises, public cloud and hybrid cloud. KubeSphere shields users from the infrastructure underneath and helps your enterprise modernize, migrate, deploy and manage existing and containerized apps seamlessly across a variety of infrastructure, so that developers and Ops teams can focus on application development and accelerate DevOps automated workflows and delivery processes with enterprise-level observability and troubleshooting, unified monitoring and logging, centralized storage and networking management, and easy-to-use CI/CD pipelines.

KubeSphere delivers **consolidated views while integrating a wide breadth of ecosystem tools** around Kubernetes, thus providing consistent user experiences to reduce complexity. At the same time, it also features new capabilities that are not yet available in upstream Kubernetes, alleviating the pain points of Kubernetes including storage, network, security and usability. Not only does KubeSphere allow developers and DevOps teams to use their favorite tools in a unified console, but, most importantly, these functionalities are loosely coupled with the platform since they are pluggable and optional.

## Run KubeSphere Everywhere

As a lightweight platform, KubeSphere has become friendlier to different cloud ecosystems as it does not change Kubernetes itself at all. In other words, KubeSphere can be deployed **on any existing version-compatible Kubernetes cluster on any infrastructure**, including virtual machine, bare metal, on-premises, public cloud and hybrid cloud. KubeSphere users can install KubeSphere on cloud and container platforms such as Alibaba Cloud, AWS, QingCloud, Tencent Cloud, Huawei Cloud and Rancher, and can even import and manage their existing Kubernetes clusters created with major Kubernetes distributions. The seamless integration of KubeSphere into existing Kubernetes platforms means that users' business is not affected and no modification to their current resources or assets is required. For more information, see Installation.

KubeSphere shields users from the infrastructure underneath and helps enterprises modernize, migrate, deploy and manage existing and containerized apps seamlessly across a variety of infrastructure types. This is how KubeSphere empowers developers and Ops teams to focus on application development and accelerate DevOps automated workflows and delivery processes with enterprise-level observability and troubleshooting, unified monitoring and logging, centralized storage and networking management, easy-to-use CI/CD pipelines, and so on.


|
||||
|
||||
## Video on Youtube
|
||||
## What's New in 3.0
|
||||
|
||||
<iframe width="560" height="315" src="https://www.youtube.com/embed/u5lQvhi_Xlc" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
- **Multi-cluster Management**. As we usher in an era of hybrid cloud, multi-cluster management has emerged as the call of our times. It is one of the most necessary features on top of Kubernetes as it addresses a pressing need of our users. In the latest version 3.0, we have equipped KubeSphere with a unique multi-cluster feature that provides a central control plane for clusters deployed in different clouds. Users can import and manage their existing Kubernetes clusters created on the platforms of mainstream infrastructure providers (e.g. Amazon EKS and Google Kubernetes Engine). This greatly reduces the learning cost for our users and streamlines the operation and maintenance process. Solo and Federation are the two featured patterns for multi-cluster management, making KubeSphere stand out among its counterparts.

## What is New in 2.1

- **Improved Observability**. We have enhanced observability, which now includes custom monitoring, tenant event management, diversified notification methods (e.g. WeChat and Slack) and more. Among others, users can now customize monitoring dashboards, with a variety of metrics and graphs to choose from for their own needs. It is also worth mentioning that KubeSphere 3.0 is compatible with Prometheus, the de facto standard for Kubernetes monitoring in the cloud-native industry.

We decoupled some main feature components and made them pluggable and optional so that users can install a default KubeSphere with resource requirements down to 2 CPU cores and 4 GB of memory. Meanwhile, there are great enhancements in the App Store, especially in application lifecycle management.

- **Enhanced Security**. Security has always remained one of our focuses in KubeSphere. In this connection, the feature enhancements can be summarized as follows:

It is worth mentioning that both DevOps and observability components have been improved significantly. For example, we added lots of new features, including Binary-to-Image, dependency caching support in pipelines, branch switch support and Git log output within the DevOps component. We also brought upgrades, enhancements and bug fixes in storage, authentication and security, as well as user experience improvements. See [Release Notes For 2.1.0](../../release/release-v210) for details.

- **Auditing**. Records are kept to track who did what, and when. Auditing support is extremely important, especially for traditional industries such as finance and banking.

- **Network Policy and Isolation**. Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods). By configuring network isolation to control traffic among Pods within the same cluster as well as traffic from outside, users can isolate applications with enhanced security (see the example after this section). They can also decide whether services are accessible externally.

- **Open Policy Agent**. KubeSphere provides flexible, fine-grained access control based on [Open Policy Agent](https://www.openpolicyagent.org/). Users can manage their security and authorization policies in a unified way with a general architecture.

- **OAuth 2.0**. Users can now easily integrate third-party applications with the OAuth 2.0 protocol.

- **Multilingual Support of Web Console**. KubeSphere was designed for users around the world from the very beginning. Thanks to our community members across the globe, KubeSphere 3.0 now supports four official languages for its web console: English, Simplified Chinese, Traditional Chinese, and Spanish. More languages are expected to be supported going forward.

In addition to the above highlights, KubeSphere 3.0 also features other functionality upgrades. For more detailed information, see Release Notes for 3.0.0.

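To make the network isolation described above concrete, here is a standard Kubernetes NetworkPolicy that denies all ingress traffic to Pods in a namespace. The namespace name `demo` is a placeholder chosen for illustration:

```yaml
# Deny all incoming traffic to every Pod in the "demo" namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}   # an empty selector matches all Pods in the namespace
  policyTypes:
    - Ingress       # no ingress rules are listed, so all ingress is denied
```
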
## Open Source

As we adopt the open source model, development takes place in the open and is driven by the KubeSphere community. KubeSphere is **100% open source** and available on [GitHub](https://github.com/kubesphere/) where you can find all source code, documents and discussions. It has been widely installed and used in development, testing and production environments, and a large number of services are running smoothly in KubeSphere.

As we adopt the open source model, development is proceeding in an open way and driven by the KubeSphere community. KubeSphere is **100% open source** and available on [GitHub](https://github.com/kubesphere/) where you can find all the source code, documents and discussions. It has been widely installed and used in development, testing and production environments, and a large number of services are running smoothly in KubeSphere.

## Roadmap

@@ -37,10 +52,9 @@ As we adopt open source model, development is taking in the open way and driven

|
||||
|
||||
## Landscapes
|
||||
## Landscape
|
||||
|
||||
KubeSphere is a member of CNCF and a [Kubernetes Conformance Certified platform
|
||||
](https://www.cncf.io/certification/software-conformance/#logos), which enriches the [CNCF CLOUD NATIVE Landscape.
|
||||
KubeSphere is a member of CNCF and a [Kubernetes Conformance Certified platform](https://www.cncf.io/certification/software-conformance/#logos), further enriching [CNCF CLOUD NATIVE Landscape.
|
||||
](https://landscape.cncf.io/landscape=observability-and-analysis&license=apache-license-2-0)
|
||||
|
||||

|
||||
|
|
|
|||
|
|
@@ -1,6 +1,6 @@
---
title: "Multi-cluster Management"
description: "Import a hosted or on-premise Kubernetes cluster into KubeSphere"
description: "Import a hosted or on-premises Kubernetes cluster into KubeSphere"
layout: "single"

linkTitle: "Multi-cluster Management"

@@ -11,9 +11,13 @@ icon: "/images/docs/docs.svg"
---

## Installing KubeSphere and Kubernetes on Linux

Today, it is very common for organizations to run and manage multiple Kubernetes clusters on different cloud providers or infrastructures. Each Kubernetes cluster is a relatively self-contained unit, and the upstream community is striving to research and develop a multi-cluster management solution, such as [kubefed](https://github.com/kubernetes-sigs/kubefed).

In this chapter, we will demonstrate how to use KubeKey to provision new Kubernetes and KubeSphere clusters on different infrastructures. KubeKey helps you quickly build a production-ready cluster on a set of machines from zero to one. It also helps you easily scale the cluster and install pluggable components on an existing architecture, as the sketch below shows.

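A minimal sketch of that flow follows. It assumes KubeKey's v1 command-line syntax and the default generated file name `config-sample.yaml`; adjust the KubeSphere version to your needs:

```bash
# Generate a cluster configuration template that includes KubeSphere v3.0.0
./kk create config --with-kubesphere v3.0.0

# After editing config-sample.yaml (hosts, roles, etc.), create the cluster
./kk create cluster -f config-sample.yaml
```
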
The most common use cases of multi-cluster management include **service traffic load balancing, development and production isolation, decoupling of data processing and data storage, cross-cloud backup and disaster recovery, flexible allocation of computing resources, low-latency access for cross-region services, and avoiding vendor lock-in**.

KubeSphere is developed to address the multi-cluster and multi-cloud management challenges and implement the preceding user scenarios, providing users with a unified control plane to distribute applications and their replicas to multiple clusters, from public cloud to on-premises environments. KubeSphere also provides rich observability across multiple clusters, including centralized monitoring, logging, events, and auditing logs.

![multi-cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200902002319.png)

## Most Popular Pages

@@ -0,0 +1,7 @@
---
linkTitle: "Enable Multi-cluster in KubeSphere"
weight: 3010

_build:
  render: false
---

@@ -0,0 +1,214 @@
---
title: "Agent Connection"
keywords: 'kubernetes, kubesphere, multicluster, agent-connection'
description: 'Overview'

weight: 2343
---

## Prerequisites

You need at least two KubeSphere clusters. If you do not have them yet, refer to [Installing on Linux](../../../installing-on-linux) or [Installing on Kubernetes](../../../installing-on-kubernetes).

{{< notice note >}}
Multi-cluster management requires KubeSphere to be installed on the target clusters. If you have an existing cluster, install a minimal KubeSphere on it as an agent. See [Installing Minimal KubeSphere on Kubernetes](../../installing-on-kubernetes/minimal-kubesphere-on-k8s) for details.
{{</ notice >}}

## Agent Connection

KubeSphere uses the component [Tower](https://github.com/kubesphere/tower) for agent connections. Tower is a tool that establishes network connections between clusters through an agent. If the Host Cluster (hereafter referred to as **H** Cluster) cannot access the Member Cluster (hereafter referred to as **M** Cluster) directly, you can expose the proxy service address of the H Cluster. This enables the M Cluster to connect to the H Cluster through the agent. This method is applicable when the M Cluster is in a private environment (e.g. an IDC) and the H Cluster is able to expose the proxy service. The agent connection is also applicable when your clusters are distributed across different cloud providers.

### Prepare a Host Cluster

{{< tabs >}}

{{< tab "KubeSphere has been installed" >}}

If you already have a standalone KubeSphere installed, you can set the cluster role to host by editing the cluster configuration and **waiting for a while**.

- Option A - Use Web Console:

  Use a `cluster-admin` account to enter **Cluster Management → CRDs**, search for the keyword `ClusterConfiguration`, enter its detail page, and edit the YAML of `ks-installer`. This is similar to enabling pluggable components.

- Option B - Use Kubectl:

  ```shell
  kubectl edit cc ks-installer -n kubesphere-system
  ```

Scroll down and change the value of `clusterRole` to `host`, then click **Update** to make it effective:

```yaml
multicluster:
  clusterRole: host
```

{{</ tab >}}

{{< tab "KubeSphere has not been installed" >}}

There is no big difference if you are just starting the installation. Please note that, for a host cluster, the `clusterRole` in `config-sample.yaml` or `cluster-configuration.yaml` has to be set as follows:

```yaml
multicluster:
  clusterRole: host
```

{{</ tab >}}

{{</ tabs >}}

Then you can use **kubectl** to retrieve the installation logs and verify the status. Wait for a while; if the host cluster is ready, you will see a success message in the logs.

```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

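Once the logs report success, you can optionally double-check the configured role. This is only a sanity check, and it assumes the `ClusterConfiguration` object keeps the `multicluster` block under `spec`, which may differ between versions:

```bash
# Expect the output to be "host" (the field path is an assumption)
kubectl get cc ks-installer -n kubesphere-system -o jsonpath='{.spec.multicluster.clusterRole}'
```
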
#### Set Proxy Service Address

After the installation of the host cluster, a proxy service called tower will be created in the `kubesphere-system` namespace, whose type is **LoadBalancer**.

{{< tabs >}}

{{< tab "There is a LoadBalancer in your cluster" >}}

If a LoadBalancer plugin is available in the cluster, you can see a corresponding address for `EXTERNAL-IP`, which KubeSphere acquires automatically. In that case, you can skip the step of setting the proxy address.

```shell
$ kubectl -n kubesphere-system get svc
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
tower   LoadBalancer   10.233.63.191   139.198.110.23   8080:30721/TCP   16h
```

> Generally, there is always a LoadBalancer solution in the public cloud, and the external IP is allocated by the load balancer automatically. If your clusters are running in an on-premises environment (especially a **bare metal environment**), we recommend [Porter](https://github.com/kubesphere/porter) as the LB solution.

{{</ tab >}}

{{< tab "There is not a LoadBalancer in your cluster" >}}

1. If no corresponding address is displayed (the `EXTERNAL-IP` is pending), you need to set the proxy address manually. For example, suppose you have an available public IP address `139.198.120.120`, and port `8080` of this address has been forwarded to port `30721` of the cluster.

   ```shell
   kubectl -n kubesphere-system get svc
   ```

   ```
   NAME    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
   tower   LoadBalancer   10.233.63.191   <pending>     8080:30721/TCP   16h
   ```

2. Edit the `kubesphere-config` ConfigMap and input the address you set before. You can also edit the ConfigMap from **Configuration → ConfigMaps**: search for the keyword `kubesphere-config`, then edit its YAML and add the following configuration:

   ```bash
   kubectl -n kubesphere-system edit cm kubesphere-config
   ```

   ```yaml
   multicluster:
     clusterRole: host
     proxyPublishAddress: http://139.198.120.120:8080 # Add this line to set the address to access tower
   ```

3. Save and update the ConfigMap, then restart the Deployment `ks-apiserver`.

   ```shell
   kubectl -n kubesphere-system rollout restart deployment ks-apiserver
   ```

{{</ tab >}}

{{</ tabs >}}

### Prepare a Member Cluster

In order to manage the member cluster from the host cluster, the `jwtSecret` must be the same on both of them. Therefore, first retrieve it from the host cluster with the following command.

```bash
kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
```

```yaml
jwtSecret: "gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU"
```

{{< tabs >}}

{{< tab "KubeSphere has been installed" >}}

If you already have a standalone KubeSphere installed, you can set the cluster role to member by editing the cluster configuration and **waiting for a while**.

- Option A - Use Web Console:

  Use a `cluster-admin` account to enter **Cluster Management → CRDs**, search for the keyword `ClusterConfiguration`, enter its detail page, and edit the YAML of `ks-installer`. This is similar to enabling pluggable components.

- Option B - Use Kubectl:

  ```shell
  kubectl edit cc ks-installer -n kubesphere-system
  ```

Then input the corresponding `jwtSecret` shown above:

```yaml
authentication:
  jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
```

Scroll down and change the value of `clusterRole` to `member`, then click **Update** to make it effective:

```yaml
multicluster:
  clusterRole: member
```

{{</ tab >}}

{{< tab "KubeSphere has not been installed" >}}

There is no big difference if you are just starting the installation. Fill in the `jwtSecret` with the value shown above in `config-sample.yaml` or `cluster-configuration.yaml`:

```yaml
authentication:
  jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
```

Then scroll down and change the `clusterRole` to `member`:

```yaml
multicluster:
  clusterRole: member
```

{{</ tab >}}

{{</ tabs >}}

### Import Cluster

1. Open the H Cluster dashboard and click **Add Cluster**.

![add-cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200827231546.png)

2. Enter the basic information of the cluster to be imported and click **Next**.

![import-cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200827231831.png)

3. In **Connection Method**, select **Cluster connection agent** and click **Import**.

![agent-connection](https://ap3.qingstor.com/kubesphere-website/docs/20200828001646.png)

4. Create an `agent.yaml` file in the M Cluster based on the instructions shown on the console, then copy and paste the generated deployment into the file. Execute `kubectl create -f agent.yaml` on the node and wait for the agent to be up and running (see the sketch after this list). Please make sure the proxy address is accessible from the M Cluster.

5. You can see the imported cluster in the H Cluster when the cluster agent is up and running.

![agent-cluster-ready](https://ap3.qingstor.com/kubesphere-website/docs/20200828002113.png)

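The commands below sketch step 4. The `cluster-agent` name used in the filter is hypothetical; use whatever name appears in the manifest generated by the console:

```bash
# Apply the manifest generated by the console
kubectl create -f agent.yaml

# Watch the agent pod until it is Running ("cluster-agent" is a placeholder name)
kubectl -n kubesphere-system get pods | grep cluster-agent
```
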
@@ -0,0 +1,160 @@
---
title: "Direct Connection"
keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud'
description: 'Overview'

weight: 2340
---

## Prerequisites

You need at least two KubeSphere clusters. If you do not have them yet, refer to [Installing on Linux](../../../installing-on-linux) or [Installing on Kubernetes](../../../installing-on-kubernetes).

{{< notice note >}}
Multi-cluster management requires KubeSphere to be installed on the target clusters. If you have an existing cluster, install a minimal KubeSphere on it as an agent. See [Installing Minimal KubeSphere on Kubernetes](../../installing-on-kubernetes/minimal-kubesphere-on-k8s) for details.
{{</ notice >}}

## Direct Connection

If the kube-apiserver address of the Member Cluster (hereafter referred to as **M** Cluster) is accessible from any node of the Host Cluster (hereafter referred to as **H** Cluster), you can adopt **Direct Connection**. This method is applicable when the kube-apiserver address of the M Cluster can be exposed, or when the H Cluster and the M Cluster are in the same private network or subnet.

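A quick way to confirm this precondition is to probe the M Cluster's kube-apiserver from a node of the H Cluster. The address and port below are placeholders; substitute the real kube-apiserver endpoint of your member cluster:

```bash
# Any HTTP response (even 401/403) proves network reachability
curl -k https://<member-kube-apiserver-address>:6443/version
```
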
### Prepare a Host Cluster

{{< tabs >}}

{{< tab "KubeSphere has been installed" >}}

If you already have a standalone KubeSphere installed, you can set the cluster role to host by editing the cluster configuration and **waiting for a while**.

- Option A - Use Web Console:

  Use a `cluster-admin` account to enter **Cluster Management → CRDs**, search for the keyword `ClusterConfiguration`, enter its detail page, and edit the YAML of `ks-installer`. This is similar to enabling pluggable components.

- Option B - Use Kubectl:

  ```shell
  kubectl edit cc ks-installer -n kubesphere-system
  ```

Scroll down and change the value of `clusterRole` to `host`, then click **Update** to make it effective:

```yaml
multicluster:
  clusterRole: host
```

{{</ tab >}}

{{< tab "KubeSphere has not been installed" >}}

There is no big difference if you are just starting the installation. Please note that the `clusterRole` in `config-sample.yaml` or `cluster-configuration.yaml` has to be set as follows:

```yaml
multicluster:
  clusterRole: host
```

{{</ tab >}}

{{</ tabs >}}

Then you can use **kubectl** to retrieve the installation logs and verify the status. Wait for a while; if the host cluster is ready, you will see a success message in the logs.

```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

### Prepare a Member Cluster

In order to manage the member cluster from the host cluster, the `jwtSecret` must be the same on both of them. Therefore, first retrieve it from the host cluster with the following command.

```bash
kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
```

```yaml
jwtSecret: "gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU"
```

{{< tabs >}}

{{< tab "KubeSphere has been installed" >}}

If you already have a standalone KubeSphere installed, you can set the cluster role to member by editing the cluster configuration and **waiting for a while**.

- Option A - Use Web Console:

  Use a `cluster-admin` account to enter **Cluster Management → CRDs**, search for the keyword `ClusterConfiguration`, enter its detail page, and edit the YAML of `ks-installer`. This is similar to enabling pluggable components.

- Option B - Use Kubectl:

  ```shell
  kubectl edit cc ks-installer -n kubesphere-system
  ```

Then input the corresponding `jwtSecret` shown above:

```yaml
authentication:
  jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
```

Scroll down and change the value of `clusterRole` to `member`, then click **Update** to make it effective:

```yaml
multicluster:
  clusterRole: member
```

{{</ tab >}}

{{< tab "KubeSphere has not been installed" >}}

There is no big difference if you are just starting the installation. Fill in the `jwtSecret` with the value shown above in `config-sample.yaml` or `cluster-configuration.yaml`:

```yaml
authentication:
  jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
```

Then scroll down and change the `clusterRole` to `member`:

```yaml
multicluster:
  clusterRole: member
```

{{</ tab >}}

{{</ tabs >}}

Then you can use **kubectl** to retrieve the installation logs and verify the status. Wait for a while; if the member cluster is ready, you will see a success message in the logs.

```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

### Import Cluster

1. Open the H Cluster dashboard and click **Add Cluster**.

![add-cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200827231546.png)

2. Enter the basic information of the cluster and click **Next**.

![import-cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200827231831.png)

3. In **Connection Method**, select **Direct Connection to Kubernetes cluster**.

4. [Retrieve the KubeConfig](../retrieve-kubeconfig) of the Member Cluster, then copy it and paste it into the box.

{{< notice tip >}}
Please make sure the `server` address in the KubeConfig is accessible from any node of the H Cluster. For the `KubeSphere API Server` address, you can fill in the KubeSphere API server address or leave it blank.
{{</ notice >}}

![direct-connection](https://ap3.qingstor.com/kubesphere-website/docs/20200828003716.png)

5. Click **Import** and wait for cluster initialization to finish.

![cluster-ready](https://ap3.qingstor.com/kubesphere-website/docs/20200828003942.png)

@@ -0,0 +1,42 @@
---
title: "Retrieve KubeConfig"
keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud'
description: 'Overview'

weight: 2345
---

## Prerequisites

You have a KubeSphere cluster.

## Explore KubeConfig File

Go to `$HOME/.kube` and see what files are there. Typically, there is a file named `config`. Use the following command to retrieve the KubeConfig file. The long base64-encoded credential fields are truncated below for readability.

```bash
cat $HOME/.kube/config
```

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...
    server: https://lb.kubesphere.local:6443
  name: cluster.local
contexts:
- context:
    cluster: cluster.local
    user: kubernetes-admin
  name: kubernetes-admin@cluster.local
current-context: kubernetes-admin@cluster.local
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQ...
```

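Alternatively, if kubectl is already configured on the node, you can print the same information without knowing the file path. `--minify` and `--raw` are standard kubectl flags:

```bash
# Print the current context's kubeconfig, including unredacted certificate data
kubectl config view --minify --raw
```
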
@@ -0,0 +1,7 @@
---
linkTitle: "Import Cloud-hosted Kubernetes Cluster"
weight: 3010

_build:
  render: false
---

@@ -0,0 +1,10 @@
---
title: "Import Aliyun ACK"
keywords: 'kubernetes, kubesphere, multicluster, ACK'
description: 'Import Aliyun ACK'

weight: 2340
---

TBD

@@ -0,0 +1,10 @@
---
title: "Import AWS EKS"
keywords: 'kubernetes, kubesphere, multicluster, aws-eks'
description: 'Import AWS EKS'

weight: 2340
---

TBD

@@ -0,0 +1,7 @@
---
linkTitle: "Import On-prem Kubernetes Cluster"
weight: 3010

_build:
  render: false
---

@@ -0,0 +1,10 @@
---
title: "Import Kubeadm Kubernetes"
keywords: 'kubernetes, kubesphere, multicluster, kubeadm'
description: 'Overview'

weight: 2340
---

TBD

@@ -0,0 +1,7 @@
---
linkTitle: "Introduction"
weight: 3005

_build:
  render: false
---

@@ -0,0 +1,12 @@
---
title: "Kubernetes Federation in KubeSphere"
keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud'
description: 'Overview'

weight: 2340
---

The multi-cluster feature concerns the network connections among multiple clusters. Therefore, it is important to understand the topological relations of your clusters before you start, as this can reduce the operational workload later.

Before you use the multi-cluster feature, you need to create a Host Cluster (hereafter referred to as **H** Cluster), which is simply a KubeSphere cluster with the multi-cluster feature enabled. All the clusters managed by the H Cluster are called Member Clusters (hereafter referred to as **M** Clusters). They are common KubeSphere clusters without the multi-cluster feature enabled. There can only be one H Cluster, while multiple M Clusters can exist at the same time. In a multi-cluster architecture, the network between the H Cluster and an M Cluster can be connected directly or through an agent. The network between M Clusters can even be completely isolated.

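Once the multi-cluster feature is enabled, the H Cluster represents each M Cluster as a custom resource. As a quick illustration (assuming the `Cluster` CRD that KubeSphere's multi-cluster component installs on the H Cluster), you can list the managed clusters there:

```bash
# List the clusters registered on the host cluster
kubectl get clusters
```
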
@@ -0,0 +1,16 @@
---
title: "Overview"
keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud'
description: 'Overview'

weight: 2335
---

Today, it is very common for organizations to run and manage multiple Kubernetes clusters on different cloud providers or infrastructures. Each Kubernetes cluster is a relatively self-contained unit, and the upstream community is striving to research and develop a multi-cluster management solution, such as [kubefed](https://github.com/kubernetes-sigs/kubefed).

The most common use cases of multi-cluster management include **service traffic load balancing, development and production isolation, decoupling of data processing and data storage, cross-cloud backup and disaster recovery, flexible allocation of computing resources, low-latency access for cross-region services, and avoiding vendor lock-in**.

KubeSphere is developed to address the multi-cluster and multi-cloud management challenges and implement the preceding user scenarios, providing users with a unified control plane to distribute applications and their replicas to multiple clusters, from public cloud to on-premises environments. KubeSphere also provides rich observability across multiple clusters, including centralized monitoring, logging, events, and auditing logs.

![multi-cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200902002319.png)

@@ -1,10 +0,0 @@
---
title: "Enable Multicluster Management"
keywords: "kubernetes, StorageClass, kubesphere, PVC"
description: "Enable Multicluster Management in KubeSphere"

linkTitle: "Enable Multicluster Management"
weight: 200
---

TBD

@@ -1,8 +0,0 @@
---
title: "Kubernetes Federation in KubeSphere"
keywords: "kubernetes, multicluster, kubesphere, federation, hybridcloud"
description: "Kubernetes and KubeSphere node management"

linkTitle: "Kubernetes Federation in KubeSphere"
weight: 100
---

@@ -1,10 +0,0 @@
---
title: "Introduction"
keywords: "kubernetes, multicluster, kubesphere, hybridcloud"
description: "Upgrade KubeSphere"

linkTitle: "Introduction"
weight: 50
---

TBD

@@ -0,0 +1,7 @@
---
linkTitle: "Remove Cluster"
weight: 3010

_build:
  render: false
---

@@ -0,0 +1,10 @@
---
title: "Remove a Cluster from KubeSphere"
keywords: 'kubernetes, kubesphere, multicluster, hybrid-cloud'
description: 'Overview'

weight: 2340
---

TBD

@@ -0,0 +1,144 @@
---
title: "KubeSphere App Store"
keywords: "Kubernetes, KubeSphere, app-store, OpenPitrix"
description: "How to Enable KubeSphere App Store"

linkTitle: "KubeSphere App Store"
weight: 3515
---

## What is KubeSphere App Store

As an open-source and app-centric container platform, KubeSphere provides users with a Helm-based app store for application lifecycle management, on the back of [OpenPitrix](https://github.com/openpitrix/openpitrix), an open-source web-based system to package, deploy and manage different types of apps. The KubeSphere App Store allows ISVs, developers and users to upload, test, deploy and release apps with just several clicks in a one-stop shop.

Internally, the KubeSphere App Store can serve as a place for different teams to share data, middleware, and office applications. Externally, it helps set industry standards for application building and delivery. By default, there are 15 apps in the App Store. After you enable this feature, you can add more apps with app templates.

![](https://ap3.qingstor.com/kubesphere-website/docs/app-store.png)

For more information, see App Store.

## Enable App Store before Installation

### Installing on Linux

When you install KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.

1. In the tutorial of [Installing KubeSphere on Linux](https://kubesphere-v3.netlify.app/docs/installing-on-linux/introduction/multioverview/), you create a default file **config-sample.yaml**. Modify the file by executing the following command:

   ```bash
   vi config-sample.yaml
   ```

   {{< notice note >}}

   If you adopt [All-in-one Installation](https://kubesphere-v3.netlify.app/docs/quick-start/all-in-one-on-linux/), you do not need to create a config-sample.yaml file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and want to get familiar with the system. If you want to enable the App Store in this mode (e.g. for testing purposes), refer to the following section to see how the App Store can be enabled after installation.

   {{</ notice >}}

2. In this file, navigate to `openpitrix` and change `false` to `true` for `enabled`. Save the file after you finish.

   ```yaml
   openpitrix:
     enabled: true # Change "false" to "true"
   ```

3. Create a cluster using the configuration file:

   ```bash
   ./kk create cluster -f config-sample.yaml
   ```

### Installing on Kubernetes

When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) for cluster settings. If you want to install the App Store, do not apply this file directly with `kubectl apply -f`; you need to edit it first.

1. In the tutorial of [Installing KubeSphere on Kubernetes](https://kubesphere-v3.netlify.app/docs/installing-on-kubernetes/introduction/overview/), you execute `kubectl apply -f` first for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml). After that, to enable the App Store, create a local file cluster-configuration.yaml.

   ```bash
   vi cluster-configuration.yaml
   ```

2. Copy all the content of the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml) and paste it into the local file just created.

3. In this local cluster-configuration.yaml file, navigate to `openpitrix` and enable the App Store by changing `false` to `true` for `enabled`. Save the file after you finish.

   ```yaml
   openpitrix:
     enabled: true # Change "false" to "true"
   ```

4. Execute the following command to start installation:

   ```bash
   kubectl apply -f cluster-configuration.yaml
   ```

## Enable App Store after Installation

1. Log in to the console as `admin`. Click **Platform** at the top-left corner and select **Clusters Management**.

   ![clusters-management](https://ap3.qingstor.com/kubesphere-website/docs/20200828113324.png)

2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.

   {{< notice info >}}

   A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.

   {{</ notice >}}

3. In **Resource List**, click the three dots on the right of `ks-installer` and select **Edit YAML**.

   ![edit-yaml](https://ap3.qingstor.com/kubesphere-website/docs/20200827182002.png)

4. In this YAML file, navigate to `openpitrix` and change `false` to `true` for `enabled`. After you finish, click **Update** at the bottom-right corner to save the configuration.

   ```yaml
   openpitrix:
     enabled: true # Change "false" to "true"
   ```

5. You can use the web kubectl to check the installation process by executing the following command:

   ```bash
   kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
   ```

   {{< notice tip >}}

   You can find the web kubectl tool by clicking the hammer icon at the bottom-right corner of the console.

   {{</ notice >}}

## Verify the Installation of the Component

{{< tabs >}}

{{< tab "Verify the Component in Dashboard" >}}

Go to **Components** and check the status of OpenPitrix. The page may look as follows:

![openpitrix-component](https://ap3.qingstor.com/kubesphere-website/docs/20200828115852.png)

{{</ tab >}}

{{< tab "Verify the Component through kubectl" >}}

Execute the following command to check the status of pods:

```bash
kubectl get pod -n openpitrix-system
```

The output may look as follows if the component runs successfully:

```bash
NAME                                                READY   STATUS      RESTARTS   AGE
hyperpitrix-generate-kubeconfig-pznht               0/2     Completed   0          1h6m
hyperpitrix-release-app-job-hzdjf                   0/1     Completed   0          1h6m
openpitrix-hyperpitrix-deployment-fb76645f4-crvmm   1/1     Running     0          1h6m
```

{{</ tab >}}

{{</ tabs >}}
