init documentation v3.0.0 for /en and /zh

Signed-off-by: FeynmanZhou <pengfeizhou@yunify.com>
This commit is contained in:
FeynmanZhou 2020-08-19 20:28:03 +08:00
parent aa3fe35d21
commit 0865ed2a64
241 changed files with 14175 additions and 2765 deletions


@ -0,0 +1,23 @@
---
title: "Application Store"
description: "Getting started with KubeSphere DevOps project"
layout: "single"
linkTitle: "Application Store"
weight: 4500
icon: "/images/docs/docs.svg"
---
## Installing KubeSphere and Kubernetes on Linux
In this chapter, we demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey helps you quickly build a production-ready cluster on a set of machines from zero to one, and also makes it easy to scale the cluster and install pluggable components on an existing architecture.
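For reference, a typical KubeKey run on a prepared Linux host looks roughly like the sketch below; the KubeKey, Kubernetes, and KubeSphere version numbers are assumptions and should be replaced with the releases you actually target.
```bash
# Download KubeKey (the VERSION value is an assumption; check the KubeKey releases for the latest)
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.0 sh -
chmod +x kk

# Provision Kubernetes and KubeSphere in one step (versions are assumptions)
./kk create cluster --with-kubernetes v1.18.6 --with-kubesphere v3.0.0
```
For a multi-node cluster, `./kk create config` generates a sample configuration file in which you declare the hosts and their roles before running `./kk create cluster -f config-sample.yaml`.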
## Most Popular Pages
Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}


@ -0,0 +1,7 @@
---
linkTitle: "Application Developer Guide"
weight: 2200
_build:
render: false
---


@ -0,0 +1,224 @@
---
title: "Air-Gapped Installation"
keywords: 'kubernetes, kubesphere, air gapped, installation'
description: 'How to install KubeSphere on air-gapped Linux machines'
weight: 2240
---
The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. This guide demonstrates how to install KubeSphere and Kubernetes in an air-gapped environment.
> Note: Dependency differences between operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
## Prerequisites
- If your machines are behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for details.
- Installer uses `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend adding disks of at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning, which is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare a [high availability configuration](../master-ha) before installation.
- Since air-gapped machines cannot connect to an apt or yum repository, please use clean Linux machines to avoid dependency problems.
## Step 1: Prepare Linux Hosts
The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
- Time synchronization is required across all nodes, otherwise the installation may not succeed.
- For `Ubuntu 16.04`, it is recommended to select `16.04.5`.
- If you are using `Ubuntu 18.04`, you need to use the `root` user.
- Ensure the disk of each node is at least 100G.
- Total CPU and memory of all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
The following example introduces the multi-node installation using three hosts, with the `master` node serving as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
> Note: KubeSphere supports a high-availability configuration of the Master and etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|
### Cluster Architecture
#### Single Master, Single Etcd, Two Nodes
![Architecture](/cluster-architecture.svg)
## Step 2: Download Installer Package
Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
```bash
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
```
## Step 3: Configure Host Template
> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any lines in the file.
> Note:
>
> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is still recommended to switch to the `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of the nodes; all host names must be lowercase.
### hosts.ini
```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
[local-registry]
master
[kube-master]
master
[kube-node]
node1
node2
[etcd]
master
[k8s-cluster:children]
kube-node
kube-master
```
> Note:
>
> - You need to replace each node's information, such as IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
> - Installer will use one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
> - The "master" node also takes the roles of master and etcd, so "master" is filled under the groups `[kube-master]` and `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
> - `ansible_host`: The name or address of the host to connect to.
> - `ip`: The IP of the host to connect to.
> - `ansible_user`: The default SSH user name to use.
> - `ansible_become_pass`: The privilege escalation password.
> - `ansible_ssh_pass`: The SSH password of the host when connecting as root. A non-root combination of these parameters is sketched below.
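For illustration, a hypothetical non-root configuration combining the parameters above might look like the following; the user name `ubuntu` and the `PASSWORD` placeholders are examples only, not values from this guide.
```ini
# Non-root sketch: each host logs in as a sudo-capable user and escalates with ansible_become_pass
[all]
master ansible_connection=local ip=192.168.0.1 ansible_user=ubuntu ansible_become_pass=PASSWORD
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_user=ubuntu ansible_become_pass=PASSWORD ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_user=ubuntu ansible_become_pass=PASSWORD ansible_ssh_pass=PASSWORD
```
The remaining groups (`[local-registry]`, `[kube-master]`, `[kube-node]`, `[etcd]`, `[k8s-cluster:children]`) stay the same as in the root example above.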
## Step 4: Enable All Components
> This step is for a complete installation. You can skip it if you choose a minimal installation.
Edit `conf/common.yaml` and change the following values to `true` (they are `false` by default).
```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Builtin logging only provides limited functions, so recommend to enable logging.
logging_enabled: true # Whether to install logging system
elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number
elasticsearch_data_replica: 2 # total number of data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false # Whether to install built-in Kibana
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address; KubeSphere supports integrating with an Elasticsearch cluster outside KubeSphere, which can reduce resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
#DevOps Configuration
devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address; KubeSphere supports integrating with a SonarQube instance outside the cluster, which can reduce resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
# The following components are all optional for KubeSphere,
# and can be turned on before installation or later by updating their values to true
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # For KubeSphere HPA to use
servicemesh_enabled: true # KubeSphere service mesh system(Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```
## Step 5: Install KubeSphere to Linux Machines
> Note:
>
> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
> - If you want to enable the pluggable components, modify `common.yaml` and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in `conf/common.yaml` to avoid conflicts, as sketched below.
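As a sketch, overriding the two subnets in `conf/common.yaml` could look like the following; the 10.234.x.x ranges are placeholders chosen only to illustrate non-overlapping values.
```yaml
# Example only: pick ranges that do not overlap your node IPs
kube_service_addresses: 10.234.0.0/18   # Cluster IP (Service) subnet
kube_pods_subnet: 10.234.64.0/18        # Pod subnet
```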
**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
```bash
cd ../scripts
./install.sh
```
**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume.
```bash
################################################
KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/ 2020-02-24
################################################
Please input an option: 2
```
**3.** Verify the multi-node installation
**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go.
```bash
successful!
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd
NOTE: Please modify the default password after login.
#####################################################
```
> Note: The information above is saved in a log file, which you can view by following the [guide](../verify-components).
**(2).** You can use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.</font>
![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
## Enable Pluggable Components
If you have already set up a minimal installation, you can still enable the pluggable components later by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
```bash
kubectl edit cm -n kubesphere-system ks-installer
```
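After saving the ConfigMap, you can follow the installer logs to watch the newly enabled components roll out; the `app=ks-install` label below is an assumption about how the installer pod is labeled in your release.
```bash
# Tail the ks-installer logs (pod label is an assumption; adjust if it differs in your version)
kubectl logs -n kubesphere-system \
  $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```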
## FAQ
If you have further questions please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).


@ -0,0 +1,7 @@
---
linkTitle: "Built-in Applications"
weight: 2200
_build:
render: false
---


@ -0,0 +1,224 @@
---
title: "Air-Gapped Installation"
keywords: 'kubernetes, kubesphere, air gapped, installation'
description: 'How to install KubeSphere on air-gapped Linux machines'
weight: 2240
---
The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. This guide demonstrates how to install KubeSphere and Kubernetes in an air-gapped environment.
> Note: Dependency differences between operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
## Prerequisites
- If your machines are behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for details.
- Installer uses `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend adding disks of at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning, which is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare a [high availability configuration](../master-ha) before installation.
- Since air-gapped machines cannot connect to an apt or yum repository, please use clean Linux machines to avoid dependency problems.
## Step 1: Prepare Linux Hosts
The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
- Time synchronization is required across all nodes, otherwise the installation may not succeed.
- For `Ubuntu 16.04`, it is recommended to select `16.04.5`.
- If you are using `Ubuntu 18.04`, you need to use the `root` user.
- Ensure the disk of each node is at least 100G.
- Total CPU and memory of all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
The following example introduces the multi-node installation using three hosts, with the `master` node serving as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
> Note: KubeSphere supports a high-availability configuration of the Master and etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|
### Cluster Architecture
#### Single Master, Single Etcd, Two Nodes
![Architecture](/cluster-architecture.svg)
## Step 2: Download Installer Package
Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
```bash
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
```
## Step 3: Configure Host Template
> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any lines in the file.
> Note:
>
> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is still recommended to switch to the `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of the nodes; all host names must be lowercase.
### hosts.ini
```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
[local-registry]
master
[kube-master]
master
[kube-node]
node1
node2
[etcd]
master
[k8s-cluster:children]
kube-node
kube-master
```
> Note:
>
> - You need to replace each node's information, such as IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
> - Installer will use one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
> - The "master" node also takes the roles of master and etcd, so "master" is filled under the groups `[kube-master]` and `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
> - `ansible_host`: The name or address of the host to connect to.
> - `ip`: The IP of the host to connect to.
> - `ansible_user`: The default SSH user name to use.
> - `ansible_become_pass`: The privilege escalation password.
> - `ansible_ssh_pass`: The SSH password of the host when connecting as root. A non-root combination of these parameters is sketched below.
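For illustration, a hypothetical non-root configuration combining the parameters above might look like the following; the user name `ubuntu` and the `PASSWORD` placeholders are examples only, not values from this guide.
```ini
# Non-root sketch: each host logs in as a sudo-capable user and escalates with ansible_become_pass
[all]
master ansible_connection=local ip=192.168.0.1 ansible_user=ubuntu ansible_become_pass=PASSWORD
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_user=ubuntu ansible_become_pass=PASSWORD ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_user=ubuntu ansible_become_pass=PASSWORD ansible_ssh_pass=PASSWORD
```
The remaining groups (`[local-registry]`, `[kube-master]`, `[kube-node]`, `[etcd]`, `[k8s-cluster:children]`) stay the same as in the root example above.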
## Step 4: Enable All Components
> This step is for a complete installation. You can skip it if you choose a minimal installation.
Edit `conf/common.yaml` and change the following values to `true` (they are `false` by default).
```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Builtin logging only provides limited functions, so recommend to enable logging.
logging_enabled: true # Whether to install logging system
elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number
elasticsearch_data_replica: 2 # total number of data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false # Whether to install built-in Kibana
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address; KubeSphere supports integrating with an Elasticsearch cluster outside KubeSphere, which can reduce resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
#DevOps Configuration
devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address; KubeSphere supports integrating with a SonarQube instance outside the cluster, which can reduce resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
# The following components are all optional for KubeSphere,
# and can be turned on before installation or later by updating their values to true
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # For KubeSphere HPA to use
servicemesh_enabled: true # KubeSphere service mesh system(Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```
## Step 5: Install KubeSphere to Linux Machines
> Note:
>
> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
> - If you want to enable the pluggable components, modify `common.yaml` and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in `conf/common.yaml` to avoid conflicts, as sketched below.
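As a sketch, overriding the two subnets in `conf/common.yaml` could look like the following; the 10.234.x.x ranges are placeholders chosen only to illustrate non-overlapping values.
```yaml
# Example only: pick ranges that do not overlap your node IPs
kube_service_addresses: 10.234.0.0/18   # Cluster IP (Service) subnet
kube_pods_subnet: 10.234.64.0/18        # Pod subnet
```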
**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
```bash
cd ../scripts
./install.sh
```
**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume.
```bash
################################################
KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/ 2020-02-24
################################################
Please input an option: 2
```
**3.** Verify the multi-node installation
**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go.
```bash
successful!
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd
NOTE: Please modify the default password after login.
#####################################################
```
> Note: The information above is saved in a log file, which you can view by following the [guide](../verify-components).
**(2).** You can use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.</font>
![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
## Enable Pluggable Components
If you have already set up a minimal installation, you can still enable the pluggable components later by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
```bash
kubectl edit cm -n kubesphere-system ks-installer
```
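After saving the ConfigMap, you can follow the installer logs to watch the newly enabled components roll out; the `app=ks-install` label below is an assumption about how the installer pod is labeled in your release.
```bash
# Tail the ks-installer logs (pod label is an assumption; adjust if it differs in your version)
kubectl logs -n kubesphere-system \
  $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```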
## FAQ
If you have further questions please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).


@ -0,0 +1,22 @@
---
title: "Cluster Administration"
description: "Help you to better understand KubeSphere with detailed graphics and contents"
layout: "single"
linkTitle: "Cluster Administration"
weight: 4100
icon: "/images/docs/docs.svg"
---
## Installing KubeSphere and Kubernetes on Linux
In this chapter, we demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey helps you quickly build a production-ready cluster on a set of machines from zero to one, and also makes it easy to scale the cluster and install pluggable components on an existing architecture.
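For reference, a typical KubeKey run on a prepared Linux host looks roughly like the sketch below; the KubeKey, Kubernetes, and KubeSphere version numbers are assumptions and should be replaced with the releases you actually target.
```bash
# Download KubeKey (the VERSION value is an assumption; check the KubeKey releases for the latest)
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.0 sh -
chmod +x kk

# Provision Kubernetes and KubeSphere in one step (versions are assumptions)
./kk create cluster --with-kubernetes v1.18.6 --with-kubesphere v3.0.0
```
For a multi-node cluster, `./kk create config` generates a sample configuration file in which you declare the hosts and their roles before running `./kk create cluster -f config-sample.yaml`.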
## Most Popular Pages
Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}


@ -0,0 +1,10 @@
---
title: "Nodes"
keywords: "kubernetes, StorageClass, kubesphere, PVC"
description: "Kubernetes Nodes Management"
linkTitle: "Nodes"
weight: 200
---
TBD


@ -0,0 +1,7 @@
---
linkTitle: "DevOps Administration"
weight: 2200
_build:
render: false
---


@ -0,0 +1,224 @@
---
title: "Role and Member Management"
keywords: 'kubernetes, kubesphere, air gapped, installation'
description: 'Role and Member Management'
weight: 2240
---
The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. This guide demonstrates how to install KubeSphere and Kubernetes in an air-gapped environment.
> Note: Dependency differences between operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
## Prerequisites
- If your machines are behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for details.
- Installer uses `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend adding disks of at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning, which is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare a [high availability configuration](../master-ha) before installation.
- Since air-gapped machines cannot connect to an apt or yum repository, please use clean Linux machines to avoid dependency problems.
## Step 1: Prepare Linux Hosts
The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
- Time synchronization is required across all nodes, otherwise the installation may not succeed.
- For `Ubuntu 16.04`, it is recommended to select `16.04.5`.
- If you are using `Ubuntu 18.04`, you need to use the `root` user.
- Ensure the disk of each node is at least 100G.
- Total CPU and memory of all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
The following example introduces the multi-node installation using three hosts, with the `master` node serving as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
> Note: KubeSphere supports a high-availability configuration of the Master and etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|
### Cluster Architecture
#### Single Master, Single Etcd, Two Nodes
![Architecture](/cluster-architecture.svg)
## Step 2: Download Installer Package
Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
```bash
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
```
## Step 3: Configure Host Template
> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any lines in the file.
> Note:
>
> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is still recommended to switch to the `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of the nodes; all host names must be lowercase.
### hosts.ini
```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
[local-registry]
master
[kube-master]
master
[kube-node]
node1
node2
[etcd]
master
[k8s-cluster:children]
kube-node
kube-master
```
> Note:
>
> - You need to replace each node's information, such as IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
> - Installer will use one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
> - The "master" node also takes the roles of master and etcd, so "master" is filled under the groups `[kube-master]` and `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
> - `ansible_host`: The name or address of the host to connect to.
> - `ip`: The IP of the host to connect to.
> - `ansible_user`: The default SSH user name to use.
> - `ansible_become_pass`: The privilege escalation password.
> - `ansible_ssh_pass`: The SSH password of the host when connecting as root. A non-root combination of these parameters is sketched below.
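For illustration, a hypothetical non-root configuration combining the parameters above might look like the following; the user name `ubuntu` and the `PASSWORD` placeholders are examples only, not values from this guide.
```ini
# Non-root sketch: each host logs in as a sudo-capable user and escalates with ansible_become_pass
[all]
master ansible_connection=local ip=192.168.0.1 ansible_user=ubuntu ansible_become_pass=PASSWORD
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_user=ubuntu ansible_become_pass=PASSWORD ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_user=ubuntu ansible_become_pass=PASSWORD ansible_ssh_pass=PASSWORD
```
The remaining groups (`[local-registry]`, `[kube-master]`, `[kube-node]`, `[etcd]`, `[k8s-cluster:children]`) stay the same as in the root example above.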
## Step 4: Enable All Components
> This step is for a complete installation. You can skip it if you choose a minimal installation.
Edit `conf/common.yaml` and change the following values to `true` (they are `false` by default).
```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Builtin logging only provides limited functions, so recommend to enable logging.
logging_enabled: true # Whether to install logging system
elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number
elasticsearch_data_replica: 2 # total number of data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false # Whether to install built-in Kibana
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address; KubeSphere supports integrating with an Elasticsearch cluster outside KubeSphere, which can reduce resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
#DevOps Configuration
devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address; KubeSphere supports integrating with a SonarQube instance outside the cluster, which can reduce resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
# The following components are all optional for KubeSphere,
# and can be turned on before installation or later by updating their values to true
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # For KubeSphere HPA to use
servicemesh_enabled: true # KubeSphere service mesh system(Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```
## Step 5: Install KubeSphere to Linux Machines
> Note:
>
> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
> - If you want to enable the pluggable components, modify `common.yaml` and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in `conf/common.yaml` to avoid conflicts, as sketched below.
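As a sketch, overriding the two subnets in `conf/common.yaml` could look like the following; the 10.234.x.x ranges are placeholders chosen only to illustrate non-overlapping values.
```yaml
# Example only: pick ranges that do not overlap your node IPs
kube_service_addresses: 10.234.0.0/18   # Cluster IP (Service) subnet
kube_pods_subnet: 10.234.64.0/18        # Pod subnet
```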
**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
```bash
cd ../scripts
./install.sh
```
**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume.
```bash
################################################
KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/ 2020-02-24
################################################
Please input an option: 2
```
**3.** Verify the multi-node installation
**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go.
```bash
successful!
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd
NOTE: Please modify the default password after login.
#####################################################
```
> Note: The information above is saved in a log file, which you can view by following the [guide](../verify-components).
**(2).** You can use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.</font>
![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
## Enable Pluggable Components
If you have already set up a minimal installation, you can still enable the pluggable components later by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
```bash
kubectl edit cm -n kubesphere-system ks-installer
```
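After saving the ConfigMap, you can follow the installer logs to watch the newly enabled components roll out; the `app=ks-install` label below is an assumption about how the installer pod is labeled in your release.
```bash
# Tail the ks-installer logs (pod label is an assumption; adjust if it differs in your version)
kubectl logs -n kubesphere-system \
  $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```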
## FAQ
If you have further questions please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).


@ -0,0 +1,8 @@
---
title: "StorageClass"
keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
description: "Kubernetes and KubeSphere node management"
linkTitle: "StorageClass"
weight: 100
---


@ -0,0 +1,23 @@
---
title: "DevOps User Guide"
description: "Getting started with KubeSphere DevOps project"
layout: "single"
linkTitle: "DevOps User Guide"
weight: 4400
icon: "/images/docs/docs.svg"
---
## Installing KubeSphere and Kubernetes on Linux
In this chapter, we demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey helps you quickly build a production-ready cluster on a set of machines from zero to one, and also makes it easy to scale the cluster and install pluggable components on an existing architecture.
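For reference, a typical KubeKey run on a prepared Linux host looks roughly like the sketch below; the KubeKey, Kubernetes, and KubeSphere version numbers are assumptions and should be replaced with the releases you actually target.
```bash
# Download KubeKey (the VERSION value is an assumption; check the KubeKey releases for the latest)
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.0 sh -
chmod +x kk

# Provision Kubernetes and KubeSphere in one step (versions are assumptions)
./kk create cluster --with-kubernetes v1.18.6 --with-kubesphere v3.0.0
```
For a multi-node cluster, `./kk create config` generates a sample configuration file in which you declare the hosts and their roles before running `./kk create cluster -f config-sample.yaml`.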
## Most Popular Pages
Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}


@ -0,0 +1,7 @@
---
linkTitle: "DevOps Administration"
weight: 2200
_build:
render: false
---


@ -0,0 +1,224 @@
---
title: "Role and Member Management"
keywords: 'kubernetes, kubesphere, air gapped, installation'
description: 'Role and Member Management'
weight: 2240
---
The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. This guide demonstrates how to install KubeSphere and Kubernetes in an air-gapped environment.
> Note: Dependency differences between operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
## Prerequisites
- If your machines are behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for details.
- Installer uses `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend adding disks of at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning, which is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare a [high availability configuration](../master-ha) before installation.
- Since air-gapped machines cannot connect to an apt or yum repository, please use clean Linux machines to avoid dependency problems.
## Step 1: Prepare Linux Hosts
The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
- Time synchronization is required across all nodes, otherwise the installation may not succeed.
- For `Ubuntu 16.04`, it is recommended to select `16.04.5`.
- If you are using `Ubuntu 18.04`, you need to use the `root` user.
- Ensure the disk of each node is at least 100G.
- Total CPU and memory of all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
The following example introduces the multi-node installation using three hosts, with the `master` node serving as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
> Note: KubeSphere supports a high-availability configuration of the Master and etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|
### Cluster Architecture
#### Single Master, Single Etcd, Two Nodes
![Architecture](/cluster-architecture.svg)
## Step 2: Download Installer Package
Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
```bash
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
```
## Step 3: Configure Host Template
> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any lines in the file.
> Note:
>
> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is still recommended to switch to the `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of the nodes; all host names must be lowercase.
### hosts.ini
```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
[local-registry]
master
[kube-master]
master
[kube-node]
node1
node2
[etcd]
master
[k8s-cluster:children]
kube-node
kube-master
```
> Note:
>
> - You need to replace each node's information, such as IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
> - Installer will use one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
> - The "master" node also takes the roles of master and etcd, so "master" is filled under the groups `[kube-master]` and `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
> - `ansible_host`: The name or address of the host to connect to.
> - `ip`: The IP of the host to connect to.
> - `ansible_user`: The default SSH user name to use.
> - `ansible_become_pass`: The privilege escalation password.
> - `ansible_ssh_pass`: The SSH password of the host when connecting as root. A non-root combination of these parameters is sketched below.
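For illustration, a hypothetical non-root configuration combining the parameters above might look like the following; the user name `ubuntu` and the `PASSWORD` placeholders are examples only, not values from this guide.
```ini
# Non-root sketch: each host logs in as a sudo-capable user and escalates with ansible_become_pass
[all]
master ansible_connection=local ip=192.168.0.1 ansible_user=ubuntu ansible_become_pass=PASSWORD
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_user=ubuntu ansible_become_pass=PASSWORD ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_user=ubuntu ansible_become_pass=PASSWORD ansible_ssh_pass=PASSWORD
```
The remaining groups (`[local-registry]`, `[kube-master]`, `[kube-node]`, `[etcd]`, `[k8s-cluster:children]`) stay the same as in the root example above.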
## Step 4: Enable All Components
> This step is for a complete installation. You can skip it if you choose a minimal installation.
Edit `conf/common.yaml` and change the following values to `true` (they are `false` by default).
```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Builtin logging only provides limited functions, so recommend to enable logging.
logging_enabled: true # Whether to install logging system
elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number
elasticsearch_data_replica: 2 # total number of data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false # Whether to install built-in Kibana
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address; KubeSphere supports integrating with an Elasticsearch cluster outside KubeSphere, which can reduce resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
#DevOps Configuration
devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address; KubeSphere supports integrating with a SonarQube instance outside the cluster, which can reduce resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
# The following components are all optional for KubeSphere,
# and can be turned on before installation or later by updating their values to true
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # For KubeSphere HPA to use
servicemesh_enabled: true # KubeSphere service mesh system(Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```
## Step 5: Install KubeSphere to Linux Machines
> Note:
>
> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
> - If you want to enable the pluggable components, modify `common.yaml` and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in `conf/common.yaml` to avoid conflicts, as sketched below.
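As a sketch, overriding the two subnets in `conf/common.yaml` could look like the following; the 10.234.x.x ranges are placeholders chosen only to illustrate non-overlapping values.
```yaml
# Example only: pick ranges that do not overlap your node IPs
kube_service_addresses: 10.234.0.0/18   # Cluster IP (Service) subnet
kube_pods_subnet: 10.234.64.0/18        # Pod subnet
```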
**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
```bash
cd ../scripts
./install.sh
```
**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume.
```bash
################################################
KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/ 2020-02-24
################################################
Please input an option: 2
```
**3.** Verify the multi-node installation
**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go.
```bash
successful!
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd
NOTE: Please modify the default password after login.
#####################################################
```
> Note: The information above is saved in a log file, which you can view by following the [guide](../verify-components).
**(2).** You can use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.</font>
![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
## Enable Pluggable Components
If you have already set up a minimal installation, you can still enable the pluggable components later by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
```bash
kubectl edit cm -n kubesphere-system ks-installer
```
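After saving the ConfigMap, you can follow the installer logs to watch the newly enabled components roll out; the `app=ks-install` label below is an assumption about how the installer pod is labeled in your release.
```bash
# Tail the ks-installer logs (pod label is an assumption; adjust if it differs in your version)
kubectl logs -n kubesphere-system \
  $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```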
## FAQ
If you have further questions please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).


@ -0,0 +1,7 @@
---
linkTitle: "DevOps Project Introduction"
weight: 2100
_build:
render: false
---


@ -0,0 +1,93 @@
---
title: "Introduction"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'KubeSphere Installation Overview'
linkTitle: "Introduction"
weight: 2110
---
[KubeSphere](https://kubesphere.io/) is an enterprise-grade multi-tenant container platform built on [Kubernetes](https://kubernetes.io). It provides an easy-to-use UI for users to manage application workloads and computing resources with a few clicks, which greatly reduces the learning curve and the complexity of daily work such as development, testing, operation and maintenance. KubeSphere aims to alleviate the pain points of Kubernetes including storage, network, security and ease of use, etc.
KubeSphere supports installation on cloud-hosted and on-premises Kubernetes clusters, e.g. native Kubernetes, GKE, EKS and RKE. It also supports installation on Linux hosts, including virtual machines and bare metal, provisioning a fresh Kubernetes cluster in the process. Both methods make it easy to install KubeSphere. Meanwhile, KubeSphere offers not only an online installer, but also an air-gapped installer for environments with no access to the internet.
KubeSphere is an open-source project on [GitHub](https://github.com/kubesphere). Thousands of users are using KubeSphere, and many of them run KubeSphere for their production workloads.
In summary, there are several installation options you can choose from. Please note that the options are not all mutually exclusive; for instance, you can deploy KubeSphere with minimal packages on an existing multi-node Kubernetes cluster in an air-gapped environment. The decision tree in the following graph may help you choose the right path for your situation.
- [All-in-One](../all-in-one): Install KubeSphere on a single node. It is only for users to quickly get familiar with KubeSphere.
- [Multi-Node](../multi-node): Install KubeSphere on multiple nodes. It is for testing or development.
- [Install KubeSphere on Air-Gapped Linux](../install-ks-on-linux-airgapped): All KubeSphere images are packaged into a single archive, which makes air-gapped installation on Linux machines convenient.
- [High Availability Multi-Node](../master-ha): Install a highly available KubeSphere on multiple nodes, which is suited for production environments.
- [KubeSphere on Existing K8s](../install-on-k8s): Deploy KubeSphere on your Kubernetes cluster including cloud-hosted services such as GKE, EKS, etc.
- [KubeSphere on Air-Gapped K8s](../install-on-k8s-airgapped): Install KubeSphere on a disconnected Kubernetes cluster.
- Minimal Packages: Only install minimal required system components of KubeSphere. The minimum of resource requirement is down to 1 core and 2G memory.
- [Full Packages](../complete-installation): Install all available system components of KubeSphere including DevOps, service mesh, application store, etc.
![Installer Options](https://pek3b.qingstor.com/kubesphere-docs/png/20200305093158.png)
## Before Installation
- As the installation pulls images and updates the operating system from the internet, your environment must have internet access. If not, you need to use the air-gapped installer instead.
- For all-in-one installation, the single node acts as both the master and the worker.
- For multi-node installation, you are asked to specify the node roles in the configuration file before installation.
- Your Linux hosts must have OpenSSH Server installed.
- Please check the [port requirements](../port-firewall) before installation.
## Quick Install For Development and Testing
KubeSphere has decoupled some components since v2.1.0. The installer only installs required components by default which brings the benefits of fast installation and minimal resource consumption. If you want to install any optional component, please check the following section [Pluggable Components Overview](../intro#pluggable-components-overview) for details.
The quick install of KubeSphere is only for development or testing since it uses local volumes for storage by default. If you want a production installation, please refer to the section [High Availability Installation for Production Environment](../intro#high-availability-installation-for-production-environment).
### 1. Install KubeSphere on Linux
- [All-in-One](../all-in-one): A hassle-free single-node installation with one click.
- [Multi-Node](../multi-node): It allows you to install KubeSphere on multiple instances using local volumes, which means you do not need to install a storage server such as Ceph or GlusterFS.
> Note: For air-gapped installation, please refer to [Install KubeSphere on Air-Gapped Linux Machines](../install-ks-on-linux-airgapped).
### 2. Install KubeSphere on Existing Kubernetes
You can install KubeSphere on your existing Kubernetes cluster. Please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
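As a rough sketch, a minimal installation on an existing cluster usually comes down to applying the ks-installer manifests; the release URL, file names, and version tag below are assumptions, so follow the linked guide for the authoritative steps for your version.
```bash
# File names and the version tag are assumptions; check the ks-installer releases for your target version
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml
```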
## High Availability Installation for Production Environment
### 1. Install HA KubeSphere on Linux
KubeSphere installer supports installing a highly available cluster for production with the prerequisites being a load balancer and persistent storage service set up in advance.
- [Persistent Service Configuration](../storage-configuration): By default, KubeSphere Installer uses [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning in the Kubernetes cluster, which is convenient for quickly setting up a testing environment. A production environment must have a storage server set up; please refer to [Persistent Service Configuration](../storage-configuration) for details.
- [Load Balancer Configuration for HA Install](../master-ha): Before you get started with multi-node installation in a production environment, you need to configure a load balancer. Either a cloud LB or `HAProxy + Keepalived` works for the installation.
### 2. Install HA KubeSphere on Existing Kubernetes
Before you install KubeSphere on existing Kubernetes, please check the prerequisites of the installation on Linux described above, and verify that your existing Kubernetes cluster satisfies them, i.e., a load balancer and a persistent storage service.
If your Kubernetes cluster is ready, please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
> You can also install KubeSphere on a cloud Kubernetes service, see [Installing KubeSphere on GKE cluster](../install-on-gke).
## Pluggable Components Overview
KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable any of them before or after installation. The installer does not install the pluggable components by default. Please check the guide [Enable Pluggable Components Installation](../pluggable-components) for the components you need.
![Pluggable Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191207140846.png)
## Storage Configuration Instruction
The following links explain how to configure different types of persistent storage services. Please refer to the [Storage Configuration Instruction](../storage-configuration) for detailed instructions on how to configure storage classes in KubeSphere.
- [NFS](https://kubernetes.io/docs/concepts/storage/volumes/#nfs)
- [GlusterFS](https://www.gluster.org/)
- [Ceph RBD](https://ceph.com/)
- [QingCloud Block Storage](https://docs.qingcloud.com/product/storage/volume/)
- [QingStor NeonSAN](https://docs.qingcloud.com/product/storage/volume/super_high_performance_shared_volume/)
## Add New Nodes
KubeSphere Installer allows you to scale the number of nodes, see [Add New Nodes](../add-nodes).
## Uninstall
Uninstall will remove KubeSphere from the machines. This operation is irreversible and dangerous. Please check [Uninstall](../uninstall).

View File

@ -0,0 +1,23 @@
---
title: "Installing on Kubernetes"
description: "Help you to better understand KubeSphere with detailed graphics and contents"
layout: "single"
linkTitle: "Installing on Kubernetes"
weight: 2500
icon: "/images/docs/docs.svg"
---
## Installing KubeSphere and Kubernetes on Linux
In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey helps you quickly build a production-ready cluster from zero to one on a set of machines. It also helps you easily scale the cluster and install pluggable components on an existing cluster.
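As a rough illustration of what a KubeKey-based installation looks like, a typical invocation is sketched below. The flags and version numbers here are only examples; follow the specific guides in this chapter for the authoritative commands for your environment.

```bash
# Illustrative only: create a new cluster in one step (version numbers are examples)
./kk create cluster --with-kubernetes v1.17.9 --with-kubesphere v3.0.0

# For multi-node setups, generate a configuration file first and edit it with your hosts
./kk create config --with-kubesphere v3.0.0
```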
## Most Popular Pages
Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}

View File

@ -0,0 +1,116 @@
---
title: "All-in-One Installation"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'The guide for installing all-in-one KubeSphere for developing or testing'
linkTitle: "All-in-One"
weight: 2210
---
For those who are new to KubeSphere and looking for a quick way to discover the platform, the all-in-one mode is the best choice for installing it, since it provides a one-click, hassle-free installation that provisions KubeSphere and Kubernetes on your machine.
- <font color=red>The following instructions are for the default installation without enabling any optional components, as we have made them pluggable since v2.1.0. If you want to enable any of them, please see the section [Enable Pluggable Components](../all-in-one#enable-pluggable-components) below.</font>
- <font color=red>If your machine has >= 8 cores and >= 16 GB of memory, we recommend installing the full package of KubeSphere by [enabling optional components](../complete-installation)</font>.
## Prerequisites
If your machine is behind a firewall, you need to open the required ports; see [Ports Requirement](../port-firewall) for more information.
## Step 1: Prepare Linux Machine
The following describes the requirements of hardware and operating system.
- For `Ubuntu 16.04` OS, it is recommended to select the latest `16.04.5`.
- If you are using Ubuntu 18.04, you need to use the root user to install.
- If the Debian system does not have the sudo command installed, you need to execute the `apt update && apt install sudo` command using root before installation.
### Hardware Recommendation
| System | Minimum Requirements |
| ------- | ----------- |
| CentOS 7.4 ~ 7.7 (64 bit) | CPU: 2 cores, Memory: 4 GB, Disk space: 100 GB |
| Ubuntu 16.04/18.04 LTS (64 bit) | CPU: 2 cores, Memory: 4 GB, Disk space: 100 GB |
| Red Hat Enterprise Linux Server 7.4 (64 bit) | CPU: 2 cores, Memory: 4 GB, Disk space: 100 GB |
| Debian Stretch 9.5 (64 bit) | CPU: 2 cores, Memory: 4 GB, Disk space: 100 GB |
## Step 2: Download Installer Package
Execute the following commands to download Installer 2.1.1 and unpack it.
```bash
curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/scripts
```
## Step 3: Get Started with Installation
You only need to execute a single command as follows. The installer will handle everything automatically, including installing and updating dependency packages, installing Kubernetes (default version 1.16.7), the storage service, and so on.
> Note:
>
> - Generally speaking, do not modify any configuration.
> - KubeSphere installs `calico` by default. If you would like to use a different network plugin, you are allowed to change the configuration in `conf/common.yaml`. You are also allowed to modify other configurations such as storage class, pluggable components, etc.
> - The default storage class is [OpenEBS](https://openebs.io/), which provisions [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local)-based persistent storage. OpenEBS supports [dynamically provisioned PVs](https://docs.openebs.io/docs/next/uglocalpv.html#Provision-OpenEBS-Local-PV-based-on-hostpath). It will be installed automatically for testing purposes.
> - Please refer to [storage configurations](../storage-configuration) for the supported storage classes.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts (see the quick check below).
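To quickly confirm that the node's addresses do not overlap with the default subnets, a check along the following lines may help (purely illustrative):

```bash
# List this node's IPv4 addresses; none of them should start with 10.233.
# (a conservative check: the two default /18 subnets cover 10.233.0.0 - 10.233.127.255)
ip -4 addr show | awk '/inet /{print $2}'
ip -4 addr show | grep -q 'inet 10\.233\.' && echo "WARNING: node IP may overlap the default subnets"
```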
**1.** Execute the following command:
```bash
./install.sh
```
**2.** Enter `1` to select `All-in-one` mode and type `yes` if your machine satisfies the requirements to start:
```bash
################################################
KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/ 2020-02-24
################################################
Please input an option: 1
```
**3.** Verify if KubeSphere is installed successfully or not
**(1).** If "Successful" is returned after the installation completes, the installation succeeded. The console service is exposed through NodePort 30880 by default. You may need to bind an EIP and configure port forwarding in your environment for outside users to access it, and make sure the related firewall rules allow the traffic.
```bash
successsful!
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.8:30880
Account: admin
Password: P@88w0rd
NOTE: Please modify the default password after login.
#####################################################
```
> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
**(2).** You will be able to use the default account and password to log in to the console and take a tour of KubeSphere.
<font color=red>Note: After logging in to the console, please verify the status of the service components in "Cluster Status". If any service is not ready, please wait patiently until all components are up and running (a command-line check follows the screenshot below).</font>
![Dashboard](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
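If you prefer the command line, a rough equivalent of the "Cluster Status" check is to list the pods and wait until they all settle:

```bash
# All KubeSphere-related pods should eventually reach the Running or Completed state
kubectl get pods --all-namespaces
```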
## Enable Pluggable Components
The guide above only performs a minimal installation by default. You can execute the following command to open the ConfigMap and enable pluggable components. Make sure your cluster has enough CPU and memory in advance; see [Enable Pluggable Components](../pluggable-components).
```bash
kubectl edit cm -n kubesphere-system ks-installer
```
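After saving the ConfigMap, you might watch the installer's progress. The sketch below assumes ks-installer runs as a Deployment in the kubesphere-system namespace; adjust the command if your setup differs.

```bash
# Follow the ks-installer logs until the newly enabled components finish deploying
kubectl logs -n kubesphere-system deploy/ks-installer -f
```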
## FAQ
The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also please read the [FAQ of installation](../../faq/faq-install).
If you have any further questions please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).

View File

@ -0,0 +1,76 @@
---
title: "Install All Optional Components"
keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix'
description: 'Install KubeSphere with all optional components enabled on Linux machine'
weight: 2260
---
The installer only installs the required components (i.e. a minimal installation) by default since v2.1.0. Other components are designed to be pluggable, which means you can enable any of them before or after installation. If your machines meet the following minimum requirements, we recommend you **enable all components before installation**. A complete installation gives you the opportunity to comprehensively discover the container platform.
<font color="red">
Minimum Requirements
- CPU: 8 cores in total of all machines
- Memory: 16 GB in total of all machines
</font>
> Note:
>
> - If your machines do not meet the minimum requirements of a complete installation, you can still enable any of the components as needed. Please refer to [Enable Pluggable Components Installation](../pluggable-components).
> - It works for [All-in-One](../all-in-one) and [Multi-Node](../multi-node).
This tutorial will walk you through how to enable all components of KubeSphere.
## Download Installer Package
If you do not have the package yet, please run the following commands to download Installer 2.1.1 and unpack it, then enter the `conf` folder.
```bash
curl -L https://kubesphere.io/download/stable/v2.1.1 > installer.tar.gz \
&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
```
## Enable All Components
Edit `conf/common.yaml` and set the following values to `true` (they are `false` by default).
```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Builtin logging only provides limited functions, so recommend to enable logging.
logging_enabled: true # Whether to install logging system
elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number
elasticsearch_data_replica: 2 # total number of data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false                    # Whether to install the built-in Kibana
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address, KubeSphere supports integrate with Elasticsearch outside the cluster, which can reduce the resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
#DevOps Configuration
devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address, KubeSphere supports integrate with SonarQube outside the cluster, which can reduce the resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
# Following components are all optional for KubeSphere,
# Which could be turned on to install it before installation or later by updating its value to true
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # For KubeSphere HPA to use
servicemesh_enabled: true # KubeSphere service mesh system(Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```
Save it, then you can continue the installation process.
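If you prefer not to edit the file by hand, the flags shown above can also be flipped with a small script. This is only a sketch: it assumes each flag sits at the start of a line exactly as in the sample above, so review the result before continuing.

```bash
# Back up the configuration, then flip the optional components to true
cp conf/common.yaml conf/common.yaml.bak
for key in logging_enabled devops_enabled sonarqube_enabled openpitrix_enabled \
           metrics_server_enabled servicemesh_enabled notification_enabled alerting_enabled; do
  sed -i "s/^${key}: false/${key}: true/" conf/common.yaml
done

# Review the result before continuing with the installation
grep -n '_enabled' conf/common.yaml
```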

View File

@ -0,0 +1,224 @@
---
title: "Air-Gapped Installation"
keywords: 'kubernetes, kubesphere, air gapped, installation'
description: 'How to install KubeSphere on air-gapped Linux machines'
weight: 2240
---
The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.
> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
## Prerequisites
- If your machine is behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for more information.
- The installer will use `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend adding additional storage with at least 100 GB mounted at `/var/lib/docker` and `/mnt/registry` respectively; use the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference (a quick disk-space check follows this list).
- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure supported persistent storage service](../storage-configuration) and prepare [high availability configuration](../master-ha) before installation.
- Since air-gapped machines cannot connect to an apt or yum source, please use clean Linux machines to avoid dependency problems.
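A quick way to confirm that enough disk space is available where the installer and the local registry will write might look like this (mount the directories first if they do not exist yet):

```bash
# Show free space for the Docker data directory and the local registry mount point
df -h /var/lib/docker /mnt/registry
```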
## Step 1: Prepare Linux Hosts
The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
- Time synchronization is required across all nodes, otherwise the installation may not succeed;
- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`;
- If you are using `Ubuntu 18.04`, you need to use the user `root`.
- Ensure your disk of each node is at least 100G.
- CPU and memory in total of all machines: 2 cores and 4 GB for minimal installation; 8 cores and 16 GB for complete installation.
The following section introduces multi-node installation with an example of three hosts, taking the `master` host as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guide.
| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|
### Cluster Architecture
#### Single Master, Single Etcd, Two Nodes
![Architecture](/cluster-architecture.svg)
## Step 2: Download Installer Package
Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
```bash
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
```
## Step 3: Configure Host Template
> This step is only for multi-node installation, you can skip this step if you choose all-in-one installation.
Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
> Note:
>
> - If you use non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, you need to refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of each node and all host names should be in lowercase.
### hosts.ini
```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
[local-registry]
master
[kube-master]
master
[kube-node]
node1
node2
[etcd]
master
[k8s-cluster:children]
kube-node
kube-master
```
> Note:
>
> - You need to replace each node information such as IP, password with real values in the group `[all]`. The master node is the taskbox so you do not need to add password field here.
> - Installer will use a node as the local registry for docker images, defaults to "master" in the group `[local-registry]`.
> - The "master" node also takes the roles of master and etcd, so "master" is filled in under the group `[kube-master]` and the group `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host, "local" in the example above means local connection.
> - `ansible_host`: The IP address or hostname of the host to connect to.
> - `ip`: The ip of the host to be connected.
> - `ansible_user`: The default ssh user name to use.
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
> - `ansible_ssh_pass`: The password of the host to be connected using root.
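Before running the installer, it can save time to confirm from the taskbox that every node in `[all]` is reachable over SSH. The IP addresses below come from the example above and should be replaced with your own.

```bash
# Each node should return its hostname and current time (also a rough time-sync check)
for ip in 192.168.0.2 192.168.0.3; do
  ssh -o ConnectTimeout=5 root@"$ip" 'hostname; date' || echo "cannot reach $ip over SSH"
done
```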
## Step 4: Enable All Components
> This step is for a complete installation. You can skip it if you choose a minimal installation.
Edit `conf/common.yaml` and set the following values to `true` (they are `false` by default).
```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Builtin logging only provides limited functions, so recommend to enable logging.
logging_enabled: true # Whether to install logging system
elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number
elasticsearch_data_replica: 2 # total number of data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false                    # Whether to install the built-in Kibana
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address, KubeSphere supports integrate with Elasticsearch outside the cluster, which can reduce the resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
#DevOps Configuration
devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address, KubeSphere supports integrate with SonarQube outside the cluster, which can reduce the resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
# Following components are all optional for KubeSphere,
# Which could be turned on to install it before installation or later by updating its value to true
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # For KubeSphere HPA to use
servicemesh_enabled: true # KubeSphere service mesh system(Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```
## Step 5: Install KubeSphere to Linux Machines
> Note:
>
> - Generally, you can install KubeSphere without any modification; it will perform a minimal installation by default.
> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environment, please [configure supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not use the two IP range. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
**1.** Enter `scripts` folder, and execute `install.sh` using `root` user:
```bash
cd ../scripts
./install.sh
```
**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume.
```bash
################################################
KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/ 2020-02-24
################################################
Please input an option: 2
```
**3.** Verify the multi-node installation
**(1).** If "Successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.
```bash
successsful!
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd
NOTE: Please modify the default password after login.
#####################################################
```
> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
<font color=red>Note: After logging in to the console, please verify the status of the service components in "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.</font>
![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
## Enable Pluggable Components
If you have already set up a minimal installation, you can still edit the ConfigMap of ks-installer using the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
```bash
kubectl edit cm -n kubesphere-system ks-installer
```
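Once the ConfigMap change is applied, one way to confirm that the newly enabled components come up is to watch their namespaces. The namespace names below are the usual ones for these components; adjust them if your setup differs.

```bash
# Pods of the optional components typically land in these namespaces
kubectl get pods -n kubesphere-devops-system
kubectl get pods -n openpitrix-system
kubectl get pods -n istio-system
```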
## FAQ
If you have further questions please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).

View File

@ -0,0 +1,152 @@
---
title: "High Availability Configuration"
keywords: "kubesphere, kubernetes, docker,installation, HA, high availability"
description: "The guide for installing a high availability of KubeSphere cluster"
weight: 2230
---
## Introduction
[Multi-node installation](../multi-node) can help you quickly set up a single-master cluster on multiple machines for development and testing. However, for production we need to consider the high availability of the cluster. Since the key components, i.e. kube-apiserver, kube-scheduler, and kube-controller-manager, run on a single master node, Kubernetes and KubeSphere will be unavailable while that master is down. Therefore we need to set up a highly available cluster by provisioning load balancers and multiple masters. You can use any cloud load balancer or any hardware load balancer (e.g. F5). In addition, Keepalived and HAProxy are also an alternative for creating such a high-availability cluster.
This document walks you through an example of how to create two [QingCloud Load Balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal and the external load balancer respectively, and how to configure the high availability of masters and etcd using these load balancers.
## Prerequisites
- Please make sure that you already read [Multi-Node installation](../multi-node). This document only demonstrates how to configure load balancers.
- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or follow the guide of any other cloud provider to create load balancers.
## Architecture
This example prepares six machines of CentOS 7.5. We will create two load balancers, and deploy three masters and Etcd nodes on three of the machines. You can configure these masters and Etcd nodes in `conf/hosts.ini`.
![Master and etcd node high availability architecture](https://pek3b.qingstor.com/kubesphere-docs/png/20200307215924.png)
## Install HA Cluster
### Step 1: Create Load Balancers
This step briefly shows an example of creating a load balancer on QingCloud platform.
#### Create an Internal Load Balancer
1.1. Log in [QingCloud Console](https://console.qingcloud.com/login) and select **Network & CDN → Load Balancers**, then click on the create button and fill in the basic information.
1.2. Choose the VxNet that your machines were created in from the **Network** dropdown list; here it is `kube`. Other settings can keep the default values as shown below. Click **Submit** to complete the creation.
![Create Internal LB on QingCloud](https://pek3b.qingstor.com/kubesphere-docs/png/20200215224125.png)
1.3. Drill into the detail page of the load balancer, then create a listener that listens to the port `6443` of the `TCP` protocol.
- Name: Define a name for this Listener
- Listener Protocol: Select `TCP` protocol
- Port: `6443`
- Load mode: `Poll`
> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that the port `6443` has been added to the firewall rules and the external traffic can pass through `6443`. Otherwise, the installation will fail.
![Add Listener to LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225205.png)
1.4. Click **Add Backend**, choose the VxNet `kube` that we chose. Then click on the button **Advanced Search** and choose the three master nodes under the VxNet and set the port to `6443` which is the default secure port of api-server.
Click **Submit** when you are done.
![Choose Backends](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225550.png)
1.5. Click on the button **Apply Changes** to activate the configurations. At this point, you can find the three masters have been added as the backend servers of the listener that is behind the internal load balancer.
> Please note: The status of all masters might show `Not available` after you add them as backends. This is normal since the port `6443` of the api-server is not active on the masters yet. The status will change to `Active` and the api-server port will be exposed after the installation completes, which means the internal load balancer you configured works as expected.
![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215230107.png)
#### Create an External Load Balancer
You need to create an EIP in advance.
1.6. Similarly, create an external load balancer without joining any network, but associate the EIP that you created to this load balancer.
1.7. Enter the load balancer detail page and create a listener that listens on port `30880` of the `HTTP` protocol, which is the NodePort of the KubeSphere console.
> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that the port `30880` has been added to the firewall rules and external traffic can pass through `30880`. Otherwise, the installation will fail.
![Create external LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232114.png)
1.8. Click **Add Backend**, then choose the six machines within the VxNet `kube` on which we are going to install KubeSphere, and set the port to `30880`.
Click **Submit** when you are done.
1.9. Click on the button **Apply Changes** to activate the configurations. At this point, you can find the six machines have been added as the backend servers of the listener that is behind the external load balancer.
![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232445.png)
### Step 2: Modify hosts.ini
Go to the taskbox where you downloaded the installer by following the [Multi-node Installation](../multi-node) and complete the following configurations.
| **Parameter** | **Description** |
|--------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `[all]` | node information. Use the following syntax if you run installation as `root` user: <br> - `<node_name> ansible_connection=<host> ip=<ip_address>` <br> - `<node_name> ansible_host=<ip_address> ip=<ip_address> ansible_ssh_pass=<pwd>` <br> If you log in as a non-root user, use the syntax: <br> - `<node_name> ansible_connection=<host> ip=<ip_address> ansible_user=<user> ansible_become_pass=<pwd>` |
| `[kube-master]` | master node names |
| `[kube-node]` | worker node names |
| `[etcd]` | etcd node names. The number of `etcd` nodes needs to be odd. |
| `[k8s-cluster:children]` | group names of `[kube-master]` and `[kube-node]` |
We use **CentOS 7.5** with `root` user to install an HA cluster. Please see the following configuration as an example:
> Note:
> <br>
> If the _taskbox_ cannot establish an `ssh` connection with the rest of the nodes, try to use the non-root user configuration.
#### hosts.ini example
```ini
[all]
master1 ansible_connection=local ip=192.168.0.1
master2 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
master3 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
node1 ansible_host=192.168.0.4 ip=192.168.0.4 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.5 ip=192.168.0.5 ansible_ssh_pass=PASSWORD
node3 ansible_host=192.168.0.6 ip=192.168.0.6 ansible_ssh_pass=PASSWORD
[kube-master]
master1
master2
master3
[kube-node]
node1
node2
node3
[etcd]
master1
master2
master3
[k8s-cluster:children]
kube-node
kube-master
```
### Step 3: Configure the Load Balancer Parameters
Besides configuring the `common.yaml` by following the [Multi-node Installation](../multi-node), you need to modify the load balancer information in the `common.yaml`. Assume the **VIP** address and listening port of the **internal load balancer** are `192.168.0.253` and `6443`, then you can refer to the following example.
> - Note that address and port should be indented by two spaces in `common.yaml`, and the address should be VIP.
> - The domain name of the load balancer is "lb.kubesphere.local" by default for internal access. If you need to change the domain name, please uncomment and modify it.
#### The configuration sample in common.yaml
```yaml
## External LB example config
## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
loadbalancer_apiserver:
address: 192.168.0.253
port: 6443
```
Finally, please refer to the [guide](../storage-configuration) to configure the persistent storage service in `common.yaml`. You are then ready to install the highly available KubeSphere cluster.
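Once the installation finishes, a quick probe against the VIP can confirm that the internal load balancer actually forwards to the kube-apiservers. The address and port below are the ones from the sample above; substitute your own.

```bash
# An HTTP response (even 401/403) from the VIP proves the listener and backends are wired up
curl -k https://192.168.0.253:6443/version
```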

View File

@ -0,0 +1,176 @@
---
title: "Multi-node Installation"
keywords: 'kubesphere, kubernetes, docker, kubesphere installer'
description: 'The guide for installing KubeSphere on Multi-Node in development or testing environment'
weight: 2220
---
`Multi-Node` installation enables installing KubeSphere on multiple nodes. Typically, any one node is used as the _taskbox_ to run the installation task. Please note that `ssh` communication must be established between the taskbox and the other nodes.
- <font color=red>The following instructions are for the default installation without enabling any optional components as we have made them pluggable since v2.1.0. If you want to enable any one, please read [Enable Pluggable Components](../pluggable-components).</font>
- <font color=red>If your machines in total have >= 8 cores and >= 16G memory, we recommend you to install the full package of KubeSphere by [Enabling Optional Components](../complete-installation)</font>.
- <font color=red> The installation time depends on your network bandwidth, your computer configuration, the number of nodes, etc. </font>
## Prerequisites
If your machine is behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for more information.
## Step 1: Prepare Linux Hosts
The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
- Time synchronization is required across all nodes, otherwise the installation may not succeed (a quick check sketch follows this list);
- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`;
- If you are using `Ubuntu 18.04`, you need to use the user `root`;
- If the Debian system does not have the sudo command installed, you need to execute `apt update && apt install sudo` command using root before installation.
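For the time-synchronization requirement, a quick way to check each node on systemd-based distributions is shown below:

```bash
# "synchronized: yes" indicates NTP/chrony is keeping the clock in sync
timedatectl status | grep -i 'synchronized'
```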
### Hardware Recommendation
- KubeSphere can be installed on any cloud platform.
- The installation speed can be accelerated by increasing network bandwidth.
- If you choose air-gapped installation, ensure your disk of each node is at least 100G.
| System | Minimum Requirements (Each node) |
| --- | --- |
| CentOS 7.4 ~ 7.7 (64 bit) | CPU: 2 cores, Memory: 4 GB, Disk space: 40 GB |
| Ubuntu 16.04/18.04 LTS (64 bit) | CPU: 2 cores, Memory: 4 GB, Disk space: 40 GB |
| Red Hat Enterprise Linux Server 7.4 (64 bit) | CPU: 2 cores, Memory: 4 GB, Disk space: 40 GB |
| Debian Stretch 9.5 (64 bit) | CPU: 2 cores, Memory: 4 GB, Disk space: 40 GB |
The following section introduces multi-node installation with an example of three hosts, taking the `master` host as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guide.
| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|
### Cluster Architecture
#### Single Master, Single Etcd, Two Nodes
![Architecture](/cluster-architecture.svg)
## Step 2: Download Installer Package
**1.** Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
```bash
curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
```
**2.** Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
> Note:
>
> - If you use non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, you need to refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of each node and all host names should be in lowercase.
### hosts.ini
```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
[kube-master]
master
[kube-node]
node1
node2
[etcd]
master
[k8s-cluster:children]
kube-node
kube-master
```
> Note:
>
> - You need to replace each node information such as IP, password with real values in the group `[all]`. The master node is the taskbox so you do not need to add password field here.
> - The "master" node also takes the roles of master and etcd, so "master" is filled in under the group `[kube-master]` and the group `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host, "local" in the example above means local connection.
> - `ansible_host`: The IP address or hostname of the host to connect to.
> - `ip`: The ip of the host to be connected.
> - `ansible_user`: The default ssh user name to use.
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
> - `ansible_ssh_pass`: The password of the host to be connected using root.
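With `hosts.ini` filled in, it is worth confirming from the taskbox that the worker nodes are reachable and that their hostnames are lowercase and match the inventory. The IP addresses below are from the example above.

```bash
# The hostnames returned here should be lowercase and match the names used in hosts.ini
for ip in 192.168.0.2 192.168.0.3; do
  ssh -o ConnectTimeout=5 root@"$ip" hostname || echo "cannot reach $ip over SSH"
done
```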
## Step 3: Install KubeSphere to Linux Machines
> Note:
>
> - Generally, you can install KubeSphere without any modification; it will perform a minimal installation by default.
> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environment, please [configure supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not use the two IP range. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
**1.** Enter `scripts` folder, and execute `install.sh` using `root` user:
```bash
cd ../scripts
./install.sh
```
**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume.
```bash
################################################
KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/ 2020-02-24
################################################
Please input an option: 2
```
**3.** Verify the multi-node installation
**(1).** If "Successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.
```bash
successsful!
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd
NOTE: Please modify the default password after login.
#####################################################
```
> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
<font color=red>Note: After logging in to the console, please verify the status of the service components in "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.</font>
![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
## FAQ
The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud, Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also please read the [FAQ of installation](../../faq/faq-install).
If you have any further questions please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).

View File

@ -0,0 +1,157 @@
---
title: "StorageClass Configuration"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Instructions for Setting up StorageClass for KubeSphere'
weight: 2250
---
Currently, Installer supports the following [Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/), providing persistent storage service for KubeSphere (more storage classes will be supported soon).
- NFS
- Ceph RBD
- GlusterFS
- QingCloud Block Storage
- QingStor NeonSAN
- Local Volume (for development and test only)
The versions of storage systems and corresponding CSI plugins in the table listed below have been well tested.
| **Name** | **Version** | **Reference** |
| ----------- | --- |---|
| Ceph RBD Server | v0.94.10 | For development and testing, refer to [Install Ceph Storage Server](/zh-CN/appendix/ceph-ks-install/) for details. Please refer to the [Ceph Documentation](http://docs.ceph.com/docs/master/) for production. |
| Ceph RBD Client | v12.2.5 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [Ceph RBD](../storage-configuration/#ceph-rbd). |
| GlusterFS Server | v3.7.6 | For development and testing, refer to [Deploying GlusterFS Storage Server](/zh-CN/appendix/glusterfs-ks-install/) for details. Please refer to the [Gluster Documentation](https://www.gluster.org/install/) or [Gluster Documentation](http://gluster.readthedocs.io/en/latest/Install-Guide/Install/) for production. Note you need to install [Heketi Manager (V3.0.0)](https://github.com/heketi/heketi/tree/master/docs/admin). |
| GlusterFS Client | v3.12.10 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [GlusterFS](../storage-configuration/#glusterfs). |
| NFS Client | v3.1.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared the NFS storage server. Please see [NFS Client](../storage-configuration/#nfs). |
| QingCloud-CSI | v0.2.0.1 | You need to configure the corresponding parameters in `common.yaml` before installing KubeSphere. Please refer to [QingCloud CSI](../storage-configuration/#qingcloud-csi) for details. |
| NeonSAN-CSI | v0.3.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared the QingStor NeonSAN storage server. Please see [NeonSAN-CSI](../storage-configuration/#neonsan-csi). |
> Note: You are only allowed to set ONE default storage class in the cluster. To specify a default storage class, make sure no default storage class already exists in the cluster.
## Storage Configuration
After preparing the storage server, you need to refer to the parameters description in the following table. Then modify the corresponding configurations in `conf/common.yaml` accordingly.
The following describes the storage configuration in `common.yaml`.
> Note: Local Volume is configured as the default storage class in `common.yaml` by default. If you are going to set other storage class as the default, disable the Local Volume and modify the configuration for other storage class.
### Local Volume (For developing or testing only)
A [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) represents a mounted local storage device such as a disk, partition or directory. Local volumes can only be used as a statically created PersistentVolume. We recommend using Local volumes for testing or development only, since they make installing KubeSphere quick and easy without the effort of setting up a persistent storage server. Refer to the following table for the definitions in `conf/common.yaml`.
| **Local volume** | **Description** |
| --- | --- |
| local\_volume\_provisioner\_enabled | Whether to use Local as the persistent storage, defaults to true |
| local\_volume\_provisioner\_storage\_class | Storage class name, default value: local |
| local\_volume\_is\_default\_class | Whether to set Local as the default storage class, defaults to true.|
### NFS
An NFS volume allows an existing NFS (Network File System) share to be mounted into your Pod. NFS can be configured in `conf/common.yaml`. Note you need to prepare NFS server in advance.
| **NFS** | **Description** |
| --- | --- |
| nfs\_client\_enable | Whether to use NFS as the persistent storage, defaults to false |
| nfs\_client\_is\_default\_class | Whether to set NFS as default storage class, defaults to false. |
| nfs\_server | The NFS server address, either IP or Hostname |
| nfs\_path | NFS shared directory, which is the file directory shared on the server, see [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/volumes/#nfs) |
|nfs\_vers3\_enabled | Specifies which version of the NFS protocol to use. Defaults to false, which means v4; set it to true to use v3 |
|nfs_archiveOnDelete | Whether to archive the PVC's data on deletion. When set to false, the data will be automatically removed from `oldPath` |
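As a concrete starting point, the NFS parameters listed above live in the installer configuration. The command below only locates them; the server address and shared path still have to be filled in by hand (this is a sketch, and the exact line layout of your `common.yaml` may differ).

```bash
# Show the NFS-related settings that need to be edited before installation
grep -n 'nfs_' conf/common.yaml
```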
### Ceph RBD
The open source [Ceph RBD](https://ceph.com/) distributed storage system can be configured to use in `conf/common.yaml`. You need to prepare Ceph storage server in advance. Please refer to [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) for more details.
| **Ceph\_RBD** | **Description** |
| --- | --- |
| ceph\_rbd\_enabled | Whether to use Ceph RBD as the persistent storage, defaults to false |
| ceph\_rbd\_storage\_class | Storage class name |
| ceph\_rbd\_is\_default\_class | Whether to set Ceph RBD as default storage class, defaults to false |
| ceph\_rbd\_monitors | Ceph monitors, comma delimited. This parameter is required, which depends on Ceph RBD server parameters |
| ceph\_rbd\_admin\_id | Ceph client ID that is capable of creating images in the pool. Defaults to “admin” |
| ceph\_rbd\_admin\_secret | Admin_id's secret, secret name for "adminId". This parameter is required. The provided secret must have type “kubernetes.io/rbd” |
| ceph\_rbd\_pool | Ceph RBD pool. Default is “rbd” |
| ceph\_rbd\_user\_id | Ceph client ID that is used to map the RBD image. Default is the same as adminId |
| ceph\_rbd\_user\_secret | Secret for User_id, it is required to create this secret in namespace which used rbd image |
| ceph\_rbd\_fsType | fsType that is supported by Kubernetes. Default: "ext4"|
| ceph\_rbd\_imageFormat | Ceph RBD image format, “1” or “2”. Default is “1” |
|ceph\_rbd\_imageFeatures| This parameter is optional and should only be used if you set imageFormat to “2”. Currently supported features are layering only. Default is “”, and no features are turned on|
> Note:
>
> The ceph secret, which is created in storage class, like "ceph_rbd_admin_secret" and "ceph_rbd_user_secret", is retrieved using following command in Ceph storage server.
```bash
ceph auth get-key client.admin
```
### GlusterFS
[GlusterFS](https://docs.gluster.org/en/latest/) is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. You need to prepare GlusterFS storage server in advance. Please refer to [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs) for further information.
| **GlusterFS (requires a GlusterFS cluster managed by Heketi)** | **Description** |
| --- | --- |
| glusterfs\_provisioner\_enabled | Whether to use GlusterFS as the persistent storage, defaults to false |
| glusterfs\_provisioner\_storage\_class | Storage class name |
| glusterfs\_is\_default\_class | Whether to set GlusterFS as default storage class, defaults to false |
| glusterfs\_provisioner\_restauthenabled | Gluster REST service authentication boolean that enables authentication to the REST server |
| glusterfs\_provisioner\_resturl | Gluster REST service/Heketi service url which provision gluster volumes on demand. The general format should be "IP address:Port" and this is a mandatory parameter for GlusterFS dynamic provisioner|
| glusterfs\_provisioner\_clusterid | Optional, for example, 630372ccdc720a92c681fb928f27b53f is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of clusterids |
| glusterfs\_provisioner\_restuser | Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool |
| glusterfs\_provisioner\_secretName | Optional, identification of Secret instance that contains user password to use when talking to Gluster REST service, Installer will automatically create this secret in kube-system |
| glusterfs\_provisioner\_gidMin | The minimum value of GID range for the storage class |
| glusterfs\_provisioner\_gidMax |The maximum value of GID range for the storage class |
| glusterfs\_provisioner\_volumetype | The volume type and its parameters can be configured with this optional value, for example: Replica volume: volumetype: replicate:3 |
| jwt\_admin\_key | "jwt.admin.key" field is from "/etc/heketi/heketi.json" in Heketi server |
**Attention**
> Please note: `"glusterfs_provisioner_clusterid"` could be returned from glusterfs server by running the following command:
```bash
export HEKETI_CLI_SERVER=http://localhost:8080
heketi-cli cluster list
```
### QingCloud Block Storage
[QingCloud Block Storage](https://docs.qingcloud.com/product/Storage/volume/) is supported in KubeSphere as a persistent storage service. If you would like dynamic provisioning when creating volumes, we recommend using it as your persistent storage solution. KubeSphere integrates [QingCloud-CSI](https://github.com/yunify/qingcloud-csi/blob/master/README_zh.md), allowing you to use the various block storage services of QingCloud. With simple configuration, you can quickly expand and clone PVCs, view their topology, create and delete snapshots, and restore volumes from snapshots.
The QingCloud-CSI plugin implements the standard CSI. You can easily create and manage the different types of volumes provided by QingCloud in KubeSphere. The corresponding PVCs will be created with the ReadWriteOnce access mode and mounted to running Pods.
QingCloud-CSI supports creating the following five types of volumes in QingCloud:
- High capacity
- Standard
- SSD Enterprise
- Super high performance
- High performance
|**QingCloud-CSI** | **Description**|
| --- | ---|
| qingcloud\_csi\_enabled|Whether to use QingCloud-CSI as the persistent storage volume, defaults to false |
| qingcloud\_csi\_is\_default\_class| Whether to set QingCloud-CSI as default storage class, defaults to false |
| qingcloud\_access\_key\_id, <br> qingcloud\_secret\_access\_key | Please obtain them from the [QingCloud Console](https://console.qingcloud.com/login) |
|qingcloud\_zone| Zone should be the same as the zone where the Kubernetes cluster is installed, and the CSI plugin will operate on the storage volumes for this zone. For example: zone can be set to these values, such as sh1a (Shanghai 1-A), sh1b (Shanghai 1-B), pek2 (Beijing 2), pek3a (Beijing 3-A), pek3b (Beijing 3-B), pek3c (Beijing 3-C), gd1 (Guangdong 1), gd2a (Guangdong 2-A), ap1 (Asia Pacific 1), ap2a (Asia Pacific 2-A) |
| type | The type of volume in QingCloud platform. In QingCloud platform, 0 represents high performance volume. 3 represents super high performance volume. 1 or 2 represents high capacity volume depending on clusters zone, see [QingCloud Documentation](https://docs.qingcloud.com/product/api/action/volume/create_volumes.html)|
| maxSize, minSize | Limit the range of volume size in GiB|
| stepSize | Set the increment of volumes size in GiB|
| fsType | The file system of the storage volume, which supports ext3, ext4, xfs. The default is ext4|
### QingStor NeonSAN
The NeonSAN-CSI plugin supports the enterprise-level distributed storage [QingStor NeonSAN](https://www.qingcloud.com/products/qingstor-neonsan/) as the persistent storage solution. You need to prepare the NeonSAN server, then configure the NeonSAN-CSI plugin to connect to its storage server in `conf/common.yaml`. Please refer to the [NeonSAN-CSI Reference](https://github.com/wnxn/qingstor-csi/blob/master/docs/reference_zh.md#storageclass-%E5%8F%82%E6%95%B0) for further information.
| **NeonSAN** | **Description** |
| --- | --- |
| neonsan\_csi\_enabled | Whether to use NeonSAN as the persistent storage, defaults to false |
| neonsan\_csi\_is\_default\_class | Whether to set NeonSAN-CSI as the default storage class, defaults to false|
| neonsan\_csi\_protocol | Transport protocol; the user must set this option, e.g. TCP or RDMA |
| neonsan\_server\_address | NeonSAN server address |
| neonsan\_cluster\_name| NeonSAN server cluster name|
| neonsan\_server\_pool | A comma-separated list of pools that tells the plugin which pools to manage. The user must set this option; the default value is kube |
| neonsan\_server\_replicas|NeonSAN image replica count. Default: 1|
| neonsan\_server\_stepSize|set the increment of volumes size in GiB. Default: 1|
| neonsan\_server\_fsType|The file system to use for the volume. Default: ext4|

View File

@ -0,0 +1,93 @@
---
title: "Introduction"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'KubeSphere Installation Overview'
linkTitle: "Introduction"
weight: 2110
---
[KubeSphere](https://kubesphere.io/) is an enterprise-grade multi-tenant container platform built on [Kubernetes](https://kubernetes.io). It provides an easy-to-use UI for users to manage application workloads and computing resources with a few clicks, which greatly reduces the learning curve and the complexity of daily work such as development, testing, operation and maintenance. KubeSphere aims to alleviate the pain points of Kubernetes including storage, network, security and ease of use, etc.
KubeSphere supports installation on cloud-hosted and on-premises Kubernetes clusters, e.g. native K8s, GKE, EKS, RKE, etc. It also supports installation on Linux hosts, including virtual machines and bare metal, by provisioning a fresh Kubernetes cluster. Both methods make it easy to install KubeSphere. Meanwhile, KubeSphere offers not only an online installer, but also an air-gapped installer for environments with no access to the internet.
KubeSphere is open source project on [GitHub](https://github.com/kubesphere). There are thousands of users are using KunbeSphere, and many of them are running KubeSphere for their production workloads.
In summary, there are several installation options you can choose. Please note not all options are mutually exclusive. For instance, you can deploy KubeSphere with minimal packages on existing K8s cluster on multiple nodes in air-gapped environment. Here is the decision tree shown in the following graph you may reference for your own situation.
- [All-in-One](../all-in-one): Install KubeSphere on a single node. It is only intended for users who want to quickly get familiar with KubeSphere.
- [Multi-Node](../multi-node): Install KubeSphere on multiple nodes. It is for testing or development.
- [Install KubeSphere on Air Gapped Linux](../install-ks-on-linux-airgapped): All images of KubeSphere have been encapsulated into a package, which makes air-gapped installation on Linux machines convenient.
- [High Availability Multi-Node](../master-ha): Install highly available KubeSphere on multiple nodes for production environments.
- [KubeSphere on Existing K8s](../install-on-k8s): Deploy KubeSphere on your Kubernetes cluster including cloud-hosted services such as GKE, EKS, etc.
- [KubeSphere on Air-Gapped K8s](../install-on-k8s-airgapped): Install KubeSphere on a disconnected Kubernetes cluster.
- Minimal Packages: Only install the minimal required system components of KubeSphere. The minimum resource requirement is as low as 1 core and 2 GB of memory.
- [Full Packages](../complete-installation): Install all available system components of KubeSphere including DevOps, service mesh, application store, etc.
![Installer Options](https://pek3b.qingstor.com/kubesphere-docs/png/20200305093158.png)
## Before Installation
- As the installation pulls images and updates the operating system from the internet, your environment must have internet access. If not, you need to use the air-gapped installer instead.
- For an all-in-one installation, the single node acts as both the master and the worker.
- For multi-node installation, you are asked to specify the node roles in the configuration file before installation.
- Your Linux host must have OpenSSH Server installed.
- Please check the [ports requirements](../port-firewall) before installation.
## Quick Install For Development and Testing
KubeSphere has decoupled some components since v2.1.0. The installer only installs required components by default which brings the benefits of fast installation and minimal resource consumption. If you want to install any optional component, please check the following section [Pluggable Components Overview](../intro#pluggable-components-overview) for details.
The quick install of KubeSphere is only for development or testing since it uses local volume for storage by default. If you want a production install please refer to the section [High Availability Installation for Production Environment](../intro#high-availability-installation-for-production-environment).
### 1. Install KubeSphere on Linux
- [All-in-One](../all-in-one): A hassle-free, one-click installation on a single node.
- [Multi-Node](../multi-node): It allows you to install KubeSphere on multiple instances using local volumes, which means you are not required to install a storage server such as Ceph or GlusterFS.
> Note: With regard to air-gapped installation, please refer to [Install KubeSphere on Air Gapped Linux Machines](../install-ks-on-linux-airgapped).
### 2. Install KubeSphere on Existing Kubernetes
You can install KubeSphere on your existing Kubernetes cluster. Please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
## High Availability Installation for Production Environment
### 1. Install HA KubeSphere on Linux
KubeSphere installer supports installing a highly available cluster for production with the prerequisites being a load balancer and persistent storage service set up in advance.
- [Persistent Service Configuration](../storage-configuration): By default, KubeSphere Installer uses [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning in the Kubernetes cluster. It is convenient for a quick install in a testing environment. A production environment must have a storage server set up. Please refer to [Persistent Service Configuration](../storage-configuration) for details.
- [Load Balancer Configuration for HA install](../master-ha): Before you get started with multi-node installation in production environment, you need to configure a load balancer. Either cloud LB or `HAproxy + keepalived` works for the installation.
### 2. Install HA KubeSphere on Existing Kubernetes
Before you install KubeSphere on existing Kubernetes, please check the prerequisites of the installation on Linux described above, and verify that the existing Kubernetes cluster satisfies these prerequisites, i.e., a load balancer and a persistent storage service.
If your Kubernetes cluster is ready, please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
> You can install KubeSphere on cloud Kubernetes service such as [Installing KubeSphere on GKE cluster](../install-on-gke)
## Pluggable Components Overview
KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable any of them before or after installation. The installer by default does not install the pluggable components. Please check the guide [Enable Pluggable Components Installation](../pluggable-components) for your requirement.
![Pluggable Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191207140846.png)
## Storage Configuration Instruction
The following links explain how to configure different types of persistent storage services. Please refer to [Storage Configuration Instruction](../storage-configuration) for detailed instructions regarding how to configure the storage class in KubeSphere.
- [NFS](https://kubernetes.io/docs/concepts/storage/volumes/#nfs)
- [GlusterFS](https://www.gluster.org/)
- [Ceph RBD](https://ceph.com/)
- [QingCloud Block Storage](https://docs.qingcloud.com/product/storage/volume/)
- [QingStor NeonSAN](https://docs.qingcloud.com/product/storage/volume/super_high_performance_shared_volume/)
## Add New Nodes
KubeSphere Installer allows you to scale the number of nodes, see [Add New Nodes](../add-nodes).
## Uninstall
Uninstall will remove KubeSphere from the machines. This operation is irreversible and dangerous. Please check [Uninstall](../uninstall).

View File

@ -0,0 +1,7 @@
---
linkTitle: "Install on Linux"
weight: 2200
_build:
render: false
---

View File

@ -0,0 +1,224 @@
---
title: "Air-Gapped Installation"
keywords: 'kubernetes, kubesphere, air gapped, installation'
description: 'How to install KubeSphere on air-gapped Linux machines'
weight: 2240
---
The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.
> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
## Prerequisites
- If your machine is behind a firewall, you need to open the required ports; see [Port Requirements](../port-firewall) for details.
- The installer uses `/var/lib/docker` as the default directory where all Docker-related files, including images, are stored. We recommend adding extra storage with at least 100 GB mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure supported persistent storage service](../storage-configuration) and prepare [high availability configuration](../master-ha) before installation.
- Since the air-gapped machines cannot connect to an apt or yum source, please use clean Linux machines to avoid dependency problems.
## Step 1: Prepare Linux Hosts
The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
- Time synchronization is required across all nodes, otherwise the installation may not succeed.
- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`.
- If you are using `Ubuntu 18.04`, you need to use the user `root`.
- Ensure each node has at least 100 GB of disk space.
- CPU and memory in total of all machines: 2 cores and 4 GB for minimal installation; 8 cores and 16 GB for complete installation.
The following example introduces a multi-node installation with three hosts, where the `master` host serves as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guide.
| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|
### Cluster Architecture
#### Single Master, Single Etcd, Two Nodes
![Architecture](/cluster-architecture.svg)
## Step 2: Download Installer Package
Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
```bash
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
```
## Step 3: Configure Host Template
> This step is only for multi-node installation, you can skip this step if you choose all-in-one installation.
Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
> Note:
>
> - If you use a non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish SSH connections with the rest of the machines, you need to refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of each node and all host names should be in lowercase.
### hosts.ini
```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
[local-registry]
master
[kube-master]
master
[kube-node]
node1
node2
[etcd]
master
[k8s-cluster:children]
kube-node
kube-master
```
> Note:
>
> - You need to replace each node's information, such as IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
> - The installer uses one node as the local registry for Docker images, which defaults to "master" in the group `[local-registry]`.
> - The "master" node also takes the roles of master and etcd, so "master" is filled under the group `[kube-master]` and the group `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
> - `ansible_host`: The name or IP address of the host to connect to.
> - `ip`: The IP of the host to connect to.
> - `ansible_user`: The default SSH user name to use.
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
> - `ansible_ssh_pass`: The SSH password used to connect to the host as root.
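For example, a non-root entry in `hosts.ini` could look like the sketch below, combining the parameters listed above. The user name and passwords are placeholders, and the authoritative template is the commented example shipped in `conf/hosts.ini`.

```ini
# Hypothetical non-root example (replace the user and passwords with real values)
[all]
master ansible_connection=local ip=192.168.0.1 ansible_user=ubuntu ansible_become_pass=SUDO_PASSWORD
node1  ansible_host=192.168.0.2 ip=192.168.0.2 ansible_user=ubuntu ansible_ssh_pass=SSH_PASSWORD ansible_become_pass=SUDO_PASSWORD
node2  ansible_host=192.168.0.3 ip=192.168.0.3 ansible_user=ubuntu ansible_ssh_pass=SSH_PASSWORD ansible_become_pass=SUDO_PASSWORD
```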
## Step 4: Enable All Components
> This step is for a complete installation. You can skip it if you choose a minimal installation.
Edit `conf/common.yaml` and reference the following changes, setting to `true` the values that are `false` by default.
```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Builtin logging only provides limited functions, so recommend to enable logging.
logging_enabled: true # Whether to install logging system
elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number
elasticsearch_data_replica: 2 # total number of data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false        # Whether to install built-in Kibana
#external_es_url: SHOULD_BE_REPLACED    # External Elasticsearch address. KubeSphere supports integrating with Elasticsearch outside the cluster, which can reduce resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
#DevOps Configuration
devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED   # External SonarQube address. KubeSphere supports integrating with SonarQube outside the cluster, which can reduce resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
# Following components are all optional for KubeSphere,
# Which could be turned on to install it before installation or later by updating its value to true
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # For KubeSphere HPA to use
servicemesh_enabled: true # KubeSphere service mesh system(Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```
## Step 5: Install KubeSphere to Linux Machines
> Note:
>
> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environment, please [configure supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within either of these ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts, as sketched below.
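For instance, if your node network overlaps with the defaults, you could point both subnets at unused ranges in `conf/common.yaml`. The key names come from the configuration file itself; the CIDR values below are illustrative assumptions only.

```yaml
# Example only: pick ranges that do not overlap with your node subnet
kube_service_addresses: 10.234.0.0/18
kube_pods_subnet: 10.234.64.0/18
```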
**1.** Enter `scripts` folder, and execute `install.sh` using `root` user:
```bash
cd ../scripts
./install.sh
```
**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume.
```bash
################################################
KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/ 2020-02-24
################################################
Please input an option: 2
```
**3.** Verify the multi-node installation
**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go.
```bash
successsful!
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd
NOTE: Please modify the default password after login.
#####################################################
```
> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in.
![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components in "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.</font>
![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
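If you prefer the command line, a quick way to watch the components come up is to list the pods in the cluster from the taskbox; this assumes `kubectl` is already configured on that machine.

```bash
# Watch all pods until they reach the Running or Completed state
kubectl get pods --all-namespaces

# Focus on the KubeSphere system components only
kubectl get pods -n kubesphere-system
```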
## Enable Pluggable Components
If you have already set up a minimal installation, you can still edit the ConfigMap of ks-installer using the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
```bash
kubectl edit cm -n kubesphere-system ks-installer
```
## FAQ
If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).

View File

@ -1,9 +1,9 @@
---
title: "Installation"
title: "Installing on Linux"
description: "Help you to better understand KubeSphere with detailed graphics and contents"
layout: "single"
linkTitle: "Installation"
linkTitle: "Installing on Linux"
weight: 2000
icon: "/images/docs/docs.svg"
@ -20,4 +20,4 @@ Below you will find some of the most common and helpful pages from this chapter.
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}

View File

@ -0,0 +1,7 @@
---
linkTitle: "Installation"
weight: 2100
_build:
render: false
---

View File

@ -0,0 +1,93 @@
---
title: "Introduction"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'KubeSphere Installation Overview'
linkTitle: "Introduction"
weight: 2110
---
[KubeSphere](https://kubesphere.io/) is an enterprise-grade multi-tenant container platform built on [Kubernetes](https://kubernetes.io). It provides an easy-to-use UI for users to manage application workloads and computing resources with a few clicks, which greatly reduces the learning curve and the complexity of daily work such as development, testing, operation and maintenance. KubeSphere aims to alleviate the pain points of Kubernetes including storage, network, security and ease of use, etc.
KubeSphere supports installing on cloud-hosted and on-premises Kubernetes cluster, e.g. native K8s, GKE, EKS, RKE, etc. It also supports installing on Linux host including virtual machine and bare metal with provisioning fresh Kubernetes cluster. Both of the two methods are easy and friendly to install KubeSphere. Meanwhile, KubeSphere offers not only online installer, but air-gapped installer for such environment with no access to the internet.
KubeSphere is open source project on [GitHub](https://github.com/kubesphere). There are thousands of users are using KunbeSphere, and many of them are running KubeSphere for their production workloads.
In summary, there are several installation options you can choose. Please note not all options are mutually exclusive. For instance, you can deploy KubeSphere with minimal packages on existing K8s cluster on multiple nodes in air-gapped environment. Here is the decision tree shown in the following graph you may reference for your own situation.
- [All-in-One](../all-in-one): Intall KubeSphere on a singe node. It is only for users to quickly get familar with KubeSphere.
- [Multi-Node](../multi-node): Install KubeSphere on multiple nodes. It is for testing or development.
- [Install KubeSphere on Air Gapped Linux](../install-ks-on-linux-airgapped): All images of KubeSphere have been encapsulated into a package, it is convenient for air gapped installation on Linux machines.
- [High Availability Multi-Node](../master-ha): Install high availability KubeSphere on multiple nodes which is used for production environment.
- [KubeSphere on Existing K8s](../install-on-k8s): Deploy KubeSphere on your Kubernetes cluster including cloud-hosted services such as GKE, EKS, etc.
- [KubeSphere on Air-Gapped K8s](../install-on-k8s-airgapped): Install KubeSphere on a disconnected Kubernetes cluster.
- Minimal Packages: Only install minimal required system components of KubeSphere. The minimum of resource requirement is down to 1 core and 2G memory.
- [Full Packages](../complete-installation): Install all available system components of KubeSphere including DevOps, service mesh, application store, etc.
![Installer Options](https://pek3b.qingstor.com/kubesphere-docs/png/20200305093158.png)
## Before Installation
- As the installation will pull images and update operating system from the internet, your environment must have the internet access. If not, then you need to use the air-gapped installer instead.
- For all-in-one installation, the only one node is both the master and the worker.
- For multi-node installation, you are asked to specify the node roles in the configuration file before installation.
- Your linux host must have OpenSSH Server installed.
- Please check the [ports requirements](../port-firewall) before installation.
## Quick Install For Development and Testing
KubeSphere has decoupled some components since v2.1.0. The installer only installs required components by default which brings the benefits of fast installation and minimal resource consumption. If you want to install any optional component, please check the following section [Pluggable Components Overview](../intro#pluggable-components-overview) for details.
The quick install of KubeSphere is only for development or testing since it uses local volume for storage by default. If you want a production install please refer to the section [High Availability Installation for Production Environment](../intro#high-availability-installation-for-production-environment).
### 1. Install KubeSphere on Linux
- [All-in-One](../all-in-one): It means a single-node hassle-free configuration installation with one-click.
- [Multi-Node](../multi-node): It allows you to install KubeSphere on multiple instances using local volume, which means it is not required to install storage server such as Ceph, GlusterFS.
> NoteWith regard to air-gapped installation please refer to [Install KubeSphere on Air Gapped Linux Machines](../install-ks-on-linux-airgapped).
### 2. Install KubeSphere on Existing Kubernetes
You can install KubeSphere on your existing Kubernetes cluster. Please refer [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
## High Availability Installation for Production Environment
### 1. Install HA KubeSphere on Linux
KubeSphere installer supports installing a highly available cluster for production with the prerequisites being a load balancer and persistent storage service set up in advance.
- [Persistent Service Configuration](../storage-configuration): By default, KubeSphere Installer uses [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning in Kubernetes cluster. It is convenient for quick install of testing environment. In production environment, it must have a storage server set up. Please refer [Persistent Service Configuration](../storage-configuration) for details.
- [Load Balancer Configuration for HA install](../master-ha): Before you get started with multi-node installation in production environment, you need to configure a load balancer. Either cloud LB or `HAproxy + keepalived` works for the installation.
### 2. Install HA KubeSphere on Existing Kubernetes
Before you install KubeSphere on existing Kubernetes, please check the prerequisites of the installation on Linux described above, and verify the existing Kubernetes to see if it satisfies these prerequisites or not, i.e., a load balancer and persistent storage service.
If your Kubernetes is ready, please refer [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
> You can install KubeSphere on cloud Kubernetes service such as [Installing KubeSphere on GKE cluster](../install-on-gke)
## Pluggable Components Overview
KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable any of them before or after installation. The installer by default does not install the pluggable components. Please check the guide [Enable Pluggable Components Installation](../pluggable-components) for your requirement.
![Pluggable Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191207140846.png)
## Storage Configuration Instruction
The following links explain how to configure different types of persistent storage services. Please refer to [Storage Configuration Instruction](../storage-configuration) for detailed instructions regarding how to configure the storage class in KubeSphere.
- [NFS](https://kubernetes.io/docs/concepts/storage/volumes/#nfs)
- [GlusterFS](https://www.gluster.org/)
- [Ceph RBD](https://ceph.com/)
- [QingCloud Block Storage](https://docs.qingcloud.com/product/storage/volume/)
- [QingStor NeonSAN](https://docs.qingcloud.com/product/storage/volume/super_high_performance_shared_volume/)
## Add New Nodes
KubeSphere Installer allows you to scale the number of nodes, see [Add New Nodes](../add-nodes).
## Uninstall
Uninstall will remove KubeSphere from the machines. This operation is irreversible and dangerous. Please check [Uninstall](../uninstall).

View File

@ -0,0 +1,33 @@
---
title: "Port Requirements"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: ''
linkTitle: "Requirements"
weight: 2120
---
KubeSphere requires certain ports for its services to communicate with each other, so you need to make sure the following ports are open for use.
| Service | Protocol | Action | Start Port | End Port | Notes |
|---|---|---|---|---|---|
| ssh | TCP | allow | 22 | | |
| etcd | TCP | allow | 2379 | 2380 | |
| apiserver | TCP | allow | 6443 | | |
| calico | TCP | allow | 9099 | 9100 | |
| bgp | TCP | allow | 179 | | |
| nodeport | TCP | allow | 30000 | 32767 | |
| master | TCP | allow | 10250 | 10258 | |
| dns | TCP | allow | 53 | | |
| dns | UDP | allow | 53 | | |
| local-registry | TCP | allow | 5000 | | Required for air gapped environment|
| local-apt | TCP | allow | 5080 | | Required for air gapped environment|
| rpcbind | TCP | allow | 111 | | When using NFS as storage server |
| ipip | IPIP | allow | | | Calico network requires ipip protocol |
**Note**
Please note that when you use the Calico network plugin and run your cluster on a classic network in a cloud environment, you need to open the IPIP protocol for the source IP. For instance, the following sample on QingCloud shows how to open the IPIP protocol.
![](https://pek3b.qingstor.com/kubesphere-docs/png/20200304200605.png)
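As a rough sketch, on CentOS with firewalld you might open the main ports like this; adapt the list to the table above and to your own security policy (these are standard firewalld/ufw commands, not a KubeSphere-specific tool).

```bash
# CentOS / firewalld example (run as root); repeat --add-port for any other ports you need
firewall-cmd --permanent --add-port=6443/tcp --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload

# Ubuntu / ufw equivalent for the NodePort range
ufw allow 30000:32767/tcp
```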

View File

@ -0,0 +1,107 @@
---
title: "Common Configurations"
keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Configure cluster parameters before installing'
linkTitle: "Kubernetes Cluster Configuration"
weight: 2130
---
This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
```yaml
######################### Kubernetes #########################
# The default k8s version will be installed
kube_version: v1.16.7
# The default etcd version will be installed
etcd_version: v3.2.18
# Configure a cron job to backup etcd data, which is running on etcd machines.
# Period of running backup etcd job, the unit is minutes.
# The default value 30 means backup etcd every 30 minutes.
etcd_backup_period: 30
# How many backup replicas to keep.
# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order.
keep_backup_number: 5
# The location to store etcd backups files on etcd machines.
etcd_backup_dir: "/var/backups/kube_etcd"
# Add other registry. (For users who need to accelerate image download)
docker_registry_mirrors:
- https://docker.mirrors.ustc.edu.cn
- https://registry.docker-cn.com
- https://mirror.aliyuncs.com
# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
kube_network_plugin: calico
# A valid CIDR range for Kubernetes services,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes pod subnet
kube_service_addresses: 10.233.0.0/18
# A valid CIDR range for Kubernetes pod subnet,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes services subnet
kube_pods_subnet: 10.233.64.0/18
# Kube-proxy proxyMode configuration, either ipvs, or iptables
kube_proxy_mode: ipvs
# Maximum pods allowed to run on every node.
kubelet_max_pods: 110
# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
enable_nodelocaldns: true
# Highly Available loadbalancer example config
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install
# address: 192.168.0.10 # Loadbalancer apiserver IP address
# port: 6443 # apiserver port
######################### KubeSphere #########################
# Version of KubeSphere
ks_version: v2.1.0
# KubeSphere console port, range 30000-32767,
# but 30180/30280/30380 are reserved for internal service
console_port: 30880 # KubeSphere console nodeport
#CommonComponent
mysql_volume_size: 20Gi # MySQL PVC size
minio_volume_size: 20Gi # Minio PVC size
etcd_volume_size: 20Gi # etcd PVC size
openldap_volume_size: 2Gi # openldap PVC size
redis_volume_size: 2Gi # Redis PVC size
# Monitoring
prometheus_replica: 2 # Prometheus replicas with 2 by default which are responsible for monitoring different segments of data source and provide high availability as well.
prometheus_memory_request: 400Mi # Prometheus request memory
prometheus_volume_size: 20Gi # Prometheus PVC size
grafana_enabled: true # enable grafana or not
## Container Engine Acceleration
## Use nvidia gpu acceleration in containers
# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed.
# nvidia_gpu_nodes:           # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
```
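For an HA installation, uncommenting the load balancer block above would look roughly like the sketch below; the domain name and address are placeholders for your own load balancer.

```yaml
# Example HA load balancer settings (placeholder address and domain)
apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
loadbalancer_apiserver:
  address: 192.168.0.10
  port: 6443
```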
## How to Configure a GPU Node
You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`; then specify the following configuration in `common.yaml`. Please be aware that `- node2` is indented by two spaces.
```yaml
nvidia_accelerator_enabled: true
nvidia_gpu_nodes:
- node2
```
> Note: The GPU node now only supports Ubuntu 16.04.

View File

@ -0,0 +1,7 @@
---
linkTitle: "Install on Linux"
weight: 2200
_build:
render: false
---

View File

@ -0,0 +1,224 @@
---
title: "Air-Gapped Installation"
keywords: 'kubernetes, kubesphere, air gapped, installation'
description: 'How to install KubeSphere on air-gapped Linux machines'
weight: 2240
---
The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.
> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
## Prerequisites
- If your machine is behind a firewall, you need to open the required ports; see [Port Requirements](../port-firewall) for details.
- The installer uses `/var/lib/docker` as the default directory where all Docker-related files, including images, are stored. We recommend adding extra storage with at least 100 GB mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure supported persistent storage service](../storage-configuration) and prepare [high availability configuration](../master-ha) before installation.
- Since the air-gapped machines cannot connect to an apt or yum source, please use clean Linux machines to avoid dependency problems.
## Step 1: Prepare Linux Hosts
The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
- Time synchronization is required across all nodes, otherwise the installation may not succeed (a quick check is sketched after this list).
- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`.
- If you are using `Ubuntu 18.04`, you need to use the user `root`.
- Ensure each node has at least 100 GB of disk space.
- CPU and memory in total of all machines: 2 cores and 4 GB for minimal installation; 8 cores and 16 GB for complete installation.
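A minimal way to check time synchronization on each node is shown below; it only assumes systemd's `timedatectl` is available, and you can use chrony or ntpd to actually keep the clocks in sync.

```bash
# Check whether the system clock is reported as synchronized
timedatectl status

# If chrony is installed, verify the time sources it is tracking
chronyc sources
```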
The following example introduces a multi-node installation with three hosts, where the `master` host serves as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guide.
| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|
### Cluster Architecture
#### Single Master, Single Etcd, Two Nodes
![Architecture](/cluster-architecture.svg)
## Step 2: Download Installer Package
Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
```bash
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
```
## Step 3: Configure Host Template
> This step is only for multi-node installation, you can skip this step if you choose all-in-one installation.
Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
> Note:
>
> - If you use a non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish SSH connections with the rest of the machines, you need to refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of each node and all host names should be in lowercase.
### hosts.ini
```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
[local-registry]
master
[kube-master]
master
[kube-node]
node1
node2
[etcd]
master
[k8s-cluster:children]
kube-node
kube-master
```
> Note:
>
> - You need to replace each node's information, such as IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
> - The installer uses one node as the local registry for Docker images, which defaults to "master" in the group `[local-registry]`.
> - The "master" node also takes the roles of master and etcd, so "master" is filled under the group `[kube-master]` and the group `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
> - `ansible_host`: The name or IP address of the host to connect to.
> - `ip`: The IP of the host to connect to.
> - `ansible_user`: The default SSH user name to use.
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
> - `ansible_ssh_pass`: The SSH password used to connect to the host as root.
## Step 4: Enable All Components
> This step is for a complete installation. You can skip it if you choose a minimal installation.
Edit `conf/common.yaml` and reference the following changes, setting to `true` the values that are `false` by default.
```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Builtin logging only provides limited functions, so recommend to enable logging.
logging_enabled: true # Whether to install logging system
elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number
elasticsearch_data_replica: 2 # total number of data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false        # Whether to install built-in Kibana
#external_es_url: SHOULD_BE_REPLACED    # External Elasticsearch address. KubeSphere supports integrating with Elasticsearch outside the cluster, which can reduce resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
#DevOps Configuration
devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED   # External SonarQube address. KubeSphere supports integrating with SonarQube outside the cluster, which can reduce resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
# Following components are all optional for KubeSphere,
# Which could be turned on to install it before installation or later by updating its value to true
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # For KubeSphere HPA to use
servicemesh_enabled: true # KubeSphere service mesh system(Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```
## Step 5: Install KubeSphere to Linux Machines
> Note:
>
> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environment, please [configure supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within either of these ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
**1.** Enter `scripts` folder, and execute `install.sh` using `root` user:
```bash
cd ../scripts
./install.sh
```
**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume.
```bash
################################################
KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/ 2020-02-24
################################################
Please input an option: 2
```
**3.** Verify the multi-node installation
**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go.
```bash
successsful!
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd
NOTE: Please modify the default password after login.
#####################################################
```
> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in.
![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components in "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.</font>
![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
## Enable Pluggable Components
If you have already set up a minimal installation, you can still edit the ConfigMap of ks-installer using the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
```bash
kubectl edit cm -n kubesphere-system ks-installer
```
## FAQ
If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).

View File

@ -0,0 +1,7 @@
---
linkTitle: "Install on Linux"
weight: 2200
_build:
render: false
---

View File

@ -0,0 +1,116 @@
---
title: "All-in-One Installation"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'The guide for installing all-in-one KubeSphere for developing or testing'
linkTitle: "All-in-One"
weight: 2210
---
For those who are new to KubeSphere and looking for a quick way to discover the platform, the all-in-one mode is your best choice since it is a one-click, hassle-free installation that provisions KubeSphere and Kubernetes on your machine.
- <font color=red>The following instructions are for the default installation without enabling any optional components as we have made them pluggable since v2.1.0. If you want to enable any one, please see the section [Enable Pluggable Components](../all-in-one#enable-pluggable-components) below.</font>
- <font color=red>If your machine has >= 8 cores and >= 16G memory, we recommend you to install the full package of KubeSphere by [enabling optional components](../complete-installation)</font>.
## Prerequisites
If your machine is behind a firewall, you need to open the required ports; see [Port Requirements](../port-firewall) for details.
## Step 1: Prepare Linux Machine
The following describes the requirements of hardware and operating system.
- For `Ubuntu 16.04` OS, it is recommended to select the latest `16.04.5`.
- If you are using Ubuntu 18.04, you need to use the root user to install.
- If the Debian system does not have the sudo command installed, you need to execute the `apt update && apt install sudo` command using root before installation.
### Hardware Recommendation
| System | Minimum Requirements |
| ------- | ----------- |
| CentOS 7.4 ~ 7.7 (64 bit) | CPU: 2 Cores, Memory: 4 GB, Disk Space: 100 GB |
| Ubuntu 16.04/18.04 LTS (64 bit) | CPU: 2 Cores, Memory: 4 GB, Disk Space: 100 GB |
| Red Hat Enterprise Linux Server 7.4 (64 bit) | CPU: 2 Cores, Memory: 4 GB, Disk Space: 100 GB |
| Debian Stretch 9.5 (64 bit) | CPU: 2 Cores, Memory: 4 GB, Disk Space: 100 GB |
## Step 2: Download Installer Package
Execute the following commands to download Installer 2.1.1 and unpack it.
```bash
curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/scripts
```
## Step 3: Get Started with Installation
You do not need to do anything except execute a single command as follows. The installer completes everything for you automatically, including installing/updating dependency packages, installing Kubernetes (default version 1.16.7), the storage service, and so on.
> Note:
>
> - Generally speaking, do not modify any configuration.
> - KubeSphere installs `calico` by default. If you would like to use a different network plugin, you are allowed to change the configuration in `conf/common.yaml`, as sketched after this note. You are also allowed to modify other configurations such as storage class, pluggable components, etc.
> - The default storage class is [OpenEBS](https://openebs.io/), which is a kind of [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) used to provision persistent storage. OpenEBS supports [dynamic provisioning of PVs](https://docs.openebs.io/docs/next/uglocalpv.html#Provision-OpenEBS-Local-PV-based-on-hostpath). It will be installed automatically for your testing purposes.
> - Please refer to [storage configurations](../storage-configuration) for the supported storage classes.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within either of these ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
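For example, switching the network plugin only requires changing one key in `conf/common.yaml` (the key name appears in the Common Configurations reference); the value below assumes you want flannel instead of the default calico.

```yaml
# conf/common.yaml — choose the CNI plugin (calico by default; flannel is also tested)
kube_network_plugin: flannel
```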
**1.** Execute the following command:
```bash
./install.sh
```
**2.** Enter `1` to select `All-in-one` mode and type `yes` if your machine satisfies the requirements to start:
```bash
################################################
KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/ 2020-02-24
################################################
Please input an option: 1
```
**3.** Verify if KubeSphere is installed successfully or not
**(1).** If you see "Successful" returned after completed, it means the installation is successful. The console service is exposed through nodeport 30880 by default. You may need to bind EIP and configure port forwarding in your environment for outside users to access. Make sure you disable the related firewall.
```bash
successsful!
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.8:30880
Account: admin
Password: P@88w0rd
NOTE: Please modify the default password after login.
#####################################################
```
> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
**(2).** You will be able to use the default account and password to log in to the console to take a tour of KubeSphere.
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components in "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.</font>
![Dashboard](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
## Enable Pluggable Components
The guide above is only for a minimal installation by default. You can execute the following command to open the ConfigMap and enable pluggable components. Make sure your cluster has enough CPU and memory in advance; see [Enable Pluggable Components](../pluggable-components).
```bash
kubectl edit cm -n kubesphere-system ks-installer
```
## FAQ
The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Please also read the [installation FAQ](../../faq/faq-install).
If you have any further questions, please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).

View File

@ -0,0 +1,76 @@
---
title: "Install All Optional Components"
keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix'
description: 'Install KubeSphere with all optional components enabled on Linux machine'
weight: 2260
---
The installer only installs required components (i.e. minimal installation) by default since v2.1.0. Other components are designed to be pluggable, which means you can enable any of them before or after installation. If your machine meets the following minimum requirements, we recommend you to **enable all components before installation**. A complete installation gives you an opportunity to comprehensively discover the container platform.
<font color="red">
Minimum Requirements
- CPU: 8 cores in total of all machines
- Memory: 16 GB in total of all machines
</font>
> Note:
>
> - If your machines do not meet the minimum requirements of a complete installation, you can enable any of the components as you wish. Please refer to [Enable Pluggable Components Installation](../pluggable-components).
> - It works for [All-in-One](../all-in-one) and [Multi-Node](../multi-node).
This tutorial will walk you through how to enable all components of KubeSphere.
## Download Installer Package
If you do not have the package yet, please run the following commands to download Installer 2.1.1 and unpack it, then enter the `conf` folder.
```bash
$ curl -L https://kubesphere.io/download/stable/v2.1.1 > installer.tar.gz \
&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
```
## Enable All Components
Edit `conf/common.yaml` and reference the following changes, setting to `true` the values that are `false` by default.
```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Builtin logging only provides limited functions, so recommend to enable logging.
logging_enabled: true # Whether to install logging system
elasticsearch_master_replica: 1 # total number of master nodes, it's not allowed to use even number
elasticsearch_data_replica: 2 # total number of data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false        # Whether to install built-in Kibana
#external_es_url: SHOULD_BE_REPLACED    # External Elasticsearch address. KubeSphere supports integrating with Elasticsearch outside the cluster, which can reduce resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
#DevOps Configuration
devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED   # External SonarQube address. KubeSphere supports integrating with SonarQube outside the cluster, which can reduce resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
# Following components are all optional for KubeSphere,
# Which could be turned on to install it before installation or later by updating its value to true
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # For KubeSphere HPA to use
servicemesh_enabled: true # KubeSphere service mesh system(Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```
Save it, then you can continue the installation process.
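After saving the file, the installation continues with the same script used elsewhere in this chapter; a typical sequence from the `conf` folder is sketched below.

```bash
# Move from conf/ to scripts/ and start (or restart) the installer as root
cd ../scripts
./install.sh
```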

View File

@ -0,0 +1,224 @@
---
title: "Air-Gapped Installation"
keywords: 'kubernetes, kubesphere, air gapped, installation'
description: 'How to install KubeSphere on air-gapped Linux machines'
weight: 2240
---
The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.
> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
## Prerequisites
- If your machine is behind a firewall, you need to open the required ports; see [Port Requirements](../port-firewall) for details.
- The installer uses `/var/lib/docker` as the default directory where all Docker-related files, including images, are stored. We recommend adding extra storage with at least 100 GB mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure supported persistent storage service](../storage-configuration) and prepare [high availability configuration](../master-ha) before installation.
- Since the air-gapped machines cannot connect to an apt or yum source, please use clean Linux machines to avoid dependency problems.
## Step 1: Prepare Linux Hosts
The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
- Time synchronization is required across all nodes, otherwise the installation may not succeed (a simple check is shown after this list);
- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`;
- If you are using `Ubuntu 18.04`, you need to use the user `root`.
- Ensure your disk of each node is at least 100G.
- CPU and memory in total of all machines: 2 cores and 4 GB for minimal installation; 8 cores and 16 GB for complete installation.
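One simple way to confirm the time-synchronization requirement above is to compare the clock on every host from the taskbox. The host names below follow the example later in this document, and `chronyc` is only an illustration that assumes chrony (or another NTP client) is already installed on your distribution.
```bash
# Print the current time on every node; the outputs should agree within a second.
for host in master node1 node2; do
  ssh root@$host 'hostname; date +"%F %T %Z"'
done
# If chrony is installed, this shows whether the clock is actually synchronized.
chronyc tracking
```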
The following section walks through an example of multi-node installation on three hosts, using the `master` node as the taskbox to execute the installation. The cluster consists of one Master and two Nodes.
> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guide.
| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|
### Cluster Architecture
#### Single Master, Single Etcd, Two Nodes
![Architecture](/cluster-architecture.svg)
## Step 2: Download Installer Package
Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
```bash
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
```
## Step 3: Configure Host Template
> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the `root` user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
> Note:
>
> - If you use a non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, you need to refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of each node and all host names should be in lowercase.
### hosts.ini
```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
[local-registry]
master
[kube-master]
master
[kube-node]
node1
node2
[etcd]
master
[k8s-cluster:children]
kube-node
kube-master
```
> Note:
>
> - You need to replace the node information, such as IP addresses and passwords, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
> - Installer uses one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
> - The "master" node also takes the roles of master and etcd, so "master" is filled under the group `[kube-master]` and the group `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host, "local" in the example above means local connection.
> - `ansible_host`: The name or IP address of the host to connect to.
> - `ip`: The IP address of the host to connect to.
> - `ansible_user`: The default ssh user name to use.
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
> - `ansible_ssh_pass`: The password of the host to be connected using root.
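Before running the installer, it may help to verify that the taskbox can actually reach every host listed in `hosts.ini` over SSH with the credentials you configured. The sketch below assumes the example IPs above and the `root` user; it is not a required step.
```bash
# A quick reachability test from the taskbox: each command should print the remote host name.
for ip in 192.168.0.2 192.168.0.3; do
  ssh -o ConnectTimeout=5 root@$ip hostname
done
```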
## Step 4: Enable All Components
> This step is for the complete installation. You can skip it if you choose a minimal installation.
Edit `conf/common.yaml` and change the following values to `true` (they are `false` by default).
```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Builtin logging only provides limited functions, so it is recommended to enable logging.
logging_enabled: true # Whether to install logging system
elasticsearch_master_replica: 1 # total number of master nodes; an even number is not allowed
elasticsearch_data_replica: 2 # total number of data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false # Whether to install built-in Kibana
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address. KubeSphere supports integrating with an Elasticsearch cluster outside the KubeSphere cluster, which can reduce resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
#DevOps Configuration
devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address. KubeSphere supports integrating with a SonarQube server outside the cluster, which can reduce resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
# The following components are all optional for KubeSphere,
# and can be enabled before installation or later by updating their values to true
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # For KubeSphere HPA to use
servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```
## Step 5: Install KubeSphere to Linux Machines
> Note:
>
> - Generally, you can install KubeSphere without any modification, it will start with minimal installation by default.
> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environment, please [configure supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
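To make sure your node IPs do not overlap with the default Cluster IP and Pod subnets, you can check the two keys mentioned above directly in `conf/common.yaml`. This is only a convenience check run from the unpacked installer directory, not a required step.
```bash
# Show the service and pod subnets the installer will use, then compare them with your node IPs.
grep -nE 'kube_service_addresses|kube_pods_subnet' conf/common.yaml
```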
**1.** Enter `scripts` folder, and execute `install.sh` using `root` user:
```bash
cd ../scripts
./install.sh
```
**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume.
```bash
################################################
KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/ 2020-02-24
################################################
Please input an option: 2
```
**3.** Verify the multi-node installation
**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go.
```bash
successsful!
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd
NOTE: Please modify the default password after login.
#####################################################
```
> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in.
![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components in "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.</font>
![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
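Besides checking "Cluster Status" in the console, you can also watch the component Pods from the taskbox with kubectl. This is a minimal sketch assuming kubectl is already configured on the master by the installer.
```bash
# All KubeSphere system Pods should eventually reach the Running or Completed state.
kubectl get pods -n kubesphere-system
# Or watch every namespace until nothing is stuck in Pending or CrashLoopBackOff.
kubectl get pods --all-namespaces
```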
## Enable Pluggable Components
If you have already set up a minimal installation, you can still enable pluggable components by editing the ConfigMap of ks-installer with the following command. Please make sure your machines have enough resources; see [Pluggable Components Overview](/en/installation/pluggable-components/).
```bash
kubectl edit cm -n kubesphere-system ks-installer
```
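After editing the ConfigMap and enabling the components you need, you can follow the ks-installer logs to watch the progress. The label selector below is an assumption for illustration; adjust it to match the actual labels of the ks-installer Pod in your cluster.
```bash
# Tail the installer logs until the newly enabled components are reported as successful.
kubectl -n kubesphere-system logs -f \
  $(kubectl -n kubesphere-system get pods -l app=ks-install -o jsonpath='{.items[0].metadata.name}')
```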
## FAQ
If you have further questions please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).

View File

@ -0,0 +1,152 @@
---
title: "High Availability Configuration"
keywords: "kubesphere, kubernetes, docker,installation, HA, high availability"
description: "The guide for installing a high availability of KubeSphere cluster"
weight: 2230
---
## Introduction
[Multi-node installation](../multi-node) can help you to quickly set up a single-master cluster on multiple machines for development and testing. However, we need to consider the high availability of the cluster for production. Since the key components, i.e. kube-apiserver, kube-scheduler, and kube-controller-manager, are running on a single master node, Kubernetes and KubeSphere will be unavailable while that master is down. Therefore we need to set up a high-availability cluster by provisioning load balancers and multiple masters. You can use any cloud load balancer or hardware load balancer (e.g. F5). In addition, Keepalived and HAProxy are also an alternative for creating such a high-availability cluster.
This document walks you through an example of how to create two [QingCloud Load Balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal and external load balancer respectively, and how to configure high availability of the masters and Etcd using these load balancers.
## Prerequisites
- Please make sure that you already read [Multi-Node installation](../multi-node). This document only demonstrates how to configure load balancers.
- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or follow the guide of any other cloud provider to create load balancers.
## Architecture
This example prepares six CentOS 7.5 machines. We will create two load balancers, and deploy three masters and Etcd nodes on three of the machines. You can configure these masters and Etcd nodes in `conf/hosts.ini`.
![Master and etcd node high availability architecture](https://pek3b.qingstor.com/kubesphere-docs/png/20200307215924.png)
## Install HA Cluster
### Step 1: Create Load Balancers
This step briefly shows an example of creating a load balancer on QingCloud platform.
#### Create an Internal Load Balancer
1.1. Log in to the [QingCloud Console](https://console.qingcloud.com/login) and select **Network & CDN → Load Balancers**, then click the create button and fill in the basic information.
1.2. From the **Network** dropdown list, choose the VxNet in which your machines were created; here it is `kube`. Other settings can keep the default values as follows. Click **Submit** to complete the creation.
![Create Internal LB on QingCloud](https://pek3b.qingstor.com/kubesphere-docs/png/20200215224125.png)
1.3. Drill into the detail page of the load balancer, then create a listener that listens to the port `6443` of the `TCP` protocol.
- Name: Define a name for this Listener
- Listener Protocol: Select `TCP` protocol
- Port: `6443`
- Load mode: `Poll`
> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that the port `6443` has been added to the firewall rules and the external traffic can pass through `6443`. Otherwise, the installation will fail.
![Add Listener to LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225205.png)
1.4. Click **Add Backend** and choose the VxNet `kube` again. Then click the **Advanced Search** button, choose the three master nodes under the VxNet, and set the port to `6443`, which is the default secure port of the apiserver.
Click **Submit** when you are done.
![Choose Backends](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225550.png)
1.5. Click on the button **Apply Changes** to activate the configurations. At this point, you can find the three masters have been added as the backend servers of the listener that is behind the internal load balancer.
> Please note: The status of all masters might show `Not available` after you add them as backends. This is normal since port `6443` of the apiserver is not yet active on the masters. The status will change to `Active` and the apiserver port will be exposed after the installation completes, which means the internal load balancer you configured works as expected.
![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215230107.png)
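Once the installation has finished and the backends turn `Active`, a simple way to confirm that the internal load balancer forwards traffic correctly is to probe its VIP on port `6443` from any machine in the same VxNet. The VIP below is only the example address used later in this document.
```bash
# Check raw TCP connectivity to the internal LB VIP (expected to succeed only after installation).
nc -vz 192.168.0.253 6443
# kube-apiserver answers on this port over HTTPS, so even an unauthenticated request
# that returns an HTTP status already proves the load-balancer path works.
curl -k https://192.168.0.253:6443/version
```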
#### Create an External Load Balancer
You need to create an EIP in advance.
1.6. Similarly, create an external load balancer without joining any network, but associate the EIP that you created with this load balancer.
1.7. Enter the load balancer detail page and create a listener that listens on port `30880` of the `HTTP` protocol, which is the NodePort of the KubeSphere console.
> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that the port `30880` has been added to the firewall rules and the external traffic can pass through `30880`. Otherwise, the installation will fail.
![Create external LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232114.png)
1.8. Click **Add Backend**, then choose the six machines within the VxNet `kube` on which we are going to install KubeSphere, and set the port to `30880`.
Click **Submit** when you are done.
1.9. Click on the button **Apply Changes** to activate the configurations. At this point, you can find the six machines have been added as the backend servers of the listener that is behind the external load balancer.
![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232445.png)
### Step 2: Modify hosts.ini
Go to the taskbox where you downloaded the installer by following the [Multi-node Installation](../multi-node) and complete the following configurations.
| **Parameter** | **Description** |
|--------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `[all]` | node information. Use the following syntax if you run installation as `root` user: <br> - `<node_name> ansible_connection=<host> ip=<ip_address>` <br> - `<node_name> ansible_host=<ip_address> ip=<ip_address> ansible_ssh_pass=<pwd>` <br> If you log in as a non-root user, use the syntax: <br> - `<node_name> ansible_connection=<host> ip=<ip_address> ansible_user=<user> ansible_become_pass=<pwd>` |
| `[kube-master]` | master node names |
| `[kube-node]` | worker node names |
| `[etcd]` | etcd node names. The number of `etcd` nodes needs to be odd. |
| `[k8s-cluster:children]` | group names of `[kube-master]` and `[kube-node]` |
We use **CentOS 7.5** with `root` user to install an HA cluster. Please see the following configuration as an example:
> Note:
> <br>
> If the _taskbox_ cannot establish an `ssh` connection with the rest of the nodes, try to use the non-root user configuration.
#### hosts.ini example
```ini
[all]
master1 ansible_connection=local ip=192.168.0.1
master2 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
master3 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
node1 ansible_host=192.168.0.4 ip=192.168.0.4 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.5 ip=192.168.0.5 ansible_ssh_pass=PASSWORD
node3 ansible_host=192.168.0.6 ip=192.168.0.6 ansible_ssh_pass=PASSWORD
[kube-master]
master1
master2
master3
[kube-node]
node1
node2
node3
[etcd]
master1
master2
master3
[k8s-cluster:children]
kube-node
kube-master
```
### Step 3: Configure the Load Balancer Parameters
Besides configuring `common.yaml` by following the [Multi-node Installation](../multi-node), you need to modify the load balancer information in `common.yaml`. Assuming the **VIP** address and listening port of the **internal load balancer** are `192.168.0.253` and `6443`, you can refer to the following example.
> - Note that the address and port should be indented by two spaces in `common.yaml`, and the address should be the VIP.
> - The domain name of the load balancer is "lb.kubesphere.local" by default for internal access. If you need to change the domain name, please uncomment and modify it.
#### The configuration sample in common.yaml
```yaml
## External LB example config
## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
loadbalancer_apiserver:
address: 192.168.0.253
port: 6443
```
Finally, please refer to the [guide](../storage-configuration) to configure the persistent storage service in `common.yaml`. Then you are ready to start the installation of your high-availability KubeSphere cluster.
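After the installation completes, you can confirm that the control plane is reached through the load balancer rather than a single master. The commands below are a minimal sketch run from any master node, assuming kubectl was set up by the installer.
```bash
# All six nodes should be listed and Ready.
kubectl get nodes -o wide
# The server address printed here should point to the internal LB
# (lb.kubesphere.local / the VIP) instead of a single master IP.
kubectl cluster-info
```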

View File

@ -0,0 +1,176 @@
---
title: "Multi-node Installation"
keywords: 'kubesphere, kubernetes, docker, kubesphere installer'
description: 'The guide for installing KubeSphere on Multi-Node in development or testing environment'
weight: 2220
---
`Multi-Node` installation enables installing KubeSphere on multiple nodes. Typically, you use any one node as the _taskbox_ to run the installation task. Please note that `ssh` communication must be established between the taskbox and the other nodes.
- <font color=red>The following instructions are for the default installation without enabling any optional components as we have made them pluggable since v2.1.0. If you want to enable any one, please read [Enable Pluggable Components](../pluggable-components).</font>
- <font color=red>If your machines in total have >= 8 cores and >= 16G memory, we recommend you to install the full package of KubeSphere by [Enabling Optional Components](../complete-installation)</font>.
- <font color=red> The installation time depends on your network bandwidth, your computer configuration, the number of nodes, etc. </font>
## Prerequisites
If your machine is behind a firewall, you need to open the ports by following the document [Ports Requirements](../port-firewall) for more information.
## Step 1: Prepare Linux Hosts
The following describes the hardware and operating system requirements. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
- Time synchronization is required across all nodes, otherwise the installation may not succeed;
- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`;
- If you are using `Ubuntu 18.04`, you need to use the user `root`;
- If the Debian system does not have the sudo command installed, you need to execute `apt update && apt install sudo` command using root before installation.
### Hardware Recommendation
- KubeSphere can be installed on any cloud platform.
- The installation speed can be accelerated by increasing network bandwidth.
- If you choose air-gapped installation, ensure your disk of each node is at least 100G.
| System | Minimum Requirements (Each node) |
| --- | --- |
| CentOS 7.4 ~ 7.7 (64 bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
| Ubuntu 16.04/18.04 LTS (64 bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
| Red Hat Enterprise Linux Server 7.4 (64 bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
| Debian Stretch 9.5 (64 bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
The following section walks through an example of multi-node installation on three hosts, using the `master` node as the taskbox to execute the installation. The cluster consists of one Master and two Nodes.
> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guide.
| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|
### Cluster Architecture
#### Single Master, Single Etcd, Two Nodes
![Architecture](/cluster-architecture.svg)
## Step 2: Download Installer Package
**1.** Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
```bash
curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
```
**2.** Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the `root` user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
> Note:
>
> - If you use a non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, you need to refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of each node and all host names should be in lowercase.
### hosts.ini
```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
[kube-master]
master
[kube-node]
node1
node2
[etcd]
master
[k8s-cluster:children]
kube-node
kube-master
```
> Note:
>
> - You need to replace the node information, such as IP addresses and passwords, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
> - The "master" node also takes the roles of master and etcd, so "master" is filled under the group `[kube-master]` and the group `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host, "local" in the example above means local connection.
> - `ansible_host`: The name or IP address of the host to connect to.
> - `ip`: The IP address of the host to connect to.
> - `ansible_user`: The default ssh user name to use.
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
> - `ansible_ssh_pass`: The password of the host to be connected using root.
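If you prefer not to keep plain-text passwords in `hosts.ini`, an alternative is to distribute an SSH key from the taskbox and omit `ansible_ssh_pass`. The sketch below uses the example IPs above and standard OpenSSH tools; it is not a required step.
```bash
# Generate a key on the taskbox (skip if ~/.ssh/id_rsa already exists) and copy it to the nodes.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for ip in 192.168.0.2 192.168.0.3; do
  ssh-copy-id root@$ip
done
```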
## Step 3: Install KubeSphere to Linux Machines
> Note:
>
> - Generally, you can install KubeSphere without any modification, it will start with minimal installation by default.
> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environment, please [configure supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
**1.** Enter `scripts` folder, and execute `install.sh` using `root` user:
```bash
cd ../scripts
./install.sh
```
**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume.
```bash
################################################
KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/ 2020-02-24
################################################
Please input an option: 2
```
**3.** Verify the multi-node installation
**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go.
```bash
successsful!
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd
NOTE: Please modify the default password after login.
#####################################################
```
> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in.
![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components in "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.</font>
![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
## FAQ
The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud, Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also please read the [FAQ of installation](../../faq/faq-install).
If you have any further questions please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).

View File

@ -0,0 +1,157 @@
---
title: "StorageClass Configuration"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Instructions for Setting up StorageClass for KubeSphere'
weight: 2250
---
Currently, the Installer supports the following [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/), providing persistent storage services for KubeSphere (more storage classes will be supported soon).
- NFS
- Ceph RBD
- GlusterFS
- QingCloud Block Storage
- QingStor NeonSAN
- Local Volume (for development and test only)
The versions of storage systems and corresponding CSI plugins in the table listed below have been well tested.
| **Name** | **Version** | **Reference** |
| ----------- | --- |---|
| Ceph RBD Server | v0.94.10 | For development and testing, refer to [Install Ceph Storage Server](/zh-CN/appendix/ceph-ks-install/) for details. Please refer to [Ceph Documentation](http://docs.ceph.com/docs/master/) for production. |
| Ceph RBD Client | v12.2.5 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [Ceph RBD](../storage-configuration/#ceph-rbd) |
| GlusterFS Server | v3.7.6 | For development and testing, refer to [Deploying GlusterFS Storage Server](/zh-CN/appendix/glusterfs-ks-install/) for details. Please refer to [Gluster Documentation](https://www.gluster.org/install/) or [Gluster Documentation](http://gluster.readthedocs.io/en/latest/Install-Guide/Install/) for production. Note you need to install [Heketi Manager (V3.0.0)](https://github.com/heketi/heketi/tree/master/docs/admin). |
| GlusterFS Client | v3.12.10 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [GlusterFS](../storage-configuration/#glusterfs) |
| NFS Client | v3.1.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared the NFS storage server. Please see [NFS Client](../storage-configuration/#nfs) |
| QingCloud-CSI | v0.2.0.1 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [QingCloud CSI](../storage-configuration/#qingcloud-csi) for details |
| NeonSAN-CSI | v0.3.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared the QingStor NeonSAN storage server. Please see [Neonsan-CSI](../storage-configuration/#neonsan-csi) |
> Note: You are only allowed to set ONE default storage class in the cluster. To specify a default storage class, make sure there is no default storage class already existing in the cluster.
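A quick way to see whether a default storage class already exists is to list the storage classes with kubectl; the default one is shown with `(default)` after its name. This check assumes you already have access to a running cluster, for example when enabling a new storage class after installation.
```bash
# The default StorageClass, if any, is marked with "(default)" in the NAME column.
kubectl get storageclass
```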
## Storage Configuration
After preparing the storage server, you need to refer to the parameters description in the following table. Then modify the corresponding configurations in `conf/common.yaml` accordingly.
The following describes the storage configuration in `common.yaml`.
> Note: Local Volume is configured as the default storage class in `common.yaml` by default. If you are going to set other storage class as the default, disable the Local Volume and modify the configuration for other storage class.
### Local Volume (For developing or testing only)
A [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) represents a mounted local storage device such as a disk, partition or directory. Local volumes can only be used as a statically created PersistentVolume. We recommend using Local volumes in testing or development only, since they make it quick and easy to install KubeSphere without having to set up a persistent storage server. Refer to the following table for the definition in `conf/common.yaml`.
| **Local volume** | **Description** |
| --- | --- |
| local\_volume\_provisioner\_enabled | Whether to use Local as the persistent storage, defaults to true |
| local\_volume\_provisioner\_storage\_class | Storage class name, default value: local |
| local\_volume\_is\_default\_class | Whether to set Local as the default storage class, defaults to true.|
### NFS
An NFS volume allows an existing NFS (Network File System) share to be mounted into your Pod. NFS can be configured in `conf/common.yaml`. Note that you need to prepare the NFS server in advance (a quick check is shown after the table below).
| **NFS** | **Description** |
| --- | --- |
| nfs\_client\_enable | Whether to use NFS as the persistent storage, defaults to false |
| nfs\_client\_is\_default\_class | Whether to set NFS as default storage class, defaults to false. |
| nfs\_server | The NFS server address, either IP or Hostname |
| nfs\_path | NFS shared directory, which is the file directory shared on the server, see [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/volumes/#nfs) |
|nfs\_vers3\_enabled | Specifies which version of the NFS protocol to use; defaults to false, which means v4. True means v3 |
|nfs_archiveOnDelete | Archive the PVC when deleting. Data will be automatically removed from `oldPath` when it is set to false |
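Before enabling the NFS client options above, it may help to confirm that the NFS server actually exports the shared directory you plan to use for `nfs_path`. The server address below is a placeholder, and `showmount` assumes the NFS client utilities are installed on the machine you run it from.
```bash
# List the exports published by the NFS server; the directory for nfs_path should appear here.
showmount -e 192.168.0.100   # replace with your nfs_server address
```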
### Ceph RBD
The open source [Ceph RBD](https://ceph.com/) distributed storage system can be configured to use in `conf/common.yaml`. You need to prepare Ceph storage server in advance. Please refer to [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) for more details.
| **Ceph\_RBD** | **Description** |
| --- | --- |
| ceph\_rbd\_enabled | Whether to use Ceph RBD as the persistent storage, defaults to false |
| ceph\_rbd\_storage\_class | Storage class name |
| ceph\_rbd\_is\_default\_class | Whether to set Ceph RBD as default storage class, defaults to false |
| ceph\_rbd\_monitors | Ceph monitors, comma delimited. This parameter is required, which depends on Ceph RBD server parameters |
| ceph\_rbd\_admin\_id | Ceph client ID that is capable of creating images in the pool. Defaults to “admin” |
| ceph\_rbd\_admin\_secret | Admin_id's secret, secret name for "adminId". This parameter is required. The provided secret must have type “kubernetes.io/rbd” |
| ceph\_rbd\_pool | Ceph RBD pool. Default is “rbd” |
| ceph\_rbd\_user\_id | Ceph client ID that is used to map the RBD image. Default is the same as adminId |
| ceph\_rbd\_user\_secret | Secret for userId; this secret must be created in the namespace that uses the RBD image |
| ceph\_rbd\_fsType | fsType that is supported by Kubernetes. Default: "ext4"|
| ceph\_rbd\_imageFormat | Ceph RBD image format, “1” or “2”. Default is “1” |
|ceph\_rbd\_imageFeatures| This parameter is optional and should only be used if you set imageFormat to “2”. Currently supported features are layering only. Default is “”, and no features are turned on|
> Note:
>
> The Ceph secrets used in the storage class, such as "ceph_rbd_admin_secret" and "ceph_rbd_user_secret", are retrieved using the following command on the Ceph storage server.
```bash
ceph auth get-key client.admin
```
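As a hedged illustration of how the retrieved key typically ends up in the cluster, the admin secret referenced by `ceph_rbd_admin_secret` can be created with kubectl. The secret name and namespace below are placeholders for illustration, not values mandated by the installer.
```bash
# Create an rbd-type secret from the key returned by `ceph auth get-key client.admin`.
kubectl create secret generic ceph-admin-secret \
  --type="kubernetes.io/rbd" \
  --from-literal=key="$(ceph auth get-key client.admin)" \
  -n kube-system
```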
### GlusterFS
[GlusterFS](https://docs.gluster.org/en/latest/) is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. You need to prepare GlusterFS storage server in advance. Please refer to [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs) for further information.
| **GlusterFS (requires a GlusterFS cluster managed by Heketi)** | **Description** |
| --- | --- |
| glusterfs\_provisioner\_enabled | Whether to use GlusterFS as the persistent storage, defaults to false |
| glusterfs\_provisioner\_storage\_class | Storage class name |
| glusterfs\_is\_default\_class | Whether to set GlusterFS as default storage class, defaults to false |
| glusterfs\_provisioner\_restauthenabled | Gluster REST service authentication boolean that enables authentication to the REST server |
| glusterfs\_provisioner\_resturl | Gluster REST service/Heketi service URL which provisions gluster volumes on demand. The general format should be "IP address:Port" and this is a mandatory parameter for the GlusterFS dynamic provisioner |
| glusterfs\_provisioner\_clusterid | Optional, for example, 630372ccdc720a92c681fb928f27b53f is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of clusterids |
| glusterfs\_provisioner\_restuser | Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool |
| glusterfs\_provisioner\_secretName | Optional, identification of Secret instance that contains user password to use when talking to Gluster REST service, Installer will automatically create this secret in kube-system |
| glusterfs\_provisioner\_gidMin | The minimum value of GID range for the storage class |
| glusterfs\_provisioner\_gidMax |The maximum value of GID range for the storage class |
| glusterfs\_provisioner\_volumetype | The volume type and its parameters can be configured with this optional value, for example: Replica volume: volumetype: replicate:3 |
| jwt\_admin\_key | "jwt.admin.key" field is from "/etc/heketi/heketi.json" in Heketi server |
**Attention**
> Please note: `"glusterfs_provisioner_clusterid"` could be returned from glusterfs server by running the following command:
```bash
export HEKETI_CLI_SERVER=http://localhost:8080
heketi-cli cluster list
```
### QingCloud Block Storage
[QingCloud Block Storage](https://docs.qingcloud.com/product/Storage/volume/) is supported in KubeSphere as the persistent storage service. If you would like to experience dynamic provisioning when creating volumes, we recommend using it as your persistent storage solution. KubeSphere integrates [QingCloud-CSI](https://github.com/yunify/qingcloud-csi/blob/master/README_zh.md), which allows you to use various block storage services of QingCloud. With simple configuration, you can quickly expand and clone PVCs, view the topology of PVCs, create/delete snapshots, and restore volumes from snapshots.
The QingCloud-CSI plugin implements the standard CSI. You can easily create and manage different types of volumes in KubeSphere, which are provided by QingCloud. The corresponding PVCs will be created with the ReadWriteOnce access mode and mounted to running Pods.
QingCloud-CSI supports creating the following five types of volumes on QingCloud:
- High capacity
- Standard
- SSD Enterprise
- Super high performance
- High performance
|**QingCloud-CSI** | **Description**|
| --- | ---|
| qingcloud\_csi\_enabled|Whether to use QingCloud-CSI as the persistent storage volume, defaults to false |
| qingcloud\_csi\_is\_default\_class| Whether to set QingCloud-CSI as default storage class, defaults to false |
| qingcloud\_access\_key\_id, <br> qingcloud\_secret\_access\_key | Please obtain them from the [QingCloud Console](https://console.qingcloud.com/login) |
|qingcloud\_zone| Zone should be the same as the zone where the Kubernetes cluster is installed, and the CSI plugin will operate on the storage volumes for this zone. For example: zone can be set to these values, such as sh1a (Shanghai 1-A), sh1b (Shanghai 1-B), pek2 (Beijing 2), pek3a (Beijing 3-A), pek3b (Beijing 3-B), pek3c (Beijing 3-C), gd1 (Guangdong 1), gd2a (Guangdong 2-A), ap1 (Asia Pacific 1), ap2a (Asia Pacific 2-A) |
| type | The type of volume in QingCloud platform. In QingCloud platform, 0 represents high performance volume. 3 represents super high performance volume. 1 or 2 represents high capacity volume depending on clusters zone, see [QingCloud Documentation](https://docs.qingcloud.com/product/api/action/volume/create_volumes.html)|
| maxSize, minSize | Limit the range of volume size in GiB|
| stepSize | Set the increment of volumes size in GiB|
| fsType | The file system of the storage volume, which supports ext3, ext4, xfs. The default is ext4|
### QingStor NeonSAN
The NeonSAN-CSI plugin supports the enterprise-level distributed storage [QingStor NeonSAN](https://www.qingcloud.com/products/qingstor-neonsan/) as the persistent storage solution. You need to prepare the NeonSAN server, and then configure the NeonSAN-CSI plugin to connect to its storage server in `conf/common.yaml`. Please refer to [NeonSAN-CSI Reference](https://github.com/wnxn/qingstor-csi/blob/master/docs/reference_zh.md#storageclass-%E5%8F%82%E6%95%B0) for further information.
| **NeonSAN** | **Description** |
| --- | --- |
| neonsan\_csi\_enabled | Whether to use NeonSAN as the persistent storage, defaults to false |
| neonsan\_csi\_is\_default\_class | Whether to set NeonSAN-CSI as the default storage class, defaults to false|
| neonsan\_csi\_protocol | Transport protocol, such as TCP or RDMA; the user must set this option |
| neonsan\_server\_address | NeonSAN server address |
| neonsan\_cluster\_name| NeonSAN server cluster name|
| neonsan\_server\_pool | A comma-separated list of pools that the plugin manages. The user must set this option; the default value is kube |
| neonsan\_server\_replicas | NeonSAN image replica count. Default: 1 |
| neonsan\_server\_stepSize | Set the increment of volume size in GiB. Default: 1 |
| neonsan\_server\_fsType|The file system to use for the volume. Default: ext4|

View File

@ -1,9 +1,9 @@
---
title: "introduction"
title: "Introduction"
description: "Help you to better understand KubeSphere with detailed graphics and contents"
layout: "single"
linkTitle: "introduction"
linkTitle: "Introduction"
weight: 1000
@ -19,4 +19,4 @@ In this chapter, we will demonstrate how to use KubeKey to provision a new Kuber
Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first.
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}

View File

@ -0,0 +1,22 @@
---
title: "Multi-cluster Management"
description: "Import a hosted or on-premise Kubernetes cluster into KubeSphere"
layout: "single"
linkTitle: "Multi-cluster Management"
weight: 3000
icon: "/images/docs/docs.svg"
---
## Installing KubeSphere and Kubernetes on Linux
In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. Kubekey can help you to quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you to easily scale the cluster and install pluggable components on existing architecture.
## Most Popular Pages
Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first.
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}

View File

@ -0,0 +1,10 @@
---
title: "Enable Multicluster Management"
keywords: "kubernetes, StorageClass, kubesphere, PVC"
description: "Enable Multicluster Management in KubeSphere"
linkTitle: "Enable Multicluster Management"
weight: 200
---
TBD

View File

@ -0,0 +1,8 @@
---
title: "Kubernetes Federation in KubeSphere"
keywords: "kubernetes, multicluster, kubesphere, federation, hybridcloud"
description: "Kubernetes and KubeSphere node management"
linkTitle: "Kubernetes Federation in KubeSphere"
weight: 100
---

View File

@ -0,0 +1,10 @@
---
title: "Introduction"
keywords: "kubernetes, multicluster, kubesphere, hybridcloud"
description: "Upgrade KubeSphere"
linkTitle: "Introduction"
weight: 50
---
TBD

View File

@ -0,0 +1,22 @@
---
title: "Enable Pluggable Components"
description: "Enable KubeSphere Pluggable Components"
layout: "single"
linkTitle: "Enable Pluggable Components"
weight: 3500
icon: "/images/docs/docs.svg"
---
## Installing KubeSphere and Kubernetes on Linux
In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. Kubekey can help you to quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you to easily scale the cluster and install pluggable components on existing architecture.
## Most Popular Pages
Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first.
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}

View File

@ -0,0 +1,92 @@
---
title: "Release Notes For 2.0.0"
keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
description: "KubeSphere Release Notes For 2.0.0"
linkTitle: "Release Notes - 2.0.0"
weight: 500
---
KubeSphere 2.0.0 was released on **May 18th, 2019**.
## What's New in 2.0.0
### Component Upgrades
- Support [Kubernetes 1.13.5](https://github.com/kubernetes/kubernetes/releases/tag/v1.13.5)
- Integrate [QingCloud Cloud Controller](https://github.com/yunify/qingcloud-cloud-controller-manager). After installing the cloud controller, QingCloud load balancers can be created through the KubeSphere console and the backend workloads are bound automatically.
- Integrate [QingStor CSI v0.3.0](https://github.com/yunify/qingstor-csi/tree/v0.3.0) storage plugin and support physical NeonSAN storage system. Support SAN storage service with high availability and high performance.
- Integrate [QingCloud CSI v0.2.1](https://github.com/yunify/qingcloud-csi/tree/v0.2.1) storage plugin and support many types of volume to create QingCloud block services.
- Harbor is upgraded to 1.7.5.
- GitLab is upgraded to 11.8.1.
- Prometheus is upgraded to 2.5.0.
### Microservice Governance
- Integrate Istio 1.1.1 and support visualization of service mesh management.
- Enable the access to the project's external websites and the application traffic governance.
- Provide built-in sample microservice [Bookinfo Application](https://istio.io/docs/examples/bookinfo/).
- Support traffic governance.
- Support traffic mirroring.
- Provide load balancing of microservice based on Istio.
- Support canary release.
- Enable blue-green deployment.
- Enable circuit breaking.
- Enable microservice tracing.
### DevOps (CI/CD Pipeline)
- CI/CD pipelines provide email notifications, including notifications during the build.
- Enhance graphical editing of CI/CD pipelines, and support more common plugins and execution conditions in pipelines.
- Provide source code vulnerability scanning based on SonarQube 7.4.
- Support [Source to Image](https://github.com/kubesphere/s2ioperator) feature.
### Monitoring
- Provide Kubernetes component independent monitoring page including etcd, kube-apiserver and kube-scheduler.
- Optimize several monitoring algorithms.
- Optimize monitoring resources. Reduce Prometheus storage and disk usage by up to 80%.
### Logging
- Provide unified log console in terms of tenant.
- Enable accurate and fuzzy retrieval.
- Support real-time and history logs.
- Support combined log queries based on namespace, workload, Pod, container, keywords and time range.
- Support detail page of single and direct logs. Pods and containers can be switched.
- [FluentBit Operator](https://github.com/kubesphere/fluentbit-operator) supports logging gathering settings: ElasticSearch, Kafka and Fluentd can be added, activated or turned off as log collectors. Before sending to log collectors, you can configure filtering conditions for needed logs.
### Alerting and Notifications
- Email notifications are available for cluster nodes and workload resources. 
- Notification rules: combined multiple monitoring resources are available. Different warning levels, detection cycle, push times and threshold can be configured.
- Time and notifiers can be set.
- Enable notification repeating rules for different levels.
### Security Enhancement
- Fix RunC Container Escape Vulnerability [Runc container breakout](https://log.qingcloud.com/archives/5127)
- Fix Alpine Docker's image Vulnerability [Alpine container shadow breakout](https://www.alpinelinux.org/posts/Docker-image-vulnerability-CVE-2019-5021.html)
- Support single and multi-login configuration items.
- Verification code is required after multiple invalid logins.
- Enhance passwords' policy and prevent weak passwords.
- Other security enhancements.
### Interface Optimization
- Optimize multiple user experience of console, such as the switch between DevOps project and other projects.
- Optimize many Chinese-English webpages.
### Others
- Support Etcd backup and recovery.
- Support regular cleanup of the docker's image.
## Bugs Fixes
- Fix delay updates of the resource and deleted pages.
- Fix the left dirty data after deleting the HPA workload.
- Fix incorrect Job status display.
- Correct resource quota, Pod usage and storage metrics algorithm.
- Adjust CPU usage percentages.
- Many more bug fixes

View File

@ -0,0 +1,19 @@
---
title: "Release Notes For 2.0.1"
keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
description: "KubeSphere Release Notes For 2.0.1"
linkTitle: "Release Notes - 2.0.1"
weight: 400
---
KubeSphere 2.0.1 was released on **June 9th, 2019**.
## Bug Fix
- Fix the issue that CI/CD pipeline cannot recognize correct special characters in the code branch.
- Fix CI/CD pipeline's issue of being unable to check logs.
- Fix no-log data output problem caused by index document fragmentation abnormity during the log query.
- Fix prompt exceptions when searching for logs that do not exist.
- Fix the line-overlap problem on traffic governance topology and fixed invalid image strategy application.
- Many more bug fixes

View File

@ -0,0 +1,40 @@
---
title: "Release Notes For 2.0.2"
keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
description: "KubeSphere Release Notes For 2.0.2"
linkTitle: "Release Notes - 2.0.2"
weight: 300
---
KubeSphere 2.0.2 was released on July 9, 2019, which fixes known bugs and enhances existing features. If you have installed versions of 1.0.x, 2.0.0 or 2.0.1, please download KubeSphere Installer v2.0.2 to upgrade.
## What's New in 2.0.2
### Enhanced Features
- [API docs](/api-reference/api-docs/) are available on the official website.
- Block brute-force attacks.
- Standardize the maximum length of resource names.
- Upgrade the gateway of project (Ingress Controller) to the version of 0.24.1. Support Ingress grayscale release.
## List of Fixed Bugs
- Fix the issue that traffic topology displays resources outside of this project.
- Fix the extra service component issue from traffic topology under specific circumstances.
- Fix the execution issue when "Source to Image" reconstructs images under specific circumstances.
- Fix the page display problem when "Source to Image" job fails.
- Fix the log checking problem when Pod status is abnormal.
- Fix the issue that disk monitor cannot detect some types of volume mounting, such as LVM volume.
- Fix the problem of detecting deployed applications.
- Fix incorrect status of application component.
- Fix host node's number calculation errors.
- Fix input data loss caused by switching reference configuration buttons when adding environmental variables.
- Fix the rerun job issue that the Operator role cannot execute.
- Fix the initialization issue on IPv4 environment uuid.
- Fix the issue that the log detail page cannot be scrolled down to check past logs.
- Fix wrong APIServer addresses in KubeConfig files.
- Fix the issue that DevOps project's name cannot be changed.
- Fix the issue that container logs cannot specify query time.
- Fix the saving problem on relevant repository's secrets under certain circumstances.
- Fix the issue that application's service component creation page does not have image registry's secrets.

View File

@ -0,0 +1,155 @@
---
title: "Release Notes For 2.1.0"
keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
description: "KubeSphere Release Notes For 2.1.0"
linkTitle: "Release Notes - 2.1.0"
weight: 200
---
KubeSphere 2.1.0 was released on Nov 11th, 2019, which fixes known bugs, adds some new features and brings some enhancements. If you have installed versions of 2.0.x, please upgrade to enjoy the better user experience of v2.1.0.
## Installer Enhancement
- Decouple some components and make components including DevOps, service mesh, app store, logging, alerting and notification optional and pluggable
- Add Grafana (v5.2.4) as the optional component
- Upgrade Kubernetes to 1.15.5. It is also compatible with 1.14.x and 1.13.x
- Upgrade [OpenPitrix](https://openpitrix.io/) to v0.4.5
- Upgrade the log forwarder Fluent Bit to v1.3.2
- Upgrade Jenkins to v2.176.2
- Upgrade Istio to 1.3.3
- Optimize the high availability for core components
## App Store
### Features
Support uploading, testing, reviewing, publishing, classifying, upgrading, deploying and deleting apps, and provide nine built-in applications
### Upgrade & Enhancement
- The application repository configuration is moved from global to each workspace
- Support adding application repository to share applications in a workspace
## Storage
### Features
- Support Local Volume with dynamic provisioning
- Provide the real-time monitoring feature for QingCloud block storage
### Upgrade & Enhancement
QingCloud CSI is adapted to CSI 1.1.0; it supports upgrade, topology, and creating or deleting snapshots. It also supports creating PVCs based on a snapshot
### BUG Fixes
Fix the StorageClass list display problem
## Observability
### Features
- Support collecting file logs from disk, for Pods that preserve their logs as files on disk
- Support integrating with external ElasticSearch 7.x
- Ability to search logs containing Chinese words
- Add initContainer log display
- Ability to export logs
- Support for canceling the notification from alerting
### UPGRADE & ENHANCEMENT
- Improve the performance of log search
- Refine the hints when the logging service is abnormal
- Optimize the information when the monitoring metrics request is abnormal
- Support pod anti-affinity rule for Prometheus
### BUG FIXES
- Fix the mistaken highlights in the logs search result
- Fix log search not matching phrases correctly
- Fix the issue that log could not be retrieved for a deleted workload when it is searched by workload name
- Fix the issue where the results were truncated when the log is highlighted
- Fix some metrics exceptions: node `inode`, maximum pod tolerance
- Fix the issue with an incorrect number of alerting targets
- Fix filter failure problem of multi-metric monitoring
- Fix the problem of no logging and monitoring information on taint nodes (Adjust the toleration attributes of node-exporter and fluent-bit to deploy on all nodes by default, ignoring taints)
## DevOps
### Features
- Add support for branch exchange and git log export in S2I
- Add B2I, ability to build Binary/WAR/JAR package and release to Kubernetes
- Support dependency cache for the pipeline, S2I, and B2I
- Support delete Kubernetes resource action in `kubernetesDeploy` step
- Multi-branch pipelines support triggering other pipelines when a branch is created or deleted
### Upgrades & Enhancement
- Support BitBucket in the pipeline
- Support Cron script validation in the pipeline
- Support Jenkinsfile syntax validation
- Support customizing the link in SonarQube
- Support event trigger build in the pipeline
- Optimize the agent node selection in the pipeline
- Accelerate the start speed of the pipeline
- Use dynamical volume as the work directory of the Agent in the pipeline, also contributes to Jenkins [#589](https://github.com/jenkinsci/kubernetes-plugin/pull/598)
- Optimize the Jenkins kubernetesDeploy plugin, add more resources and versions (v1, apps/v1, extensions/v1beta1, apps/v1beta2, apps/v1beta1, autoscaling/v1, autoscaling/v2beta1, autoscaling/v2beta2, networking.k8s.io/v1, batch/v1beta1, batch/v2alpha1), also contributed to Jenkins [#614](https://github.com/jenkinsci/kubernetes-plugin/pull/614)
- Add support for PV, PVC, Network Policy in the deploy step of the pipeline, also contributed to Jenkins [#87](https://github.com/jenkinsci/kubernetes-cd-plugin/pull/87), [#88](https://github.com/jenkinsci/kubernetes-cd-plugin/pull/88)
### Bug Fixes
- Fix the issue that 400 bad request in GitHub Webhook
- incompatible change: DevOps Webhook's URL prefix is changed from `/webhook/xxx` to `/devops_webhook/xxx`
## Authentication and authority
### Features
Support sync and authenticate with AD account
### Upgrades & Enhancement
- Reduce the LDAP component's RAM consumption
- Add protection against brute force attacks
### Bug Fixes
- Fix LDAP connection pool leak
- Fix the issue where users could not be added in the workspace
- Fix sensitive data transmission leaks
## User Experience
### Features
Ability to manage projects (namespaces) that are not assigned to any workspace through a wizard
### Upgrades & Enhancement
- Support bash-completion in web kubectl
- Optimize the host information display
- Add connection test of the email server
- Add prompt on resource list page
- Optimize the project overview page and project basic information
- Simplify the service creation process
- Simplify the workload creation process
- Support real-time status update in the resource list
- Optimize YAML editing
- Support image search and image information display
- Add the pod list to the workload page
- Update the web terminal theme
- Support container switching in container terminal
- Optimize Pod information display, and add Pod scheduling information
- More detailed workload status display
### Bug Fixes
- Fix the issue where the default request resource of the project is displayed incorrectly
- Optimize the web terminal design, making it much easier to find
- Fix the Pod status update delay
- Fix the issue where a host could not be searched based on roles
- Fix DevOps project quantity error in workspace detail page
- Fix the issue where the workspace list pages could not be paged properly
- Fix inconsistent result ordering after querying on the workspace list page

View File

@ -0,0 +1,122 @@
---
title: "Release Notes For 2.1.1"
keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
description: "KubeSphere Release Notes For 2.1.1"
linkTitle: "Release Notes - 2.1.1"
weight: 100
---
KubeSphere 2.1.1 was released on Feb 23rd, 2020, which fixes known bugs and brings some enhancements. For users who have installed 2.0.x or 2.1.0, make sure to read the upgrade instructions in the user manual carefully before upgrading, and feel free to raise any questions on [GitHub](https://github.com/kubesphere/kubesphere/issues).
## What's New in 2.1.1
## Installer
### UPGRADE & ENHANCEMENT
- Support Kubernetes v1.14.x, v1.15.x, v1.16.x and v1.17.x, and solve the Kubernetes API compatibility issue #[1829](https://github.com/kubesphere/kubesphere/issues/1829)
- Simplify the installation steps on existing Kubernetes clusters: remove the step of specifying the cluster's CA certificate, and make specifying the etcd certificate optional when users don't need etcd monitoring metrics
- Back up the configuration of CoreDNS before upgrading
### BUG FIXES
- Fix the issue of importing apps to App Store
## App Store
### UPGRADE & ENHANCEMENT
- Upgrade OpenPitrix to v0.4.8
### BUG FIXES
- Fix the latest version display issue for the published app #[1130](https://github.com/kubesphere/kubesphere/issues/1130)
- Fix the column name display issue on the app approval list page #[1498](https://github.com/kubesphere/kubesphere/issues/1498)
- Fix the issue of searching by app name/workspace #[1497](https://github.com/kubesphere/kubesphere/issues/1497)
- Fix the issue of failing to create an app with the same name as a previously deleted app #[1821](https://github.com/kubesphere/kubesphere/pull/1821) #[1564](https://github.com/kubesphere/kubesphere/issues/1564)
- Fix the issue of failing to deploy apps in some cases #[1619](https://github.com/kubesphere/kubesphere/issues/1619) #[1730](https://github.com/kubesphere/kubesphere/issues/1730)
## Storage
### UPGRADE & ENHANCEMENT
- Support CSI plugins of Alibaba Cloud and Tencent Cloud
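As an illustration only, a StorageClass backed by one of these cloud CSI plugins might look like the sketch below. The provisioner name and parameters are assumptions and must be replaced with those of the CSI driver you actually install.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-disk                                 # assumed name
provisioner: diskplugin.csi.alibabacloud.com     # assumed provisioner; use your CSI driver's registered name
parameters:
  type: cloud_ssd                                # assumed, driver-specific parameter
reclaimPolicy: Delete
allowVolumeExpansion: true
```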
### BUG FIXES
- Fix the paging issue of storage class list page #[1583](https://github.com/kubesphere/kubesphere/issues/1583) #[1591](https://github.com/kubesphere/kubesphere/issues/1591)
- Fix the issue where the value of the imageFeatures parameter displays '2' when creating a Ceph storage class #[1593](https://github.com/kubesphere/kubesphere/issues/1593)
- Fix the issue where the search filter fails to work on the persistent volume list page #[1582](https://github.com/kubesphere/kubesphere/issues/1582)
- Fix the display issue for abnormal persistent volumes #[1581](https://github.com/kubesphere/kubesphere/issues/1581)
- Fix the display issue for persistent volumes whose associated storage class has been deleted #[1580](https://github.com/kubesphere/kubesphere/issues/1580) #[1579](https://github.com/kubesphere/kubesphere/issues/1579)
## Observability
### UPGRADE & ENHANCEMENT
- Upgrade Fluent Bit to v1.3.5 #[1505](https://github.com/kubesphere/kubesphere/issues/1505)
- Upgrade Kube-state-metrics to v1.7.2
- Upgrade Elastic Curator to v5.7.6 #[517](https://github.com/kubesphere/ks-installer/issues/517)
- Fluent Bit Operator supports dynamically detecting the location of the soft-linked Docker log folder on host machines
- Fluent Bit Operator supports managing Fluent Bit instances with declarative configuration by updating the Operator's ConfigMap
- Fix the issue of sort orders on the alert list page #[1397](https://github.com/kubesphere/kubesphere/issues/1397)
- Adjust the container memory usage metric to use 'container_memory_working_set_bytes'
### BUG FIXES
- Fix the lag issue of container logs #[1650](https://github.com/kubesphere/kubesphere/issues/1650)
- Fix the display issue that some replicas of workload have no logs on container detail log page #[1505](https://github.com/kubesphere/kubesphere/issues/1505)
- Fix the compatibility issue of Curator to support ElasticSearch 7.x #[517](https://github.com/kubesphere/ks-installer/issues/517)
- Fix the display issue of container log page during container initialization #[1518](https://github.com/kubesphere/kubesphere/issues/1518)
- Fix the blank node issue when nodes are resized #[1464](https://github.com/kubesphere/kubesphere/issues/1464)
- Fix the display issue of component status in the monitoring center, to keep it up to date #[1858](https://github.com/kubesphere/kubesphere/issues/1858)
- Fix the wrong monitoring targets number in alert detail page #[61](https://github.com/kubesphere/console/issues/61)
## DevOps
### BUG FIXES
- Fix the issue of UNSTABLE state not visible in the pipeline #[1428](https://github.com/kubesphere/kubesphere/issues/1428)
- Fix the format issue of KubeConfig in DevOps pipeline #[1529](https://github.com/kubesphere/kubesphere/issues/1529)
- Fix the image repo compatibility issue in B2I, to support image repo of Alibaba Cloud #[1500](https://github.com/kubesphere/kubesphere/issues/1500)
- Fix the paging issue in DevOps pipelines' branches list page #[1517](https://github.com/kubesphere/kubesphere/issues/1517)
- Fix the issue of failing to display pipeline configuration after modifying it #[1522](https://github.com/kubesphere/kubesphere/issues/1522)
- Fix the issue of failing to download generated artifact in S2I job #[1547](https://github.com/kubesphere/kubesphere/issues/1547)
- Fix the issue of [data loss occasionally after restarting Jenkins](https://kubesphere.com.cn/forum/d/283-jenkins)
- Fix the issue that only 'PR-HEAD' is fetched when binding a pipeline with GitHub #[1780](https://github.com/kubesphere/kubesphere/issues/1780)
- Fix the 414 error when updating DevOps credentials #[1824](https://github.com/kubesphere/kubesphere/issues/1824)
- Fix the wrong s2ib/s2ir naming issue in B2I/S2I #[1840](https://github.com/kubesphere/kubesphere/issues/1840)
- Fix the issue of failing to drag and drop tasks on pipeline editing page #[62](https://github.com/kubesphere/console/issues/62)
## Authentication and Authorization
### UPGRADE & ENHANCEMENT
- Generate client certificates through CSR #[1449](https://github.com/kubesphere/kubesphere/issues/1449)
### BUG FIXES
- Fix the content loss issue in the KubeConfig token file #[1529](https://github.com/kubesphere/kubesphere/issues/1529)
- Fix the issue that users with different permissions fail to log in on the same browser #[1600](https://github.com/kubesphere/kubesphere/issues/1600)
## User Experience
### UPGRADE & ENHANCEMENT
- Support editing SecurityContext on the workload editing page #[1530](https://github.com/kubesphere/kubesphere/issues/1530)
- Support configuring init containers on the workload editing page #[1488](https://github.com/kubesphere/kubesphere/issues/1488)
- Add support for startupProbe, and add the periodSeconds, successThreshold and failureThreshold parameters on the probe editing page (see the sketch below) #[1487](https://github.com/kubesphere/kubesphere/issues/1487)
- Optimize the status update display of Pods #[1187](https://github.com/kubesphere/kubesphere/issues/1187)
- Optimize error reporting on the console #[43](https://github.com/kubesphere/console/issues/43)
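For context, here is a minimal sketch of a container spec using a startupProbe with these parameters. The Pod name, image, port, and endpoint are assumptions, and the field requires a Kubernetes version that supports startupProbe.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo               # assumed name
spec:
  containers:
    - name: web                  # assumed container name
      image: nginx:1.17          # assumed image
      ports:
        - containerPort: 80
      startupProbe:
        httpGet:
          path: /                # assumed endpoint
          port: 80
        periodSeconds: 10        # probe every 10 seconds
        failureThreshold: 30     # allow up to 30 failures (~5 minutes) before the container is restarted
        successThreshold: 1      # must be 1 for startup probes
```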
### BUG FIXES
- Fix the status display issue for the Pods that are not under running status #[1187](https://github.com/kubesphere/kubesphere/issues/1187)
- Fix the issue that the added annotation can't be deleted when creating a QingCloud LoadBalancer service #[1395](https://github.com/kubesphere/kubesphere/issues/1395)
- Fix the display issue when selecting workload on service editing page #[1596](https://github.com/kubesphere/kubesphere/issues/1596)
- Fix the issue of failing to edit configuration file when editing 'Job' #[1521](https://github.com/kubesphere/kubesphere/issues/1521)
- Fix the issue of failing to update the service of 'StatefulSet' #[1513](https://github.com/kubesphere/kubesphere/issues/1513)
- Fix the issue of image searching for QingCloud and Alibaba Cloud image repos #[1627](https://github.com/kubesphere/kubesphere/issues/1627)
- Fix resource ordering issue with the same creation timestamp #[1750](https://github.com/kubesphere/kubesphere/pull/1750)
- Fix the issue of failing to edit configuration file when editing service #[41](https://github.com/kubesphere/console/issues/41)

View File

@ -0,0 +1,10 @@
---
title: "Release Notes For 3.0.0"
keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
description: "KubeSphere Release Notes For 3.0.0"
linkTitle: "Release Notes - 3.0.0"
weight: 50
---
TBD

View File

@ -0,0 +1,23 @@
---
title: "Project User Guide"
description: "Help you to better manage resources in a KubeSphere project"
layout: "single"
linkTitle: "Project User Guide"
weight: 4300
icon: "/images/docs/docs.svg"
---
## Installing KubeSphere and Kubernetes on Linux
In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. Kubekey can help you to quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you to easily scale the cluster and install pluggable components on existing architecture.
## Most Popular Pages
Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first.
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}

View File

@ -0,0 +1,7 @@
---
linkTitle: "Application Workloads"
weight: 2200
_build:
render: false
---

View File

@ -0,0 +1,44 @@
---
title: "Application Template"
keywords: 'kubernetes, chart, helm, KubeSphere, application'
description: 'Application Template'
linkTitle: "Application Template"
weight: 2210
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -0,0 +1,44 @@
---
title: "Composing an App for Microservices"
keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix'
description: 'Composing an app for microservices'
weight: 2260
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -0,0 +1,44 @@
---
title: "CronJobs"
keywords: 'kubesphere, kubernetes, jobs, cronjobs'
description: 'Create a Kubernetes CronJob'
weight: 2260
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -0,0 +1,44 @@
---
title: "DaemonSets"
keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix'
description: 'Kubernetes DaemonSets'
weight: 2250
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -0,0 +1,44 @@
---
title: "Deployments"
keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix'
description: 'Kubernetes Deployments'
weight: 2230
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -0,0 +1,44 @@
---
title: "Jobs"
keywords: 'kubesphere, kubernetes, docker, jobs'
description: 'Create a Kubernetes Job'
weight: 2260
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -0,0 +1,44 @@
---
title: "Jobs"
keywords: 'kubesphere, kubernetes, docker, jobs'
description: 'Create a Kubernetes Job'
weight: 2260
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -0,0 +1,44 @@
---
title: "Jobs"
keywords: 'kubesphere, kubernetes, docker, jobs'
description: 'Create a Kubernetes Job'
weight: 2260
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -0,0 +1,44 @@
---
title: "Jobs"
keywords: 'kubesphere, kubernetes, docker, jobs'
description: 'Create a Kubernetes Job'
weight: 2260
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -0,0 +1,44 @@
---
title: "StatefulSets"
keywords: 'kubesphere, kubernetes, StatefulSets, dashboard, service'
description: 'Kubernetes StatefulSets'
weight: 2240
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -0,0 +1,7 @@
---
linkTitle: "Installation"
weight: 2100
_build:
render: false
---

View File

@ -0,0 +1,44 @@
---
title: "ConfigMaps"
keywords: 'kubernetes, docker, helm, ConfigMaps'
description: 'Create a Kubernetes ConfigMap'
linkTitle: "ConfigMaps"
weight: 2110
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -0,0 +1,44 @@
---
title: "Secrets"
keywords: 'KubeSphere, kubernetes, docker, Secrets'
description: 'Create a Kubernetes Secret'
linkTitle: "Secrets"
weight: 2130
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -0,0 +1,44 @@
---
title: "Secrets"
keywords: 'KubeSphere, kubernetes, docker, Secrets'
description: 'Create a Kubernetes Secret'
linkTitle: "Secrets"
weight: 2130
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -0,0 +1,7 @@
---
linkTitle: "Installation"
weight: 2100
_build:
render: false
---

View File

@ -0,0 +1,107 @@
---
title: "Volume Snapshots"
keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Volume Snapshots'
linkTitle: "Volume Snapshots"
weight: 2130
---
This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
```yaml
######################### Kubernetes #########################
# The default k8s version will be installed
kube_version: v1.16.7
# The default etcd version will be installed
etcd_version: v3.2.18
# Configure a cron job to backup etcd data, which is running on etcd machines.
# Period of running backup etcd job, the unit is minutes.
# The default value 30 means backup etcd every 30 minutes.
etcd_backup_period: 30
# How many backup replicas to keep.
# The default value 5 means keeping the latest 5 backups; older ones will be deleted in order.
keep_backup_number: 5
# The location to store etcd backup files on etcd machines.
etcd_backup_dir: "/var/backups/kube_etcd"
# Add other registries (for users who need to accelerate image downloads).
docker_registry_mirrors:
- https://docker.mirrors.ustc.edu.cn
- https://registry.docker-cn.com
- https://mirror.aliyuncs.com
# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
kube_network_plugin: calico
# A valid CIDR range for Kubernetes services,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes pod subnet
kube_service_addresses: 10.233.0.0/18
# A valid CIDR range for Kubernetes pod subnet,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes services subnet
kube_pods_subnet: 10.233.64.0/18
# Kube-proxy proxyMode configuration, either ipvs, or iptables
kube_proxy_mode: ipvs
# Maximum pods allowed to run on every node.
kubelet_max_pods: 110
# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
enable_nodelocaldns: true
# Highly Available loadbalancer example config
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install
# address: 192.168.0.10 # Loadbalancer apiserver IP address
# port: 6443 # apiserver port
######################### KubeSphere #########################
# Version of KubeSphere
ks_version: v2.1.0
# KubeSphere console port, range 30000-32767,
# but 30180/30280/30380 are reserved for internal service
console_port: 30880 # KubeSphere console nodeport
#CommonComponent
mysql_volume_size: 20Gi # MySQL PVC size
minio_volume_size: 20Gi # Minio PVC size
etcd_volume_size: 20Gi # etcd PVC size
openldap_volume_size: 2Gi # openldap PVC size
redis_volume_size: 2Gi # Redis PVC size
# Monitoring
prometheus_replica: 2 # Prometheus replicas with 2 by default which are responsible for monitoring different segments of data source and provide high availability as well.
prometheus_memory_request: 400Mi # Prometheus request memory
prometheus_volume_size: 20Gi # Prometheus PVC size
grafana_enabled: true # enable grafana or not
## Container Engine Acceleration
## Use nvidia gpu acceleration in containers
# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed.
# nvidia_gpu_nodes:                          # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
```
## How to Configure a GPU Node
You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`. In the file `common.yaml`, specify the following configuration. Please be aware that `- node2` is indented with two spaces.
```yaml
nvidia_accelerator_enabled: true
nvidia_gpu_nodes:
- node2
```
> Note: The GPU node now only supports Ubuntu 16.04.

View File

@ -0,0 +1,107 @@
---
title: "Volume Snapshots"
keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Volume Snapshots'
linkTitle: "Volume Snapshots"
weight: 2130
---
This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
```yaml
######################### Kubernetes #########################
# The default k8s version will be installed
kube_version: v1.16.7
# The default etcd version will be installed
etcd_version: v3.2.18
# Configure a cron job to backup etcd data, which is running on etcd machines.
# Period of running backup etcd job, the unit is minutes.
# The default value 30 means backup etcd every 30 minutes.
etcd_backup_period: 30
# How many backup replicas to keep.
# The default value 5 means keeping the latest 5 backups; older ones will be deleted in order.
keep_backup_number: 5
# The location to store etcd backup files on etcd machines.
etcd_backup_dir: "/var/backups/kube_etcd"
# Add other registries (for users who need to accelerate image downloads).
docker_registry_mirrors:
- https://docker.mirrors.ustc.edu.cn
- https://registry.docker-cn.com
- https://mirror.aliyuncs.com
# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
kube_network_plugin: calico
# A valid CIDR range for Kubernetes services,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes pod subnet
kube_service_addresses: 10.233.0.0/18
# A valid CIDR range for Kubernetes pod subnet,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes services subnet
kube_pods_subnet: 10.233.64.0/18
# Kube-proxy proxyMode configuration, either ipvs, or iptables
kube_proxy_mode: ipvs
# Maximum pods allowed to run on every node.
kubelet_max_pods: 110
# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
enable_nodelocaldns: true
# Highly Available loadbalancer example config
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install
# address: 192.168.0.10 # Loadbalancer apiserver IP address
# port: 6443 # apiserver port
######################### KubeSphere #########################
# Version of KubeSphere
ks_version: v2.1.0
# KubeSphere console port, range 30000-32767,
# but 30180/30280/30380 are reserved for internal service
console_port: 30880 # KubeSphere console nodeport
#CommonComponent
mysql_volume_size: 20Gi # MySQL PVC size
minio_volume_size: 20Gi # Minio PVC size
etcd_volume_size: 20Gi # etcd PVC size
openldap_volume_size: 2Gi # openldap PVC size
redis_volume_size: 2Gi # Redis PVC size
# Monitoring
prometheus_replica: 2 # Prometheus replicas with 2 by default which are responsible for monitoring different segments of data source and provide high availability as well.
prometheus_memory_request: 400Mi # Prometheus request memory
prometheus_volume_size: 20Gi # Prometheus PVC size
grafana_enabled: true # enable grafana or not
## Container Engine Acceleration
## Use nvidia gpu acceleration in containers
# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed.
# nvidia_gpu_nodes:                          # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
```
## How to Configure a GPU Node
You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`. In the file `common.yaml`, specify the following configuration. Please be aware that `- node2` is indented with two spaces.
```yaml
nvidia_accelerator_enabled: true
nvidia_gpu_nodes:
- node2
```
> Note: The GPU node now only supports Ubuntu 16.04.

View File

@ -0,0 +1,10 @@
---
title: "Volumes"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Create Volumes (PVCs)'
linkTitle: "Volumes"
weight: 2110
---
TBD

View File

@ -0,0 +1,107 @@
---
title: "Volume Snapshots"
keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Volume Snapshots'
linkTitle: "Volume Snapshots"
weight: 2130
---
This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
```yaml
######################### Kubernetes #########################
# The default k8s version will be installed
kube_version: v1.16.7
# The default etcd version will be installed
etcd_version: v3.2.18
# Configure a cron job to backup etcd data, which is running on etcd machines.
# Period of running backup etcd job, the unit is minutes.
# The default value 30 means backup etcd every 30 minutes.
etcd_backup_period: 30
# How many backup replicas to keep.
# The default value 5 means keeping the latest 5 backups; older ones will be deleted in order.
keep_backup_number: 5
# The location to store etcd backup files on etcd machines.
etcd_backup_dir: "/var/backups/kube_etcd"
# Add other registries (for users who need to accelerate image downloads).
docker_registry_mirrors:
- https://docker.mirrors.ustc.edu.cn
- https://registry.docker-cn.com
- https://mirror.aliyuncs.com
# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
kube_network_plugin: calico
# A valid CIDR range for Kubernetes services,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes pod subnet
kube_service_addresses: 10.233.0.0/18
# A valid CIDR range for Kubernetes pod subnet,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes services subnet
kube_pods_subnet: 10.233.64.0/18
# Kube-proxy proxyMode configuration, either ipvs, or iptables
kube_proxy_mode: ipvs
# Maximum pods allowed to run on every node.
kubelet_max_pods: 110
# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
enable_nodelocaldns: true
# Highly Available loadbalancer example config
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install
# address: 192.168.0.10 # Loadbalancer apiserver IP address
# port: 6443 # apiserver port
######################### KubeSphere #########################
# Version of KubeSphere
ks_version: v2.1.0
# KubeSphere console port, range 30000-32767,
# but 30180/30280/30380 are reserved for internal service
console_port: 30880 # KubeSphere console nodeport
#CommonComponent
mysql_volume_size: 20Gi # MySQL PVC size
minio_volume_size: 20Gi # Minio PVC size
etcd_volume_size: 20Gi # etcd PVC size
openldap_volume_size: 2Gi # openldap PVC size
redis_volume_size: 2Gi # Redis PVC size
# Monitoring
prometheus_replica: 2 # Prometheus replicas with 2 by default which are responsible for monitoring different segments of data source and provide high availability as well.
prometheus_memory_request: 400Mi # Prometheus request memory
prometheus_volume_size: 20Gi # Prometheus PVC size
grafana_enabled: true # enable grafana or not
## Container Engine Acceleration
## Use nvidia gpu acceleration in containers
# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed.
# nvidia_gpu_nodes:                          # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
```
## How to Configure a GPU Node
You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`. In the file `common.yaml`, specify the following configuration. Please be aware that `- node2` is indented with two spaces.
```yaml
nvidia_accelerator_enabled: true
nvidia_gpu_nodes:
- node2
```
> Note: The GPU node now only supports Ubuntu 16.04.

View File

@ -0,0 +1,7 @@
---
linkTitle: "Installation"
weight: 2100
_build:
render: false
---

View File

@ -0,0 +1,107 @@
---
title: "Volume Snapshots"
keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Volume Snapshots'
linkTitle: "Volume Snapshots"
weight: 2130
---
This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
```yaml
######################### Kubernetes #########################
# The default k8s version will be installed
kube_version: v1.16.7
# The default etcd version will be installed
etcd_version: v3.2.18
# Configure a cron job to backup etcd data, which is running on etcd machines.
# Period of running backup etcd job, the unit is minutes.
# The default value 30 means backup etcd every 30 minutes.
etcd_backup_period: 30
# How many backup replicas to keep.
# The default value 5 means keeping the latest 5 backups; older ones will be deleted in order.
keep_backup_number: 5
# The location to store etcd backup files on etcd machines.
etcd_backup_dir: "/var/backups/kube_etcd"
# Add other registries (for users who need to accelerate image downloads).
docker_registry_mirrors:
- https://docker.mirrors.ustc.edu.cn
- https://registry.docker-cn.com
- https://mirror.aliyuncs.com
# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
kube_network_plugin: calico
# A valid CIDR range for Kubernetes services,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes pod subnet
kube_service_addresses: 10.233.0.0/18
# A valid CIDR range for Kubernetes pod subnet,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes services subnet
kube_pods_subnet: 10.233.64.0/18
# Kube-proxy proxyMode configuration, either ipvs, or iptables
kube_proxy_mode: ipvs
# Maximum pods allowed to run on every node.
kubelet_max_pods: 110
# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
enable_nodelocaldns: true
# Highly Available loadbalancer example config
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install
# address: 192.168.0.10 # Loadbalancer apiserver IP address
# port: 6443 # apiserver port
######################### KubeSphere #########################
# Version of KubeSphere
ks_version: v2.1.0
# KubeSphere console port, range 30000-32767,
# but 30180/30280/30380 are reserved for internal service
console_port: 30880 # KubeSphere console nodeport
#CommonComponent
mysql_volume_size: 20Gi # MySQL PVC size
minio_volume_size: 20Gi # Minio PVC size
etcd_volume_size: 20Gi # etcd PVC size
openldap_volume_size: 2Gi # openldap PVC size
redis_volume_size: 2Gi # Redis PVC size
# Monitoring
prometheus_replica: 2 # Prometheus replicas with 2 by default which are responsible for monitoring different segments of data source and provide high availability as well.
prometheus_memory_request: 400Mi # Prometheus request memory
prometheus_volume_size: 20Gi # Prometheus PVC size
grafana_enabled: true # enable grafana or not
## Container Engine Acceleration
## Use nvidia gpu acceleration in containers
# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed.
# nvidia_gpu_nodes:                          # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
```
## How to Configure a GPU Node
You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`. In the file `common.yaml`, specify the following configuration. Please be aware that `- node2` is indented with two spaces.
```yaml
nvidia_accelerator_enabled: true
nvidia_gpu_nodes:
- node2
```
> Note: The GPU node now only supports Ubuntu 16.04.

View File

@ -0,0 +1,107 @@
---
title: "StorageClass"
keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'StorageClass'
linkTitle: "Volume Snapshots"
weight: 2130
---
This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
```yaml
######################### Kubernetes #########################
# The default k8s version will be installed
kube_version: v1.16.7
# The default etcd version will be installed
etcd_version: v3.2.18
# Configure a cron job to backup etcd data, which is running on etcd machines.
# Period of running backup etcd job, the unit is minutes.
# The default value 30 means backup etcd every 30 minutes.
etcd_backup_period: 30
# How many backup replicas to keep.
# The default value 5 means keeping the latest 5 backups; older ones will be deleted in order.
keep_backup_number: 5
# The location to store etcd backup files on etcd machines.
etcd_backup_dir: "/var/backups/kube_etcd"
# Add other registries (for users who need to accelerate image downloads).
docker_registry_mirrors:
- https://docker.mirrors.ustc.edu.cn
- https://registry.docker-cn.com
- https://mirror.aliyuncs.com
# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
kube_network_plugin: calico
# A valid CIDR range for Kubernetes services,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes pod subnet
kube_service_addresses: 10.233.0.0/18
# A valid CIDR range for Kubernetes pod subnet,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes services subnet
kube_pods_subnet: 10.233.64.0/18
# Kube-proxy proxyMode configuration, either ipvs, or iptables
kube_proxy_mode: ipvs
# Maximum pods allowed to run on every node.
kubelet_max_pods: 110
# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
enable_nodelocaldns: true
# Highly Available loadbalancer example config
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install
# address: 192.168.0.10 # Loadbalancer apiserver IP address
# port: 6443 # apiserver port
######################### KubeSphere #########################
# Version of KubeSphere
ks_version: v2.1.0
# KubeSphere console port, range 30000-32767,
# but 30180/30280/30380 are reserved for internal service
console_port: 30880 # KubeSphere console nodeport
#CommonComponent
mysql_volume_size: 20Gi # MySQL PVC size
minio_volume_size: 20Gi # Minio PVC size
etcd_volume_size: 20Gi # etcd PVC size
openldap_volume_size: 2Gi # openldap PVC size
redis_volume_size: 2Gi # Redis PVC size
# Monitoring
prometheus_replica: 2 # Prometheus replicas with 2 by default which are responsible for monitoring different segments of data source and provide high availability as well.
prometheus_memory_request: 400Mi # Prometheus request memory
prometheus_volume_size: 20Gi # Prometheus PVC size
grafana_enabled: true # enable grafana or not
## Container Engine Acceleration
## Use nvidia gpu acceleration in containers
# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed.
# nvidia_gpu_nodes:                          # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
```
## How to Configure a GPU Node
You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`. In the file `common.yaml`, specify the following configuration. Please be aware that `- node2` is indented with two spaces.
```yaml
nvidia_accelerator_enabled: true
nvidia_gpu_nodes:
- node2
```
> Note: The GPU node now only supports Ubuntu 16.04.

View File

@ -0,0 +1,10 @@
---
title: "Volumes"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Create Volumes (PVCs)'
linkTitle: "Volumes"
weight: 2110
---
TBD

View File

@ -0,0 +1,107 @@
---
title: "Volume Snapshots"
keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Volume Snapshots'
linkTitle: "Volume Snapshots"
weight: 2130
---
This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
```yaml
######################### Kubernetes #########################
# The default k8s version will be installed
kube_version: v1.16.7
# The default etcd version will be installed
etcd_version: v3.2.18
# Configure a cron job to backup etcd data, which is running on etcd machines.
# Period of running backup etcd job, the unit is minutes.
# The default value 30 means backup etcd every 30 minutes.
etcd_backup_period: 30
# How many backup replicas to keep.
# The default value 5 means keeping the latest 5 backups; older ones will be deleted in order.
keep_backup_number: 5
# The location to store etcd backup files on etcd machines.
etcd_backup_dir: "/var/backups/kube_etcd"
# Add other registries (for users who need to accelerate image downloads).
docker_registry_mirrors:
- https://docker.mirrors.ustc.edu.cn
- https://registry.docker-cn.com
- https://mirror.aliyuncs.com
# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
kube_network_plugin: calico
# A valid CIDR range for Kubernetes services,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes pod subnet
kube_service_addresses: 10.233.0.0/18
# A valid CIDR range for Kubernetes pod subnet,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes services subnet
kube_pods_subnet: 10.233.64.0/18
# Kube-proxy proxyMode configuration, either ipvs, or iptables
kube_proxy_mode: ipvs
# Maximum pods allowed to run on every node.
kubelet_max_pods: 110
# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
enable_nodelocaldns: true
# Highly Available loadbalancer example config
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install
# address: 192.168.0.10 # Loadbalancer apiserver IP address
# port: 6443 # apiserver port
######################### KubeSphere #########################
# Version of KubeSphere
ks_version: v2.1.0
# KubeSphere console port, range 30000-32767,
# but 30180/30280/30380 are reserved for internal service
console_port: 30880 # KubeSphere console nodeport
#CommonComponent
mysql_volume_size: 20Gi # MySQL PVC size
minio_volume_size: 20Gi # Minio PVC size
etcd_volume_size: 20Gi # etcd PVC size
openldap_volume_size: 2Gi # openldap PVC size
redis_volume_size: 2Gi # Redis PVC size
# Monitoring
prometheus_replica: 2 # Prometheus replicas with 2 by default which are responsible for monitoring different segments of data source and provide high availability as well.
prometheus_memory_request: 400Mi # Prometheus request memory
prometheus_volume_size: 20Gi # Prometheus PVC size
grafana_enabled: true # enable grafana or not
## Container Engine Acceleration
## Use nvidia gpu acceleration in containers
# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed.
# nvidia_gpu_nodes:                          # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
```
## How to Configure a GPU Node
You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`. In the file `common.yaml`, specify the following configuration. Please be aware that `- node2` is indented with two spaces.
```yaml
nvidia_accelerator_enabled: true
nvidia_gpu_nodes:
- node2
```
> Note: The GPU node now only supports Ubuntu 16.04.

View File

@ -0,0 +1,7 @@
---
linkTitle: "Installation"
weight: 2100
_build:
render: false
---

View File

@ -0,0 +1,107 @@
---
title: "Volume Snapshots"
keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Volume Snapshots'
linkTitle: "Volume Snapshots"
weight: 2130
---
This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
```yaml
######################### Kubernetes #########################
# The default k8s version will be installed
kube_version: v1.16.7
# The default etcd version will be installed
etcd_version: v3.2.18
# Configure a cron job to backup etcd data, which is running on etcd machines.
# Period of running backup etcd job, the unit is minutes.
# The default value 30 means backup etcd every 30 minutes.
etcd_backup_period: 30
# How many backup replicas to keep.
# The default value 5 means keeping the latest 5 backups; older ones will be deleted in order.
keep_backup_number: 5
# The location to store etcd backup files on etcd machines.
etcd_backup_dir: "/var/backups/kube_etcd"
# Add other registries (for users who need to accelerate image downloads).
docker_registry_mirrors:
- https://docker.mirrors.ustc.edu.cn
- https://registry.docker-cn.com
- https://mirror.aliyuncs.com
# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
kube_network_plugin: calico
# A valid CIDR range for Kubernetes services,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes pod subnet
kube_service_addresses: 10.233.0.0/18
# A valid CIDR range for Kubernetes pod subnet,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes services subnet
kube_pods_subnet: 10.233.64.0/18
# Kube-proxy proxyMode configuration, either ipvs, or iptables
kube_proxy_mode: ipvs
# Maximum pods allowed to run on every node.
kubelet_max_pods: 110
# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
enable_nodelocaldns: true
# Highly Available loadbalancer example config
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install
# address: 192.168.0.10 # Loadbalancer apiserver IP address
# port: 6443 # apiserver port
######################### KubeSphere #########################
# Version of KubeSphere
ks_version: v2.1.0
# KubeSphere console port, range 30000-32767,
# but 30180/30280/30380 are reserved for internal service
console_port: 30880 # KubeSphere console nodeport
#CommonComponent
mysql_volume_size: 20Gi # MySQL PVC size
minio_volume_size: 20Gi # Minio PVC size
etcd_volume_size: 20Gi # etcd PVC size
openldap_volume_size: 2Gi # openldap PVC size
redis_volume_size: 2Gi # Redis PVC size
# Monitoring
prometheus_replica: 2 # Prometheus replicas with 2 by default which are responsible for monitoring different segments of data source and provide high availability as well.
prometheus_memory_request: 400Mi # Prometheus request memory
prometheus_volume_size: 20Gi # Prometheus PVC size
grafana_enabled: true # enable grafana or not
## Container Engine Acceleration
## Use nvidia gpu acceleration in containers
# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed.
# nvidia_gpu_nodes:                          # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
```
## How to Configure a GPU Node
You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`. In the file `common.yaml`, specify the following configuration. Please be aware that `- node2` is indented with two spaces.
```yaml
nvidia_accelerator_enabled: true
nvidia_gpu_nodes:
- node2
```
> Note: The GPU node now only supports Ubuntu 16.04.

View File

@ -0,0 +1,10 @@
---
title: "Volumes"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Create Volumes (PVCs)'
linkTitle: "Volumes"
weight: 2110
---
TBD

View File

@ -1,11 +1,11 @@
---
title: "quick-start"
title: "Quick Start"
description: "Help you to better understand KubeSphere with detailed graphics and contents"
layout: "single"
linkTitle: "quick-start"
linkTitle: "Quick Start"
weight: 3000
weight: 1500
icon: "/images/docs/docs.svg"
@ -19,4 +19,4 @@ In this chapter, we will demonstrate how to use KubeKey to provision a new Kuber
Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first.
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}

View File

@ -1,170 +0,0 @@
---
title: "Getting Started with Multi-tenant Management"
keywords: 'kubesphere, kubernetes, docker, multi-tenant'
description: 'The guide to get familiar with KubeSphere multi-tenant management'
linkTitle: "1"
weight: 3010
---
## Objective
This is the first lab exercise of KubeSphere. We strongly suggest you work through it hands-on. This guide shows how to create the workspace, roles and user accounts that are required for the next lab exercises. Moreover, you will learn how to create a project and a DevOps project within your workspace, which is where your workloads run. After this lab, you will be familiar with the KubeSphere multi-tenant management system.
## Prerequisites
You need to have KubeSphere installed.
## Estimated Time
About 15 minutes
## Architecture
The KubeSphere system is organized into **three** hierarchical levels of tenancy: cluster, workspace and project. Here a project is a Kubernetes namespace.
As shown below, you can create multiple workspaces within a Kubernetes cluster. Under each workspace you can also create multiple projects.
Each level has multiple built-in roles, and you can also create roles with customized authorization. This hierarchy is appropriate for enterprise users who have different teams or groups, and different roles within each team.
![Architecture](https://pek3b.qingstor.com/kubesphere-docs/png/20200105121616.png)
## Hands-on Lab
### Task 1: Create Roles and Accounts
The first task is to create an account and a role, and assign the role to the account. This task must be done using the built-in user `admin` with the role `cluster-admin`.
There are three built-in roles at the cluster level, as shown below.
| Built-in Roles | Description |
| --- | --- |
| cluster-admin | It has the privilege to manage any resources in the cluster. |
| workspaces-manager | It is able to manage workspaces including creating, deleting and managing the users of a workspace. |
| cluster-regular | Regular users have no authorization to manage resources before being invited to a workspace. Their access rights are decided by the role assigned to them in the specific workspace or project.|
Here is an example showing how to create a new role named `users-manager`, grant **account management** and **role management** capabilities to the role, then create a new account named `user-manager` and assign it the `users-manager` role.
| Account Name | Cluster Role | Responsibility |
| --- | --- | --- |
| user-manager | users-manager | Manage cluster accounts and roles |
1.1 Log in with the built-in user `admin`, click **Platform → Platform Roles**. You can see the role list as follows. Click **Create** to create a role which is used to manage all accounts and roles.
![Roles](https://pek3b.qingstor.com/kubesphere-docs/png/20190716112614.png#align=left&display=inline&height=998&originHeight=998&originWidth=2822&search=&status=done&width=2822)
1.2. Fill in the basic information and authorization settings of the role.
- Name: `users-manager`
- Description: Describe the role's responsibilities, here we type `Manage accounts and roles`
1.3. Check all the access rights on the options of `Account Management` and `Role Management`; then click **Create**.
![Authorization Settings](https://pek3b.qingstor.com/kubesphere-docs/png/20200305172551.png)
1.4. Click **Platform → Accounts**. You can see the account list in the current cluster. Then click **Create**.
![Account List](https://pek3b.qingstor.com/kubesphere-docs/png/20190716112945.png#align=left&display=inline&height=822&originHeight=822&originWidth=2834&search=&status=done&width=2834)
1.5. Fill in the new user's basic information. Set the username as `user-manager`; select the role `users-manager` and fill other items as required. Then click **OK** to create this account.
![Create Account](https://pek3b.qingstor.com/kubesphere-docs/png/20200105152641.png)
1.6. Then log out and log in with the user `user-manager` to create four accounts that will be used in the next lab exercises. Once logged in, go to **Platform → Accounts** and create the four accounts listed in the following table.
| Account Name | Cluster Role | Responsibility |
| --- | --- | --- |
| ws-manager | workspaces-manager | Create and manage all workspaces |
| ws-admin | cluster-regular | Manage all resources under a specific workspace (This example is used to invite new members to join a workspace.) |
| project-admin | cluster-regular | Create and manage projects, DevOps projects and invite new members into the projects |
| project-regular | cluster-regular | The regular user will be invited to the project and DevOps project by the project-admin. We use this account to create workloads, pipelines and other resources under the specified project. |
1.7. Verify the four accounts that we have created.
![Verify Accounts](https://pek3b.qingstor.com/kubesphere-docs/png/20190716114245.png#align=left&display=inline&height=1494&originHeight=1494&originWidth=2794&search=&status=done&width=2794)
### Task 2: Create a Workspace
The second task is to create a workspace using the user `ws-manager` created in the previous task, which has the workspaces-manager role.
Workspaces are the base of KubeSphere multi-tenant management. A workspace is also the basic logical unit for projects, DevOps projects and organization members.
2.1. Log in to KubeSphere with `ws-manager`, which is authorized to manage all workspaces on the platform.
Click **Platform → Workspace** in the top-left corner. You can see there is only one default workspace, **system-workspace**, listed on the page, which is for running system-related components and services. You are not allowed to delete this workspace.
Click **Create** on the workspace list page, name the new workspace `demo-workspace` and assign the user `ws-admin` as the workspace admin, as shown in the screenshot below:
![Workspace List](https://pek3b.qingstor.com/kubesphere-docs/png/20190716130007.png#align=left&display=inline&height=736&originHeight=736&originWidth=1804&search=&status=done&width=1804)
2.2. Log out and sign in with `ws-admin` after `demo-workspace` is created. Then click **View Workspace**, select **Workspace Settings → Workspace Members** and click **Invite Member**.
![Invite Members](https://pek3b.qingstor.com/kubesphere-docs/png/20200105155226.png)
2.3. Invite both `project-admin` and `project-regular`, grant them the roles listed in the table below, and click **OK** to save. Now there are three members in `demo-workspace`.
| User Name | Role in the Workspace | Responsibility |
| --- | --- | --- |
| ws-admin | workspace-admin | Manage all resources under the workspace (We use this account to invite new members into the workspace). |
| project-admin | workspace-regular | Create and manage projects, DevOps projects, and invite new members to join. |
| project-regular | workspace-viewer | Will be invited by project-admin to join the project and DevOps project. We use this account to create workloads, pipelines, etc. |
![Workspace Members](https://pek3b.qingstor.com/kubesphere-docs/png/20190716130517.png#align=left&display=inline&height=1146&originHeight=1146&originWidth=1318&search=&status=done&width=1318)
### Task 3: Create a Project
This task shows how to create a project and perform some related operations in it using the project admin account.
3.1. Sign in with `project-admin` created in the first task, then click **Create** and select **Create a resource project**.
![Project List](https://pek3b.qingstor.com/kubesphere-docs/png/20190716131852.png#align=left&display=inline&height=1322&originHeight=1322&originWidth=2810&search=&status=done&width=2810)
3.2. Name it `demo-project`, then set the CPU limit to 1 Core and memory limit to 1000 Mi in the Advanced Settings, then click **Create**.
3.3. Choose **Project Settings → Project Members** and click **Invite Member**.
![Invite Project Members](https://pek3b.qingstor.com/kubesphere-docs/png/20200105160247.png)
3.4. Invite `project-regular` to this project and grant this user the role **operator**.
![Built-in Projects Roles](https://pek3b.qingstor.com/kubesphere-docs/png/20190716132840.png#align=left&display=inline&height=1038&originHeight=1038&originWidth=1646&search=&status=done&width=1646)
![Project Roles](https://pek3b.qingstor.com/kubesphere-docs/png/20190716132920.png#align=left&display=inline&height=518&originHeight=518&originWidth=2288&search=&status=done&width=2288)
#### Set Gateway
Before creating a route (i.e., a Kubernetes Ingress), you need to enable the gateway for this project. The gateway is an [Nginx Ingress Controller](https://github.com/kubernetes/ingress-nginx) running in the project.
3.5. We continue to use `project-admin`. Choose **Project Settings → Advanced Settings** and click **Set Gateway**.
![Gateway Page](https://pek3b.qingstor.com/kubesphere-docs/png/20200105161214.png)
3.6. Choose the access method **NodePort** and click **Save**.
![Set Gateway](https://pek3b.qingstor.com/kubesphere-docs/png/20190716134742.png#align=left&display=inline&height=946&originHeight=946&originWidth=2030&search=&status=done&width=2030)
3.7. Now you can see the gateway address and the NodePorts for HTTP and HTTPS displayed on the page.
> Note: If you want to expose services using the LoadBalancer type, you need to use the [LoadBalancer plugin of your cloud provider](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/). If your Kubernetes cluster is running in a bare-metal environment, we recommend using [Porter](https://github.com/kubesphere/porter) as the LoadBalancer plugin.
![NodePort Gateway](https://pek3b.qingstor.com/kubesphere-docs/png/20200105161335.png)
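Once the gateway is enabled, routes created in this project are ordinary Kubernetes Ingress objects served by that ingress controller. The following is a rough, hypothetical sketch of such a route; the host, backend service name, and port are placeholders, and the API version may vary with your Kubernetes release.

```bash
# Hypothetical route (Ingress) served by the project's gateway; host, service and port are placeholders
cat <<EOF | kubectl apply -n demo-project -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-route
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-service
          servicePort: 8080
EOF
```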
### Task 4: Create DevOps Project (Optional)
> Prerequisite: You need to install [KubeSphere DevOps system](../../installation/install-devops), which is a pluggable component providing CI/CD pipeline, Binary-to-image and Source-to-image features.
4.1. We still use the account `project-admin` to demonstrate this task. Click **Workbench**, click the **Create** button, then select **Create a DevOps project**.
![Workbench](https://pek3b.qingstor.com/kubesphere-docs/png/20200105162512.png)
4.2. Fill in the basic information, e.g. name it `demo-devops`, then click the **Create** button. It will take a while to initialize before the console switches to the `demo-devops` page.
![demo-devops](https://pek3b.qingstor.com/kubesphere-docs/png/20200105162623.png)
4.3. Similarly, navigate to **Project Management → Project Members**, then click **Invite Member** and grant `project-regular` the role of `maintainer`, which allows the user to create pipelines, credentials, etc.
![Invite DevOps member](https://pek3b.qingstor.com/kubesphere-docs/png/20200105162710.png)
Congratulations! You are now familiar with KubeSphere's multi-tenant management mechanism. In the next few tutorials, we will use the account `project-regular` to demonstrate how to create applications and resources in the project and the DevOps project.

View File

@ -0,0 +1,8 @@
---
title: "All-in-one on Linux"
keywords: 'kubesphere, kubernetes, docker, multi-tenant'
description: 'All-in-one on Linux'
linkTitle: "All-in-one on Linux"
weight: 3010
---

View File

@ -1,162 +0,0 @@
---
title: "Application Store"
keywords: 'kubesphere, kubernetes, docker, helm, openpitrix, application store'
description: 'Application lifecycle management in Helm-based application store sponsored by OpenPitrix'
linkTitle: "13"
weight: 3130
---
KubeSphere integrates the open source [OpenPitrix](https://github.com/openpitrix/openpitrix) to set up the app store and app repository services, which provide full lifecycle management of applications. The Application Store supports three kinds of application deployment as follows:
> - **Global application store** provides one-click deployment service for Helm-based applications. It provides nine built-in applications for testing.
> - **Application template** provides a way for developers and ISVs to share applications with users in a workspace. It also supports importing third-party application repositories into a workspace.
> - **Composing application** means users can quickly compose multiple microservices into a complete application through the one-stop console.
![App Store](https://pek3b.qingstor.com/kubesphere-docs/png/20200212172234.png)
## Objective
In this tutorial, we will walk you through how to use [EMQ X](https://www.emqx.io/) as a demo application to demonstrate the **global application store** and **application lifecycle management**, including uploading, submitting, reviewing, testing, releasing, upgrading, and deleting application templates.
## Prerequisites
- You need to install [Application Store (OpenPitrix)](../../installation/install-openpitrix).
- You need to create a workspace and a project, see [Get Started with Multi-tenant Management](../admin-quick-start).
## Hands-on Lab
### Step 1: Create Customized Role and Account
In this step, we will create two accounts, i.e., `isv` for ISVs and `reviewer` for app technical reviewers.
1.1. First of all, we need to create a role for app reviewers. Log in to the KubeSphere console with the account `admin`, go to **Platform → Platform Roles**, click **Create** and name the role `app-review`, choose **App Template** in the authorization settings list, then click **Create**.
![Authorization Settings](https://pek3b.qingstor.com/kubesphere-docs/png/20200305172646.png)
1.2. Create an account `reviewer`, and grant the role of **app-review** to it.
1.3. Similarly, create an account `isv`, and grant the role of **cluster-regular** to it.
![Create Roles](https://pek3b.qingstor.com/kubesphere-docs/png/20200212180757.png)
1.4. Invite the accounts that we created above to an existing workspace such as `demo-workspace`, and grant them the role of `workspace-admin`.
### Step 2: Upload and Submit Application
2.1. Log in to KubeSphere with `isv` and enter the workspace. We are going to upload the EMQ X app to this workspace. First, download [EMQ X chart v1.0.0](https://github.com/kubesphere/tutorial/raw/master/tutorial%205%20-%20app-store/emqx-v1.0.0-rc.1.tgz), then choose **App Templates** and click **Upload Template**.
> Note: we will upload a newer version of EMQ X later on to demonstrate the upgrade feature.
![App Templates](https://pek3b.qingstor.com/kubesphere-docs/png/20200212183110.png)
2.2. Click **Upload**, then click **Upload Helm Chart Package** to upload the chart.
![Upload Template](https://pek3b.qingstor.com/kubesphere-docs/png/20200212183634.png)
2.3. Click **OK**. Now download the [EMQ Icon](https://github.com/kubesphere/tutorial/raw/master/tutorial%205%20-%20app-store/emqx-logo.png) and click **Upload icon** to upload the app logo. Click **OK** when you are done.
![EMQ Template](https://pek3b.qingstor.com/kubesphere-docs/png/20200212232222.png)
2.4. At this point, you can see the status displayed as `draft`, which means this app is under development. The uploaded app is visible to all members in the same workspace.
![Template List](https://pek3b.qingstor.com/kubesphere-docs/png/20200212232332.png)
2.5. Enter the app template detail page by clicking EMQ X in the list. You can edit the basic information of this app by clicking **Edit Info**.
![Edit Template](https://pek3b.qingstor.com/kubesphere-docs/png/20200212232811.png)
2.6. You can customize the app's basic information by filling in the fields as shown in the following screenshot.
![Customize Template](https://pek3b.qingstor.com/kubesphere-docs/png/20200213143953.png)
2.7. Save your changes; you can then test this application by deploying it to Kubernetes. Click the **Test Deploy** button.
![Save Template](https://pek3b.qingstor.com/kubesphere-docs/png/20200213152954.png)
2.8. Select the project that you want to deploy it into, then click **Deploy**.
![Deploy Template](https://pek3b.qingstor.com/kubesphere-docs/png/20200213153820.png)
2.9. Wait for a few minutes, then switch to the **Deployed Instances** tab. You will find the EMQ X app has been deployed successfully.
![Template Instance](https://pek3b.qingstor.com/kubesphere-docs/png/20200213161854.png)
2.10. At this point, you can click **Submit Review** to submit this application to the reviewer.
![Submit Template](https://pek3b.qingstor.com/kubesphere-docs/png/20200213162159.png)
2.11. As shown in the following screenshot, the app status has changed to `Submitted`. Now the app reviewer can review it.
![Template Status](https://pek3b.qingstor.com/kubesphere-docs/png/20200213162811.png)
### Step 3: Review Application
3.1. Log out, then log in to KubeSphere with the `reviewer` account. Navigate to **Platform → App Management → App Review**.
![Review List](https://pek3b.qingstor.com/kubesphere-docs/png/20200213163535.png)
3.2. Click the vertical three dots at the end of the app item in the list and select **Review**. You can then review the app's basic information, introduction, chart file, and update logs in the pop-up window.
![EMQ Info](https://pek3b.qingstor.com/kubesphere-docs/png/20200213163802.png)
3.3. It is the reviewer's responsibility to judge whether the app satisfies the criteria of the global app store. If it does, click **Pass**; otherwise, click **Reject**.
### Step 4: Release Application to Store
4.1. Log out and log in to KubeSphere with `isv` again. Now `isv` can release the EMQ X application to the global application store, which means all users on this platform can find and deploy it.
4.2. Enter the demo workspace and navigate to the EMQ X app in the template list. Enter the detail page and expand the version list, then click **Release to Store** and choose **OK** in the pop-up window.
![Release EMQ](https://pek3b.qingstor.com/kubesphere-docs/png/20200213171324.png)
4.3. At this point, EMQ X has been released to the application store.
![Audit Records](https://pek3b.qingstor.com/kubesphere-docs/png/20200213171705.png)
4.4. Go to **App Store** in the top menu, and you will see the app in the list.
![EMQ on Store](https://pek3b.qingstor.com/kubesphere-docs/png/20200213172436.png)
4.5. At this point, any user, regardless of role, can access the EMQ X application. Click the application to go through its basic information on the detail page. You can click the **Deploy** button to deploy the application to Kubernetes.
![Deploy EMQ](https://pek3b.qingstor.com/kubesphere-docs/png/20200213172650.png)
### Step 5: Create Application Category
Depending on business needs, the reviewer can create multiple categories for different types of applications. A category is similar to a tag and can be used in the application store to filter applications, e.g. Big Data, Middleware, IoT, etc.
For the EMQ X application, we can create a category and name it `IOT`. First switch back to the user `reviewer`, then go to **Platform → App Management → App Categories**.
![Create Category](https://pek3b.qingstor.com/kubesphere-docs/png/20200213172046.png)
Then click **Uncategorized** and find EMQ X, change its category to `IOT` and save it.
> Note: The reviewer usually creates the necessary categories in advance according to the requirements of the store. ISVs then categorize their applications as appropriate before submitting them for review.
![Categorize EMQ](https://pek3b.qingstor.com/kubesphere-docs/png/20200213172311.png)
### Step 6: Add New Version
6.1. KubeSphere supports adding new versions of existing applications so that users can quickly upgrade. Continue to use the `isv` account and enter the EMQ X template page in the workspace.
![Create New Version](https://pek3b.qingstor.com/kubesphere-docs/png/20200213173325.png)
6.2. Download [EMQ X v4.0.2](https://github.com/kubesphere/tutorial/raw/master/tutorial%205%20-%20app-store/emqx-v4.0.2.tgz), click **New Version** on the right, then upload the package that you just downloaded.
![Upload New Version](https://pek3b.qingstor.com/kubesphere-docs/png/20200213173744.png)
6.3. Click **OK** when the upload completes successfully.
![New Version Info](https://pek3b.qingstor.com/kubesphere-docs/png/20200213174026.png)
6.4. At this point, you can test the new version and submit it to `Reviewer`. This process is similar to the one for the first version.
![Submit New Version](https://pek3b.qingstor.com/kubesphere-docs/png/20200213174256.png)
6.5. After you submit the new version, the rest of the process, i.e. review and release, is similar to that of the first version demonstrated above.
### Step 7: Upgrade
After the new version has been released to the application store, all users can upgrade the application to this version.

View File

@ -1,129 +0,0 @@
---
title: "Binary to Image - Publish Artifacts to Kubernetes"
keywords: "kubesphere, kubernetes, docker, B2I, binary to image, jenkins"
description: "Deploy Artifacts to Kubernetes Using Binary to Image"
linkTitle: "8"
weight: 3080
---
## What is Binary to Image
Similar to [Source to Image (S2I)](../source-to-image), Binary to Image (B2I) is a toolkit and workflow for building reproducible container images from binary executables such as JAR, WAR, and binary packages. All you need to do is upload your artifact and specify the image repository, such as DockerHub or Harbor, to which you want to push it. After you run a B2I process, your image is pushed to the target repository and your application is automatically deployed to Kubernetes as well.
## How does B2I Improve CD Efficiency
As described above, B2I bridges your binary executables to cloud-native services without complicated configuration or coding, which is extremely useful for legacy applications and for users who are not familiar with Docker and Kubernetes. Moreover, with B2I you do not need to write a Dockerfile, which not only reduces the learning cost but also improves publishing efficiency, enabling developers to focus on the business itself. In short, B2I empowers enterprises for continuous delivery, one of the keys to digital transformation.
The following figure describes the step-by-step process of B2I. B2I instruments and streamlines these steps, so it takes only a few clicks to complete them in the KubeSphere console.
![B2I Process](https://pek3b.qingstor.com/kubesphere-docs/png/20200108144952.png)
> - ① Create B2I in KubeSphere console and upload artifact or binary package
> - ② B2I will create K8s Job, Deployment and Service based on the uploaded binary
> - ③ Automatically package the artifact into Docker image
> - ④ Push image to DockerHub or Harbor
> - ⑤ B2I Job will pull the image from registry for Deployment created in the second step
> - ⑥ Automatically publish the service to Kubernetes
>
> Note: In the process, the B2I Job also reports status in the backend.
In this document, we will walk you through how to use B2I in KubeSphere. For further testing on your own, we provide five artifact packages that you can download from the links in the following table; a command-line download example follows the table.
|Artifact Package (Click to download) | GitHub Repository|
| --- | ---- |
| [b2i-war-java8.war](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java8.war)| [Spring-MVC-Showcase](https://github.com/spring-projects/spring-mvc-showcase)|
|[b2i-war-java11.war](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java11.war)| [SpringMVC5](https://github.com/kubesphere/s2i-java-container/tree/master/tomcat/examples/springmvc5) |
|[b2i-binary](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-binary)| [DevOps-go-sample](https://github.com/runzexia/devops-go-sample) |
|[b2i-jar-java11.jar](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-jar-java11.jar) |[java-maven-example](https://github.com/kubesphere/s2i-java-container/tree/master/java/examples/maven) |
|[b2i-jar-java8.jar](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-jar-java8.jar) | [devops-java-sample](https://github.com/kubesphere/devops-java-sample) |
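If you prefer the command line, any of the sample artifacts above can be fetched directly, for example:

```bash
# Download the Java 8 WAR sample used in the lab below
curl -LO https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java8.war
```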
## Prerequisites
- You have installed [KubeSphere DevOps System](../../installation/install-devops).
- You have created a workspace, a project and a `project-regular` account. Please see [Get Started with Multi-tenant Management](../admin-quick-start).
- Set a dedicated CI node for building images; please refer to [Set CI Node for Dependency Cache](../../devops/devops-ci-node). This is not mandatory but recommended for development and production environments, since it caches build dependencies.
## Hands-on Lab
In this lab, we will learn how to use B2I by creating a service in KubeSphere, and see how the six steps described in the workflow graph above are completed automatically.
### Step 1: Create a Secret
We need to create a secret since the B2I Job will push the image to DockerHub. If you have finished the [S2I lab](../source-to-image), you already have the secret. Otherwise, log in to KubeSphere with the account `project-regular`, go to your project, and create the secret for DockerHub. Please refer to [Creating Common-used Secrets](../../configuration/secrets#create-common-used-secrets).
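For reference, a Docker registry secret of the same kind could also be created with kubectl. This is only a sketch with placeholder credentials, assuming the project namespace is `demo-project` and the secret name `dockerhub-secret` used in this lab:

```bash
# Hypothetical command-line equivalent of creating the DockerHub secret from the console
kubectl -n demo-project create secret docker-registry dockerhub-secret \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<DOCKERHUB_USERNAME> \
  --docker-password=<DOCKERHUB_PASSWORD>
```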
### Step 2: Create a Service
2.1. Select **Application Workloads → Services**, then click **Create** to create a new service through the artifact.
![Create Service](https://pek3b.qingstor.com/kubesphere-docs/png/20200108170544.png)
2.2. Scroll down to **Build a new service through the artifact** and choose **war**. We will use the [Spring-MVC-Showcase](https://github.com/spring-projects/spring-mvc-showcase) project as a sample by uploading the WAR artifact ([b2i-war-java8](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java8.war)) to KubeSphere.
2.3. Enter the service name `b2i-war-java8`, then click **Next**.
2.4. Refer to the following instructions to fill in **Build Settings**.
- Upload `b2i-war-java8.war` to KubeSphere.
- Choose `tomcat85-java8-centos7:latest` as the build environment.
- Enter `<DOCKERHUB_USERNAME>/<IMAGE NAME>` or `<HARBOR-PROJECT_NAME>/<IMAGE NAME>` as the image name.
- Tag the image, for instance, `latest`.
- Select the `dockerhub-secret` that we created in the previous step as the target image repository.
![Build Settings](https://pek3b.qingstor.com/kubesphere-docs/png/20200108175747.png)
2.5. Click **Next** to go to **Container Settings** and configure the basic information as shown in the figure below.
![Container Settings](https://pek3b.qingstor.com/kubesphere-docs/png/20200108175907.png)
2.6. Click **Next** and continue to click **Next** to skip **Mount Volumes**.
2.7. Check **Internet Access** and choose **NodePort**, then click **Create**.
![Internet Access](https://pek3b.qingstor.com/kubesphere-docs/png/20200108180015.png)
### Step 3: Verify B2I Build Status
3.1. Choose **Image Builder** and click **b2i-war-java8-xxx** to inspect the B2I build status.
![Image Builder](https://pek3b.qingstor.com/kubesphere-docs/png/20200108181100.png)
3.2. Now you can verify the build status. Expand the Job records to inspect the rolling logs. Normally, the build finishes successfully in 2–4 minutes.
![Job Records](https://pek3b.qingstor.com/kubesphere-docs/png/20200108181133.png)
### Step 4: Verify the Resources Created by B2I
#### Service
![Service](https://pek3b.qingstor.com/kubesphere-docs/png/20200108182649.png)
#### Deployment
![Deployment](https://pek3b.qingstor.com/kubesphere-docs/png/20200108182707.png)
#### Job
![Job](https://pek3b.qingstor.com/kubesphere-docs/png/20200108183640.png)
Alternatively, if you want to use the command line to inspect those resources, you can use web kubectl from the Toolbox at the bottom right of the console. Note that a cluster admin account is required to open the tool.
![Web Kubectl](https://pek3b.qingstor.com/kubesphere-docs/png/20200108184829.png)
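From web kubectl (or any machine with access to the cluster), a quick listing of the resources generated by this B2I run might look like the following; the namespace `demo-project` and the name prefix `b2i-war-java8` are assumptions based on this lab:

```bash
# List the Service, Deployment and Job generated by the b2i-war-java8 build
kubectl -n demo-project get services,deployments,jobs | grep b2i-war-java8
```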
### Step 5: Access the Service
Click the service **b2i-war-java8** to find its NodePort and Endpoints. You can access the **Spring-MVC-Showcase** service via the Endpoints within the cluster, or browse the web service externally at `http://{$Node IP}:{$NodePort}/{$Binary-Package-Name}/`.
![Resource Info](https://pek3b.qingstor.com/kubesphere-docs/png/20200108185210.png)
For the example above, enter **http://139.198.111.111:30182/b2i-war-java8/** to access **Spring-MVC-Showcase**. Make sure the traffic can pass through the NodePort.
![Access the Service](https://pek3b.qingstor.com/kubesphere-docs/png/20200108190256.png)
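For a quick check from the command line, a request against the NodePort shown above should return the showcase page; the IP and port below are from this example and will differ in your cluster:

```bash
# Expect an HTTP 200 response from the Spring-MVC-Showcase front page
curl -I http://139.198.111.111:30182/b2i-war-java8/
```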
### Step 6: Verify Image in DockerHub
Sign in to DockerHub with your account, and you will find the image was successfully pushed to DockerHub with the tag `latest`.
![Image in DockerHub](https://pek3b.qingstor.com/kubesphere-docs/png/20200108191311.png)
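As an optional extra check from the command line, pulling the image should succeed once it has been pushed; replace the placeholders with the values you used in the Build Settings:

```bash
# Pull the freshly pushed image to confirm it exists in the registry
docker pull <DOCKERHUB_USERNAME>/<IMAGE_NAME>:latest
```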
Congratulations! Now you know how to use B2I to package your artifacts into Docker images without having to learn Docker or write a Dockerfile.

View File

@ -1,155 +0,0 @@
---
title: "Managing Canary Release of Microservice App based on Istio"
keywords: 'kubesphere, kubernetes, docker, istio, canary release, jaeger'
description: 'How to manage canary release of microservices using Istio platform'
linkTitle: "11"
weight: 3110
---
[Istio](https://istio.io/), as an open source service mesh, provides powerful traffic management that makes canary release of a microservice possible. A **canary release** provides canary rollouts and staged rollouts with percentage-based traffic splits.
> The following paragraph is from [Istio](https://istio.io/docs/concepts/traffic-management/) official website.
Istio's traffic routing rules let you easily control the flow of traffic and API calls between services. Istio simplifies configuration of service-level properties like circuit breakers, timeouts, and retries, and makes it easy to set up important tasks like A/B testing, canary rollouts, and staged rollouts with percentage-based traffic splits. It also provides out-of-box failure recovery features that help make your application more robust against failures of dependent services or the network.
KubeSphere provides three kinds of grayscale strategies based on Istio, including blue-green deployment, canary release and traffic mirroring.
## Objective
In this tutorial, we are going to deploy a Bookinfo sample application composed of four separate microservices to demonstrate the canary release, tracing and traffic monitoring using Istio on KubeSphere.
## Prerequisites
- You need to [Enable Service Mesh System](../../installation/install-servicemesh).
- You need to complete all steps in [Getting Started with Multi-tenant Management](../admin-quick-start).
- Log in with `project-admin` and go to your project, then navigate to **Project Settings → Advanced Settings → Set Gateway** and turn on **Application Governance**.
### What is Bookinfo Application
The Bookinfo application is composed of four distributed microservices as shown below. There are three versions of the Reviews microservice.
- The **productpage** microservice calls the details and reviews microservices to populate the page.
- The **details** microservice contains book information.
- The **reviews** microservice contains book reviews. It also calls the ratings microservice.
- The **ratings** microservice contains book ranking information that accompanies a book review.
The end-to-end architecture of the application is shown below, see [Bookinfo Application](https://istio.io/docs/examples/bookinfo/) for more details.
![Bookinfo Application](https://pek3b.qingstor.com/kubesphere-docs/png/20190718152533.png#align=left&display=inline&height=1030&originHeight=1030&originWidth=1712&search=&status=done&width=1712)
## Hands-on Lab
### Step 1: Deploy Bookinfo Application
1.1. Log in with account `project-regular` and enter the **demo-project**, navigate to **Application Workloads → Applications**, click **Deploy Sample Application**.
![Application List](https://pek3b.qingstor.com/kubesphere-docs/png/20200210234559.png)
1.2. Click **Create** in the pop-up window, then the Bookinfo application will be deployed automatically, and the application components are listed in the following page, as well as the routes and hostname.
![Create Bookinfo Application](https://pek3b.qingstor.com/kubesphere-docs/png/20200210235159.png)
1.3. Now you can access the Bookinfo home page via the **Click to visit** button, as shown in the following screenshot. Click **Normal user** to enter the summary page.
![Product Page](https://pek3b.qingstor.com/kubesphere-docs/png/20190718161448.png#align=left&display=inline&height=922&originHeight=922&originWidth=2416&search=&status=done&width=2416)
> Note you need to make the URL above accessible from your computer.
1.4. Notice that at this point the Book Reviews section only shows **- Reviewer1** and **- Reviewer2** without any stars. This is the initial state of this step.
![Review Page](https://pek3b.qingstor.com/kubesphere-docs/png/20190718161819.png#align=left&display=inline&height=986&originHeight=986&originWidth=2854&search=&status=done&width=2854)
### Step 2: Create Canary Release for Reviews Service
2.1. Back in the KubeSphere console, choose **Grayscale Release** and click **Create Canary Release Job**. Then select **Canary Release** and click **Create Job**.
![Grayscale Release List](https://pek3b.qingstor.com/kubesphere-docs/png/20190718162152.png#align=left&display=inline&height=748&originHeight=748&originWidth=2846&search=&status=done&width=2846)
![Create Grayscale release](https://pek3b.qingstor.com/kubesphere-docs/png/20190718162308.png#align=left&display=inline&height=1416&originHeight=1416&originWidth=2822&search=&status=done&width=2822)
2.2. Fill in the basic information, e.g. name it `canary-release`, click **Next** and select **reviews** as the canary service, then click **Next**.
![Reviews New Version](https://pek3b.qingstor.com/kubesphere-docs/png/20190718162550.png#align=left&display=inline&height=926&originHeight=926&originWidth=1908&search=&status=done&width=1908)
2.3. Enter `v2` as the **Grayscale Release Version Number** and fill in the new image box with `kubesphere/examples-bookinfo-reviews-v2:1.13.0`. You can simply change the version in the default value from `v1` to `v2`. Then click **Next**.
![Reviews New Version Info](https://pek3b.qingstor.com/kubesphere-docs/png/20190718162840.png#align=left&display=inline&height=754&originHeight=754&originWidth=1910&search=&status=done&width=1910)
2.4. The canary release supports **Forward by traffic ratio** and **Forward by request content**. In this tutorial, we choose to adjust the traffic ratio to manage traffic routing between v1 and v2. Drag the slider to send 30% of the traffic to v1 and 70% to v2.
![Policy Config](https://pek3b.qingstor.com/kubesphere-docs/png/20190718163639.png#align=left&display=inline&height=750&originHeight=750&originWidth=1846&search=&status=done&width=1846)
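Under the hood, this weighted routing maps to Istio traffic rules. Below is a rough sketch of a VirtualService that splits reviews traffic 30/70 between the two subsets; this is illustrative only — the exact resources and names KubeSphere generates may differ, and the matching DestinationRule defining the `v1`/`v2` subsets is assumed to exist:

```bash
# Illustrative only: a 30/70 traffic split between reviews v1 and v2
cat <<EOF | kubectl apply -n demo-project -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 30
    - destination:
        host: reviews
        subset: v2
      weight: 70
EOF
```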
2.5. Click **Create** when you have completed the configuration, then you are able to see the `canary-release` has been created successfully.
![Canary Release](https://pek3b.qingstor.com/kubesphere-docs/png/20190718164216.png#align=left&display=inline&height=850&originHeight=850&originWidth=2822&search=&status=done&width=2822)
### Step 3: Verify the Canary Release
When you visit the Bookinfo website again and refresh your browser repeatedly, you will see that the Bookinfo reviews section switches between v1 and v2 at a random rate of about 30% and 70% respectively.
![Verify Canary Release](https://pek3b.qingstor.com/kubesphere-docs/png/bookinfo-canary.gif#align=left&display=inline&height=1016&originHeight=1016&originWidth=2844&search=&status=done&width=2844)
### Step 4: Inspect the Traffic Topology Graph
4.1. Connect to a host via your SSH client and use the following command to introduce real traffic by accessing the Bookinfo application every 0.5 seconds.
```bash
# Access the product page every 0.5 seconds to generate continuous traffic
$ while true; do curl -s -o /dev/null http://productpage.demo-project.192.168.0.88.nip.io:32565/productpage?u=normal; sleep 0.5; done
```
4.2. From the traffic management diagram, you can easily see the service invocations and dependencies, as well as the health and performance of the different microservices.
![Inject Traffic](https://pek3b.qingstor.com/kubesphere-docs/png/20190718170256.png#align=left&display=inline&height=1338&originHeight=1338&originWidth=2070&search=&status=done&width=2070)
4.3. Click the reviews card. The traffic monitoring graph appears, including real-time data on **Success rate**, **Traffic**, and **Duration**.
![Traffic Graph](https://pek3b.qingstor.com/kubesphere-docs/png/20190718170727.png#align=left&display=inline&height=1150&originHeight=1150&originWidth=2060&search=&status=done&width=2060)
### Step 5: Inspect the Tracing Details
KubeSphere provides a distributed tracing feature based on [Jaeger](https://www.jaegertracing.io/), which is used for monitoring and troubleshooting microservices-based distributed applications.
5.1. Choose the **Tracing** tab. You can clearly see all phases and internal calls of a request, as well as the time spent in each phase.
![Tracing](https://pek3b.qingstor.com/kubesphere-docs/png/20190718171052.png#align=left&display=inline&height=1568&originHeight=1568&originWidth=2824&search=&status=done&width=2824)
5.2. Click any item, and you can drill down to see the request details and which machine or container is processing the request.
![Request Details](https://pek3b.qingstor.com/kubesphere-docs/png/20190718173117.png#align=left&display=inline&height=1382&originHeight=1382&originWidth=2766&search=&status=done&width=2766)
### Step 6: Take Over All Traffic
6.1. As mentioned previously, when the canary version v2 is released, a portion of the traffic can be sent to it. Publishers can test the new version online and collect user feedback.
Switch to **Grayscale Release** tab, click into **canary-release**.
![Canary Release List](https://pek3b.qingstor.com/kubesphere-docs/png/20190718181326.png#align=left&display=inline&height=756&originHeight=756&originWidth=2824&search=&status=done&width=2824)
6.2. Click **···** at **reviews v2** and select **Take Over**. Then 100% of traffic will be sent to the new version v2.
> Note: If anything goes wrong along the way, we can abort and roll back to the previous version v1 in no time.
![Adjust Traffic](https://pek3b.qingstor.com/kubesphere-docs/png/20190718181413.png#align=left&display=inline&height=1438&originHeight=1438&originWidth=2744&search=&status=done&width=2744)
6.3. Open the Bookinfo page again and refresh the browser several times. You will find that it only shows the results of reviews v2, i.e. ratings with black stars.
![New Traffic Result](https://pek3b.qingstor.com/kubesphere-docs/png/20190718235627.png#align=left&display=inline&height=1108&originHeight=1108&originWidth=2372&search=&status=done&width=2372)
### Step 7: Take Down the Old Version
When the new version v2 has been released online and has taken over all the traffic successfully, and the testing results and online user feedback are confirmed to be correct, you can take down the old version and remove the resources of v1.
Click on the **Job Offline** button to take down the old version.
![Take Down Old Version](https://pek3b.qingstor.com/kubesphere-docs/png/20190719001803.png#align=left&display=inline&height=1466&originHeight=1466&originWidth=2742&search=&status=done&width=2742)
> Notice: If you take down a specific version of the component, the associated workloads and Istio-related configuration resources will be removed simultaneously. As a result, v1 is replaced by v2.
![Canary Release Result](https://pek3b.qingstor.com/kubesphere-docs/png/20190719001945.png#align=left&display=inline&height=1418&originHeight=1418&originWidth=1988&search=&status=done&width=1988)

View File

@ -0,0 +1,8 @@
---
title: "Compose and deploy a Wordpress App"
keywords: 'kubesphere, kubernetes, docker, multi-tenant'
description: 'Compose and deploy a Wordpress App'
linkTitle: "Compose and deploy a Wordpress App"
weight: 3050
---

View File

@ -0,0 +1,8 @@
---
title: "Create Workspace, Project, Account, Role"
keywords: 'kubesphere, kubernetes, docker, multi-tenant'
description: 'Create Workspace, Project, Account, and Role'
linkTitle: "Create Workspace, Project, Account, Role"
weight: 3030
---

View File

@ -0,0 +1,8 @@
---
title: "Deploy a Bookinfo App"
keywords: 'kubesphere, kubernetes, docker, multi-tenant'
description: 'Deploy a Bookinfo App'
linkTitle: "Deploy a Bookinfo App"
weight: 3040
---

Some files were not shown because too many files have changed in this diff.