update multi-cluster basic guide, sync /en to /zh

Signed-off-by: FeynmanZhou <pengfeizhou@yunify.com>
This commit is contained in:
FeynmanZhou 2020-09-07 23:42:42 +08:00
parent 42c19c2bc3
commit cab198dd0f
61 changed files with 688 additions and 1874 deletions

View File

@ -10,3 +10,5 @@ weight: 2340
The multi-cluster feature depends on the network connections among multiple clusters. It is therefore important to understand the topological relations of your clusters in advance, as this can reduce the setup workload.
Before you use the multi-cluster feature, you need to create a Host Cluster (hereafter referred to as the **H** Cluster), which is a KubeSphere cluster with the multi-cluster feature enabled. All clusters managed by the H Cluster are called Member Clusters (hereafter referred to as **M** Clusters). They are common KubeSphere clusters without the multi-cluster feature enabled. There can only be one H Cluster, while multiple M Clusters can exist at the same time. In a multi-cluster architecture, the network between the H Cluster and an M Cluster can be connected directly or through an agent, and the networks of different M Clusters can even be completely isolated from each other.
![Kubernetes Federation in KubeSphere](https://ap3.qingstor.com/kubesphere-website/docs/20200907232319.png)

View File

@ -1,224 +0,0 @@
---
title: "Role and Member Management"
keywords: 'kubernetes, kubesphere, air gapped, installation'
description: 'Role and Member Management'
weight: 2240
---
The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. This guide demonstrates how to install KubeSphere and Kubernetes in an air-gapped environment.
> Note: Dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
## Prerequisites
- If your machine is behind a firewall, you need to open the required ports; see [Port Requirements](../port-firewall) for details.
- The installer uses `/var/lib/docker` as the default directory where all Docker-related files, including images, are stored. We recommend adding extra disks of at least 100 GB mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference, and the scripted sketch after this list.
- The installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage with dynamic provisioning, which is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare a [high availability configuration](../master-ha) before installation.
- Since air-gapped machines cannot connect to an apt or yum repository, please use clean Linux machines to avoid dependency problems.
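For reference, a minimal sketch of preparing and mounting an extra disk for the registry data, assuming a blank data disk at `/dev/vdb` (the device name is an assumption; adjust it and the mount point to your environment):
```bash
# Assumption: /dev/vdb is a blank data disk; adjust the device name for your environment.
# fdisk is interactive, so parted is used here for a scripted partition step.
sudo parted -s /dev/vdb mklabel gpt mkpart primary ext4 0% 100%

# Format the new partition and mount it at /mnt/registry.
sudo mkfs.ext4 /dev/vdb1
sudo mkdir -p /mnt/registry
sudo mount /dev/vdb1 /mnt/registry

# Persist the mount across reboots.
echo '/dev/vdb1 /mnt/registry ext4 defaults 0 0' | sudo tee -a /etc/fstab
```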
## Step 1: Prepare Linux Hosts
The following describes the hardware and operating system requirements. To get started with multi-node installation, you need to prepare at least `three` hosts that meet the following requirements.
- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit).
- Time must be synchronized across all nodes; otherwise, the installation may fail.
- For `Ubuntu 16.04`, it is recommended to use `16.04.5`.
- If you are using `Ubuntu 18.04`, you need to use the `root` user.
- Ensure the disk of each node is at least 100 GB.
- CPU and memory in total across all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation. You can verify these requirements with the commands shown after this list.
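A quick way to check these requirements on each node, as a sketch (standard Linux commands; output formats vary by distribution):
```bash
# Check that the system clock is synchronized (systemd-based distributions).
timedatectl status | grep -i synchronized

# Check available disk space.
df -h

# Check CPU core count and total memory on this node.
nproc
free -g
```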
The following section walks through a multi-node installation example with three hosts, using the `master` host as the taskbox to execute the installation. The cluster consists of one Master and two Nodes.
> Note: KubeSphere supports the high-availability configuration of Master and etcd nodes. Please refer to [Creating a High Availability KubeSphere Cluster](../master-ha) for guidance.
| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|
### Cluster Architecture
#### Single Master, Single Etcd, Two Nodes
![Architecture](/cluster-architecture.svg)
## Step 2: Download Installer Package
Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the `conf` folder.
```bash
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
```
## Step 3: Configure Host Template
> This step is only for multi-node installation. You can skip it if you choose the all-in-one installation.
Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
> Note:
>
> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish SSH connections with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`; it is still recommended to switch to the `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of the nodes, and all host names should be in lowercase.
### hosts.ini
```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
[local-registry]
master
[kube-master]
master
[kube-node]
node1
node2
[etcd]
master
[k8s-cluster:children]
kube-node
kube-master
```
> Note:
>
> - You need to replace the node information, such as IPs and passwords, with real values in the `[all]` group. The master node is the taskbox, so you do not need to add a password field for it.
> - The installer uses one node as the local registry for Docker images; it defaults to "master" in the `[local-registry]` group.
> - The "master" node also takes the roles of master and etcd, so "master" is listed under both the `[kube-master]` and `[etcd]` groups.
> - "node1" and "node2" both serve the role of `Node`, so they are listed under the `[kube-node]` group.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
> - `ansible_host`: The name or address of the host to connect to.
> - `ip`: The IP of the host to connect to.
> - `ansible_user`: The default SSH user name to use.
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
> - `ansible_ssh_pass`: The SSH password of the host to connect to as root.
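As a hypothetical illustration of the non-root form described in the notes above (the host, user name and passwords are placeholders, not values from the official sample):
```bash
# Hypothetical non-root entry for conf/hosts.ini; run from the unpacked package root
# and replace the user and passwords with real values.
cat >> conf/hosts.ini <<'EOF'
node3 ansible_host=192.168.0.4 ip=192.168.0.4 ansible_user=ubuntu ansible_ssh_pass=PASSWORD ansible_become_pass=PASSWORD
EOF
```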
## Step 4: Enable All Components
> This step is for the complete installation. You can skip it if you choose a minimal installation.
Edit `conf/common.yaml` and change the following values to `true` (they are `false` by default).
```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Built-in logging only provides limited functions, so it is recommended to enable the logging system.
logging_enabled: true # Whether to install the logging system
elasticsearch_master_replica: 1 # Total number of Elasticsearch master nodes; an even number is not allowed
elasticsearch_data_replica: 2 # Total number of Elasticsearch data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in the built-in Elasticsearch, 7 days by default
elk_prefix: logstash # The string making up index names; the index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false # Whether to install built-in Kibana
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address; KubeSphere supports integrating with an external Elasticsearch to reduce resource consumption
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
# DevOps configuration
devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source-to-Image and Binary-to-Image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, 8 Gi by default
jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install the built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address; KubeSphere supports integrating with an external SonarQube to reduce resource consumption
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
# The following components are all optional for KubeSphere.
# They can be turned on before installation, or later by updating their values to true.
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # Required by KubeSphere HPA
servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```
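If you prefer to toggle these flags from the command line instead of an editor, a sketch (run from the unpacked package root; assumes the flags are still at their default `false` values):
```bash
# Flip selected flags from "false" to "true" in conf/common.yaml.
sed -i 's/^logging_enabled: false/logging_enabled: true/' conf/common.yaml
sed -i 's/^devops_enabled: false/devops_enabled: true/' conf/common.yaml
```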
## Step 5: Install KubeSphere to Linux Machines
> Note:
>
> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
> - If you want to enable the pluggable components, modify `common.yaml` and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - The installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage with dynamic provisioning. For production environments, please [configure a supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in `conf/common.yaml` to avoid conflicts; a quick check follows below.
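A simple sanity check, as a sketch, to confirm that no node IP in `hosts.ini` falls into the two default ranges (adjust the file path to your setup):
```bash
# Flag any node IP inside 10.233.0.0/18 or 10.233.64.0/18, i.e. 10.233.0.x through 10.233.127.x.
grep -E 'ip=10\.233\.([0-9]|[1-9][0-9]|1[01][0-9]|12[0-7])\.' conf/hosts.ini \
  && echo "Conflict: adjust kube_service_addresses or kube_pods_subnet in conf/common.yaml"
```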
**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
```bash
cd ../scripts
./install.sh
```
**2.** Type `2` to select the multi-node mode and start the installation. The installer will ask whether you have set up a persistent storage service. Type `yes`, since we are going to use local volumes.
```bash
################################################
KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/ 2020-02-24
################################################
Please input an option: 2
```
**3.** Verify the multi-node installation.
**(1).** If "successful!" is returned after the `install.sh` process completes, congratulations! You are ready to go.
```bash
successful!
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd
NOTE: Please modify the default password after login.
#####################################################
```
> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
**(2).** You can use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components in "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.</font>
![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
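If you prefer the command line, a hedged alternative for the same check (the exact namespaces depend on the components you enabled):
```bash
# List Pods in all namespaces; wait until every KubeSphere component is Running.
kubectl get pod --all-namespaces
```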
## Enable Pluggable Components
If you have set up a minimal installation, you can still enable the pluggable components later by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
```bash
kubectl edit cm -n kubesphere-system ks-installer
```
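You can then follow the installer's progress from its logs; a sketch (the `app=ks-install` label selector is an assumption and may differ between versions):
```bash
# Follow the ks-installer logs to watch component installation progress.
kubectl logs -n kubesphere-system \
  $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```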
## FAQ
If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).

View File

@ -0,0 +1,10 @@
---
title: "Role and Member Management"
keywords: 'kubernetes, kubesphere, air gapped, installation'
description: 'Role and Member Management'
weight: 2240
---
TBD

View File

@ -10,14 +10,4 @@ icon: "/images/docs/docs.svg"
---
## Installing KubeSphere and Kubernetes on Linux
In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey helps you quickly build a production-ready cluster architecture on a set of machines from zero to one, and makes it easy to scale the cluster and install pluggable components on an existing architecture.
## Most Popular Pages
Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
TBD

View File

@ -7,218 +7,4 @@ description: 'Role and Member Management'
weight: 2240
---
The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. This guide demonstrates how to install KubeSphere and Kubernetes in an air-gapped environment.
> Note: Dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
## Prerequisites
- If your machine is behind a firewall, you need to open the required ports; see [Port Requirements](../port-firewall) for details.
- The installer uses `/var/lib/docker` as the default directory where all Docker-related files, including images, are stored. We recommend adding extra disks of at least 100 GB mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
- The installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage with dynamic provisioning, which is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare a [high availability configuration](../master-ha) before installation.
- Since air-gapped machines cannot connect to an apt or yum repository, please use clean Linux machines to avoid dependency problems.
## Step 1: Prepare Linux Hosts
The following describes the hardware and operating system requirements. To get started with multi-node installation, you need to prepare at least `three` hosts that meet the following requirements.
- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit).
- Time must be synchronized across all nodes; otherwise, the installation may fail.
- For `Ubuntu 16.04`, it is recommended to use `16.04.5`.
- If you are using `Ubuntu 18.04`, you need to use the `root` user.
- Ensure the disk of each node is at least 100 GB.
- CPU and memory in total across all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
The following section walks through a multi-node installation example with three hosts, using the `master` host as the taskbox to execute the installation. The cluster consists of one Master and two Nodes.
> Note: KubeSphere supports the high-availability configuration of Master and etcd nodes. Please refer to [Creating a High Availability KubeSphere Cluster](../master-ha) for guidance.
| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|
### Cluster Architecture
#### Single Master, Single Etcd, Two Nodes
![Architecture](/cluster-architecture.svg)
## Step 2: Download Installer Package
Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the `conf` folder.
```bash
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
```
## Step 3: Configure Host Template
> This step is only for multi-node installation. You can skip it if you choose the all-in-one installation.
Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
> Note:
>
> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish SSH connections with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`; it is still recommended to switch to the `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of the nodes, and all host names should be in lowercase.
### hosts.ini
```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
[local-registry]
master
[kube-master]
master
[kube-node]
node1
node2
[etcd]
master
[k8s-cluster:children]
kube-node
kube-master
```
> Note:
>
> - You need to replace the node information, such as IPs and passwords, with real values in the `[all]` group. The master node is the taskbox, so you do not need to add a password field for it.
> - The installer uses one node as the local registry for Docker images; it defaults to "master" in the `[local-registry]` group.
> - The "master" node also takes the roles of master and etcd, so "master" is listed under both the `[kube-master]` and `[etcd]` groups.
> - "node1" and "node2" both serve the role of `Node`, so they are listed under the `[kube-node]` group.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
> - `ansible_host`: The name or address of the host to connect to.
> - `ip`: The IP of the host to connect to.
> - `ansible_user`: The default SSH user name to use.
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
> - `ansible_ssh_pass`: The SSH password of the host to connect to as root.
## Step 4: Enable All Components
> This step is for the complete installation. You can skip it if you choose a minimal installation.
Edit `conf/common.yaml` and change the following values to `true` (they are `false` by default).
```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Built-in logging only provides limited functions, so it is recommended to enable the logging system.
logging_enabled: true # Whether to install the logging system
elasticsearch_master_replica: 1 # Total number of Elasticsearch master nodes; an even number is not allowed
elasticsearch_data_replica: 2 # Total number of Elasticsearch data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in the built-in Elasticsearch, 7 days by default
elk_prefix: logstash # The string making up index names; the index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false # Whether to install built-in Kibana
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address; KubeSphere supports integrating with an external Elasticsearch to reduce resource consumption
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
# DevOps configuration
devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source-to-Image and Binary-to-Image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, 8 Gi by default
jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install the built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address; KubeSphere supports integrating with an external SonarQube to reduce resource consumption
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
# The following components are all optional for KubeSphere.
# They can be turned on before installation, or later by updating their values to true.
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # Required by KubeSphere HPA
servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```
## Step 5: Install KubeSphere to Linux Machines
> Note:
>
> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
> - If you want to enable the pluggable components, modify `common.yaml` and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - The installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage with dynamic provisioning. For production environments, please [configure a supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in `conf/common.yaml` to avoid conflicts.
**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
```bash
cd ../scripts
./install.sh
```
**2.** Type `2` to select the multi-node mode and start the installation. The installer will ask whether you have set up a persistent storage service. Type `yes`, since we are going to use local volumes.
```bash
################################################
KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/ 2020-02-24
################################################
Please input an option: 2
```
**3.** Verify the multi-node installation.
**(1).** If "successful!" is returned after the `install.sh` process completes, congratulations! You are ready to go.
```bash
successful!
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd
NOTE: Please modify the default password after login.
#####################################################
```
> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
**(2).** You can use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components in "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.</font>
![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
## Enable Pluggable Components
If you have set up a minimal installation, you can still enable the pluggable components later by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
```bash
kubectl edit cm -n kubesphere-system ks-installer
```
## FAQ
If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
TBD

View File

@ -1,23 +1,66 @@
---
title: "Installing KubeSphere on Kubernetes"
description: "Help you to better understand KubeSphere with detailed graphics and contents"
title: "Installing on Kubernetes"
description: "Demonstrate how to install KubeSphere on Kubernetes either hosted on cloud or on-premises."
layout: "single"
linkTitle: "Installing KubeSphere on Kubernetes"
linkTitle: "Installing on Kubernetes"
weight: 2500
icon: "/images/docs/docs.svg"
---
## Installing KubeSphere and Kubernetes on Linux
This chapter demonstrates how to deploy KubeSphere on existing Kubernetes clusters hosted on cloud or on-premises. As a highly flexible solution to container orchestration, KubeSphere allows users to deploy it and use its services across all Kubernetes engines.
In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey helps you quickly build a production-ready cluster architecture on a set of machines from zero to one, and makes it easy to scale the cluster and install pluggable components on an existing architecture.
## Introduction
### [Overview](../installing-on-kubernetes/introduction/overview/)
Develop a basic understanding of the general steps of deploying KubeSphere on existing Kubernetes clusters.
### [Prerequisites](../installing-on-kubernetes/introduction/prerequisites/)
Make sure the environment where your existing Kubernetes clusters run meets the prerequisites before installation.
## Installing on Hosted Kubernetes
### [Deploy KubeSphere on Oracle OKE](../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke/)
Learn how to deploy KubeSphere on Oracle Cloud Infrastructure Container Engine for Kubernetes.
### [Deploy KubeSphere on AWS EKS](../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks/)
Learn how to deploy KubeSphere on Amazon Elastic Kubernetes Service.
### [Deploy KubeSphere on DigitalOcean](../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do/)
Learn how to deploy KubeSphere on DigitalOcean.
### [Deploy KubeSphere on GKE](../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke/)
Learn how to deploy KubeSphere on Google Kubernetes Engine.
### [Deploy KubeSphere on AKS](../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks/)
Learn how to deploy KubeSphere on Azure Kubernetes Service.
### [Deploy KubeSphere on Huawei CCE](../installing-on-kubernetes/hosted-kubernetes/install-ks-on-huawei-cce/)
Learn how to deploy KubeSphere on Huawei Cloud Container Engine.
## Installing on On-premises Kubernetes
### [Air-gapped Installation](../installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped/)
Explore the best practice of installing KubeSphere in an air-gapped environment.
## Uninstalling
### [Uninstalling KubeSphere from Kubernetes](../installing-on-kubernetes/uninstalling/uninstalling-kubesphere-from-k8s/)
Remove KubeSphere from Kubernetes clusters.
## Most Popular Pages
Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
Below you will find some of the most viewed and helpful pages in this chapter. It is highly recommended that you refer to them first.
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
{{< popularPage icon="/images/docs/bitmap.jpg" title="Deploy KubeSphere on AWS EKS" description="Provision KubeSphere on existing Kubernetes clusters on EKS." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks/" >}}

View File

@ -0,0 +1,7 @@
---
linkTitle: "FAQ"
weight: 2700
_build:
render: false
---

View File

@ -0,0 +1,10 @@
---
title: "FAQ"
keywords: 'kubernetes, kubesphere, faq'
description: 'FAQ'
weight: 2710
---
TBD

View File

@ -1,117 +0,0 @@
---
title: "在华为云 CCE 安装 KubeSphere"
keywords: "kubesphere, kubernetes, docker, huawei, cce"
description: "介绍如何在华为云 CCE 容器引擎上部署 KubeSphere 3.0"
weight: 2255
---
This guide describes how to deploy and use the KubeSphere 3.0.0 platform on [Huawei Cloud Container Engine (CCE)](https://support.huaweicloud.com/cce/).
## Preparing the Huawei CCE Environment
### Create a Kubernetes Cluster
First, create a Kubernetes cluster according to the resource requirements of your environment. The following conditions must be met (skip this section if you already have a cluster that meets them):
- KubeSphere 3.0.0 supports Kubernetes `1.15.x`, `1.16.x`, `1.17.x` and `1.18.x` by default. Choose one of the supported versions when creating the cluster (such as `v1.15.11` or `v1.17.9`).
- Make sure the cloud hosts used by the Kubernetes cluster can access the Internet. You can either "auto-create" or "use existing" elastic IPs while creating the cluster, or configure the network after the cluster is created, for example with a [NAT Gateway](https://support.huaweicloud.com/natgateway/).
- For worker nodes, the `s3.xlarge.2` flavor with `4 cores, 8 GB` is recommended. Scale the number of worker nodes as needed (a production environment usually requires 3 or more worker nodes).
### Create a kubectl Certificate for Public Access
- After the cluster is created, go to **Resource Management** > **Cluster Management**, and bind a **public apiserver address** in the **Basic Information** > **Networking** panel.
- Then, in the panel on the right, select the **kubectl** tab and click the download link under **Download the kubectl configuration file** to obtain a publicly usable kubectl certificate.
![Generate the kubectl configuration file](/images/docs/huawei-cce/zh/generate-kubeconfig.png)
After obtaining the kubectl configuration file, you can verify the cluster connection with the kubectl command-line tool:
```bash
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-15T10:08:56Z", GoVersion:"go1.14.7", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.9-r0-CCE20.7.1.B003-17.36.3", GitCommit:"136c81cf3bd314fcbc5154e07cbeece860777e93", GitTreeState:"clean", BuildDate:"2020-08-08T06:01:28Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
```
## Deploying the KubeSphere Platform
### Create a Custom StorageClass
> The StorageClass `csi-disk` provided by the Everest CSI component built into Huawei CCE points to SATA (common I/O) disks by default, while the disks actually configured for a newly created Kubernetes cluster are usually SAS (high I/O) or SSD (ultra-high I/O). It is therefore recommended to create a matching StorageClass and set it as the default to ease later deployments. See the official documentation: [Creating an EVS disk with kubectl](https://support.huaweicloud.com/usermanual-cce/cce_01_0044.html#section7).
The following example shows how to create a StorageClass for SAS (high I/O) disks:
```yaml
# csi-disk-sas.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
    storageclass.kubesphere.io/support-snapshot: "false"
  name: csi-disk-sas
parameters:
  csi.storage.k8s.io/csi-driver-name: disk.csi.everest.io
  csi.storage.k8s.io/fstype: ext4
  # Bind Huawei "high I/O" disks; for "ultra-high I/O", change this value to SSD
  everest.io/disk-volume-type: SAS
  everest.io/passthrough: "true"
provisioner: everest-csi-provisioner
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
For how to set or unset the default StorageClass, see the Kubernetes documentation: [Change the default StorageClass](https://kubernetes.io/zh/docs/tasks/administer-cluster/change-default-storage-class/).
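As an example, a sketch of switching the default class with kubectl (class names follow the example above):
```bash
# Unset the built-in csi-disk class as the default, then mark csi-disk-sas as the default.
kubectl patch storageclass csi-disk -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
kubectl patch storageclass csi-disk-sas -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```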
### Perform a Minimal Deployment with ks-installer
Next, use [ks-installer](https://github.com/kubesphere/ks-installer) to deploy KubeSphere on the existing Kubernetes cluster. It is recommended to install the minimal feature set first by executing the following commands:
```bash
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml
```
```bash
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml
```
After executing the deployment commands, go to **Workloads** > **Pods** and check the running status of the Pods in the `kubesphere-system` namespace in the right panel to follow the deployment of the KubeSphere minimal feature set. The status of the `ks-console-xxxx` container in this namespace tells you whether the KubeSphere console is available.
![Deploy the minimal KubeSphere feature set](/images/docs/huawei-cce/zh/deploy-ks-minimal.png)
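The same status can also be checked from the command line, as a sketch:
```bash
# Check the Pods of the minimal KubeSphere deployment, including the console.
kubectl get pod -n kubesphere-system
```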
### Enable Public Access to KubeSphere
After the Pod status in the `kubesphere-system` namespace confirms that the basic KubeSphere components are running, you need to enable public network access to the KubeSphere console.
Go to **Resource Management** > **Network Management** and change the access mode of `ks-console` in the right panel. The `LoadBalancer` access mode is recommended (an elastic public IP must be bound). The configuration looks as follows once completed:
![Enable public access to KubeSphere](/images/docs/huawei-cce/zh/expose-ks-console.png)
The default options are generally fine for the service details, though you can adjust them as needed:
![Configure load balancer access for the KubeSphere console](/images/docs/huawei-cce/zh/edit-ks-console-svc.png)
Once public access is bound through the load balancer, you can visit the given address, reach the KubeSphere login page and log in to the platform with the default account (username `admin`, password `P@88w0rd`):
![Log in to the KubeSphere platform](/images/docs/huawei-cce/zh/login-ks-console.png)
### Enable Add-on Components through KubeSphere
Once the KubeSphere platform is accessible from the public network, the remaining operations can all be completed within the platform. To enable add-on components, refer to the community documentation on enabling pluggable components from the KubeSphere 3.0 web console.
💡 Note: before enabling the Istio component, you need to delete the `applications.app.k8s.io` CRD that ships with Huawei CCE because of a CustomResourceDefinition (CRD) conflict. The most direct way is to use the kubectl tool:
```bash
$ kubectl delete crd applications.app.k8s.io
```
After all add-on components are enabled and installed successfully, go to the cluster management page, which looks as follows. In particular, the **Service Components** section lists all the basic and add-on components that have been enabled:
![KubeSphere full feature set management interface](/images/docs/huawei-cce/zh/view-ks-console-full.png)

View File

@ -15,7 +15,7 @@ This section gives you an overview of the general steps of installing KubeSphere
{{< notice note >}}
Please read the prerequisites before you install KubeSphere on existing Kubernetes clusters.
Please read [Prerequisites](../prerequisites/) before you install KubeSphere on existing Kubernetes clusters.
{{</ notice >}}

View File

@ -51,4 +51,4 @@ glusterfs (default) kubernetes.io/glusterfs 3d4h
If your Kubernetes cluster environment meets all the requirements above, then you are ready to deploy KubeSphere on your existing Kubernetes cluster.
For more information, see Overview of Installing on Kubernetes.
For more information, see [Overview](../overview/).

View File

@ -1,23 +1,78 @@
---
title: "Installing on Linux"
description: "Help you to better understand KubeSphere with detailed graphics and contents"
description: "Demonstrate how to install KubeSphere on Linux on cloud and in on-premises environments."
layout: "single"
linkTitle: "Installing on Linux"
weight: 2000
icon: "/images/docs/docs.svg"
---
## Installing KubeSphere and Kubernetes on Linux
This chapter demonstrates how to use KubeKey to provision a production-ready Kubernetes and KubeSphere cluster on Linux in different environments. You can also use KubeKey to easily scale up and down your cluster and set various storage classes based on your needs.
In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey helps you quickly build a production-ready cluster architecture on a set of machines from zero to one, and makes it easy to scale the cluster and install pluggable components on an existing architecture.
## Introduction
### [Overview](../installing-on-linux/introduction/intro/)
Explore the general content of this chapter, including installation preparation, installation tools and methods, as well as storage settings.
### [Multi-node Installation](../installing-on-linux/introduction/multioverview/)
Learn the general steps of installing KubeSphere and Kubernetes on a multi-node cluster.
### [Port Requirements](../installing-on-linux/introduction/port-firewall/)
Understand the specific port requirements for different services in KubeSphere.
### [Kubernetes Cluster Configuration](../installing-on-linux/introduction/vars/)
Customize the settings in the configuration file for your cluster.
### [Persistent Storage Configuration](../installing-on-linux/introduction/storage-configuration/)
Add different storage classes to your cluster with KubeKey, such as Ceph RBD and Glusterfs.
## Installing in On-premises Environments
### [Deploy KubeSphere on VMware vSphere](../installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere/)
Learn how to create a high-availability cluster on VMware vSphere.
## Installing on Public Cloud
### [Deploy KubeSphere on Azure VM Instance](../installing-on-linux/public-cloud/install-ks-on-azure-vms/)
Learn how to create a high-availability cluster on Azure virtual machines.
### [Deploy KubeSphere on QingCloud Instance](../installing-on-linux/public-cloud/kubesphere-on-qingcloud-instance/)
Learn how to create a high-availability cluster on QingCloud platform.
## Cluster Operation
### [Add New Nodes](../installing-on-linux/cluster-operation/add-new-nodes/)
Add more nodes to scale up your cluster.
### [Remove Nodes](../installing-on-linux/cluster-operation/remove-nodes/)
Cordon a node and even delete a node to scale down your cluster.
## Uninstalling
### [Uninstalling KubeSphere and Kubernetes](../installing-on-linux/uninstalling/uninstalling-kubesphere-and-kubernetes/)
Remove KubeSphere and Kubernetes from your machines.
## FAQ
### [Configure Booster for Installation](../installing-on-linux/faq/configure-booster/)
Set a registry mirror to speed up downloads during installation.
## Most Popular Pages
Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
Below you will find some of the most viewed and helpful pages in this chapter. It is highly recommended that you refer to them first.
{{< popularPage icon="/images/docs/qingcloud-2.svg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
{{< popularPage icon="/images/docs/qingcloud-2.svg" title="Deploy KubeSphere on QingCloud" description="Provision an HA KubeSphere cluster on QingCloud." link="../installing-on-linux/public-cloud/kubesphere-on-qingcloud-instance/" >}}

View File

@ -0,0 +1,8 @@
---
title: "FAQ"
keywords: 'kubernetes, kubesphere, uninstalling, remove-cluster'
description: 'How to uninstall KubeSphere'
weight: 2470
---

View File

@ -0,0 +1,91 @@
---
title: "Configure Booster for Installation"
keywords: 'KubeSphere, booster, installation, faq'
description: 'How to configure a booster for installation'
weight: 2476
---
If you have trouble downloading images from Docker Hub, it is highly recommended that you configure a registry mirror (i.e. a booster) beforehand to speed up downloads. You can refer to the [official documentation of Docker](https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon) or follow the steps below.
## Get Booster URL
To configure the booster, you need a registry mirror address. The following example shows how to get a booster URL from Alibaba Cloud.
1. Log in to the console of Alibaba Cloud and enter "container registry" in the search bar. Click **Container Registry** in the drop-down list as shown below.
![container-registry](https://ap3.qingstor.com/kubesphere-website/docs/20200904165654.png)
2. Click **Image Booster**.
![image-booster](https://ap3.qingstor.com/kubesphere-website/docs/20200904170057.png)
3. You can find the **Booster URL** on the page shown below, together with the official guide from Alibaba Cloud that helps you configure the booster.
![](https://ap3.qingstor.com/kubesphere-website/docs/20200904171359.png)
## Set Registry Mirror
You can configure the Docker daemon directly or use KubeKey to set the configuration.
### Configure the Docker daemon
{{< notice note >}}
Docker needs to be installed in advance for this method.
{{</ notice >}}
1. Execute the following commands:
```bash
sudo mkdir -p /etc/docker
```
```bash
sudo vi /etc/docker/daemon.json
```
2. Add the `registry-mirrors` key and value to the file.
```bash
{
  "registry-mirrors": ["https://<my-docker-mirror-host>"]
}
```
{{< notice note >}}
Make sure you replace the address within the quotation marks above with your own Booster URL.
{{</ notice >}}
3. Save the file and reload Docker by executing the following commands so that the change takes effect.
```bash
sudo systemctl daemon-reload
```
```bash
sudo systemctl restart docker
```
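To confirm the mirror is active, a quick check (output wording may vary across Docker versions):
```bash
# The configured mirror should appear under "Registry Mirrors" in the daemon info.
docker info | grep -A 1 'Registry Mirrors'
```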
### Use KubeKey to set the registry mirror
1. After you create a config-sample.yaml file with KubeKey before installation, navigate to `registry` in the file.
```bash
registry:
  registryMirrors: [] # For users who need to speed up downloads
  insecureRegistries: [] # Set an address of an insecure image registry. See https://docs.docker.com/registry/insecure/
  privateRegistry: "" # Configure a private image registry for air-gapped installation (e.g. a local Docker registry or Harbor)
```
2. Enter the registry mirror address obtained above and save the file; a hypothetical filled-in example follows below. For more information about the installation process, see [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/).
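A hypothetical filled-in `registry` section (the mirror address is a placeholder, not a real endpoint):
```bash
registry:
  registryMirrors: ["https://<your-id>.mirror.aliyuncs.com"] # Replace with your own Booster URL
  insecureRegistries: []
  privateRegistry: ""
```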
{{< notice note >}}
If you adopt [all-in-one installation](../../../quick-start/all-in-one-on-linux/), refer to the first method because a config-sample.yaml file is not needed for this mode.
{{</ notice >}}

View File

@ -99,7 +99,7 @@ wget -c https://kubesphere.io/download/kubekey-v1.0.0-linux-amd64.tar.gz -O - |
Download KubeKey from [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following command directly.
```bash
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz -O - | tar -xz
```
{{</ tab >}}
@ -137,7 +137,7 @@ Here are some examples for your reference:
./kk create config [-f ~/myfolder/abc.yaml]
```
- You can customize the persistent storage plugins (e.g. NFS Client, Ceph RBD, and GlusterFS) in `sample-config.yaml`.
- You can customize persistent storage plugins (e.g. NFS Client, Ceph RBD, and GlusterFS) in `config-sample.yaml`.
```bash
./kk create config --with-storage localVolume
@ -145,7 +145,7 @@ Here are some examples for your reference:
{{< notice note >}}
KubeKey will install [OpenEBS](https://openebs.io/) to provision [LocalPV](https://kubernetes.io/docs/concepts/storage/volumes/#local) for development and testing environment by default, which is convenient for new users. For this example of multi-cluster installation, we will use the default storage class (local volume). For production, please use NFS/Ceph/GlusterFS/CSI or commercial products as persistent storage solutions, you need to specify them in `addons` of `sample-config.yaml`, see [Persistent Storage Configuration](../storage-configuration).
KubeKey will install [OpenEBS](https://openebs.io/) to provision [LocalPV](https://kubernetes.io/docs/concepts/storage/volumes/#local) for development and testing environment by default, which is convenient for new users. In this example of multi-node installation, the default storage class (local volume) is used. For production, please use NFS/Ceph/GlusterFS/CSI or commercial products as persistent storage solutions. You need to specify them under `addons` of `config-sample.yaml`. See [Persistent Storage Configuration](../storage-configuration) for more details.
{{</ notice >}}

View File

@ -1,16 +1,16 @@
---
title: "Port Requirements"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'How to set the port in firewall rules'
keywords: 'Kubernetes, KubeSphere, port-requirements, firewall-rules'
description: 'Port requirements in KubeSphere'
linkTitle: "Port Requirements"
weight: 2120
---
KubeSphere requires certain ports to communicate among services. If your network configuration uses a firewall, you need to ensure infrastructure components can communicate with each other through specific ports that act as communication endpoints for certain processes or services.
KubeSphere requires certain ports for the communications among services. If your network is configured with firewall rules, you need to ensure infrastructure components can communicate with each other through specific ports that act as communication endpoints for certain processes or services.
|services|protocol|action|start port|end port|comment
|Service|Protocol|Action|Start Port|End Port|Notes
|---|---|---|---|---|---|
|ssh|TCP|allow|22|
|etcd|TCP|allow|2379|2380|
@ -21,12 +21,11 @@ KubeSphere requires certain ports to communicate among services. If your network
|master|TCP|allow|10250|10258|
|dns|TCP|allow|53|
|dns|UDP|allow|53|
|local-registry|TCP|allow|5000||offline environment|
|local-apt|TCP|allow|5080||offline environment|
|rpcbind|TCP|allow|111|| use NFS |
|ipip| IPENCAP / IPIP|allow| | |calico needs to allow the ipip protocol |
|local-registry|TCP|allow|5000||For offline environment|
|local-apt|TCP|allow|5080||For offline environment|
|rpcbind|TCP|allow|111|| Required if NFS is used|
|ipip| IPENCAP / IPIP|allow| | |Calico needs to allow the ipip protocol |
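As an example, a hedged sketch of opening some of the ports above with firewalld on CentOS (adapt the port list and the tool, e.g. ufw or cloud security groups, to your environment):
```bash
# Open a subset of the required ports with firewalld; run on each affected node.
sudo firewall-cmd --permanent --add-port=22/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250-10258/tcp
sudo firewall-cmd --permanent --add-port=53/tcp --add-port=53/udp
sudo firewall-cmd --reload
```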
{{< notice note >}}
Please note when you use Calico network plugin and run your cluster in classic network in cloud environment, you need to open both IPENCAP and IPIP protocol for source IP.
{{</ notice >}}
When you use the Calico network plugin and run your cluster in a classic network on cloud, you need to enable both IPENCAP and IPIP protocol for the source IP.
{{</ notice >}}

View File

@ -6,13 +6,13 @@ description: 'Persistent Storage Configuration'
linkTitle: "Persistent Storage Configuration"
weight: 2140
---
# Overview
## Overview
A persistent volume is a **must** for KubeSphere. Before installing KubeSphere, one **default**
[StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) and the corresponding storage plugin should be installed on the Kubernetes cluster.
As different users may choose different storage plugins, [KubeKey](https://github.com/kubesphere/kubekey) supports installing storage plugins as
[add-ons](https://github.com/kubesphere/kubekey/blob/v1.0.0/docs/addons.md). This section introduces the add-on configuration for some commonly used storage plugins.
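You can verify that a default class is in place before installing KubeSphere; a sketch:
```bash
# The default StorageClass is marked "(default)" in the output.
kubectl get storageclass
```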
# QingCloud-CSI
## QingCloud-CSI
The [QingCloud-CSI](https://github.com/yunify/qingcloud-csi) plugin implements an interface between a CSI-enabled Container Orchestrator (CO) and QingCloud disks.
Here is a Helm chart example of installing it as a KubeKey add-on.
```bash
@ -32,7 +32,7 @@ addons:
For more information about QingCloud, see [QingCloud](https://www.qingcloud.com/).
For more chart values, see [configuration](https://github.com/kubesphere/helm-charts/tree/master/src/test/csi-qingcloud#configuration).
# NFS-client
## NFS-client
The [nfs-client-provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client) is an automatic provisioner for Kubernetes that uses your
*already configured* NFS server to dynamically create Persistent Volumes.
Here is a Helm chart example of installing it as a KubeKey add-on.
@ -51,10 +51,11 @@ addons:
```
For more chart values, see [configuration](https://github.com/kubesphere/helm-charts/tree/master/src/main/csi-nfs-provisioner#configuration).
# Ceph RBD
## Ceph RBD
Ceph RBD is an in-tree storage plugin on Kubernetes. As **hyperkube** images were [deprecated since 1.17](https://github.com/kubernetes/kubernetes/pull/85094),
**KubeKey** will never use **hyperkube** images. So in-tree Ceph rbd may not work on Kubernetes installed by **KubeKey**.
We could use [rbd provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/rbd) as substitute, which is same format with in-tree Ceph rbd.
**KubeKey** will never use **hyperkube** images. So in-tree Ceph RBD may not work on Kubernetes installed by **KubeKey**.
If you work with a Ceph cluster at 14.0.0 (Nautilus) or later, we recommend using [Ceph CSI](#ceph-csi) instead.
Meanwhile, you can use the [rbd provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/rbd) as a substitute, which uses the same format as in-tree Ceph RBD.
Here is an example of rbd-provisioner.
```yaml
- name: rbd-provisioner
@ -69,9 +70,76 @@ Here is an example of rbd-provisioner.
- ceph.userKey=SHOULD_BE_REPLACED
- sc.isDefault=true
```
For more values, see [configuration](https://github.com/kubesphere/helm-charts/tree/master/src/test/rbd-provisioner#configuration))
For more values, see [configuration](https://github.com/kubesphere/helm-charts/tree/master/src/test/rbd-provisioner#configuration)
# Glusterfs
## Ceph CSI
[Ceph-CSI](https://github.com/ceph/ceph-csi) contains the Ceph Container Storage Interface (CSI) driver for RBD and CephFS. It will be the substitute for [Ceph RBD](#ceph-rbd) in the future.
Ceph CSI should be installed on Kubernetes v1.14.0+ and works with Ceph clusters at 14.0.0 (Nautilus) or later.
For details about compatibility, see the [support matrix](https://github.com/ceph/ceph-csi#support-matrix). Here is an example of installing ceph-csi-rbd as a **KubeKey** add-on.
```yaml
csiConfig:
  - clusterID: "cluster1"
    monitors:
      - SHOULD_BE_REPLACED
```
Save the Ceph configuration YAML file locally, for example as **/root/ceph-csi-config.yaml**.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: kube-system
stringData:
  userID: admin
  userKey: SHOULD_BE_REPLACED
  encryptionPassphrase: test_passphrase
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
    storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]'
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: "cluster1"
  pool: rbd
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
```
Save the StorageClass YAML file locally, for example as **/root/ceph-csi-rbd-sc.yaml**. The add-on configuration can then be set like:
```yaml
addons:
  - name: ceph-csi-rbd
    namespace: kube-system
    sources:
      chart:
        name: ceph-csi-rbd
        repo: https://ceph.github.io/csi-charts
        values: /root/ceph-csi-config.yaml
  - name: ceph-csi-rbd-sc
    sources:
      yaml:
        path:
          - /root/ceph-csi-rbd-sc.yaml
```
For more information, see the [chart for ceph-csi-rbd](https://github.com/ceph/ceph-csi/tree/master/charts/ceph-csi-rbd).
## Glusterfs
Glusterfs is an in-tree storage plugin on Kubernetes, so only a StorageClass needs to be installed.
```yaml
apiVersion: v1
@ -117,11 +185,11 @@ Save the YAML file of StorageClass in local, **/root/glusterfs-sc.yaml** for exa
- /root/glusterfs-sc.yaml
```
# OpenEBS/LocalVolumes
## OpenEBS/LocalVolumes
[OpenEBS](https://github.com/openebs/openebs) Dynamic Local PV provisioner can create Kubernetes Local Persistent Volumes using a unique
HostPath (directory) on the node to persist data. It is very convenient for trying out KubeSphere when you have no special storage system.
If no default StorageClass is configured via a **KubeKey** add-on, OpenEBS/LocalVolumes will be installed.
# Multi-Storage
## Multi-Storage
If you intend to install more than one storage plugin, remember to set only one of them as the default.
Otherwise, [ks-installer](https://github.com/kubesphere/ks-installer) will not know which StorageClass to use.

View File

@ -1,5 +1,5 @@
---
linkTitle: "Install on On-premises environment"
linkTitle: "Installing in On-premises Environments"
weight: 2200
_build:

View File

@ -113,7 +113,7 @@ wget -c https://kubesphere.io/download/kubekey-v1.0.0-linux-amd64.tar.gz -O - |
Download KubeKey from [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following command directly:
```bash
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz -O - | tar -xz
```
{{</ tab >}}

View File

@ -1,5 +1,5 @@
---
title: "KubeSphere on QingCloud Instance"
title: "Deploy KubeSphere on QingCloud Instance"
keywords: "KubeSphere, Installation, HA, High-availability, LoadBalancer"
description: "The tutorial is for installing a high-availability cluster."
@ -136,7 +136,7 @@ wget -c https://kubesphere.io/download/kubekey-v1.0.0-linux-amd64.tar.gz -O - |
Download KubeKey from [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following command directly.
```bash
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz -O - | tar -xz
```
{{</ tab >}}

View File

@ -32,7 +32,7 @@ The feature can be enabled both before and after the installation, giving users
**High Availability**. This is extremely useful when it comes to disaster recovery. A cluster can run major services with another one serving as the backup. When the major one goes down, services can be quickly taken over by another cluster. The logic is quite similar to the case when clusters are deployed in different regions, as requests can be sent to the closest one for low latency. In short, high availability is achieved across zones and clusters.
For more information, see Multi-cluster Management.
For more information, see [Multi-cluster Management](../../multicluster-management/).
### Powerful Observability

View File

@ -46,7 +46,7 @@ With KubeSphere, users can manage the infrastructure underneath, such as adding
KubeSphere allows users to deploy applications across clusters. More importantly, an application can also be configured to run on a certain cluster. Besides, the multi-cluster feature, paired with [OpenPitrix](https://github.com/openpitrix/openpitrix), an industry-leading application management platform, enables users to manage apps across their whole lifecycle, including release, removal and distribution.
For more information, see Multi-cluster Management.
For more information, see [Multi-cluster Management](../../multicluster-management/).
## DevOps Support

View File

@ -40,7 +40,7 @@ Kubernetes has become the de facto standard in container orchestration. Against
KubeSphere provides its unique feature as a solution to the above four cases. Based on the Federation pattern of KubeSphere's multi-cluster feature, multiple heterogeneous Kubernetes clusters can be aggregated within a unified Kubernetes resource pool. When users deploy applications, they can decide which Kubernetes cluster in the pool they want app replicas to be scheduled to. The whole process is managed and maintained through KubeSphere. This is how KubeSphere helps users achieve multi-site high availability (across zones and clusters).
For more information, see Multi-cluster Management.
For more information, see [Multi-cluster Management](../../multicluster-management/).
## Full-stack Observability with Streamlined O&M

View File

@ -16,7 +16,7 @@ KubeSphere delivers **consolidated views while integrating a wide breadth of eco
## Run KubeSphere Everywhere
As a lightweight platform, KubeSphere has become more friendly to different cloud ecosystems as it does not change Kubernetes itself at all. In other words, KubeSphere can be deployed **on any existing version-compatible Kubernetes cluster on any infrastructure** including virtual machine, bare metal, on-premises, public cloud and hybrid cloud. KubeSphere users have the choice of installing KubeSphere on cloud and container platforms, such as Alibaba Cloud, AWS, QingCloud, Tencent Cloud, Huawei Cloud and Rancher, and even importing and managing their existing Kubernetes clusters created using major Kubernetes distributions. The seamless integration of KubeSphere into existing Kubernetes platforms means that the business of users will not be affected, without any modification to their current resources or assets. For more information, see Installation.
As a lightweight platform, KubeSphere has become more friendly to different cloud ecosystems as it does not change Kubernetes itself at all. In other words, KubeSphere can be deployed **on any existing version-compatible Kubernetes cluster on any infrastructure** including virtual machine, bare metal, on-premises, public cloud and hybrid cloud. KubeSphere users have the choice of installing KubeSphere on cloud and container platforms, such as Alibaba Cloud, AWS, QingCloud, Tencent Cloud, Huawei Cloud and Rancher, and even importing and managing their existing Kubernetes clusters created using major Kubernetes distributions. The seamless integration of KubeSphere into existing Kubernetes platforms means that the business of users will not be affected, without any modification to their current resources or assets. For more information, see [Installing on Linux](../../installing-on-linux/) and [Installing on Kubernetes](../../installing-on-kubernetes/).
KubeSphere screens users from the infrastructure underneath and helps enterprises modernize, migrate, deploy and manage existing and containerized apps seamlessly across a variety of infrastructure types. This is how KubeSphere empowers developers and Ops teams to focus on application development and accelerate DevOps automated workflows and delivery processes with enterprise-level observability and troubleshooting, unified monitoring and logging, centralized storage and networking management, easy-to-use CI/CD pipelines, and so on.
@ -40,7 +40,7 @@ KubeSphere screens users from the infrastructure underneath and helps enterprise
- **Multilingual Support of Web Console**. KubeSphere is designed for users around the world at the very beginning. Thanks to our community members across the globe, KubeSphere 3.0 now supports four official languages for its web console: English, Simplified Chinese, Traditional Chinese, and Spanish. More languages are expected to be supported going forward.
In addition to the above highlights, KubeSphere 3.0 also features other functionality upgrades. For more and detailed information, see Release Notes for 3.0.0.
In addition to the above highlights, KubeSphere 3.0 also features other functionality upgrades. For more and detailed information, see [Release Notes for 3.0.0](../../release/release-v300/).
## Open Source

View File

@ -10,3 +10,5 @@ weight: 2340
The multi-cluster feature relates to the network connection among multiple clusters. Therefore, it is important to understand the topological relations of clusters as the workload can be reduced.
Before you use the multi-cluster feature, you need to create a Host Cluster (hereafter referred to as **H** Cluster), which is actually a KubeSphere cluster that has enabled the multi-cluster feature. All the clusters managed by the H Cluster are called Member Cluster (hereafter referred to as **M** Cluster). They are common KubeSphere clusters that do not have the multi-cluster feature enabled. There can only be one H Cluster while multiple M Clusters can exist at the same time. In a multi-cluster architecture, the network between the H Cluster and the M Cluster can be connected directly or through an agent. The network between M Clusters can be set in a completely isolated environment.
![Kubernetes Federation in KubeSphere](https://ap3.qingstor.com/kubesphere-website/docs/20200907232319.png)

View File

@ -9,7 +9,7 @@ weight: 3535
## What is KubeSphere Logging System
KubeSphere provides a powerful, holistic and easy-to-use logging system for log collection, query and management. It covers logs from at varied levels, including tenants, infrastructure resources, and applications. Users can search logs from different dimensions, such as project, workload, Pod and keyword. Compared with Kibana, the tenant-based logging system of KubeSphere features better isolation and security among tenants as each tenant can only view his or her own logs. Apart from KubeSphere's own logging system, the container platform also allows users to add third-party log collectors, such as Elasticsearch, Kafka and Fluentd.
KubeSphere provides a powerful, holistic and easy-to-use logging system for log collection, query and management. It covers logs at varied levels, including tenants, infrastructure resources, and applications. Users can search logs from different dimensions, such as project, workload, Pod and keyword. Compared with Kibana, the tenant-based logging system of KubeSphere features better isolation and security among tenants as each tenant can only view his or her own logs. Apart from KubeSphere's own logging system, the container platform also allows users to add third-party log collectors, such as Elasticsearch, Kafka and Fluentd.
For more information, see Logging, Events and Auditing.

View File

@ -10,14 +10,4 @@ icon: "/images/docs/docs.svg"
---
## Installing KubeSphere and Kubernetes on Linux
In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. KubeKey can help you quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you easily scale the cluster and install pluggable components on an existing architecture.
## Most Popular Pages
Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
TBD

View File

@ -8,37 +8,3 @@ weight: 2210
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -8,37 +8,3 @@ weight: 2260
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -8,37 +8,3 @@ weight: 2260
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -8,37 +8,3 @@ weight: 2250
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -8,37 +8,3 @@ weight: 2230
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -1,5 +1,5 @@
---
title: "Jobs"
title: "Ingress"
keywords: 'kubesphere, kubernetes, docker, jobs'
description: 'Create a Kubernetes Job'
@ -8,37 +8,3 @@ weight: 2260
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -8,37 +8,3 @@ weight: 2260
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -1,5 +1,5 @@
---
title: "Jobs"
title: "s2i-template"
keywords: 'kubesphere, kubernetes, docker, jobs'
description: 'Create a Kubernetes Job'
@ -8,37 +8,3 @@ weight: 2260
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -1,5 +1,5 @@
---
title: "Jobs"
title: "Services"
keywords: 'kubesphere, kubernetes, docker, jobs'
description: 'Create a Kubernetes Job'
@ -8,37 +8,3 @@ weight: 2260
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -8,37 +8,3 @@ weight: 2240
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -9,36 +9,4 @@ weight: 2110
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -1,44 +1,10 @@
---
title: "Secrets"
title: "Image Registry"
keywords: 'KubeSphere, kubernetes, docker, Secrets'
description: 'Create a Kubernetes Secret'
linkTitle: "Secrets"
linkTitle: "Image Registry"
weight: 2130
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -8,37 +8,3 @@ weight: 2130
---
TBD
{{< notice note >}}
### This is a simple note.
{{</ notice >}}
{{< notice tip >}}
This is a simple tip.
{{</ notice >}}
{{< notice info >}}
This is a simple info.
{{</ notice >}}
{{< notice warning >}}
This is a simple warning.
{{</ notice >}}
{{< tabs >}}
{{< tab "first" >}}
### Why KubeSphere
{{</ tab >}}
{{< tab "second" >}}
```
console.log('test')
```
{{</ tab >}}
{{< tab "third" >}}
this is third tab
{{</ tab >}}
{{</ tabs >}}

View File

@ -1,107 +1,10 @@
---
title: "Volume Snapshots"
title: "Blue-green Deployment"
keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Volume Snapshots'
description: 'Blue-green Deployment'
linkTitle: "Volume Snapshots"
linkTitle: "Blue-green Deployment"
weight: 2130
---
This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
```yaml
######################### Kubernetes #########################
# The default k8s version will be installed
kube_version: v1.16.7
# The default etcd version will be installed
etcd_version: v3.2.18
# Configure a cron job to backup etcd data, which is running on etcd machines.
# Period of running backup etcd job, the unit is minutes.
# The default value 30 means backup etcd every 30 minutes.
etcd_backup_period: 30
# How many backup replicas to keep.
# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order.
keep_backup_number: 5
# The location to store etcd backup files on etcd machines.
etcd_backup_dir: "/var/backups/kube_etcd"
# Add other registries. (For users who need to accelerate image downloads)
docker_registry_mirrors:
- https://docker.mirrors.ustc.edu.cn
- https://registry.docker-cn.com
- https://mirror.aliyuncs.com
# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
kube_network_plugin: calico
# A valid CIDR range for Kubernetes services,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes pod subnet
kube_service_addresses: 10.233.0.0/18
# A valid CIDR range for Kubernetes pod subnet,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes services subnet
kube_pods_subnet: 10.233.64.0/18
# Kube-proxy proxyMode configuration, either ipvs, or iptables
kube_proxy_mode: ipvs
# Maximum pods allowed to run on every node.
kubelet_max_pods: 110
# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
enable_nodelocaldns: true
# Highly Available loadbalancer example config
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install
# address: 192.168.0.10 # Loadbalancer apiserver IP address
# port: 6443 # apiserver port
######################### KubeSphere #########################
# Version of KubeSphere
ks_version: v2.1.0
# KubeSphere console port, range 30000-32767,
# but 30180/30280/30380 are reserved for internal service
console_port: 30880 # KubeSphere console nodeport
#CommonComponent
mysql_volume_size: 20Gi # MySQL PVC size
minio_volume_size: 20Gi # Minio PVC size
etcd_volume_size: 20Gi # etcd PVC size
openldap_volume_size: 2Gi # openldap PVC size
redis_volume_size: 2Gi # Redis PVC size
# Monitoring
prometheus_replica: 2 # Prometheus replicas, 2 by default; they monitor different segments of the data source and provide high availability as well.
prometheus_memory_request: 400Mi # Prometheus request memory
prometheus_volume_size: 20Gi # Prometheus PVC size
grafana_enabled: true # enable grafana or not
## Container Engine Acceleration
## Use nvidia gpu acceleration in containers
# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed.
# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
```
## How to Configure a GPU Node
You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`. In that case, specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented with two spaces.
```yaml
nvidia_accelerator_enabled: true
nvidia_gpu_nodes:
- node2
```
> Note: The GPU node now only supports Ubuntu 16.04.
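For reference, `hosts.ini` is the Ansible inventory consumed by the installer. A minimal sketch with `node2` as a GPU worker might look like the following; the host names, IP addresses, group layout and credentials here are assumptions for illustration:
```ini
[all]
node1 ansible_connection=local ip=192.168.0.2
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_user=ubuntu ansible_become_pass=yourpassword

[kube-master]
node1

[kube-node]
node1
node2

[etcd]
node1

[k8s-cluster:children]
kube-node
kube-master
```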
TBD

View File

@ -1,107 +1,10 @@
---
title: "Volume Snapshots"
title: "Canary Release"
keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Volume Snapshots'
description: 'Canary Release'
linkTitle: "Volume Snapshots"
linkTitle: "Canary Release"
weight: 2130
---
This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
```yaml
######################### Kubernetes #########################
# The default k8s version will be installed
kube_version: v1.16.7
# The default etcd version will be installed
etcd_version: v3.2.18
# Configure a cron job to backup etcd data, which is running on etcd machines.
# Period of running backup etcd job, the unit is minutes.
# The default value 30 means backup etcd every 30 minutes.
etcd_backup_period: 30
# How many backup replicas to keep.
# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order.
keep_backup_number: 5
# The location to store etcd backup files on etcd machines.
etcd_backup_dir: "/var/backups/kube_etcd"
# Add other registries. (For users who need to accelerate image downloads)
docker_registry_mirrors:
- https://docker.mirrors.ustc.edu.cn
- https://registry.docker-cn.com
- https://mirror.aliyuncs.com
# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
kube_network_plugin: calico
# A valid CIDR range for Kubernetes services,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes pod subnet
kube_service_addresses: 10.233.0.0/18
# A valid CIDR range for Kubernetes pod subnet,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes services subnet
kube_pods_subnet: 10.233.64.0/18
# Kube-proxy proxyMode configuration, either ipvs, or iptables
kube_proxy_mode: ipvs
# Maximum pods allowed to run on every node.
kubelet_max_pods: 110
# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
enable_nodelocaldns: true
# Highly Available loadbalancer example config
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install
# address: 192.168.0.10 # Loadbalancer apiserver IP address
# port: 6443 # apiserver port
######################### KubeSphere #########################
# Version of KubeSphere
ks_version: v2.1.0
# KubeSphere console port, range 30000-32767,
# but 30180/30280/30380 are reserved for internal service
console_port: 30880 # KubeSphere console nodeport
#CommonComponent
mysql_volume_size: 20Gi # MySQL PVC size
minio_volume_size: 20Gi # Minio PVC size
etcd_volume_size: 20Gi # etcd PVC size
openldap_volume_size: 2Gi # openldap PVC size
redis_volume_size: 2Gi # Redis PVC size
# Monitoring
prometheus_replica: 2 # Prometheus replicas, 2 by default; they monitor different segments of the data source and provide high availability as well.
prometheus_memory_request: 400Mi # Prometheus request memory
prometheus_volume_size: 20Gi # Prometheus PVC size
grafana_enabled: true # enable grafana or not
## Container Engine Acceleration
## Use nvidia gpu acceleration in containers
# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed.
# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
```
## How to Configure a GPU Node
You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`. In that case, specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented with two spaces.
```yaml
nvidia_accelerator_enabled: true
nvidia_gpu_nodes:
- node2
```
> Note: The GPU node now only supports Ubuntu 16.04.
TBD

View File

@ -1,9 +1,9 @@
---
title: "Volumes"
title: "Overview"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Create Volumes (PVCs)'
description: 'Overview'
linkTitle: "Volumes"
linkTitle: "Overview"
weight: 2110
---

View File

@ -1,107 +1,10 @@
---
title: "Volume Snapshots"
title: "Traffic Mirroring"
keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Volume Snapshots'
description: 'Traffic Mirroring'
linkTitle: "Volume Snapshots"
linkTitle: "Traffic Mirroring"
weight: 2130
---
This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
```yaml
######################### Kubernetes #########################
# The default k8s version will be installed
kube_version: v1.16.7
# The default etcd version will be installed
etcd_version: v3.2.18
# Configure a cron job to backup etcd data, which is running on etcd machines.
# Period of running backup etcd job, the unit is minutes.
# The default value 30 means backup etcd every 30 minutes.
etcd_backup_period: 30
# How many backup replicas to keep.
# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order.
keep_backup_number: 5
# The location to store etcd backup files on etcd machines.
etcd_backup_dir: "/var/backups/kube_etcd"
# Add other registries. (For users who need to accelerate image downloads)
docker_registry_mirrors:
- https://docker.mirrors.ustc.edu.cn
- https://registry.docker-cn.com
- https://mirror.aliyuncs.com
# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
kube_network_plugin: calico
# A valid CIDR range for Kubernetes services,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes pod subnet
kube_service_addresses: 10.233.0.0/18
# A valid CIDR range for Kubernetes pod subnet,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes services subnet
kube_pods_subnet: 10.233.64.0/18
# Kube-proxy proxyMode configuration, either ipvs, or iptables
kube_proxy_mode: ipvs
# Maximum pods allowed to run on every node.
kubelet_max_pods: 110
# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
enable_nodelocaldns: true
# Highly Available loadbalancer example config
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install
# address: 192.168.0.10 # Loadbalancer apiserver IP address
# port: 6443 # apiserver port
######################### KubeSphere #########################
# Version of KubeSphere
ks_version: v2.1.0
# KubeSphere console port, range 30000-32767,
# but 30180/30280/30380 are reserved for internal service
console_port: 30880 # KubeSphere console nodeport
#CommonComponent
mysql_volume_size: 20Gi # MySQL PVC size
minio_volume_size: 20Gi # Minio PVC size
etcd_volume_size: 20Gi # etcd PVC size
openldap_volume_size: 2Gi # openldap PVC size
redis_volume_size: 2Gi # Redis PVC size
# Monitoring
prometheus_replica: 2 # Prometheus replicas, 2 by default; they monitor different segments of the data source and provide high availability as well.
prometheus_memory_request: 400Mi # Prometheus request memory
prometheus_volume_size: 20Gi # Prometheus PVC size
grafana_enabled: true # enable grafana or not
## Container Engine Acceleration
## Use nvidia gpu acceleration in containers
# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed.
# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
```
## How to Configure a GPU Node
You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`. In that case, specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented with two spaces.
```yaml
nvidia_accelerator_enabled: true
nvidia_gpu_nodes:
- node2
```
> Note: The GPU node now only supports Ubuntu 16.04.
TBD

View File

@ -1,107 +1,10 @@
---
title: "Volume Snapshots"
title: "Project Gateway"
keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Volume Snapshots'
description: 'Project Gateway'
linkTitle: "Volume Snapshots"
linkTitle: "Project Gateway"
weight: 2130
---
This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
```yaml
######################### Kubernetes #########################
# The default k8s version will be installed
kube_version: v1.16.7
# The default etcd version will be installed
etcd_version: v3.2.18
# Configure a cron job to backup etcd data, which is running on etcd machines.
# Period of running backup etcd job, the unit is minutes.
# The default value 30 means backup etcd every 30 minutes.
etcd_backup_period: 30
# How many backup replicas to keep.
# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order.
keep_backup_number: 5
# The location to store etcd backup files on etcd machines.
etcd_backup_dir: "/var/backups/kube_etcd"
# Add other registries. (For users who need to accelerate image downloads)
docker_registry_mirrors:
- https://docker.mirrors.ustc.edu.cn
- https://registry.docker-cn.com
- https://mirror.aliyuncs.com
# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
kube_network_plugin: calico
# A valid CIDR range for Kubernetes services,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes pod subnet
kube_service_addresses: 10.233.0.0/18
# A valid CIDR range for Kubernetes pod subnet,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes services subnet
kube_pods_subnet: 10.233.64.0/18
# Kube-proxy proxyMode configuration, either ipvs, or iptables
kube_proxy_mode: ipvs
# Maximum pods allowed to run on every node.
kubelet_max_pods: 110
# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
enable_nodelocaldns: true
# Highly Available loadbalancer example config
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install
# address: 192.168.0.10 # Loadbalancer apiserver IP address
# port: 6443 # apiserver port
######################### KubeSphere #########################
# Version of KubeSphere
ks_version: v2.1.0
# KubeSphere console port, range 30000-32767,
# but 30180/30280/30380 are reserved for internal service
console_port: 30880 # KubeSphere console nodeport
#CommonComponent
mysql_volume_size: 20Gi # MySQL PVC size
minio_volume_size: 20Gi # Minio PVC size
etcd_volume_size: 20Gi # etcd PVC size
openldap_volume_size: 2Gi # openldap PVC size
redis_volume_size: 2Gi # Redis PVC size
# Monitoring
prometheus_replica: 2 # Prometheus replicas, 2 by default; they monitor different segments of the data source and provide high availability as well.
prometheus_memory_request: 400Mi # Prometheus request memory
prometheus_volume_size: 20Gi # Prometheus PVC size
grafana_enabled: true # enable grafana or not
## Container Engine Acceleration
## Use nvidia gpu acceleration in containers
# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed.
# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
```
## How to Configure a GPU Node
You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`. In that case, specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented with two spaces.
```yaml
nvidia_accelerator_enabled: true
nvidia_gpu_nodes:
- node2
```
> Note: The GPU node now only supports Ubuntu 16.04.
TBD

View File

@ -1,107 +1,10 @@
---
title: "StorageClass"
title: "Project Members"
keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'StorageClass'
description: 'Project Members'
linkTitle: "Volume Snapshots"
linkTitle: "Project Members"
weight: 2130
---
This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
```yaml
######################### Kubernetes #########################
# The default k8s version will be installed
kube_version: v1.16.7
# The default etcd version will be installed
etcd_version: v3.2.18
# Configure a cron job to backup etcd data, which is running on etcd machines.
# Period of running backup etcd job, the unit is minutes.
# The default value 30 means backup etcd every 30 minutes.
etcd_backup_period: 30
# How many backup replicas to keep.
# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order.
keep_backup_number: 5
# The location to store etcd backup files on etcd machines.
etcd_backup_dir: "/var/backups/kube_etcd"
# Add other registries. (For users who need to accelerate image downloads)
docker_registry_mirrors:
- https://docker.mirrors.ustc.edu.cn
- https://registry.docker-cn.com
- https://mirror.aliyuncs.com
# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
kube_network_plugin: calico
# A valid CIDR range for Kubernetes services,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes pod subnet
kube_service_addresses: 10.233.0.0/18
# A valid CIDR range for Kubernetes pod subnet,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes services subnet
kube_pods_subnet: 10.233.64.0/18
# Kube-proxy proxyMode configuration, either ipvs, or iptables
kube_proxy_mode: ipvs
# Maximum pods allowed to run on every node.
kubelet_max_pods: 110
# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
enable_nodelocaldns: true
# Highly Available loadbalancer example config
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install
# address: 192.168.0.10 # Loadbalancer apiserver IP address
# port: 6443 # apiserver port
######################### KubeSphere #########################
# Version of KubeSphere
ks_version: v2.1.0
# KubeSphere console port, range 30000-32767,
# but 30180/30280/30380 are reserved for internal service
console_port: 30880 # KubeSphere console nodeport
#CommonComponent
mysql_volume_size: 20Gi # MySQL PVC size
minio_volume_size: 20Gi # Minio PVC size
etcd_volume_size: 20Gi # etcd PVC size
openldap_volume_size: 2Gi # openldap PVC size
redis_volume_size: 2Gi # Redis PVC size
# Monitoring
prometheus_replica: 2 # Prometheus replicas, 2 by default; they monitor different segments of the data source and provide high availability as well.
prometheus_memory_request: 400Mi # Prometheus request memory
prometheus_volume_size: 20Gi # Prometheus PVC size
grafana_enabled: true # enable grafana or not
## Container Engine Acceleration
## Use nvidia gpu acceleration in containers
# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed.
# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
```
## How to Configure a GPU Node
You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`. In that case, specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented with two spaces.
```yaml
nvidia_accelerator_enabled: true
nvidia_gpu_nodes:
- node2
```
> Note: The GPU node now only supports Ubuntu 16.04.
TBD

View File

@ -1,9 +1,9 @@
---
title: "Volumes"
title: "Project Quotas"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Create Volumes (PVCs)'
description: 'Project Quotas'
linkTitle: "Volumes"
linkTitle: "Project Quotas"
weight: 2110
---

View File

@ -1,5 +1,5 @@
---
title: "Volume Snapshots"
title: "Project Roles"
keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Volume Snapshots'
@ -7,101 +7,4 @@ linkTitle: "Volume Snapshots"
weight: 2130
---
This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
```yaml
######################### Kubernetes #########################
# The default k8s version will be installed
kube_version: v1.16.7
# The default etcd version will be installed
etcd_version: v3.2.18
# Configure a cron job to backup etcd data, which is running on etcd machines.
# Period of running backup etcd job, the unit is minutes.
# The default value 30 means backup etcd every 30 minutes.
etcd_backup_period: 30
# How many backup replicas to keep.
# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order.
keep_backup_number: 5
# The location to store etcd backup files on etcd machines.
etcd_backup_dir: "/var/backups/kube_etcd"
# Add other registries. (For users who need to accelerate image downloads)
docker_registry_mirrors:
- https://docker.mirrors.ustc.edu.cn
- https://registry.docker-cn.com
- https://mirror.aliyuncs.com
# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
kube_network_plugin: calico
# A valid CIDR range for Kubernetes services,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes pod subnet
kube_service_addresses: 10.233.0.0/18
# A valid CIDR range for Kubernetes pod subnet,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes services subnet
kube_pods_subnet: 10.233.64.0/18
# Kube-proxy proxyMode configuration, either ipvs, or iptables
kube_proxy_mode: ipvs
# Maximum pods allowed to run on every node.
kubelet_max_pods: 110
# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
enable_nodelocaldns: true
# Highly Available loadbalancer example config
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install
# address: 192.168.0.10 # Loadbalancer apiserver IP address
# port: 6443 # apiserver port
######################### KubeSphere #########################
# Version of KubeSphere
ks_version: v2.1.0
# KubeSphere console port, range 30000-32767,
# but 30180/30280/30380 are reserved for internal service
console_port: 30880 # KubeSphere console nodeport
#CommonComponent
mysql_volume_size: 20Gi # MySQL PVC size
minio_volume_size: 20Gi # Minio PVC size
etcd_volume_size: 20Gi # etcd PVC size
openldap_volume_size: 2Gi # openldap PVC size
redis_volume_size: 2Gi # Redis PVC size
# Monitoring
prometheus_replica: 2 # Prometheus replicas, 2 by default; they monitor different segments of the data source and provide high availability as well.
prometheus_memory_request: 400Mi # Prometheus request memory
prometheus_volume_size: 20Gi # Prometheus PVC size
grafana_enabled: true # enable grafana or not
## Container Engine Acceleration
## Use nvidia gpu acceleration in containers
# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed.
# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
```
## How to Configure a GPU Node
You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`. In that case, specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented with two spaces.
```yaml
nvidia_accelerator_enabled: true
nvidia_gpu_nodes:
- node2
```
> Note: The GPU node now only supports Ubuntu 16.04.
TBD

View File

@ -7,101 +7,4 @@ linkTitle: "Volume Snapshots"
weight: 2130
---
This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
```yaml
######################### Kubernetes #########################
# The default k8s version will be installed
kube_version: v1.16.7
# The default etcd version will be installed
etcd_version: v3.2.18
# Configure a cron job to backup etcd data, which is running on etcd machines.
# Period of running backup etcd job, the unit is minutes.
# The default value 30 means backup etcd every 30 minutes.
etcd_backup_period: 30
# How many backup replicas to keep.
# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order.
keep_backup_number: 5
# The location to store etcd backup files on etcd machines.
etcd_backup_dir: "/var/backups/kube_etcd"
# Add other registries. (For users who need to accelerate image downloads)
docker_registry_mirrors:
- https://docker.mirrors.ustc.edu.cn
- https://registry.docker-cn.com
- https://mirror.aliyuncs.com
# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
kube_network_plugin: calico
# A valid CIDR range for Kubernetes services,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes pod subnet
kube_service_addresses: 10.233.0.0/18
# A valid CIDR range for Kubernetes pod subnet,
# 1. should not overlap with node subnet
# 2. should not overlap with Kubernetes services subnet
kube_pods_subnet: 10.233.64.0/18
# Kube-proxy proxyMode configuration, either ipvs, or iptables
kube_proxy_mode: ipvs
# Maximum pods allowed to run on every node.
kubelet_max_pods: 110
# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
enable_nodelocaldns: true
# Highly Available loadbalancer example config
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install
# address: 192.168.0.10 # Loadbalancer apiserver IP address
# port: 6443 # apiserver port
######################### KubeSphere #########################
# Version of KubeSphere
ks_version: v2.1.0
# KubeSphere console port, range 30000-32767,
# but 30180/30280/30380 are reserved for internal service
console_port: 30880 # KubeSphere console nodeport
#CommonComponent
mysql_volume_size: 20Gi # MySQL PVC size
minio_volume_size: 20Gi # Minio PVC size
etcd_volume_size: 20Gi # etcd PVC size
openldap_volume_size: 2Gi # openldap PVC size
redis_volume_size: 2Gi # Redis PVC size
# Monitoring
prometheus_replica: 2 # Prometheus replicas, 2 by default; they monitor different segments of the data source and provide high availability as well.
prometheus_memory_request: 400Mi # Prometheus request memory
prometheus_volume_size: 20Gi # Prometheus PVC size
grafana_enabled: true # enable grafana or not
## Container Engine Acceleration
## Use nvidia gpu acceleration in containers
# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed.
# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
```
## How to Configure a GPU Node
You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`. In that case, specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented with two spaces.
```yaml
nvidia_accelerator_enabled: true
nvidia_gpu_nodes:
- node2
```
> Note: The GPU node now only supports Ubuntu 16.04.
TBD

View File

@ -11,7 +11,7 @@ For those who are new to KubeSphere and looking for a quick way to discover the
## Prerequisites
If your machine is behind a firewall, you need to open relevant ports by following the document [Ports Requirement](../port-firewall).
If your machine is behind a firewall, you need to open relevant ports by following the document [Port Requirements](../../installing-on-linux/introduction/port-firewall/).
## Step 1: Prepare Linux Machine
@ -48,7 +48,7 @@ The system requirements above and the instructions below are for the default min
{{< notice tip >}}
- It is recommended that your OS be clean (without any other software installed). Otherwise, there may be conflicts.
- It is recommended that a container image mirror (accelerator) be prepared if you have trouble downloading images from dockerhub.io. See [Configure registry mirrors for the Docker daemon](https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon).
- It is recommended that a container image mirror (accelerator) be prepared if you have trouble downloading images from dockerhub.io. See [Configure Booster for Installation](../../installing-on-linux/faq/configure-booster/).
{{</ notice >}}
@ -63,7 +63,7 @@ Follow the step below to download KubeKey.
Download KubeKey using the following command:
```bash
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
wget -c https://kubesphere.io/download/kubekey-v1.0.0-linux-amd64.tar.gz -O - | tar -xz
```
{{</ tab >}}
@ -73,7 +73,7 @@ wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0
Download KubeKey from [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following command directly.
```bash
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz -O - | tar -xz
```
{{</ tab >}}
@ -100,18 +100,18 @@ In this QuickStart tutorial, you only need to execute one command for installati
./kk create cluster [--with-kubernetes version] [--with-kubesphere version]
```
Create a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v3.0.0`), this is an example for your reference:
Create a Kubernetes cluster with KubeSphere installed. Here is an example for your reference:
```bash
./kk create cluster --with-kubernetes v1.17.9 --with-kubesphere [version]
./kk create cluster --with-kubernetes v1.17.9 --with-kubesphere v3.0.0
```
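Once `kk` returns, the KubeSphere deployment itself continues inside the cluster. A common way to follow its progress is to tail the ks-installer logs until the console address and default credentials are printed:
```bash
# tail the ks-installer logs until the console address and credentials appear
kubectl logs -n kubesphere-system \
  $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```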
{{< notice note >}}
- Supported Kubernetes versions: *v1.15.12*, *v1.16.13*, *v1.17.9* (default), *v1.18.6*.
- For all-in-one installation, generally speaking, you do not need to change any configuration.
- KubeKey will install [OpenEBS](https://openebs.io/) to provision LocalPV for development and testing environment by default, which is convenient for new users. For other storage classes, see Storage Class Configuration.
- KubeKey will install [OpenEBS](https://openebs.io/) to provision LocalPV for development and testing environment by default, which is convenient for new users. For other storage classes, see [Persistent Storage Configuration](../../installing-on-linux/introduction/storage-configuration/).
{{</ notice >}}

View File

@ -1,8 +0,0 @@
---
title: "Compose and deploy a Wordpress App"
keywords: 'kubesphere, kubernetes, docker, multi-tenant'
description: 'Compose and deploy a Wordpress App'
linkTitle: "Compose and deploy a Wordpress App"
weight: 3050
---

View File

@ -229,7 +229,7 @@ The role of `roles-manager` overlaps with `users-manager` while the latter is al
{{< notice note >}}
To create a DevOps project, you need to install KubeSphere DevOps system in advance, which is a pluggable component providing CI/CD pipelines, Binary-to-image, Source-to-image features, and more. For more information about how to enable DevOps, see KubeSphere DevOps System.
To create a DevOps project, you need to install KubeSphere DevOps system in advance, which is a pluggable component providing CI/CD pipelines, Binary-to-image, Source-to-image features, and more. For more information about how to enable DevOps, see [KubeSphere DevOps System](../../pluggable-components/devops/).
{{</ notice >}}
@ -245,7 +245,7 @@ To create a DevOps project, you need to install KubeSphere DevOps system in adva
![new-devops-project](https://ap3.qingstor.com/kubesphere-website/docs/20200827150523.png)
4. Go to **Project Management** and select **Project Members**. Click **Invite Member** to grant `project-regular` the role of `maintainer`, who is allowed to create pipelines and credentials.
4. Go to **Project Management** and select **Project Members**. Click **Invite Member** to grant `project-regular` the role of `operator`, who is allowed to create pipelines and credentials.
![devops-invite-member](https://ap3.qingstor.com/kubesphere-website/docs/20200827150704.png)

View File

@ -0,0 +1,199 @@
---
title: "Compose and Deploy Wordpress"
keywords: 'KubeSphere, Kubernetes, app, Wordpress'
description: 'Compose and deploy Wordpress.'
linkTitle: "Compose and Deploy Wordpress"
weight: 3050
---
## WordPress Introduction
WordPress is a free and open-source content management system written in PHP, allowing users to build their own websites. A complete WordPress application includes the following Kubernetes objects, with MySQL serving as the backend database.
![WordPress](https://pek3b.qingstor.com/kubesphere-docs/png/20200105181908.png)
## Objective
This tutorial demonstrates how to create an application (WordPress as an example) in KubeSphere and access it outside the cluster.
## Prerequisites
An account `project-regular` with the role `operator` assigned in one of your projects is needed (i.e. the account has been invited to the project). For more information, see [Create Workspace, Project, Account and Role](../create-workspace-and-project/).
## Estimated Time
About 15 minutes.
## Hands-on Lab
### Task 1: Create Secrets
#### Create a MySQL Secret
The environment variable `WORDPRESS_DB_PASSWORD` is the password for WordPress to connect to the database. In this step, you need to create a Secret to store the environment variable that will be used in the MySQL Pod template.
1. Log in to the KubeSphere console using the account `project-regular`. Go to the detailed page of `demo-project` and navigate to **Configurations**. In **Secrets**, click **Create** on the right.
![create-secret](https://ap3.qingstor.com/kubesphere-website/docs/20200903154611.png)
2. Enter the basic information (e.g. name it `mysql-secret`) and click **Next**. On the next page, select **Default** for **Type** and click **Add Data** to add a key-value pair. Input the Key (`MYSQL_ROOT_PASSWORD`) and Value (`123456`) as below and click `√` at the bottom right corner to confirm. When you finish, click **Create** to continue.
![key-value](https://ap3.qingstor.com/kubesphere-website/docs/20200903155603.png)
#### Create a WordPress Secret
Follow the same steps above to create a WordPress Secret `wordpress-secret` with the key `WORDPRESS_DB_PASSWORD` and value `123456`. The Secrets created are displayed in the list as below:
![wordpress-secrets](https://ap3.qingstor.com/kubesphere-website/docs/20200903160809.png)
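For reference, the two Secrets created above are equivalent to the following `kubectl` commands, assuming the project maps to the `demo-project` namespace:
```bash
kubectl -n demo-project create secret generic mysql-secret \
  --from-literal=MYSQL_ROOT_PASSWORD=123456
kubectl -n demo-project create secret generic wordpress-secret \
  --from-literal=WORDPRESS_DB_PASSWORD=123456
```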
### Task 2: Create a Volume
1. Go to **Volumes** under **Storage** and click **Create**.
![create-volume](https://ap3.qingstor.com/kubesphere-website/docs/20200903162343.png)
2. Enter the basic information of the volume (e.g. name it `wordpress-pvc`) and click **Next**.
3. In **Volume Settings**, you need to choose an available **Storage Class**, and set **Access Mode** and **Volume Capacity**. You can use the default value directly as shown below. Click **Next** to continue.
![volume-settings](https://ap3.qingstor.com/kubesphere-website/docs/20200903163419.png)
4. For **Advanced Settings**, you do not need to add extra information for this task; click **Create** to finish. The equivalent manifest is sketched below.
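Behind the console form, this task creates a PersistentVolumeClaim. A rough manifest equivalent, assuming the default StorageClass and capacity are kept (namespace and size are assumptions):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-pvc
  namespace: demo-project      # the namespace backing the project (assumption)
spec:
  accessModes:
    - ReadWriteOnce            # corresponds to the default Access Mode
  resources:
    requests:
      storage: 10Gi            # corresponds to the default Volume Capacity (assumption)
```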
### Task 3: Create an Application
#### Add MySQL backend component
1. Navigate to **Applications** under **Application Workloads**, select **Composing App** and click **Create Composing Application**.
![](https://ap3.qingstor.com/kubesphere-website/docs/20200903164227.png)
2. Enter the basic information (e.g. input `wordpress` for Application Name) and click **Next**.
![basic-info](https://ap3.qingstor.com/kubesphere-website/docs/basic-info.png)
3. In **Components**, click **Add Service** to set a component in the app.
![add-service](https://ap3.qingstor.com/kubesphere-website/docs/20200903173210.png)
4. Define a service type for the component. Select **Stateful Service** here.
5. Enter the name for the stateful service (e.g. **mysql**) and click **Next**.
![mysql-name](https://ap3.qingstor.com/kubesphere-website/docs/mysqlname.png)
6. In **Container Image**, click **Add Container Image**.
![container-image](https://ap3.qingstor.com/kubesphere-website/docs/container-image.png)
7. Enter `mysql:5.6` in the search box, press **Enter** and click **Use Default Ports**. After that, do not click `√` at the bottom right corner as the setting is not finished yet.
![](https://ap3.qingstor.com/kubesphere-website/docs/20200903174120.png)
{{< notice note >}}
In **Advanced Settings**, make sure the memory limit is no less than 1000 Mi or MySQL may fail to start due to a lack of memory.
{{</ notice >}}
8. Scroll down to **Environment Variables** and click **Use ConfigMap or Secret**. Input the name `MYSQL_ROOT_PASSWORD` and choose the resource `mysql-secret` and the key `MYSQL_ROOT_PASSWORD` created in the previous step. Click `√` after you finish and **Next** to continue.
![environment-var](https://ap3.qingstor.com/kubesphere-website/docs/20200903174838.png)
9. Select **Add Volume Template** in **Mount Volumes**. Input the value of **Volume Name** (`mysql`) and **Mount Path** (mode: `ReadAndWrite`, path: `/var/lib/mysql`) as below:
![volume-template](https://ap3.qingstor.com/kubesphere-website/docs/vol11.jpg)
Click `√` after you finish and click **Next** to continue.
10. In **Advanced Settings**, you can click **Add** directly or select other options based on your needs.
![advanced-setting](https://ap3.qingstor.com/kubesphere-website/docs/20200903180415.png)
11. At this point, the MySQL component has been added as shown below:
![mysql-done](https://ap3.qingstor.com/kubesphere-website/docs/20200903180714.png)
#### Add WordPress frontend component
12. Click **Add Service** again and select **Stateless Service** this time. Enter the name `wordpress` and click **Next**.
![](https://ap3.qingstor.com/kubesphere-website/docs/name-wordpress.png)
13. Similar to the step above, click **Add Container Image**, enter `wordpress:4.8-apache` in the search box, press **Enter** and click **Use Default Ports**.
![](https://ap3.qingstor.com/kubesphere-website/docs/20200903171416.png)
14. Scroll down to **Environment Variables** and click **Use ConfigMap or Secret**. Two environment variables need to be added here. Enter the values according to the screenshot below.
- For `WORDPRESS_DB_PASSWORD`, choose `wordpress-secret` and `WORDPRESS_DB_PASSWORD` created in Task 1.
- Click **Add Environment Variable**, and enter `WORDPRESS_DB_HOST` and `mysql` for the key and value.
{{< notice warning >}}
For the second environment variable added here, the value must be exactly the same as the name you set for MySQL in step 5. Otherwise, WordPress cannot connect to the MySQL database.
{{</ notice >}}
![environment-varss](https://ap3.qingstor.com/kubesphere-website/docs/20200903171658.png)
Click `√` to save it and **Next** to continue.
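The two variables configured above correspond to a container spec fragment roughly like the following sketch (not the full workload manifest):
```yaml
containers:
  - name: wordpress
    image: wordpress:4.8-apache
    env:
      - name: WORDPRESS_DB_PASSWORD
        valueFrom:
          secretKeyRef:          # injected from the Secret created in Task 1
            name: wordpress-secret
            key: WORDPRESS_DB_PASSWORD
      - name: WORDPRESS_DB_HOST
        value: mysql             # must match the MySQL service name from step 5
```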
15. In **Mount Volumes**, click **Add Volume** and select **Choose an existing volume**.
![](https://ap3.qingstor.com/kubesphere-website/docs/20200903171819.png)
![choose-existing](https://ap3.qingstor.com/kubesphere-website/docs/20200903171906.png)
16. Select `wordpress-pvc` created in the previous step, set the mode as `ReadAndWrite`, and input `/var/www/html` as its mount path. Click `√` to save it and **Next** to continue.
![](https://ap3.qingstor.com/kubesphere-website/docs/20200903172021.png)
17. In **Advanced Settings**, you can click **Add** directly or select other options based on your needs.
![](https://ap3.qingstor.com/kubesphere-website/docs/20200903172144.png)
18. The frontend component is also set now. Click **Next** to continue.
![two-components-done](https://ap3.qingstor.com/kubesphere-website/docs/20200903172222.png)
19. You can set route rules (Ingress) here or click **Create** directly.
![](https://ap3.qingstor.com/kubesphere-website/docs/20200903184009.png)
20. The app will display in the list below after you create it.
![](https://ap3.qingstor.com/kubesphere-website/docs/20200903184151.png)
### Task 4: Verify the Resources
In **Workloads**, check the status of `wordpress-v1` and `mysql-v1` in **Deployments** and **StatefulSets** respectively. If they are running as shown in the images below, it means WordPress has been created successfully.
![wordpress-deployment](https://ap3.qingstor.com/kubesphere-website/docs/20200903203217.png)
![wordpress-statefulset](https://ap3.qingstor.com/kubesphere-website/docs/20200903203638.png)
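The same check can also be made from the command line, assuming the project maps to the `demo-project` namespace:
```bash
kubectl -n demo-project get deployments,statefulsets   # expect wordpress-v1 and mysql-v1
kubectl -n demo-project get pods                       # all Pods should be Running
```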
### Task 5: Access WordPress through NodePort
1. To access the service outside the cluster, navigate to **Services** first. Click the three dots on the right of `wordpress` and select **Edit Internet Access**.
![edit-internet-access](https://ap3.qingstor.com/kubesphere-website/docs/20200903204414.png)
2. Select `NodePort` for **Access Method** and click **OK**.
![access-method](https://ap3.qingstor.com/kubesphere-website/docs/20200903205135.png)
3. Click the service and you can see the port exposed.
![nodeport-number](https://ap3.qingstor.com/kubesphere-website/docs/20200903205423.png)
4. Access the application via `{Node IP}:{NodePort}`, and you will see the page below:
![wordpress](https://ap3.qingstor.com/kubesphere-website/docs/20200903200408.png)
{{< notice note >}}
Make sure the port is opened in your security groups before you access the service.
{{</ notice >}}
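If you prefer the command line, the assigned NodePort can also be read from the Service, again assuming the `demo-project` namespace:
```bash
# show the wordpress Service and the node port mapped to container port 80
kubectl -n demo-project get svc wordpress
# then open http://{Node IP}:{NodePort} in a browser
```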

View File

@ -11,4 +11,24 @@ icon: "/images/docs/docs.svg"
---
This section helps cluster operators to upgrade existing KubeSphere to v3.0.0.
This chapter demonstrates how cluster operators can upgrade existing KubeSphere to v3.0.0.
## [Overview](../upgrade/upgrade-overview/)
Understand what you need to pay attention to before the upgrade, such as versions and upgrade tools.
## [Upgrade with KubeKey](../upgrade/upgrade-with-kubekey/)
Follow steps to use KubeKey to upgrade Kubernetes and KubeSphere.
## [Upgrade with ks-installer](../upgrade/upgrade-with-ks-installer/)
Follow steps to use ks-installer to upgrade KubeSphere.
## [Changes after Upgrade](../upgrade/what-changed/)
Understand what will be changed after the upgrade.
## [FAQ](../upgrade/upgrade-faq/)
Find the answers to some frequently asked questions about upgrading.
@ -1,10 +1,10 @@
---
title: "FAQ"
keywords: "kubernetes, upgrade, kubesphere, v3.0.0"
keywords: "Kubernetes, upgrade, KubeSphere, v3.0.0"
description: "KubeSphere Upgrade FAQ"
linkTitle: "FAQ"
weight: 250
weight: 4030
---
## How do I upgrade QingCloud CSI after upgrading KubeSphere?
@ -1,38 +1,38 @@
---
title: "Overview"
keywords: "kubernetes, upgrade, kubesphere, v3.0.0"
keywords: "Kubernetes, upgrade, KubeSphere, v3.0.0, upgrade"
description: "KubeSphere Upgrade Overview"
linkTitle: "Overview"
weight: 50
weight: 4010
---
## Kubernetes
KubeSphere v3.0.0 is compatible with Kubernetes 1.15.x, 1.16.x, 1.17.x and 1.18.x:
- If your KubeSphere v2.1.x is installed on Kubernetes 1.15.x+ , you can choose to only upgrade KubeSphere to v3.0.0 or upgrade Kubernetes (to a higher version) and KubeSphere (to v3.0.0) at the same time.
- If your KubeSphere v2.1.x is installed on Kubernetes 1.15.x+, you can choose to only upgrade KubeSphere to v3.0.0 or upgrade Kubernetes (to a higher version) and KubeSphere (to v3.0.0) at the same time.
- If your KubeSphere v2.1.x is installed on Kubernetes 1.14.x, you have to upgrade Kubernetes (to 1.15.x+) and KubeSphere (to v3.0.0 ) at the same time.
{{< notice warning >}}
- There're some significant API changes in Kubernetes 1.16.x comparing with prior versions 1.14.x and 1.15.x, please refer to [Deprecated APIs Removed In 1.16: Here's What You Need To Know](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/) for more details. So if you plan to upgrade from Kubernetes 1.14.x/1.15.x to 1.16.x+, you'll have to migrate some of your workloads after upgrading.
There are some significant API changes in Kubernetes 1.16.x compared with prior versions 1.14.x and 1.15.x. Please refer to [Deprecated APIs Removed In 1.16: Here's What You Need To Know](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/) for more details. So if you plan to upgrade from Kubernetes 1.14.x/1.15.x to 1.16.x+, you have to migrate some of your workloads after upgrading.
{{</ notice >}}
## Before Upgrade
{{< notice warning >}}
- Please note that you are supposed to implement a simulation for the upgrade in the testing environment first. After the upgrade is successful in testing environment and all applications are running normally, then upgrade it in production environment.
- Note that during the upgrade process, there may be a short interruption of applications (especially for those single-replica Pod). Please arrange a reasonable upgrade time.
- It's recommended to backup ETCD and stateful applications before upgrading in production environment, you can use [Velero](https://velero.io/) to implement backup and migrate Kubernetes resources and persistent volumes.
- You are supposed to implement a simulation for the upgrade in a testing environment first. After the upgrade is successful in the testing environment and all applications are running normally, upgrade it in your production environment.
- During the upgrade process, there may be a short interruption of applications (especially for single-replica Pods). Please arrange a reasonable period of time for the upgrade.
- It is recommended to back up ETCD and stateful applications before upgrading in a production environment. You can use [Velero](https://velero.io/) to implement backup and migrate Kubernetes resources and persistent volumes.
{{</ notice >}}
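For reference, below is a minimal backup sketch; the names and paths are illustrative, and depending on how etcd is deployed, `etcdctl` may also require `--endpoints` and TLS certificate flags:
```bash
# Snapshot etcd (run on an etcd node; extra flags may be needed for your setup):
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db

# Back up KubeSphere system resources with Velero (assumes Velero is installed):
velero backup create pre-upgrade-backup --include-namespaces kubesphere-system
```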
## How
A brand new installer [KubeKey](https://github.com/kubesphere/kubekey) is introduced in KubeSphere v3.0.0, with which you can install or upgrade Kubernetes and KubeSphere. More details about upgrading with [KubeKey](https://github.com/kubesphere/kubekey) will be covered in the following sections.
A brand-new installer [KubeKey](https://github.com/kubesphere/kubekey) is introduced in KubeSphere v3.0.0, with which you can install or upgrade Kubernetes and KubeSphere. More details about upgrading with [KubeKey](https://github.com/kubesphere/kubekey) will be covered in [Upgrade with KubeKey](../upgrade-with-kubekey/).
## KubeKey or ks-installer?
[ks-installer](https://github.com/kubesphere/ks-installer/tree/master) was the main installation tool as of KubeSphere v2. For users whose Kubernetes clusters were NOT deployed via [KubeSphere Installer](https://v2-1.docs.kubesphere.io/docs/installation/all-in-one/#step-2-download-installer-package), they should choose ks-installer to upgrade KubeSphere. For example, if your Kubernetes is hosted by cloud vendors or self provisioned, you should go for [Upgrade with ks-installer](../upgrade-with-ks-installer).
[ks-installer](https://github.com/kubesphere/ks-installer/tree/master) was the main installation tool as of KubeSphere v2. Users whose Kubernetes clusters were NOT deployed via [KubeSphere Installer](https://v2-1.docs.kubesphere.io/docs/installation/all-in-one/#step-2-download-installer-package) should choose ks-installer to upgrade KubeSphere. For example, if your Kubernetes is hosted by cloud vendors or self-provisioned, please refer to [Upgrade with ks-installer](../upgrade-with-ks-installer).
@ -4,12 +4,12 @@ keywords: "kubernetes, upgrade, kubesphere, v3.0.0"
description: "Upgrade KubeSphere with ks-installer"
linkTitle: "Upgrade with ks-installer"
weight: 150
weight: 4020
---
ks-installer is recommended for users whose Kubernetes clusters were not setup via [KubeSphere Installer](https://v2-1.docs.kubesphere.io/docs/installation/all-in-one/#step-2-download-installer-package), but hosted by cloud vendors. This tutorial guides to **upgrade KubeSphere only**. Cluster operators are responsible for upgrading Kubernetes on themselves beforehand.
ks-installer is recommended for users whose Kubernetes clusters were not set up via [KubeSphere Installer](https://v2-1.docs.kubesphere.io/docs/installation/all-in-one/#step-2-download-installer-package), but hosted by cloud vendors. This tutorial is for **upgrading KubeSphere only**. Cluster operators are responsible for upgrading Kubernetes themselves beforehand.
## Prerequisite
## Prerequisites
- You need to have a KubeSphere cluster running version 2.1.1.
@ -17,28 +17,34 @@ ks-installer is recommended for users whose Kubernetes clusters were not setup v
If your KubeSphere version is v2.1.0 or earlier, please upgrade to v2.1.1 first.
{{</ notice >}}
- Make sure you read the release notes carefully
- Make sure you read [Release Notes For 3.0.0](../../release/release-v300/) carefully.
{{< notice warning >}}
In v3.0.0, KubeSphere refactors many of its components such as Fluent Bit Operator, IAM, etc. Make sure you back up any important components in case you heavily customized them but not from console.
In v3.0.0, KubeSphere refactors many of its components, such as Fluent Bit Operator and IAM. Make sure you back up any important components that you have heavily customized outside the console.
{{</ notice >}}
## Step 1. Download YAML files
## Step 1: Download YAML files
The following are configuration templates.
Execute the following commands to download configuration templates.
```
```bash
wget https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
```
```bash
wget https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
```
## Step 2. Modify the configuration file template
## Step 2: Modify the configuration file template
Sync the changes from the v2.1.1 to v3.0.0 into the config section of `cluster-configuration.yaml`. Note that the storage class and the pluggable components need to be consistent with the v2.1.1.
Sync the changes from v2.1.1 to v3.0.0 into the config section of `cluster-configuration.yaml`. Note that the storage class and the pluggable components need to be consistent with that of v2.1.1.
## Step 3. Apply YAML files
## Step 3: Apply YAML files
```
```bash
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```
```bash
kubectl apply -f cluster-configuration.yaml
```
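After applying the files, you can follow the installer logs to track the upgrade progress. A sketch assuming the default `kubesphere-system` namespace and the `app=ks-install` label used by ks-installer:
```bash
# Stream the ks-installer logs until the upgrade completes:
kubectl logs -n kubesphere-system \
  $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```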
@ -1,14 +1,14 @@
---
title: "Upgrade with KubeKey"
keywords: "kubernetes, upgrade, kubesphere, v3.0.0"
keywords: "Kubernetes, upgrade, KubeSphere, v3.0.0, KubeKey"
description: "Upgrade KubeSphere with kubekey"
linkTitle: "Upgrade with KubeKey"
weight: 100
weight: 4015
---
KubeKey is recommended for users whose KubeSphere and Kubernetes were both deployed by [KubeSphere Installer](https://v2-1.docs.kubesphere.io/docs/installation/all-in-one/#step-2-download-installer-package). If your Kubernetes cluster was provisioned by yourself or cloud providers, please refer to [Upgrade with ks-installer](../upgrade-with-ks-installer).
## Prerequisite
## Prerequisites
- You need to have a KubeSphere cluster running version 2.1.1.
@ -16,25 +16,25 @@ KubeKey is recommended for users whose KubeSphere and Kubernetes were both deplo
If your KubeSphere version is v2.1.0 or earlier, please upgrade to v2.1.1 first.
{{</ notice >}}
- Download KubeKey
- Download KubeKey.
{{< tabs >}}
{{< tab "For users with poor network to GitHub" >}}
{{< tab "For users with poor network connections to GitHub" >}}
For users in China, you can download the installer using this link.
Download KubeKey using the following command:
```bash
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
wget -c https://kubesphere.io/download/kubekey-v1.0.0-linux-amd64.tar.gz -O - | tar -xz
```
{{</ tab >}}
{{< tab "For users with good network to GitHub" >}}
{{< tab "For users with good network connections to GitHub" >}}
For users with good network to GitHub, you can download it from [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following link directly.
Download KubeKey from [GitHub Release Page](https://github.com/kubesphere/kubekey/releases/tag/v1.0.0) or use the following command directly.
```bash
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz -O - | tar -xz
```
{{</ tab >}}
@ -46,32 +46,31 @@ Grant the execution right to `kk`:
chmod +x kk
```
- Make sure you read the release notes carefully
- Make sure you read [Release Notes For 3.0.0](../../release/release-v300/) carefully.
{{< notice warning >}}
In v3.0.0, KubeSphere refactors many of its components such as Fluent Bit Operator, IAM, etc. Make sure you back up any important components in case you heavily customized them but not from console.
In v3.0.0, KubeSphere refactors many of its components, such as Fluent Bit Operator and IAM. Make sure you back up any important components that you have heavily customized outside the console.
{{</ notice >}}
- Make your upgrade plan. The two upgrading scenarios are documented below.
- Make your upgrade plan. Two upgrading scenarios are documented below.
## Upgrade KubeSphere and Kubernetes
Upgrading steps are different for single-node clusters (all in one) and multi-node clusters.
Upgrading steps are different for single-node clusters (all-in-one) and multi-node clusters.
{{< notice info >}}
Upgrading with Kubernetes will cause helm to be upgraded from v2 to v3. If you want to continue using helm2, please backup it: `cp /usr/local/bin/helm /usr/local/bin/helm2`
- Upgrading with Kubernetes will cause Helm to be upgraded from v2 to v3. If you want to continue using Helm v2, please back it up first: `cp /usr/local/bin/helm /usr/local/bin/helm2`
- When upgrading Kubernetes, KubeKey will upgrade from one MINOR version to the next MINOR version until the target version. For example, you may see the upgrading process going from 1.16 to 1.17 and to 1.18, instead of directly jumping to 1.18 from 1.16.
{{</ notice >}}
{{< notice info >}}
When upgrading Kubernetes, KubeKey will upgrade from one MINOR version to the next MINOR version until the target version. For example, you may observe the upgrading process going through 1.16, 1.17 and 1.18, but not jumping to 1.18 from 1.16.
{{</ notice >}}
### Allinone
### All-in-one Cluster
The following command upgrades your single-node cluster to KubeSphere v3.0.0 and Kubernetes v1.17.9 (default):
```
```bash
./kk upgrade --with-kubesphere --with-kubernetes
```
@ -82,13 +81,13 @@ To upgrade Kubernetes to a specific version, please explicitly provide the versi
- v1.17.0, v1.17.4, v1.17.5, v1.17.6, v1.17.7, v1.17.8, v1.17.9
- v1.18.3, v1.18.5, v1.18.6
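For example, below is a sketch that pins both versions explicitly; the Kubernetes version here is an illustrative pick from the list above:
```bash
# Upgrade to KubeSphere v3.0.0 and Kubernetes v1.18.6 in one run:
./kk upgrade --with-kubesphere v3.0.0 --with-kubernetes v1.18.6
```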
### Multi-Nodes
### Multi-node Cluster
#### Step1. Generate KubeKey configuration file
#### Step 1: Generate a configuration file with KubeKey
This commad creates KubeKey configuration file onto `config-sample.yaml` from your cluster.
This command creates a configuration file `config-sample.yaml` from your cluster.
```
```bash
./kk create config --from-cluster
```
@ -96,23 +95,23 @@ This commad creates KubeKey configuration file onto `config-sample.yaml` from yo
It assumes your kubeconfig is located in `~/.kube/config`. You can change it with the flag `--kubeconfig`.
{{</ notice >}}
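For example, a sketch with an explicit kubeconfig path; the path is illustrative:
```bash
# Generate the configuration from a cluster whose kubeconfig is not at the default location:
./kk create config --from-cluster --kubeconfig /path/to/your/kubeconfig
```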
#### Step 2. Modify the configuration file template
#### Step 2: Modify the configuration file template
Modify `config-sample.yaml` to fit your cluster setup. Make sure you replace the following fields correctly.
- `hosts`: Fill connection information among your hosts.
- `roleGroups.etcd`: Fill etcd memebers.
- `controlPlaneEndpoint`: Fill your load balancer address (Optional)
- `registry`: Fill image registry information (Optional)
- `hosts`: Input the connection information for your hosts.
- `roleGroups.etcd`: Input etcd members.
- `controlPlaneEndpoint`: Input your load balancer address (Optional).
- `registry`: Input image registry information (Optional).
{{< notice note >}}
Please refer to the Cluster section of [config-example.yaml](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md) for more information.
{{</ notice >}}
#### Step 3. Upgrade your cluster
#### Step 3: Upgrade your cluster
The following command upgrades your cluster to KubeSphere v3.0.0 and Kubernetes v1.17.9 (default):
```
```bash
./kk upgrade --with-kubesphere --with-kubernetes -f config-sample.yaml
```
@ -1,10 +1,15 @@
---
title: "What Changed in 3.0.0"
keywords: "kubernetes, upgrade, kubesphere, v3.0.0"
description: "KubeSphere Upgrade"
title: "Changes after Upgrade"
keywords: "Kubernetes, upgrade, KubeSphere, v3.0.0"
description: "Understand what will be changed after upgrade."
linkTitle: "What Changed in 3.0.0"
weight: 200
linkTitle: "Changes after Upgrade"
weight: 4025
---
There are some changes in access control in 3.0. We simplified the definition of custom roles, aggregated some closely related permission items into permission groups. The custom role will not change during the upgrade, custom roles that match the new policy rules can be used directly, otherwise you need to manually modify them according to the document.
This section covers changes that the upgrade brings to existing settings from previous versions. If you want to know all the new features and enhancements in KubeSphere 3.0.0, see [Release Notes for 3.0.0](../../release/release-v300/) directly.
## Access Control
The definition of custom roles has been simplified. Some closely-related permission items have been aggregated into permission groups. Custom roles will not change during the upgrade and can be used directly after the upgrade if they conform to new policy rules for authorization assignment. Otherwise, you need to modify them manually by adding authorization to these roles.
@ -11,12 +11,4 @@ icon: "/images/docs/docs.svg"
---
## Installing KubeSphere and Kubernetes on Linux
In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. KubeKey can help you quickly build a production-ready cluster on a set of machines from scratch. It also helps you easily scale the cluster and install pluggable components on an existing architecture.
## Most Popular Pages
Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first.
{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
TBD