diff --git a/content/en/docs/application-store/_index.md b/content/en/docs/application-store/_index.md
new file mode 100644
index 000000000..bc9c43c71
--- /dev/null
+++ b/content/en/docs/application-store/_index.md
@@ -0,0 +1,23 @@
+---
+title: "Application Store"
+description: "Getting started with the KubeSphere Application Store"
+layout: "single"
+
+linkTitle: "Application Store"
+weight: 4500
+
+icon: "/images/docs/docs.svg"
+
+---
+
+## Installing KubeSphere and Kubernetes on Linux
+
+In this chapter, we demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey helps you quickly build a production-ready cluster architecture on a set of machines from zero to one, and it also makes it easy to scale the cluster and install pluggable components on an existing architecture.
+
+## Most Popular Pages
+
+Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
+
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
diff --git a/content/en/docs/application-store/app-developer-guide/_index.md b/content/en/docs/application-store/app-developer-guide/_index.md
new file mode 100644
index 000000000..bb7d8edd9
--- /dev/null
+++ b/content/en/docs/application-store/app-developer-guide/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Application Developer Guide"
+weight: 2200
+
+_build:
+  render: false
+---
diff --git a/content/en/docs/installation/install-on-linux/install-ks-on-linux-airgapped.md b/content/en/docs/application-store/app-developer-guide/helm-developer-guide.md
similarity index 100%
rename from content/en/docs/installation/install-on-linux/install-ks-on-linux-airgapped.md
rename to content/en/docs/application-store/app-developer-guide/helm-developer-guide.md
diff --git a/content/en/docs/application-store/app-developer-guide/helm-specification.md b/content/en/docs/application-store/app-developer-guide/helm-specification.md
new file mode 100644
index 000000000..26b3e4f04
--- /dev/null
+++ b/content/en/docs/application-store/app-developer-guide/helm-specification.md
@@ -0,0 +1,224 @@
+---
+title: "Air-Gapped Installation"
+keywords: 'kubernetes, kubesphere, air gapped, installation'
+description: 'How to install KubeSphere on air-gapped Linux machines'
+
+weight: 2240
+---
+
+The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. This guide demonstrates how to install KubeSphere and Kubernetes in an air-gapped environment.
+
+> Note: Dependency differences between operating systems may cause unexpected problems. If you encounter any installation problem in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
+
+## Prerequisites
+
+- If your machines are behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for the full list.
+- The installer uses `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend adding extra disks of at least 100 GB mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
+- The installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning, which is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare the [high availability configuration](../master-ha) before installation.
+- Since air-gapped machines cannot reach apt or yum sources, please use clean Linux machines to avoid dependency problems.
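+
+For reference, preparing the registry disk might look like the minimal sketch below; the device name `/dev/vdb` and the xfs filesystem are assumptions to adapt to your environment (see the fdisk reference above for interactive partitioning).
+
+```bash
+mkfs.xfs /dev/vdb                                            # format the spare disk (destroys existing data)
+mkdir -p /mnt/registry
+mount /dev/vdb /mnt/registry                                 # mount it for the local registry
+echo '/dev/vdb /mnt/registry xfs defaults 0 0' >> /etc/fstab # persist the mount across reboots
+```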
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the hardware and operating system requirements. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
+
+- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
+- Time synchronization is required across all nodes, otherwise the installation may not succeed.
+- For `Ubuntu 16.04`, it is recommended to use `16.04.5`.
+- If you are using `Ubuntu 18.04`, you need to use the `root` user.
+- Ensure the disk of each node is at least 100 GB.
+- CPU and memory in total across all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
+
+The following section walks through a multi-node installation example. The example installs on three hosts, with `master` serving as the taskbox that executes the installation. The resulting cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports high-availability configuration of the Master and etcd nodes. Please refer to [Creating a High Availability KubeSphere Cluster](../master-ha) for guidance.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
+
+```bash
+curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
+&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
+```
+
+## Step 3: Configure Host Template
+
+> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
+
+Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish SSH connections to the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`; even so, it is recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of the nodes, and all host names should be lowercase.
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[local-registry]
+master
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - You need to replace the node information, such as IPs and passwords, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
+> - The installer uses one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
+> - The "master" node also takes the roles of master and etcd, so "master" is filled under the groups `[kube-master]` and `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
+>
+> Parameters Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The name of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default SSH user name to use.
+> - `ansible_become_pass`: The privilege escalation password.
+> - `ansible_ssh_pass`: The SSH password of the host to connect to as root.
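+
+Combining the parameters above, a non-root entry in `hosts.ini` might look like the following sketch; the user name `ubuntu` is an assumption, and the authoritative template is the commented example in `conf/hosts.ini`.
+
+```ini
+; hypothetical non-root entry: SSH as "ubuntu", escalate to root with sudo
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_user=ubuntu ansible_ssh_pass=PASSWORD ansible_become_pass=PASSWORD
+```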
+
+## Step 4: Enable All Components
+
+> This step is for the complete installation. You can skip it if you choose a minimal installation.
+
+Edit `conf/common.yaml` and change the following values to `true` (they are `false` by default).
+
+```yaml
+# LOGGING CONFIGURATION
+# Logging is an optional component when installing KubeSphere; the Kubernetes
+# built-in logging APIs will be used if logging_enabled is set to false.
+# Built-in logging only provides limited functions, so enabling logging is recommended.
+logging_enabled: true # Whether to install the logging system
+elasticsearch_master_replica: 1 # Total number of master nodes; even numbers are not allowed
+elasticsearch_data_replica: 2 # Total number of data nodes
+elasticsearch_volume_size: 20Gi # Elasticsearch volume size
+log_max_age: 7 # Log retention time in the built-in Elasticsearch; 7 days by default
+elk_prefix: logstash # The string making up index names; the index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address; KubeSphere supports integrating with an Elasticsearch outside the cluster, which can reduce resource consumption
+#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
+
+# DevOps Configuration
+devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary to image)
+jenkins_memory_lim: 8Gi # Jenkins memory limit; 8Gi by default
+jenkins_memory_req: 4Gi # Jenkins memory request; 4Gi by default
+jenkins_volume_size: 8Gi # Jenkins volume size; 8Gi by default
+jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address; KubeSphere supports integrating with a SonarQube outside the cluster, which can reduce resource consumption
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere and can be turned on
+# before installation, or later by updating the corresponding value to true
+openpitrix_enabled: true # KubeSphere application store
+metrics_server_enabled: true # For the KubeSphere HPA to use
+servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
+notification_enabled: true # KubeSphere notification system
+alerting_enabled: true # KubeSphere alerting system
+```
+
+## Step 5: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it starts with a minimal installation by default.
+> - If you want to enable the pluggable components, modify `common.yaml` and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - The installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environments, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select the multi-node mode and start the installation. The installer will ask whether you have set up a persistent storage service; type `yes`, since we are going to use local volumes.
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/              2020-02-24
+################################################
+Please input an option: 2
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful!" is returned after the `install.sh` process completes, congratulations! You are ready to go.
+
+```bash
+Successful!
+#####################################################
+###            Welcome to KubeSphere!             ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You can use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of the service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
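+
+If `kubectl` is available on the taskbox, you can also check component status from the command line; this is a generic sketch rather than a KubeSphere-specific command:
+
+```bash
+kubectl get nodes                  # every node should report Ready
+kubectl get pods --all-namespaces  # wait until all pods are Running or Completed
+```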
+
+## Enable Pluggable Components
+
+If you have already set up a minimal installation, you can still enable the pluggable components by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources in your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/en/docs/application-store/built-in-apps/_index.md b/content/en/docs/application-store/built-in-apps/_index.md
new file mode 100644
index 000000000..0f2ce8a6d
--- /dev/null
+++ b/content/en/docs/application-store/built-in-apps/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Built-in Applications"
+weight: 2200
+
+_build:
+  render: false
+---
diff --git a/content/en/docs/installation/install-on-linux/all-in-one.md b/content/en/docs/application-store/built-in-apps/all-in-one.md
similarity index 100%
rename from content/en/docs/installation/install-on-linux/all-in-one.md
rename to content/en/docs/application-store/built-in-apps/all-in-one.md
diff --git a/content/en/docs/installation/install-on-linux/complete-installation.md b/content/en/docs/application-store/built-in-apps/complete-installation.md
similarity index 100%
rename from content/en/docs/installation/install-on-linux/complete-installation.md
rename to content/en/docs/application-store/built-in-apps/complete-installation.md
diff --git a/content/en/docs/application-store/built-in-apps/install-ks-on-linux-airgapped.md b/content/en/docs/application-store/built-in-apps/install-ks-on-linux-airgapped.md
new file mode 100644
index 000000000..26b3e4f04
--- /dev/null
+++ b/content/en/docs/application-store/built-in-apps/install-ks-on-linux-airgapped.md
@@ -0,0 +1,224 @@
+---
+title: "Air-Gapped Installation"
+keywords: 'kubernetes, kubesphere, air gapped, installation'
+description: 'How to install KubeSphere on air-gapped Linux machines'
+
+weight: 2240
+---
+
+The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. This guide demonstrates how to install KubeSphere and Kubernetes in an air-gapped environment.
+
+> Note: Dependency differences between operating systems may cause unexpected problems. If you encounter any installation problem in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
+
+## Prerequisites
+
+- If your machines are behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for the full list.
+- The installer uses `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend adding extra disks of at least 100 GB mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
+- The installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning, which is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare the [high availability configuration](../master-ha) before installation.
+- Since air-gapped machines cannot reach apt or yum sources, please use clean Linux machines to avoid dependency problems.
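+
+As a quick sanity check before installing, you can confirm that both directories are backed by sufficiently large disks; `df` is shown here as a generic sketch:
+
+```bash
+df -h /var/lib/docker /mnt/registry  # each mount should report at least 100 GB
+```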
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the hardware and operating system requirements. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
+
+- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
+- Time synchronization is required across all nodes, otherwise the installation may not succeed.
+- For `Ubuntu 16.04`, it is recommended to use `16.04.5`.
+- If you are using `Ubuntu 18.04`, you need to use the `root` user.
+- Ensure the disk of each node is at least 100 GB.
+- CPU and memory in total across all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
+
+The following section walks through a multi-node installation example. The example installs on three hosts, with `master` serving as the taskbox that executes the installation. The resulting cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports high-availability configuration of the Master and etcd nodes. Please refer to [Creating a High Availability KubeSphere Cluster](../master-ha) for guidance.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
+
+```bash
+curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
+&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
+```
+
+## Step 3: Configure Host Template
+
+> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
+
+Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish SSH connections to the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`; even so, it is recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of the nodes, and all host names should be lowercase.
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[local-registry]
+master
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - You need to replace the node information, such as IPs and passwords, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
+> - The installer uses one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
+> - The "master" node also takes the roles of master and etcd, so "master" is filled under the groups `[kube-master]` and `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
+>
+> Parameters Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The name of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default SSH user name to use.
+> - `ansible_become_pass`: The privilege escalation password.
+> - `ansible_ssh_pass`: The SSH password of the host to connect to as root.
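+
+If the taskbox cannot yet reach the other nodes over SSH as `root`, one common way to set that up is to distribute an SSH key; a minimal sketch with the example IPs above (assuming password login is temporarily allowed):
+
+```bash
+ssh-keygen -t rsa               # generate a key pair on the taskbox if none exists
+ssh-copy-id root@192.168.0.2    # authorize the taskbox key on node1
+ssh-copy-id root@192.168.0.3    # authorize the taskbox key on node2
+ssh root@192.168.0.2 hostname   # verify password-less login works
+```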
+
+## Step 4: Enable All Components
+
+> This step is for the complete installation. You can skip it if you choose a minimal installation.
+
+Edit `conf/common.yaml` and change the following values to `true` (they are `false` by default).
+
+```yaml
+# LOGGING CONFIGURATION
+# Logging is an optional component when installing KubeSphere; the Kubernetes
+# built-in logging APIs will be used if logging_enabled is set to false.
+# Built-in logging only provides limited functions, so enabling logging is recommended.
+logging_enabled: true # Whether to install the logging system
+elasticsearch_master_replica: 1 # Total number of master nodes; even numbers are not allowed
+elasticsearch_data_replica: 2 # Total number of data nodes
+elasticsearch_volume_size: 20Gi # Elasticsearch volume size
+log_max_age: 7 # Log retention time in the built-in Elasticsearch; 7 days by default
+elk_prefix: logstash # The string making up index names; the index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address; KubeSphere supports integrating with an Elasticsearch outside the cluster, which can reduce resource consumption
+#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
+
+# DevOps Configuration
+devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary to image)
+jenkins_memory_lim: 8Gi # Jenkins memory limit; 8Gi by default
+jenkins_memory_req: 4Gi # Jenkins memory request; 4Gi by default
+jenkins_volume_size: 8Gi # Jenkins volume size; 8Gi by default
+jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address; KubeSphere supports integrating with a SonarQube outside the cluster, which can reduce resource consumption
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere and can be turned on
+# before installation, or later by updating the corresponding value to true
+openpitrix_enabled: true # KubeSphere application store
+metrics_server_enabled: true # For the KubeSphere HPA to use
+servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
+notification_enabled: true # KubeSphere notification system
+alerting_enabled: true # KubeSphere alerting system
+```
+
+## Step 5: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it starts with a minimal installation by default.
+> - If you want to enable the pluggable components, modify `common.yaml` and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - The installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environments, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select the multi-node mode and start the installation. The installer will ask whether you have set up a persistent storage service; type `yes`, since we are going to use local volumes.
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/              2020-02-24
+################################################
+Please input an option: 2
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful!" is returned after the `install.sh` process completes, congratulations! You are ready to go.
+
+```bash
+Successful!
+#####################################################
+###            Welcome to KubeSphere!             ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You can use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of the service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
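+
+From the taskbox, you can also probe the console endpoint printed in the installation summary; a plain `curl` sketch (IP and NodePort taken from the example above):
+
+```bash
+curl -I http://192.168.0.1:30880  # expect an HTTP response once the console is up
+```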
+
+## Enable Pluggable Components
+
+If you have already set up a minimal installation, you can still enable the pluggable components by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources in your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/en/docs/installation/install-on-linux/master-ha.md b/content/en/docs/application-store/built-in-apps/master-ha.md
similarity index 100%
rename from content/en/docs/installation/install-on-linux/master-ha.md
rename to content/en/docs/application-store/built-in-apps/master-ha.md
diff --git a/content/en/docs/installation/install-on-linux/multi-node.md b/content/en/docs/application-store/built-in-apps/multi-node.md
similarity index 100%
rename from content/en/docs/installation/install-on-linux/multi-node.md
rename to content/en/docs/application-store/built-in-apps/multi-node.md
diff --git a/content/en/docs/installation/install-on-linux/storage-configuration.md b/content/en/docs/application-store/built-in-apps/storage-configuration.md
similarity index 100%
rename from content/en/docs/installation/install-on-linux/storage-configuration.md
rename to content/en/docs/application-store/built-in-apps/storage-configuration.md
diff --git a/content/en/docs/cluster-administration/_index.md b/content/en/docs/cluster-administration/_index.md
new file mode 100644
index 000000000..ebb2b9400
--- /dev/null
+++ b/content/en/docs/cluster-administration/_index.md
@@ -0,0 +1,22 @@
+---
+title: "Cluster Administration"
+description: "Helps you better understand KubeSphere with detailed graphics and content"
+layout: "single"
+
+linkTitle: "Cluster Administration"
+
+weight: 4100
+
+icon: "/images/docs/docs.svg"
+
+---
+
+## Installing KubeSphere and Kubernetes on Linux
+
+In this chapter, we demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey helps you quickly build a production-ready cluster architecture on a set of machines from zero to one, and it also makes it easy to scale the cluster and install pluggable components on an existing architecture.
+
+## Most Popular Pages
+
+Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
+
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
diff --git a/content/en/docs/cluster-administration/nodes.md b/content/en/docs/cluster-administration/nodes.md
new file mode 100644
index 000000000..4bed011c5
--- /dev/null
+++ b/content/en/docs/cluster-administration/nodes.md
@@ -0,0 +1,10 @@
+---
+title: "Nodes"
+keywords: "kubernetes, StorageClass, kubesphere, PVC"
+description: "Kubernetes Nodes Management"
+
+linkTitle: "Nodes"
+weight: 200
+---
+
+TBD
diff --git a/content/en/docs/cluster-administration/platform-settings/_index.md b/content/en/docs/cluster-administration/platform-settings/_index.md
new file mode 100644
index 000000000..d3af6d02b
--- /dev/null
+++ b/content/en/docs/cluster-administration/platform-settings/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Platform Settings"
+weight: 2200
+
+_build:
+  render: false
+---
diff --git a/content/en/docs/cluster-administration/platform-settings/customize-basic-information.md b/content/en/docs/cluster-administration/platform-settings/customize-basic-information.md
new file mode 100644
index 000000000..52a968785
--- /dev/null
+++ b/content/en/docs/cluster-administration/platform-settings/customize-basic-information.md
@@ -0,0 +1,224 @@
+---
+title: "Role and Member Management"
+keywords: 'kubernetes, kubesphere, air gapped, installation'
+description: 'Role and Member Management'
+
+weight: 2240
+---
+
+The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. This guide demonstrates how to install KubeSphere and Kubernetes in an air-gapped environment.
+
+> Note: Dependency differences between operating systems may cause unexpected problems. If you encounter any installation problem in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
+
+## Prerequisites
+
+- If your machines are behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for the full list.
+- The installer uses `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend adding extra disks of at least 100 GB mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
+- The installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning, which is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare the [high availability configuration](../master-ha) before installation.
+- Since air-gapped machines cannot reach apt or yum sources, please use clean Linux machines to avoid dependency problems.
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the hardware and operating system requirements. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
+
+- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
+- Time synchronization is required across all nodes, otherwise the installation may not succeed.
+- For `Ubuntu 16.04`, it is recommended to use `16.04.5`.
+- If you are using `Ubuntu 18.04`, you need to use the `root` user.
+- Ensure the disk of each node is at least 100 GB.
+- CPU and memory in total across all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
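+
+Since time synchronization is required across all nodes (see the list above), it is worth verifying before installing; a sketch assuming a systemd-based OS with `chrony` installed (substitute `ntpd` or your distribution's equivalent):
+
+```bash
+timedatectl                                          # check that the clock reports as synchronized
+systemctl start chronyd && systemctl enable chronyd  # assumption: chrony is the NTP daemon in use
+date                                                 # compare the output across all nodes
+```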
+
+The following section walks through a multi-node installation example. The example installs on three hosts, with `master` serving as the taskbox that executes the installation. The resulting cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports high-availability configuration of the Master and etcd nodes. Please refer to [Creating a High Availability KubeSphere Cluster](../master-ha) for guidance.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
+
+```bash
+curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
+&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
+```
+
+## Step 3: Configure Host Template
+
+> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
+
+Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish SSH connections to the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`; even so, it is recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of the nodes, and all host names should be lowercase.
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[local-registry]
+master
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - You need to replace the node information, such as IPs and passwords, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
+> - The installer uses one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
+> - The "master" node also takes the roles of master and etcd, so "master" is filled under the groups `[kube-master]` and `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
+>
+> Parameters Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The name of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default SSH user name to use.
+> - `ansible_become_pass`: The privilege escalation password.
+> - `ansible_ssh_pass`: The SSH password of the host to connect to as root.
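+
+Because `hosts.ini` follows the Ansible inventory format, one way to confirm that every host is reachable — assuming an `ansible` binary happens to be available on the taskbox — is a ping check:
+
+```bash
+ansible -i conf/hosts.ini all -m ping  # every host should reply "pong"
+```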
+
+## Step 4: Enable All Components
+
+> This step is for the complete installation. You can skip it if you choose a minimal installation.
+
+Edit `conf/common.yaml` and change the following values to `true` (they are `false` by default).
+
+```yaml
+# LOGGING CONFIGURATION
+# Logging is an optional component when installing KubeSphere; the Kubernetes
+# built-in logging APIs will be used if logging_enabled is set to false.
+# Built-in logging only provides limited functions, so enabling logging is recommended.
+logging_enabled: true # Whether to install the logging system
+elasticsearch_master_replica: 1 # Total number of master nodes; even numbers are not allowed
+elasticsearch_data_replica: 2 # Total number of data nodes
+elasticsearch_volume_size: 20Gi # Elasticsearch volume size
+log_max_age: 7 # Log retention time in the built-in Elasticsearch; 7 days by default
+elk_prefix: logstash # The string making up index names; the index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address; KubeSphere supports integrating with an Elasticsearch outside the cluster, which can reduce resource consumption
+#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
+
+# DevOps Configuration
+devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary to image)
+jenkins_memory_lim: 8Gi # Jenkins memory limit; 8Gi by default
+jenkins_memory_req: 4Gi # Jenkins memory request; 4Gi by default
+jenkins_volume_size: 8Gi # Jenkins volume size; 8Gi by default
+jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address; KubeSphere supports integrating with a SonarQube outside the cluster, which can reduce resource consumption
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere and can be turned on
+# before installation, or later by updating the corresponding value to true
+openpitrix_enabled: true # KubeSphere application store
+metrics_server_enabled: true # For the KubeSphere HPA to use
+servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
+notification_enabled: true # KubeSphere notification system
+alerting_enabled: true # KubeSphere alerting system
+```
+
+## Step 5: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it starts with a minimal installation by default.
+> - If you want to enable the pluggable components, modify `common.yaml` and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - The installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environments, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select the multi-node mode and start the installation. The installer will ask whether you have set up a persistent storage service; type `yes`, since we are going to use local volumes.
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/              2020-02-24
+################################################
+Please input an option: 2
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful!" is returned after the `install.sh` process completes, congratulations! You are ready to go.
+
+```bash
+Successful!
+#####################################################
+###            Welcome to KubeSphere!             ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You can use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of the service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+If you have already set up a minimal installation, you can still enable the pluggable components by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources in your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/en/docs/cluster-administration/storageclass.md b/content/en/docs/cluster-administration/storageclass.md
new file mode 100644
index 000000000..db100ea30
--- /dev/null
+++ b/content/en/docs/cluster-administration/storageclass.md
@@ -0,0 +1,8 @@
+---
+title: "StorageClass"
+keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
+description: "Kubernetes and KubeSphere storage management"
+
+linkTitle: "StorageClass"
+weight: 100
+---
diff --git a/content/en/docs/devops-user-guide/_index.md b/content/en/docs/devops-user-guide/_index.md
new file mode 100644
index 000000000..7cbaba6b1
--- /dev/null
+++ b/content/en/docs/devops-user-guide/_index.md
@@ -0,0 +1,23 @@
+---
+title: "DevOps User Guide"
+description: "Getting started with KubeSphere DevOps project"
+layout: "single"
+
+linkTitle: "DevOps User Guide"
+weight: 4400
+
+icon: "/images/docs/docs.svg"
+
+---
+
+## Installing KubeSphere and Kubernetes on Linux
+
+In this chapter, we demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey helps you quickly build a production-ready cluster architecture on a set of machines from zero to one, and it also makes it easy to scale the cluster and install pluggable components on an existing architecture.
+
+## Most Popular Pages
+
+Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
+
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
diff --git a/content/en/docs/devops-user-guide/devops-administration/_index.md b/content/en/docs/devops-user-guide/devops-administration/_index.md
new file mode 100644
index 000000000..d3af6d02b
--- /dev/null
+++ b/content/en/docs/devops-user-guide/devops-administration/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "DevOps Administration"
+weight: 2200
+
+_build:
+  render: false
+---
diff --git a/content/en/docs/devops-user-guide/devops-administration/role-and-member-management.md b/content/en/docs/devops-user-guide/devops-administration/role-and-member-management.md
new file mode 100644
index 000000000..52a968785
--- /dev/null
+++ b/content/en/docs/devops-user-guide/devops-administration/role-and-member-management.md
@@ -0,0 +1,224 @@
+---
+title: "Role and Member Management"
+keywords: 'kubernetes, kubesphere, air gapped, installation'
+description: 'Role and Member Management'
+
+weight: 2240
+---
+
+The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. This guide demonstrates how to install KubeSphere and Kubernetes in an air-gapped environment.
+
+> Note: Dependency differences between operating systems may cause unexpected problems. If you encounter any installation problem in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
+
+## Prerequisites
+
+- If your machines are behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for the full list.
+- The installer uses `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend adding extra disks of at least 100 GB mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
+- The installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning, which is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare the [high availability configuration](../master-ha) before installation.
+- Since air-gapped machines cannot reach apt or yum sources, please use clean Linux machines to avoid dependency problems.
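+
+If `firewalld` is in use, opening a port might look like the sketch below; port `30880` (the console NodePort used later in this guide) is shown only as an example, and the authoritative list is in the Ports Requirements document linked above:
+
+```bash
+firewall-cmd --permanent --add-port=30880/tcp  # open the console NodePort
+firewall-cmd --reload                          # apply the new rule
+```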
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the hardware and operating system requirements. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
+
+- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
+- Time synchronization is required across all nodes, otherwise the installation may not succeed.
+- For `Ubuntu 16.04`, it is recommended to use `16.04.5`.
+- If you are using `Ubuntu 18.04`, you need to use the `root` user.
+- Ensure the disk of each node is at least 100 GB.
+- CPU and memory in total across all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
+
+The following section walks through a multi-node installation example. The example installs on three hosts, with `master` serving as the taskbox that executes the installation. The resulting cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports high-availability configuration of the Master and etcd nodes. Please refer to [Creating a High Availability KubeSphere Cluster](../master-ha) for guidance.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
+
+```bash
+curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
+&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
+```
+
+## Step 3: Configure Host Template
+
+> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
+
+Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish SSH connections to the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`; even so, it is recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of the nodes, and all host names should be lowercase.
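+
+As noted above, host names must be lowercase; on a systemd-based OS you can set them with `hostnamectl`, run once on each node (the name for node1 is shown):
+
+```bash
+hostnamectl set-hostname node1  # lowercase name matching hosts.ini
+```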
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[local-registry]
+master
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - You need to replace the node information, such as IPs and passwords, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
+> - The installer uses one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
+> - The "master" node also takes the roles of master and etcd, so "master" is filled under the groups `[kube-master]` and `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
+>
+> Parameters Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The name of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default SSH user name to use.
+> - `ansible_become_pass`: The privilege escalation password.
+> - `ansible_ssh_pass`: The SSH password of the host to connect to as root.
+
+## Step 4: Enable All Components
+
+> This step is for the complete installation. You can skip it if you choose a minimal installation.
+
+Edit `conf/common.yaml` and change the following values to `true` (they are `false` by default).
+
+```yaml
+# LOGGING CONFIGURATION
+# Logging is an optional component when installing KubeSphere; the Kubernetes
+# built-in logging APIs will be used if logging_enabled is set to false.
+# Built-in logging only provides limited functions, so enabling logging is recommended.
+logging_enabled: true # Whether to install the logging system
+elasticsearch_master_replica: 1 # Total number of master nodes; even numbers are not allowed
+elasticsearch_data_replica: 2 # Total number of data nodes
+elasticsearch_volume_size: 20Gi # Elasticsearch volume size
+log_max_age: 7 # Log retention time in the built-in Elasticsearch; 7 days by default
+elk_prefix: logstash # The string making up index names; the index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address; KubeSphere supports integrating with an Elasticsearch outside the cluster, which can reduce resource consumption
+#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
+
+# DevOps Configuration
+devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary to image)
+jenkins_memory_lim: 8Gi # Jenkins memory limit; 8Gi by default
+jenkins_memory_req: 4Gi # Jenkins memory request; 4Gi by default
+jenkins_volume_size: 8Gi # Jenkins volume size; 8Gi by default
+jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address; KubeSphere supports integrating with a SonarQube outside the cluster, which can reduce resource consumption
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere and can be turned on
+# before installation, or later by updating the corresponding value to true
+openpitrix_enabled: true # KubeSphere application store
+metrics_server_enabled: true # For the KubeSphere HPA to use
+servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
+notification_enabled: true # KubeSphere notification system
+alerting_enabled: true # KubeSphere alerting system
+```
+
+## Step 5: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it starts with a minimal installation by default.
+> - If you want to enable the pluggable components, modify `common.yaml` and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - The installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environments, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select the multi-node mode and start the installation. The installer will ask whether you have set up a persistent storage service; type `yes`, since we are going to use local volumes.
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/              2020-02-24
+################################################
+Please input an option: 2
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful!" is returned after the `install.sh` process completes, congratulations! You are ready to go.
+
+```bash
+Successful!
+#####################################################
+###            Welcome to KubeSphere!             ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You can use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of the service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+If you have already set up a minimal installation, you can still enable the pluggable components by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources in your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/en/docs/devops-user-guide/introduction/_index.md b/content/en/docs/devops-user-guide/introduction/_index.md
new file mode 100644
index 000000000..f7bc936a3
--- /dev/null
+++ b/content/en/docs/devops-user-guide/introduction/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "DevOps Project Introduction"
+weight: 2100
+
+_build:
+  render: false
+---
diff --git a/content/en/docs/installation/introduction/intro.md b/content/en/docs/devops-user-guide/introduction/credential.md
similarity index 100%
rename from content/en/docs/installation/introduction/intro.md
rename to content/en/docs/devops-user-guide/introduction/credential.md
diff --git a/content/en/docs/devops-user-guide/introduction/pipeline.md b/content/en/docs/devops-user-guide/introduction/pipeline.md
new file mode 100644
index 000000000..a176c3255
--- /dev/null
+++ b/content/en/docs/devops-user-guide/introduction/pipeline.md
@@ -0,0 +1,93 @@
+---
+title: "Introduction"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'KubeSphere Installation Overview'
+
+linkTitle: "Introduction"
+weight: 2110
+---
+
+[KubeSphere](https://kubesphere.io/) is an enterprise-grade, multi-tenant container platform built on [Kubernetes](https://kubernetes.io). It provides an easy-to-use UI for users to manage application workloads and computing resources with a few clicks, which greatly reduces the learning curve and the complexity of daily work such as development, testing, operation and maintenance. KubeSphere aims to alleviate the pain points of Kubernetes, including storage, networking, security and ease of use.
+
+KubeSphere supports installation on cloud-hosted and on-premises Kubernetes clusters, e.g. native K8s, GKE, EKS and RKE. It also supports installation on Linux hosts, including virtual machines and bare metal, provisioning a fresh Kubernetes cluster along the way. Both methods make KubeSphere easy to install. Meanwhile, KubeSphere offers not only an online installer, but also an air-gapped installer for environments with no access to the internet.
+
+KubeSphere is an open source project on [GitHub](https://github.com/kubesphere). Thousands of users are running KubeSphere, and many of them run it for their production workloads.
+
+In summary, there are several installation options you can choose from. Please note that not all options are mutually exclusive; for instance, you can deploy KubeSphere with minimal packages on an existing multi-node K8s cluster in an air-gapped environment. The decision tree in the following graph may help you find the right option for your own situation.
+
+- [All-in-One](../all-in-one): Install KubeSphere on a single node. It is only for users to quickly get familiar with KubeSphere.
+- [Multi-Node](../multi-node): Install KubeSphere on multiple nodes. It is for testing or development.
+- [Install KubeSphere on Air-Gapped Linux](../install-ks-on-linux-airgapped): All images of KubeSphere have been encapsulated into a package, which makes air-gapped installation on Linux machines convenient.
+- [High Availability Multi-Node](../master-ha): Install highly available KubeSphere on multiple nodes, which is used for production environments.
+- [KubeSphere on Existing K8s](../install-on-k8s): Deploy KubeSphere on your Kubernetes cluster, including cloud-hosted services such as GKE, EKS, etc.
+- [KubeSphere on Air-Gapped K8s](../install-on-k8s-airgapped): Install KubeSphere on a disconnected Kubernetes cluster.
+- Minimal Packages: Only install the minimal required system components of KubeSphere. The minimum resource requirement is down to 1 core and 2G memory.
+- [Full Packages](../complete-installation): Install all available system components of KubeSphere, including DevOps, service mesh, application store, etc.
+
+![Installer Options](https://pek3b.qingstor.com/kubesphere-docs/png/20200305093158.png)
+
+## Before Installation
+
+- As the installation pulls images and updates the operating system from the internet, your environment must have internet access. If not, you need to use the air-gapped installer instead.
+- For the all-in-one installation, the only node is both the master and the worker.
+- For the multi-node installation, you are asked to specify the node roles in the configuration file before installation.
+- Your Linux host must have OpenSSH Server installed.
+- Please check the [ports requirements](../port-firewall) before installation.
+
+## Quick Install For Development and Testing
+
+KubeSphere has decoupled some components since v2.1.0. The installer only installs the required components by default, which brings the benefits of fast installation and minimal resource consumption. If you want to install any optional component, please check the section [Pluggable Components Overview](../intro#pluggable-components-overview) for details.
+
+The quick install of KubeSphere is only for development or testing, since it uses local volumes for storage by default. If you want a production install, please refer to the section [High Availability Installation for Production Environment](../intro#high-availability-installation-for-production-environment).
+
+### 1. Install KubeSphere on Linux
+
+- [All-in-One](../all-in-one): A single-node, hassle-free installation with one click.
+- [Multi-Node](../multi-node): Allows you to install KubeSphere on multiple instances using local volumes, which means you are not required to install a storage server such as Ceph or GlusterFS.
+
+> Note: With regard to the air-gapped installation, please refer to [Install KubeSphere on Air Gapped Linux Machines](../install-ks-on-linux-airgapped).
+
+### 2. Install KubeSphere on Existing Kubernetes
+
+You can install KubeSphere on your existing Kubernetes cluster. Please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
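+For quick reference, installing on an existing cluster essentially amounts to applying the ks-installer manifest with `kubectl`. The sketch below assumes the minimal manifest shipped in the ks-installer repository for v2.1.1; treat the exact URL and resource names as assumptions and follow the linked guide for the authoritative steps:
+
+```bash
+# Sketch: deploy a minimal KubeSphere on an existing cluster (manifest URL is an assumption)
+kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v2.1.1/kubesphere-minimal.yaml
+
+# Follow the installer logs until it reports success (deployment name is an assumption)
+kubectl logs -n kubesphere-system deploy/ks-installer -f
+```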
+
+## High Availability Installation for Production Environment
+
+### 1. Install HA KubeSphere on Linux
+
+The KubeSphere installer supports installing a highly available cluster for production, with the prerequisites being a load balancer and a persistent storage service set up in advance.
+
+- [Persistent Service Configuration](../storage-configuration): By default, the KubeSphere Installer uses [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning in the Kubernetes cluster. This is convenient for quickly installing a testing environment. A production environment must have a storage server set up; please refer to [Persistent Service Configuration](../storage-configuration) for details.
+- [Load Balancer Configuration for HA install](../master-ha): Before you get started with a multi-node installation in a production environment, you need to configure a load balancer. Either a cloud LB or `HAproxy + keepalived` works for the installation.
+
+### 2. Install HA KubeSphere on Existing Kubernetes
+
+Before you install KubeSphere on existing Kubernetes, please check the prerequisites of the installation on Linux described above, and verify that your existing Kubernetes satisfies them, i.e., has a load balancer and a persistent storage service.
+
+If your Kubernetes is ready, please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
+
+> You can install KubeSphere on cloud Kubernetes services; see, for example, [Installing KubeSphere on GKE cluster](../install-on-gke).
+
+## Pluggable Components Overview
+
+KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable any of them before or after installation. The installer does not install the pluggable components by default. Please check the guide [Enable Pluggable Components Installation](../pluggable-components) for your requirements.
+
+![Pluggable Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191207140846.png)
+
+## Storage Configuration Instruction
+
+The following links explain how to configure different types of persistent storage services. Please refer to [Storage Configuration Instruction](../storage-configuration) for detailed instructions on how to configure the storage class in KubeSphere.
+
+- [NFS](https://kubernetes.io/docs/concepts/storage/volumes/#nfs)
+- [GlusterFS](https://www.gluster.org/)
+- [Ceph RBD](https://ceph.com/)
+- [QingCloud Block Storage](https://docs.qingcloud.com/product/storage/volume/)
+- [QingStor NeonSAN](https://docs.qingcloud.com/product/storage/volume/super_high_performance_shared_volume/)
+
+## Add New Nodes
+
+KubeSphere Installer allows you to scale the number of nodes; see [Add New Nodes](../add-nodes).
+
+## Uninstall
+
+Uninstall will remove KubeSphere from the machines. This operation is irreversible and dangerous. Please check [Uninstall](../uninstall).
diff --git a/content/en/docs/installing-on-kubernetes/_index.md b/content/en/docs/installing-on-kubernetes/_index.md
new file mode 100644
index 000000000..51adfedde
--- /dev/null
+++ b/content/en/docs/installing-on-kubernetes/_index.md
@@ -0,0 +1,23 @@
+---
+title: "Installing on Kubernetes"
+description: "Help you to better understand KubeSphere with detailed graphics and contents"
+layout: "single"
+
+linkTitle: "Installing on Kubernetes"
+weight: 2500
+
+icon: "/images/docs/docs.svg"
+
+---
+
+## Installing KubeSphere and Kubernetes on Linux
+
+In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey helps you quickly build a production-ready cluster architecture on a set of machines from zero to one, and also makes it easy to scale the cluster and install pluggable components on an existing architecture.
+
+## Most Popular Pages
+
+Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
+
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
+
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
diff --git a/content/en/docs/installation/install-on-linux/_index.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/_index.md
similarity index 100%
rename from content/en/docs/installation/install-on-linux/_index.md
rename to content/en/docs/installing-on-kubernetes/hosted-kubernetes/_index.md
diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/all-in-one.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/all-in-one.md
new file mode 100644
index 000000000..8214171ef
--- /dev/null
+++ b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/all-in-one.md
@@ -0,0 +1,116 @@
+---
+title: "All-in-One Installation"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'The guide for installing all-in-one KubeSphere for developing or testing'
+
+linkTitle: "All-in-One"
+weight: 2210
+---
+
+For those who are new to KubeSphere and looking for a quick way to discover the platform, the all-in-one mode is your best choice: it provisions KubeSphere and Kubernetes on your machine in one click, with hassle-free configuration.
+
+- The following instructions are for the default installation without enabling any optional components, as we have made them pluggable since v2.1.0. If you want to enable any of them, please see the section [Enable Pluggable Components](../all-in-one#enable-pluggable-components) below.
+- If your machine has >= 8 cores and >= 16G memory, we recommend installing the full package of KubeSphere by [enabling optional components](../complete-installation).
+
+## Prerequisites
+
+If your machine is behind a firewall, you need to open the required ports; see [Ports Requirement](../port-firewall) for more information.
+
+## Step 1: Prepare Linux Machine
+
+The following describes the requirements of hardware and operating system.
+
+- For `Ubuntu 16.04` OS, it is recommended to select the latest `16.04.5`.
+- If you are using Ubuntu 18.04, you need to use the root user to install.
+- If the Debian system does not have the sudo command installed, you need to execute the `apt update && apt install sudo` command as root before installation.
+
+### Hardware Recommendation
+
+| System | Minimum Requirements |
+| ------- | ----------- |
+| CentOS 7.4 ~ 7.7 (64 bit) | CPU: 2 cores, Memory: 4 G, Disk Space: 100 G |
+| Ubuntu 16.04/18.04 LTS (64 bit) | CPU: 2 cores, Memory: 4 G, Disk Space: 100 G |
+| Red Hat Enterprise Linux Server 7.4 (64 bit) | CPU: 2 cores, Memory: 4 G, Disk Space: 100 G |
+| Debian Stretch 9.5 (64 bit) | CPU: 2 cores, Memory: 4 G, Disk Space: 100 G |
+
+## Step 2: Download Installer Package
+
+Execute the following commands to download Installer 2.1.1 and unpack it.
+
+```bash
+curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
+&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/scripts
+```
+
+## Step 3: Get Started with Installation
+
+You do not need to do anything except execute a single command. The installer completes everything for you automatically, including installing/updating dependency packages, installing Kubernetes (version 1.16.7 by default), setting up the storage service, and so on.
+
+> Note:
+>
+> - Generally speaking, do not modify any configuration.
+> - KubeSphere installs `calico` by default. If you would like to use a different network plugin, you can change the configuration in `conf/common.yaml`. You can also modify other configurations such as the storage class, pluggable components, etc.
+> - The default storage class is [OpenEBS](https://openebs.io/), which provisions persistent storage based on [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local). OpenEBS supports [dynamically provisioning PVs](https://docs.openebs.io/docs/next/uglocalpv.html#Provision-OpenEBS-Local-PV-based-on-hostpath). It will be installed automatically for testing purposes.
+> - Please refer to [storage configurations](../storage-configuration) for the supported storage classes.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Execute the following command:
+
+```bash
+./install.sh
+```
+
+**2.** Enter `1` to select `All-in-one` mode and type `yes` if your machine satisfies the requirements to start:
+
+```bash
+################################################
+ KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/ 2020-02-24
+################################################
+Please input an option: 1
+```
+
+**3.** Verify if KubeSphere is installed successfully or not:
+
+**(1).** If you see "Successful" returned after the process completes, the installation is successful. The console service is exposed through NodePort 30880 by default. You may need to bind an EIP and configure port forwarding in your environment so that outside users can access the console, and make sure the related firewall rules do not block the port.
+
+```bash
+successsful!
+#####################################################
+### Welcome to KubeSphere! ###
+#####################################################
+
+Console: http://192.168.0.8:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE:Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You will be able to use the default account and password to log in to the console and take a tour of KubeSphere.
+
+Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Dashboard](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+The guide above performs a minimal installation by default. You can execute the following command to open the ConfigMap and enable pluggable components. Make sure your cluster has enough CPU and memory in advance; see [Enable Pluggable Components](../pluggable-components).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also please read the [FAQ of installation](../../faq/faq-install).
+
+If you have any further questions, please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/complete-installation.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/complete-installation.md
new file mode 100644
index 000000000..e0ab92099
--- /dev/null
+++ b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/complete-installation.md
@@ -0,0 +1,76 @@
+---
+title: "Install All Optional Components"
+keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix'
+description: 'Install KubeSphere with all optional components enabled on Linux machine'
+
+
+weight: 2260
+---
+
+The installer only installs required components (i.e. a minimal installation) by default since v2.1.0. The other components are designed to be pluggable, which means you can enable any of them before or after installation. If your machines meet the following minimum requirements, we recommend you **enable all components before installation**. A complete installation gives you an opportunity to comprehensively discover the container platform.
+
+Minimum Requirements
+
+- CPU: 8 cores in total of all machines
+- Memory: 16 GB in total of all machines
+
+> Note:
+>
+> - If your machines do not meet the minimum requirements of a complete installation, you can still enable any of the components as you wish. Please refer to [Enable Pluggable Components Installation](../pluggable-components).
+> - This works for both [All-in-One](../all-in-one) and [Multi-Node](../multi-node) installations.
+
+This tutorial walks you through how to enable all components of KubeSphere.
+
+## Download Installer Package
+
+If you do not have the package yet, please run the following commands to download Installer 2.1.1 and unpack it, then enter the `conf` folder.
+
+```bash
+$ curl -L https://kubesphere.io/download/stable/v2.1.1 > installer.tar.gz \
+&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
+```
+
+## Enable All Components
+
+Edit `conf/common.yaml` and set the following values to `true` (they default to `false`).
+
+```yaml
+# LOGGING CONFIGURATION
+# logging is an optional component when installing KubeSphere, and
+# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
+# Builtin logging only provides limited functions, so we recommend enabling logging.
+logging_enabled: true # Whether to install the logging system
+elasticsearch_master_replica: 1 # Total number of master nodes; an even number is not allowed
+elasticsearch_data_replica: 2 # Total number of data nodes
+elasticsearch_volume_size: 20Gi # Elasticsearch volume size
+log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
+elk_prefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false # Whether to install built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address. KubeSphere supports integrating with Elasticsearch outside the cluster, which can reduce resource consumption.
+#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
+
+#DevOps Configuration
+devops_enabled: true # Whether to install built-in DevOps system (supports CI/CD pipelines, Source/Binary to image)
+jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
+jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
+jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
+jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true # Whether to install built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address. KubeSphere supports integrating with SonarQube outside the cluster, which can reduce resource consumption.
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere,
+# and can be enabled before installation, or later by updating their values to true
+openpitrix_enabled: true # KubeSphere application store
+metrics_server_enabled: true # For KubeSphere HPA to use
+servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
+notification_enabled: true # KubeSphere notification system
+alerting_enabled: true # KubeSphere alerting system
+```
+
+Save it, then you can continue the installation process.
diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-linux-airgapped.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-linux-airgapped.md
new file mode 100644
index 000000000..26b3e4f04
--- /dev/null
+++ b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-linux-airgapped.md
@@ -0,0 +1,224 @@
+---
+title: "Air-Gapped Installation"
+keywords: 'kubernetes, kubesphere, air gapped, installation'
+description: 'How to install KubeSphere on air-gapped Linux machines'
+
+
+weight: 2240
+---
+
+The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.
+
+> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
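+Since all images are pulled from the local registry created by the installer, a quick way to sanity-check that registry after installation is to query its catalog from the registry node. The sketch below assumes the registry listens on the default port `5000`:
+
+```bash
+# List the repositories hosted by the local registry (the port is an assumption)
+curl http://127.0.0.1:5000/v2/_catalog
+```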
+
+## Prerequisites
+
+- If your machine is behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for details.
+- Installer will use `/var/lib/docker` as the default directory where all Docker related files, including the images, are stored. We recommend adding additional disks of at least 100G, mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
+- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare the [high availability configuration](../master-ha) before installation.
+- Since air-gapped machines cannot connect to an apt or yum source, please use a clean Linux machine to avoid dependency problems.
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
+
+- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit).
+- Time synchronization is required across all nodes, otherwise the installation may not succeed.
+- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`.
+- If you are using `Ubuntu 18.04`, you need to use the user `root`.
+- Ensure the disk of each node is at least 100G.
+- CPU and memory in total of all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
+
+The following section describes an example to introduce multi-node installation. This example shows a three-host installation, with `master` serving as the taskbox to execute the installation. The following cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports the high-availability configuration of the Master and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
+
+```bash
+curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
+&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
+```
+
+## Step 3: Configure Host Template
+
+> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
+
+Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any lines in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is still recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of each node, and all host names should be in lowercase.
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[local-registry]
+master
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - You need to replace each node's information, such as the IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
+> - The installer will use one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
+> - The "master" node also takes the roles of master and etcd, so "master" is filled under the group `[kube-master]` and the group `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
+>
+> Parameters Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The name of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default ssh user name to use.
+> - `ansible_become_pass`: Allows you to set the privilege escalation password.
+> - `ansible_ssh_pass`: The password of the host to connect to using root.
+
+## Step 4: Enable All Components
+
+> This step is for the complete installation. You can skip it if you choose a minimal installation.
+
+Edit `conf/common.yaml` and set the following values to `true` (they default to `false`).
+
+```yaml
+# LOGGING CONFIGURATION
+# logging is an optional component when installing KubeSphere, and
+# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
+# Builtin logging only provides limited functions, so we recommend enabling logging.
+logging_enabled: true # Whether to install the logging system
+elasticsearch_master_replica: 1 # Total number of master nodes; an even number is not allowed
+elasticsearch_data_replica: 2 # Total number of data nodes
+elasticsearch_volume_size: 20Gi # Elasticsearch volume size
+log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
+elk_prefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false # Whether to install built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address. KubeSphere supports integrating with Elasticsearch outside the cluster, which can reduce resource consumption.
+#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
+
+#DevOps Configuration
+devops_enabled: true # Whether to install built-in DevOps system (supports CI/CD pipelines, Source/Binary to image)
+jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
+jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
+jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
+jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true # Whether to install built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address. KubeSphere supports integrating with SonarQube outside the cluster, which can reduce resource consumption.
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere,
+# and can be enabled before installation, or later by updating their values to true
+openpitrix_enabled: true # KubeSphere application store
+metrics_server_enabled: true # For KubeSphere HPA to use
+servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
+notification_enabled: true # KubeSphere notification system
+alerting_enabled: true # KubeSphere alerting system
+```
+
+## Step 5: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
+> - If you want to enable pluggable feature components, modify `common.yaml` and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder, and execute `install.sh` as the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select multi-node mode to start the installation. The installer will ask whether you have set up a persistent storage service. Just type `yes` since we are going to use local volumes.
+
+```bash
+################################################
+ KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/ 2020-02-24
+################################################
+Please input an option: 2
+
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.
+
+```bash
+successsful!
+#####################################################
+### Welcome to KubeSphere! ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE:Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+If you have already set up a minimal installation, you can still edit the ConfigMap of ks-installer using the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/master-ha.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/master-ha.md
new file mode 100644
index 000000000..ee8f26203
--- /dev/null
+++ b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/master-ha.md
@@ -0,0 +1,152 @@
+---
+title: "High Availability Configuration"
+keywords: "kubesphere, kubernetes, docker,installation, HA, high availability"
+description: "The guide for installing a high availability of KubeSphere cluster"
+
+weight: 2230
+---
+
+## Introduction
+
+[Multi-node installation](../multi-node) can help you quickly set up a single-master cluster on multiple machines for development and testing. However, for production we need to consider the high availability of the cluster. Since the key components, i.e. kube-apiserver, kube-scheduler, and kube-controller-manager, run on a single master node, Kubernetes and KubeSphere will be unavailable if that master goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers and multiple masters. You can use any cloud load balancer or any hardware load balancer (e.g. F5). In addition, keepalived plus HAProxy is also an alternative for creating such a high-availability cluster; a minimal HAProxy sketch is shown under Architecture below.
+
+This document walks you through an example of how to create two [QingCloud Load Balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal and external load balancer respectively, and how to configure the high availability of masters and Etcd using these load balancers.
+
+## Prerequisites
+
+- Please make sure that you have already read [Multi-Node installation](../multi-node). This document only demonstrates how to configure the load balancers.
+- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or follow the guide of any other cloud provider to create them.
+
+## Architecture
+
+This example prepares six CentOS 7.5 machines. We will create two load balancers, and deploy three masters and Etcd nodes on three of the machines. You can configure these masters and Etcd nodes in `conf/hosts.ini`.
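+If you choose `HAProxy + keepalived` instead of a cloud load balancer, the internal load balancing role described in this document can be sketched as a TCP frontend on port `6443` that round-robins across the three masters. The fragment below is a minimal, hypothetical example using the IP addresses of this walkthrough, not a production-ready configuration:
+
+```bash
+# /etc/haproxy/haproxy.cfg (fragment): TCP pass-through to the three kube-apiservers
+frontend kube-apiserver
+    bind *:6443
+    mode tcp
+    default_backend kube-masters
+
+backend kube-masters
+    mode tcp
+    balance roundrobin
+    server master1 192.168.0.1:6443 check
+    server master2 192.168.0.2:6443 check
+    server master3 192.168.0.3:6443 check
+```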
+
+![Master and etcd node high availability architecture](https://pek3b.qingstor.com/kubesphere-docs/png/20200307215924.png)
+
+## Install HA Cluster
+
+### Step 1: Create Load Balancers
+
+This step briefly shows an example of creating a load balancer on QingCloud platform.
+
+#### Create an Internal Load Balancer
+
+1.1. Log in to the [QingCloud Console](https://console.qingcloud.com/login) and select **Network & CDN → Load Balancers**, then click on the create button and fill in the basic information.
+
+1.2. Choose the VxNet that your machines are created within from the **Network** dropdown list; here it is `kube`. The other settings can keep the default values as follows. Click **Submit** to complete the creation.
+
+![Create Internal LB on QingCloud](https://pek3b.qingstor.com/kubesphere-docs/png/20200215224125.png)
+
+1.3. Drill into the detail page of the load balancer, then create a listener that listens on port `6443` of the `TCP` protocol.
+
+- Name: Define a name for this Listener
+- Listener Protocol: Select the `TCP` protocol
+- Port: `6443`
+- Load mode: `Poll`
+
+> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that the port `6443` has been added to the firewall rules and that external traffic can pass through `6443`. Otherwise, the installation will fail.
+
+![Add Listener to LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225205.png)
+
+1.4. Click **Add Backend**, choose the VxNet `kube` that we chose before. Then click on the button **Advanced Search**, choose the three master nodes under the VxNet, and set the port to `6443`, which is the default secure port of the api-server.
+
+Click **Submit** when you are done.
+
+![Choose Backends](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225550.png)
+
+1.5. Click on the button **Apply Changes** to activate the configurations. At this point, you can find that the three masters have been added as the backend servers of the listener behind the internal load balancer.
+
+> Please note: The status of all masters might show `Not available` after you add them as backends. This is normal, since port `6443` of the api-server is not active on the masters yet. The status will change to `Active` and the api-server port will be exposed after the installation completes, which means the internal load balancer you configured works as expected.
+
+![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215230107.png)
+
+#### Create an External Load Balancer
+
+You need to create an EIP in advance.
+
+1.6. Similarly, create an external load balancer without joining any network, but associate the EIP that you created with this load balancer.
+
+1.7. Enter the load balancer detail page and create a listener that listens on port `30880` of the `HTTP` protocol, which is the NodePort of the KubeSphere console.
+
+> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that the port `30880` has been added to the firewall rules and that external traffic can pass through `30880`. Otherwise, the installation will fail.
+
+![Create external LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232114.png)
+
+1.8. Click **Add Backend**, then choose the `six` machines within the VxNet `kube` on which we are going to install KubeSphere, and set the port to `30880`.
+
+Click **Submit** when you are done.
+
+1.9. Click on the button **Apply Changes** to activate the configurations. At this point, you can find that the six machines have been added as the backend servers of the listener behind the external load balancer.
+
+![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232445.png)
+
+### Step 2: Modify the host.ini
+
+Go to the taskbox where you downloaded the installer by following [Multi-node Installation](../multi-node) and complete the following configurations.
+
+| **Parameter** | **Description** |
+| --- | --- |
+| `[all]` | Node information. If you run the installation as the `root` user, use the syntax:<br>- `<hostname> ansible_connection=local ip=<ip>`<br>- `<hostname> ansible_host=<ip> ip=<ip> ansible_ssh_pass=<password>`<br>If you log in as a non-root user, use the syntax:<br>- `<hostname> ansible_connection=<type> ip=<ip> ansible_user=<username> ansible_become_pass=<password>` |
+| `[kube-master]` | Master node names |
+| `[kube-node]` | Worker node names |
+| `[etcd]` | Etcd node names. The number of `etcd` nodes needs to be odd. |
+| `[k8s-cluster:children]` | Group names of `[kube-master]` and `[kube-node]` |
+
+We use **CentOS 7.5** with the `root` user to install an HA cluster. Please see the following configuration as an example:
+
+> Note:
+>
+> If the _taskbox_ cannot establish an `ssh` connection with the rest of the nodes, try the non-root user configuration.
+
+#### host.ini example
+
+```ini
+[all]
+master1 ansible_connection=local ip=192.168.0.1
+master2 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+master3 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+node1 ansible_host=192.168.0.4 ip=192.168.0.4 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.5 ip=192.168.0.5 ansible_ssh_pass=PASSWORD
+node3 ansible_host=192.168.0.6 ip=192.168.0.6 ansible_ssh_pass=PASSWORD
+
+[kube-master]
+master1
+master2
+master3
+
+[kube-node]
+node1
+node2
+node3
+
+[etcd]
+master1
+master2
+master3
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+### Step 3: Configure the Load Balancer Parameters
+
+Besides configuring `common.yaml` as described in [Multi-node Installation](../multi-node), you need to modify the load balancer information in the same file. Assuming the **VIP** address and listening port of the **internal load balancer** are `192.168.0.253` and `6443`, you can refer to the following example.
+
+> - Note that address and port should be indented by two spaces in `common.yaml`, and the address should be the VIP.
+> - The domain name of the load balancer is "lb.kubesphere.local" by default for internal access. If you need to change the domain name, please uncomment and modify it.
+
+#### The configuration sample in common.yaml
+
+```yaml
+## External LB example config
+## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
+loadbalancer_apiserver:
+  address: 192.168.0.253
+  port: 6443
+```
+
+Finally, please refer to the [guide](../storage-configuration) to configure the persistent storage service in `common.yaml`. Then you are ready to start the installation of your high-availability KubeSphere cluster.
diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/multi-node.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/multi-node.md
new file mode 100644
index 000000000..d1cd790ea
--- /dev/null
+++ b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/multi-node.md
@@ -0,0 +1,176 @@
+---
+title: "Multi-node Installation"
+keywords: 'kubesphere, kubernetes, docker, kubesphere installer'
+description: 'The guide for installing KubeSphere on Multi-Node in development or testing environment'
+
+weight: 2220
+---
+
+`Multi-Node` installation enables installing KubeSphere on multiple nodes. Typically, you use any one node as the _taskbox_ to run the installation task. Please note that `ssh` communication needs to be established between the taskbox and the other nodes.
+
+- The following instructions are for the default installation without enabling any optional components, as we have made them pluggable since v2.1.0. If you want to enable any of them, please read [Enable Pluggable Components](../pluggable-components).
+- If your machines in total have >= 8 cores and >= 16G memory, we recommend installing the full package of KubeSphere by [Enabling Optional Components](../complete-installation).
+- The installation time depends on your network bandwidth, your computer configuration, the number of nodes, etc.
+
+## Prerequisites
+
+If your machine is behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for more information.
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
+- Time synchronization is required across all nodes, otherwise the installation may not succeed.
+- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`.
+- If you are using `Ubuntu 18.04`, you need to use the user `root`.
+- If the Debian system does not have the sudo command installed, you need to execute the `apt update && apt install sudo` command as root before installation.
+
+### Hardware Recommendation
+
+- KubeSphere can be installed on any cloud platform.
+- The installation speed can be accelerated by increasing network bandwidth.
+- If you choose the air-gapped installation, ensure the disk of each node is at least 100G.
+
+| System | Minimum Requirements (Each node) |
+| --- | --- |
+| CentOS 7.4 ~ 7.7 (64 bit) | CPU: 2 cores, Memory: 4 G, Disk Space: 40 G |
+| Ubuntu 16.04/18.04 LTS (64 bit) | CPU: 2 cores, Memory: 4 G, Disk Space: 40 G |
+| Red Hat Enterprise Linux Server 7.4 (64 bit) | CPU: 2 cores, Memory: 4 G, Disk Space: 40 G |
+| Debian Stretch 9.5 (64 bit) | CPU: 2 cores, Memory: 4 G, Disk Space: 40 G |
+
+The following section describes an example to introduce multi-node installation. This example shows a three-host installation, with `master` serving as the taskbox to execute the installation. The following cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports the high-availability configuration of the Master and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+**1.** Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
+
+```bash
+curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
+&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
+```
+
+**2.** Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using the root user. The following is an example configuration for `CentOS 7.5` using the root user; see the non-root sketch after the code block for comparison. Note: do not manually wrap any lines in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is still recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of each node, and all host names should be in lowercase.
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
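+For comparison, a non-root variant of the `[all]` group might look like the following sketch, assuming a sudo-capable user named `ubuntu` (a hypothetical example; the commented block at the top of `conf/hosts.ini` is the authoritative syntax):
+
+```ini
+# Hypothetical non-root example: connect as "ubuntu" and escalate privileges with its password
+[all]
+master ansible_connection=local ip=192.168.0.1 ansible_user=ubuntu ansible_become_pass=PASSWORD
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_user=ubuntu ansible_become_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_user=ubuntu ansible_become_pass=PASSWORD
+```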
+> - The "master" node also takes the role of master and etcd, so "master" is filled under the group`[kube-master]` and the group `[etcd]` respectively. +> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`. +> +> Parameters Specification: +> +> - `ansible_connection`: Connection type to the host, "local" in the example above means local connection. +> - `ansible_host`: The name of the host to be connected. +> - `ip`: The ip of the host to be connected. +> - `ansible_user`: The default ssh user name to use. +> - `ansible_become_pass`: Allows you to set the privilege escalation password. +> - `ansible_ssh_pass`: The password of the host to be connected using root. + +## Step 3: Install KubeSphere to Linux Machines + +> Note: +> +> - Generally, you can install KubeSphere without any modification, it will start with minimal installation by default. +> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions. +> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environment, please [configure supported persistent storage service](../storage-configuration) before installation. +> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not use the two IP range. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts. + +**1.** Enter `scripts` folder, and execute `install.sh` using `root` user: + +```bash +cd ../cripts +./install.sh +``` + +**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume. + +```bash +################################################ + KubeSphere Installer Menu +################################################ +* 1) All-in-one +* 2) Multi-node +* 3) Quit +################################################ +https://kubesphere.io/ 2020-02-24 +################################################ +Please input an option: 2 + +``` + +**3.** Verify the multi-node installation: + +**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go. + +```bash +successsful! +##################################################### +### Welcome to KubeSphere! ### +##################################################### + +Console: http://192.168.0.1:30880 +Account: admin +Password: P@88w0rd + +NOTE:Please modify the default password after login. +##################################################### +``` + +> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components). + +**(2).** You will be able to use default account and password `admin / P@88w0rd` to log in the console `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in. + +![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png) + +Note: After log in console, please verify the monitoring status of service components in the "Cluster Status". If any service is not ready, please wait patiently untill all components get running up. 
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## FAQ
+
+The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also please read the [FAQ of installation](../../faq/faq-install).
+
+If you have any further questions, please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/storage-configuration.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/storage-configuration.md
new file mode 100644
index 000000000..a3d8d5156
--- /dev/null
+++ b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/storage-configuration.md
@@ -0,0 +1,157 @@
+---
+title: "StorageClass Configuration"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'Instructions for Setting up StorageClass for KubeSphere'
+
+weight: 2250
+---
+
+Currently, the installer supports the following [storage classes](https://kubernetes.io/docs/concepts/storage/storage-classes/), providing persistent storage services for KubeSphere (more storage classes will be supported soon).
+
+- NFS
+- Ceph RBD
+- GlusterFS
+- QingCloud Block Storage
+- QingStor NeonSAN
+- Local Volume (for development and test only)
+
+The versions of storage systems and corresponding CSI plugins in the table below have been well tested.
+
+| **Name** | **Version** | **Reference** |
+| ----------- | --- |---|
+| Ceph RBD Server | v0.94.10 | For development and testing, refer to [Install Ceph Storage Server](/zh-CN/appendix/ceph-ks-install/) for details. Please refer to the [Ceph Documentation](http://docs.ceph.com/docs/master/) for production. |
+| Ceph RBD Client | v12.2.5 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [Ceph RBD](../storage-configuration/#ceph-rbd) |
+| GlusterFS Server | v3.7.6 | For development and testing, refer to [Deploying GlusterFS Storage Server](/zh-CN/appendix/glusterfs-ks-install/) for details. Please refer to the [Gluster Documentation](https://www.gluster.org/install/) or the [Gluster Install Guide](http://gluster.readthedocs.io/en/latest/Install-Guide/Install/) for production. Note you need to install [Heketi Manager (V3.0.0)](https://github.com/heketi/heketi/tree/master/docs/admin). |
+| GlusterFS Client | v3.12.10 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [GlusterFS](../storage-configuration/#glusterfs) |
+| NFS Client | v3.1.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared the NFS storage server. Please see [NFS Client](../storage-configuration/#nfs) |
+| QingCloud-CSI | v0.2.0.1 | You need to configure the corresponding parameters in `common.yaml` before installing KubeSphere. Please refer to [QingCloud CSI](../storage-configuration/#qingcloud-csi) for details |
+| NeonSAN-CSI | v0.3.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared the QingStor NeonSAN storage server. Please see [NeonSAN-CSI](../storage-configuration/#neonsan-csi) |
+
+> Note: You are only allowed to set ONE default storage class in the cluster. To specify a default storage class, make sure no default storage class already exists in the cluster.
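+You can verify this with standard `kubectl` commands before changing the configuration (a sketch; `<name>` is a placeholder for the storage class to demote):
+
+```bash
+# The current default storage class is marked "(default)" in the output
+kubectl get sc
+
+# Unset an existing default before designating a new one
+kubectl patch storageclass <name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
+```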
+
+## Storage Configuration
+
+After preparing the storage server, you need to refer to the parameter descriptions in the following tables and modify the corresponding configurations in `conf/common.yaml` accordingly.
+
+The following describes the storage configuration in `common.yaml`.
+
+> Note: Local Volume is configured as the default storage class in `common.yaml` by default. If you are going to set another storage class as the default, disable Local Volume and modify the configuration for that storage class.
+
+### Local Volume (For developing or testing only)
+
+A [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) represents a mounted local storage device such as a disk, partition or directory. Local volumes can only be used as a statically created PersistentVolume. We recommend using Local volume for testing or development only, since it is a quick and easy way to install KubeSphere without having to set up a persistent storage server. Refer to the following table for the definition in `conf/common.yaml`.
+
+| **Local volume** | **Description** |
+| --- | --- |
+| local\_volume\_provisioner\_enabled | Whether to use Local as the persistent storage, defaults to true |
+| local\_volume\_provisioner\_storage\_class | Storage class name, default value: local |
+| local\_volume\_is\_default\_class | Whether to set Local as the default storage class, defaults to true |
+
+### NFS
+
+An NFS volume allows an existing NFS (Network File System) share to be mounted into your Pod. NFS can be configured in `conf/common.yaml`. Note that you need to prepare the NFS server in advance.
+
+| **NFS** | **Description** |
+| --- | --- |
+| nfs\_client\_enable | Whether to use NFS as the persistent storage, defaults to false |
+| nfs\_client\_is\_default\_class | Whether to set NFS as the default storage class, defaults to false |
+| nfs\_server | The NFS server address, either IP or hostname |
+| nfs\_path | NFS shared directory, which is the file directory shared on the server, see the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/volumes/#nfs) |
+| nfs\_vers3\_enabled | Specifies which version of the NFS protocol to use, defaults to false, which means v4; true means v3 |
+| nfs\_archiveOnDelete | Archive the PVC when deleting. It will automatically remove data from `oldPath` when it is set to false |
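+Putting the parameters above together, a `conf/common.yaml` fragment enabling NFS might look like the sketch below (the server address and export path are illustrative values for a hypothetical NFS server):
+
+```yaml
+# Hypothetical NFS configuration; replace the address and path with your server's
+nfs_client_enable: true
+nfs_client_is_default_class: true
+nfs_server: 192.168.0.100
+nfs_path: /mnt/kube_nfs
+```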
+
+### Ceph RBD
+
+The open source [Ceph RBD](https://ceph.com/) distributed storage system can be configured for use in `conf/common.yaml`. You need to prepare the Ceph storage server in advance. Please refer to the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) for more details.
+
+| **Ceph\_RBD** | **Description** |
+| --- | --- |
+| ceph\_rbd\_enabled | Whether to use Ceph RBD as the persistent storage, defaults to false |
+| ceph\_rbd\_storage\_class | Storage class name |
+| ceph\_rbd\_is\_default\_class | Whether to set Ceph RBD as the default storage class, defaults to false |
+| ceph\_rbd\_monitors | Ceph monitors, comma delimited. This parameter is required and depends on the Ceph RBD server parameters |
+| ceph\_rbd\_admin\_id | Ceph client ID that is capable of creating images in the pool. Defaults to “admin” |
+| ceph\_rbd\_admin\_secret | Secret name for "adminId". This parameter is required. The provided secret must have type “kubernetes.io/rbd” |
+| ceph\_rbd\_pool | Ceph RBD pool. Default is “rbd” |
+| ceph\_rbd\_user\_id | Ceph client ID that is used to map the RBD image. Default is the same as adminId |
+| ceph\_rbd\_user\_secret | Secret for userId. This secret must be created in every namespace that uses the RBD image |
+| ceph\_rbd\_fsType | fsType that is supported by Kubernetes. Default: "ext4" |
+| ceph\_rbd\_imageFormat | Ceph RBD image format, “1” or “2”. Default is “1” |
+| ceph\_rbd\_imageFeatures | This parameter is optional and should only be used if you set imageFormat to “2”. Currently supported features are layering only. Default is “”, and no features are turned on |
+
+> Note:
+>
+> The Ceph secret used in the storage class, such as "ceph_rbd_admin_secret" and "ceph_rbd_user_secret", is retrieved using the following command on the Ceph storage server.
+
+```bash
+ceph auth get-key client.admin
+```
+
+### GlusterFS
+
+[GlusterFS](https://docs.gluster.org/en/latest/) is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. You need to prepare the GlusterFS storage server in advance. Please refer to the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs) for further information.
+
+| **GlusterFS (requires a GlusterFS cluster managed by Heketi)** | **Description** |
+| --- | --- |
+| glusterfs\_provisioner\_enabled | Whether to use GlusterFS as the persistent storage, defaults to false |
+| glusterfs\_provisioner\_storage\_class | Storage class name |
+| glusterfs\_is\_default\_class | Whether to set GlusterFS as the default storage class, defaults to false |
+| glusterfs\_provisioner\_restauthenabled | Gluster REST service authentication boolean that enables authentication to the REST server |
+| glusterfs\_provisioner\_resturl | Gluster REST service/Heketi service URL which provisions Gluster volumes on demand. The general format should be "IP address:Port", and this is a mandatory parameter for the GlusterFS dynamic provisioner |
+| glusterfs\_provisioner\_clusterid | Optional. For example, 630372ccdc720a92c681fb928f27b53f is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of cluster IDs |
+| glusterfs\_provisioner\_restuser | Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool |
+| glusterfs\_provisioner\_secretName | Optional. Identification of a Secret instance that contains the user password to use when talking to the Gluster REST service; the installer will automatically create this secret in kube-system |
+| glusterfs\_provisioner\_gidMin | The minimum value of the GID range for the storage class |
+| glusterfs\_provisioner\_gidMax | The maximum value of the GID range for the storage class |
+| glusterfs\_provisioner\_volumetype | The volume type and its parameters can be configured with this optional value, for example: ‘Replica volume’: volumetype: replicate:3 |
+| jwt\_admin\_key | The "jwt.admin.key" field from "/etc/heketi/heketi.json" on the Heketi server |
+
+**Attention:**
+
+ > Please note: `"glusterfs_provisioner_clusterid"` can be obtained from the GlusterFS server by running the following command:
+
+ ```bash
+ export HEKETI_CLI_SERVER=http://localhost:8080
+ heketi-cli cluster list
+ ```
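+Similarly, a sketch of the GlusterFS fragment in `conf/common.yaml`, with illustrative values for the Heketi endpoint and credentials (the cluster ID is the example value from the table above):
+
+```yaml
+# Hypothetical GlusterFS configuration; take resturl, clusterid and jwt_admin_key from your Heketi server
+glusterfs_provisioner_enabled: true
+glusterfs_provisioner_storage_class: glusterfs
+glusterfs_provisioner_restauthenabled: true
+glusterfs_provisioner_resturl: http://192.168.0.200:8080
+glusterfs_provisioner_clusterid: 630372ccdc720a92c681fb928f27b53f
+glusterfs_provisioner_restuser: admin
+jwt_admin_key: 123456
+```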
KubeSphere integrates [QingCloud-CSI](https://github.com/yunify/qingcloud-csi/blob/master/README_zh.md), which allows you to use the various block storage services of QingCloud. With simple configuration, you can quickly expand and clone PVCs, view the topology of PVCs, create and delete snapshots, and restore volumes from snapshots.
+
+The QingCloud-CSI plugin implements the standard CSI. You can easily create and manage the different types of volumes provided by QingCloud in KubeSphere. The corresponding PVCs are created with the ReadWriteOnce access mode and mounted to running Pods.
+
+QingCloud-CSI supports creating the following five types of volumes in QingCloud:
+
+- High capacity
+- Standard
+- SSD Enterprise
+- Super high performance
+- High performance
+
+| **QingCloud-CSI** | **Description** |
+| --- | --- |
+| qingcloud\_csi\_enabled | Whether to use QingCloud-CSI as the persistent storage volume, defaults to false |
+| qingcloud\_csi\_is\_default\_class | Whether to set QingCloud-CSI as the default storage class, defaults to false |
+| qingcloud\_access\_key\_id, qingcloud\_secret\_access\_key | Please obtain them from the [QingCloud Console](https://console.qingcloud.com/login) |
+| qingcloud\_zone | Zone should be the same as the zone where the Kubernetes cluster is installed; the CSI plugin will operate on the storage volumes of this zone. For example, zone can be set to sh1a (Shanghai 1-A), sh1b (Shanghai 1-B), pek2 (Beijing 2), pek3a (Beijing 3-A), pek3b (Beijing 3-B), pek3c (Beijing 3-C), gd1 (Guangdong 1), gd2a (Guangdong 2-A), ap1 (Asia Pacific 1) or ap2a (Asia Pacific 2-A) |
+| type | The type of volume on the QingCloud platform: 0 represents a high performance volume, 3 represents a super high performance volume, and 1 or 2 represents a high capacity volume depending on the cluster's zone, see [QingCloud Documentation](https://docs.qingcloud.com/product/api/action/volume/create_volumes.html) |
+| maxSize, minSize | Limit the range of volume size in GiB |
+| stepSize | Set the increment of volume size in GiB |
+| fsType | The file system of the storage volume, which supports ext3, ext4 and xfs. The default is ext4 |
+
+### QingStor NeonSAN
+
+The NeonSAN-CSI plugin supports the enterprise-level distributed storage [QingStor NeonSAN](https://www.qingcloud.com/products/qingstor-neonsan/) as the persistent storage solution. You need to prepare the NeonSAN server, then configure the NeonSAN-CSI plugin to connect to its storage server in `conf/common.yaml`. Please refer to [NeonSAN-CSI Reference](https://github.com/wnxn/qingstor-csi/blob/master/docs/reference_zh.md#storageclass-%E5%8F%82%E6%95%B0) for further information.
+
+| **NeonSAN** | **Description** |
+| --- | --- |
+| neonsan\_csi\_enabled | Whether to use NeonSAN as the persistent storage, defaults to false |
+| neonsan\_csi\_is\_default\_class | Whether to set NeonSAN-CSI as the default storage class, defaults to false |
+| neonsan\_csi\_protocol | Transport protocol, such as TCP or RDMA. The user must set this option |
+| neonsan\_server\_address | NeonSAN server address |
+| neonsan\_cluster\_name | NeonSAN server cluster name |
+| neonsan\_server\_pool | A comma-separated list of pools that the plugin manages. The user must set this option; the default value is kube |
+| neonsan\_server\_replicas | NeonSAN image replica count. Default: 1 |
+| neonsan\_server\_stepSize | Set the increment of volume size in GiB. Default: 1 |
+| neonsan\_server\_fsType | The file system to use for the volume. Default: ext4 |
diff --git a/content/en/docs/installation/introduction/_index.md b/content/en/docs/installing-on-kubernetes/introduction/_index.md
similarity index 100%
rename from content/en/docs/installation/introduction/_index.md
rename to content/en/docs/installing-on-kubernetes/introduction/_index.md
diff --git a/content/en/docs/installing-on-kubernetes/introduction/intro.md b/content/en/docs/installing-on-kubernetes/introduction/intro.md
new file mode 100644
index 000000000..a176c3255
--- /dev/null
+++ b/content/en/docs/installing-on-kubernetes/introduction/intro.md
@@ -0,0 +1,93 @@
+---
+title: "Introduction"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'KubeSphere Installation Overview'
+
+linkTitle: "Introduction"
+weight: 2110
+---
+
+[KubeSphere](https://kubesphere.io/) is an enterprise-grade multi-tenant container platform built on [Kubernetes](https://kubernetes.io).
It provides an easy-to-use UI for users to manage application workloads and computing resources with a few clicks, which greatly reduces the learning curve and the complexity of daily work such as development, testing, operation and maintenance. KubeSphere aims to alleviate the pain points of Kubernetes, including storage, networking, security and ease of use.
+
+KubeSphere supports installation on cloud-hosted and on-premises Kubernetes clusters, e.g. native K8s, GKE, EKS, RKE, etc. It also supports installation on Linux hosts, including virtual machines and bare metal, provisioning a fresh Kubernetes cluster in the process. Both methods make it easy to install KubeSphere. Meanwhile, KubeSphere offers not only an online installer, but also an air-gapped installer for environments without access to the internet.
+
+KubeSphere is an open source project on [GitHub](https://github.com/kubesphere). Thousands of users are using KubeSphere, and many of them run it for their production workloads.
+
+In summary, there are several installation options to choose from. Please note that not all options are mutually exclusive; for instance, you can deploy KubeSphere with minimal packages on an existing multi-node K8s cluster in an air-gapped environment. You can reference the decision tree in the following graph for your own situation.
+
+- [All-in-One](../all-in-one): Install KubeSphere on a single node. It is only for users to quickly get familiar with KubeSphere.
+- [Multi-Node](../multi-node): Install KubeSphere on multiple nodes. It is for testing or development.
+- [Install KubeSphere on Air-Gapped Linux](../install-ks-on-linux-airgapped): All images of KubeSphere have been encapsulated into a package, which makes air-gapped installation on Linux machines convenient.
+- [High Availability Multi-Node](../master-ha): Install highly available KubeSphere on multiple nodes, which is used for production environments.
+- [KubeSphere on Existing K8s](../install-on-k8s): Deploy KubeSphere on your Kubernetes cluster, including cloud-hosted services such as GKE, EKS, etc.
+- [KubeSphere on Air-Gapped K8s](../install-on-k8s-airgapped): Install KubeSphere on a disconnected Kubernetes cluster.
+- Minimal Packages: Only install the minimal required system components of KubeSphere. The minimum resource requirement is down to 1 core and 2 GB memory.
+- [Full Packages](../complete-installation): Install all available system components of KubeSphere, including DevOps, service mesh, application store, etc.
+
+![Installer Options](https://pek3b.qingstor.com/kubesphere-docs/png/20200305093158.png)
+
+## Before Installation
+
+- As the installation pulls images and updates the operating system from the internet, your environment must have internet access. If not, you need to use the air-gapped installer instead.
+- For all-in-one installation, the only node is both the master and the worker.
+- For multi-node installation, you are asked to specify the node roles in the configuration file before installation.
+- Your Linux host must have OpenSSH Server installed.
+- Please check the [port requirements](../port-firewall) before installation.
+
+## Quick Install For Development and Testing
+
+KubeSphere has decoupled some components since v2.1.0. The installer only installs required components by default, which brings the benefits of fast installation and minimal resource consumption.
If you want to install any optional components, please check the section [Pluggable Components Overview](../intro#pluggable-components-overview) below for details.
+
+The quick install of KubeSphere is only for development or testing, since it uses local volumes for storage by default. If you want a production install, please refer to the section [High Availability Installation for Production Environment](../intro#high-availability-installation-for-production-environment).
+
+### 1. Install KubeSphere on Linux
+
+- [All-in-One](../all-in-one): A hassle-free, one-click installation on a single node.
+- [Multi-Node](../multi-node): Allows you to install KubeSphere on multiple instances using local volumes, which means there is no need to install a storage server such as Ceph or GlusterFS.
+
+> Note: With regard to air-gapped installation, please refer to [Install KubeSphere on Air-Gapped Linux Machines](../install-ks-on-linux-airgapped).
+
+### 2. Install KubeSphere on Existing Kubernetes
+
+You can install KubeSphere on your existing Kubernetes cluster. Please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
+
+## High Availability Installation for Production Environment
+
+### 1. Install HA KubeSphere on Linux
+
+The KubeSphere installer supports installing a highly available cluster for production, with the prerequisites being a load balancer and a persistent storage service set up in advance.
+
+- [Persistent Service Configuration](../storage-configuration): By default, KubeSphere Installer uses [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning in the Kubernetes cluster. It is convenient for a quick install in a testing environment. A production environment must have a storage server set up. Please refer to [Persistent Service Configuration](../storage-configuration) for details.
+- [Load Balancer Configuration for HA install](../master-ha): Before you get started with multi-node installation in a production environment, you need to configure a load balancer. Either a cloud LB or `HAproxy + keepalived` works for the installation.
+
+### 2. Install HA KubeSphere on Existing Kubernetes
+
+Before you install KubeSphere on existing Kubernetes, please check the prerequisites of the installation on Linux described above, and verify whether the existing Kubernetes cluster satisfies these prerequisites, i.e., a load balancer and a persistent storage service.
+
+If your Kubernetes is ready, please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
+
+> You can install KubeSphere on a cloud Kubernetes service, for example [Installing KubeSphere on a GKE cluster](../install-on-gke).
+
+## Pluggable Components Overview
+
+KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable any of them before or after installation. The installer does not install the pluggable components by default. Please check the guide [Enable Pluggable Components Installation](../pluggable-components) to enable the components you require.
+
+![Pluggable Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191207140846.png)
+
+## Storage Configuration Instruction
+
+The following links explain how to configure different types of persistent storage services.
Please refer to [Storage Configuration Instruction](../storage-configuration) for detailed instructions regarding how to configure the storage class in KubeSphere.
+
+- [NFS](https://kubernetes.io/docs/concepts/storage/volumes/#nfs)
+- [GlusterFS](https://www.gluster.org/)
+- [Ceph RBD](https://ceph.com/)
+- [QingCloud Block Storage](https://docs.qingcloud.com/product/storage/volume/)
+- [QingStor NeonSAN](https://docs.qingcloud.com/product/storage/volume/super_high_performance_shared_volume/)
+
+## Add New Nodes
+
+KubeSphere Installer allows you to scale the number of nodes, see [Add New Nodes](../add-nodes).
+
+## Uninstall
+
+Uninstall will remove KubeSphere from the machines. This operation is irreversible and dangerous. Please check [Uninstall](../uninstall).
diff --git a/content/en/docs/installation/introduction/port-firewall.md b/content/en/docs/installing-on-kubernetes/introduction/port-firewall.md
similarity index 100%
rename from content/en/docs/installation/introduction/port-firewall.md
rename to content/en/docs/installing-on-kubernetes/introduction/port-firewall.md
diff --git a/content/en/docs/installation/introduction/vars.md b/content/en/docs/installing-on-kubernetes/introduction/vars.md
similarity index 100%
rename from content/en/docs/installation/introduction/vars.md
rename to content/en/docs/installing-on-kubernetes/introduction/vars.md
diff --git a/content/en/docs/installing-on-kubernetes/on-prem-kubernetes/_index.md b/content/en/docs/installing-on-kubernetes/on-prem-kubernetes/_index.md
new file mode 100644
index 000000000..cd927f966
--- /dev/null
+++ b/content/en/docs/installing-on-kubernetes/on-prem-kubernetes/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Install on Linux"
+weight: 2200
+
+_build:
+  render: false
+---
\ No newline at end of file
diff --git a/content/en/docs/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md b/content/en/docs/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md
new file mode 100644
index 000000000..26b3e4f04
--- /dev/null
+++ b/content/en/docs/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md
@@ -0,0 +1,224 @@
+---
+title: "Air-Gapped Installation"
+keywords: 'kubernetes, kubesphere, air gapped, installation'
+description: 'How to install KubeSphere on air-gapped Linux machines'
+
+
+weight: 2240
+---
+
+The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.
+
+> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
+
+## Prerequisites
+
+- If your machine is behind a firewall, you need to open the ports; see the document [Ports Requirements](../port-firewall) for more information.
+- Installer will use `/var/lib/docker` as the default directory where all Docker related files, including the images, are stored. We recommend adding additional disks of at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
+- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare a [high availability configuration](../master-ha) before installation.
+- Since air-gapped machines cannot connect to an apt or yum source, please use a clean Linux machine to avoid dependency problems.
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
+
+- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
+- Time synchronization is required across all nodes, otherwise the installation may not succeed.
+- For the `Ubuntu 16.04` OS, it is recommended to select `16.04.5`.
+- If you are using `Ubuntu 18.04`, you need to use the user `root`.
+- Ensure the disk of each node is at least 100G.
+- CPU and memory in total of all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
+
+
+The following section uses an example to introduce multi-node installation. The example installs on three hosts, with the `master` node serving as the taskbox that executes the installation. The following cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
+
+```bash
+curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
+&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
+```
+
+## Step 3: Configure Host Template
+
+> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
+
+Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, you need to refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`; see the SSH key sketch below.
+> - master, node1 and node2 are the host names of each node, and all host names should be in lowercase.
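+
+If the taskbox cannot yet reach the other machines as `root` over SSH, a common way to set this up is key-based authentication. The following is a minimal sketch run from the taskbox; the node IPs are the example values from the table above and should be adjusted to your environment.
+
+```bash
+# Generate a key pair on the taskbox (master) if one does not exist yet
+ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
+
+# Copy the public key to each node so the installer can connect without a password
+for node in 192.168.0.2 192.168.0.3; do
+  ssh-copy-id root@${node}
+done
+
+# Verify that passwordless SSH works
+ssh root@192.168.0.2 hostname
+```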
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[local-registry]
+master
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - You need to replace each node's information, such as IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
+> - Installer will use a node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
+> - The "master" node also takes the roles of master and etcd, so "master" is filled under the group `[kube-master]` and the group `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
+>
+> Parameters Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The name of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default SSH user name to use.
+> - `ansible_become_pass`: Allows you to set the privilege escalation password.
+> - `ansible_ssh_pass`: The password of the host to connect to as root.
+
+## Step 4: Enable All Components
+
+> This step is for the complete installation. You can skip it if you choose a minimal installation.
+
+Edit `conf/common.yaml` and reference the following changes, setting to `true` the values that are `false` by default.
+
+```yaml
+# LOGGING CONFIGURATION
+# Logging is an optional component when installing KubeSphere, and
+# the Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
+# Builtin logging only provides limited functions, so we recommend enabling logging.
+logging_enabled: true # Whether to install the logging system
+elasticsearch_master_replica: 1 # Total number of Elasticsearch master nodes; even numbers are not allowed
+elasticsearch_data_replica: 2 # Total number of Elasticsearch data nodes
+elasticsearch_volume_size: 20Gi # Elasticsearch volume size
+log_max_age: 7 # Log retention time in the built-in Elasticsearch, 7 days by default
+elk_prefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address. KubeSphere supports integrating with an Elasticsearch instance outside the cluster, which can reduce resource consumption.
+#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
+
+#DevOps Configuration
+devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary to image)
+jenkins_memory_lim: 8Gi # Jenkins memory limit, 8 Gi by default
+jenkins_memory_req: 4Gi # Jenkins memory request, 4 Gi by default
+jenkins_volume_size: 8Gi # Jenkins volume size, 8 Gi by default
+jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address. KubeSphere supports integrating with a SonarQube instance outside the cluster, which can reduce resource consumption.
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere,
+# and can be turned on before installation, or later by updating the value to true
+openpitrix_enabled: true # KubeSphere application store
+metrics_server_enabled: true # For KubeSphere HPA to use
+servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
+notification_enabled: true # KubeSphere notification system
+alerting_enabled: true # KubeSphere alerting system
+```
+
+## Step 5: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
+> - If you want to enable pluggable feature components, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two IP ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder, and execute `install.sh` using the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select the multi-node mode to start the installation. The installer will ask you whether you have set up a persistent storage service. Just type `yes`, since we are going to use local volumes.
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/       2020-02-24
+################################################
+Please input an option: 2
+
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful" is returned after the `install.sh` process completes, then congratulations! You are ready to go.
+
+```bash
+successful!
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
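+
+Besides the console, you can also check component readiness from the taskbox with kubectl before looking at the dashboard. The following is a quick sketch; the pod label `app=ks-install` is an assumption based on the default deployment and may differ in your setup.
+
+```bash
+# List all pods across namespaces and look for any that are not Running or Completed
+kubectl get pods --all-namespaces
+
+# Follow the ks-installer logs to track the remaining installation progress
+kubectl logs -n kubesphere-system -l app=ks-install -f
+```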
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+If you have already set up a minimal installation, you can still enable pluggable components by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources in your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/en/docs/installation/_index.md b/content/en/docs/installing-on-linux/_index.md
similarity index 90%
rename from content/en/docs/installation/_index.md
rename to content/en/docs/installing-on-linux/_index.md
index 613e9904d..2442646b9 100644
--- a/content/en/docs/installation/_index.md
+++ b/content/en/docs/installing-on-linux/_index.md
@@ -1,9 +1,9 @@
 ---
-title: "Installation"
+title: "Installing on Linux"
 description: "Help you to better understand KubeSphere with detailed graphics and contents"
 layout: "single"
 
-linkTitle: "Installation"
+linkTitle: "Installing on Linux"
 weight: 2000
 
 icon: "/images/docs/docs.svg"
@@ -20,4 +20,4 @@ Below you will find some of the most common and helpful pages from this chapter.
 
 {{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
 
-{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} \ No newline at end of file
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
diff --git a/content/en/docs/installing-on-linux/introduction/_index.md b/content/en/docs/installing-on-linux/introduction/_index.md
new file mode 100644
index 000000000..2cf101ca5
--- /dev/null
+++ b/content/en/docs/installing-on-linux/introduction/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Installation"
+weight: 2100
+
+_build:
+  render: false
+---
\ No newline at end of file
diff --git a/content/en/docs/installing-on-linux/introduction/intro.md b/content/en/docs/installing-on-linux/introduction/intro.md
new file mode 100644
index 000000000..a176c3255
--- /dev/null
+++ b/content/en/docs/installing-on-linux/introduction/intro.md
@@ -0,0 +1,93 @@
+---
+title: "Introduction"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'KubeSphere Installation Overview'
+
+linkTitle: "Introduction"
+weight: 2110
+---
+
+[KubeSphere](https://kubesphere.io/) is an enterprise-grade multi-tenant container platform built on [Kubernetes](https://kubernetes.io). It provides an easy-to-use UI for users to manage application workloads and computing resources with a few clicks, which greatly reduces the learning curve and the complexity of daily work such as development, testing, operation and maintenance. KubeSphere aims to alleviate the pain points of Kubernetes, including storage, networking, security and ease of use.
+
+KubeSphere supports installation on cloud-hosted and on-premises Kubernetes clusters, e.g. native K8s, GKE, EKS, RKE, etc. It also supports installation on Linux hosts, including virtual machines and bare metal, provisioning a fresh Kubernetes cluster in the process.
Both methods make it easy to install KubeSphere. Meanwhile, KubeSphere offers not only an online installer, but also an air-gapped installer for environments without access to the internet.
+
+KubeSphere is an open source project on [GitHub](https://github.com/kubesphere). Thousands of users are using KubeSphere, and many of them run it for their production workloads.
+
+In summary, there are several installation options to choose from. Please note that not all options are mutually exclusive; for instance, you can deploy KubeSphere with minimal packages on an existing multi-node K8s cluster in an air-gapped environment. You can reference the decision tree in the following graph for your own situation.
+
+- [All-in-One](../all-in-one): Install KubeSphere on a single node. It is only for users to quickly get familiar with KubeSphere.
+- [Multi-Node](../multi-node): Install KubeSphere on multiple nodes. It is for testing or development.
+- [Install KubeSphere on Air-Gapped Linux](../install-ks-on-linux-airgapped): All images of KubeSphere have been encapsulated into a package, which makes air-gapped installation on Linux machines convenient.
+- [High Availability Multi-Node](../master-ha): Install highly available KubeSphere on multiple nodes, which is used for production environments.
+- [KubeSphere on Existing K8s](../install-on-k8s): Deploy KubeSphere on your Kubernetes cluster, including cloud-hosted services such as GKE, EKS, etc.
+- [KubeSphere on Air-Gapped K8s](../install-on-k8s-airgapped): Install KubeSphere on a disconnected Kubernetes cluster.
+- Minimal Packages: Only install the minimal required system components of KubeSphere. The minimum resource requirement is down to 1 core and 2 GB memory.
+- [Full Packages](../complete-installation): Install all available system components of KubeSphere, including DevOps, service mesh, application store, etc.
+
+![Installer Options](https://pek3b.qingstor.com/kubesphere-docs/png/20200305093158.png)
+
+## Before Installation
+
+- As the installation pulls images and updates the operating system from the internet, your environment must have internet access. If not, you need to use the air-gapped installer instead.
+- For all-in-one installation, the only node is both the master and the worker.
+- For multi-node installation, you are asked to specify the node roles in the configuration file before installation.
+- Your Linux host must have OpenSSH Server installed.
+- Please check the [port requirements](../port-firewall) before installation.
+
+## Quick Install For Development and Testing
+
+KubeSphere has decoupled some components since v2.1.0. The installer only installs required components by default, which brings the benefits of fast installation and minimal resource consumption. If you want to install any optional components, please check the section [Pluggable Components Overview](../intro#pluggable-components-overview) below for details.
+
+The quick install of KubeSphere is only for development or testing, since it uses local volumes for storage by default. If you want a production install, please refer to the section [High Availability Installation for Production Environment](../intro#high-availability-installation-for-production-environment).
+
+### 1. Install KubeSphere on Linux
+
+- [All-in-One](../all-in-one): A hassle-free, one-click installation on a single node.
+- [Multi-Node](../multi-node): Allows you to install KubeSphere on multiple instances using local volumes, which means there is no need to install a storage server such as Ceph or GlusterFS.
+
+> Note: With regard to air-gapped installation, please refer to [Install KubeSphere on Air-Gapped Linux Machines](../install-ks-on-linux-airgapped).
+
+### 2. Install KubeSphere on Existing Kubernetes
+
+You can install KubeSphere on your existing Kubernetes cluster. Please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
+
+## High Availability Installation for Production Environment
+
+### 1. Install HA KubeSphere on Linux
+
+The KubeSphere installer supports installing a highly available cluster for production, with the prerequisites being a load balancer and a persistent storage service set up in advance.
+
+- [Persistent Service Configuration](../storage-configuration): By default, KubeSphere Installer uses [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning in the Kubernetes cluster. It is convenient for a quick install in a testing environment. A production environment must have a storage server set up. Please refer to [Persistent Service Configuration](../storage-configuration) for details.
+- [Load Balancer Configuration for HA install](../master-ha): Before you get started with multi-node installation in a production environment, you need to configure a load balancer. Either a cloud LB or `HAproxy + keepalived` works for the installation.
+
+### 2. Install HA KubeSphere on Existing Kubernetes
+
+Before you install KubeSphere on existing Kubernetes, please check the prerequisites of the installation on Linux described above, and verify whether the existing Kubernetes cluster satisfies these prerequisites, i.e., a load balancer and a persistent storage service.
+
+If your Kubernetes is ready, please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
+
+> You can install KubeSphere on a cloud Kubernetes service, for example [Installing KubeSphere on a GKE cluster](../install-on-gke).
+
+## Pluggable Components Overview
+
+KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable any of them before or after installation. The installer does not install the pluggable components by default. Please check the guide [Enable Pluggable Components Installation](../pluggable-components) to enable the components you require.
+
+![Pluggable Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191207140846.png)
+
+## Storage Configuration Instruction
+
+The following links explain how to configure different types of persistent storage services. Please refer to [Storage Configuration Instruction](../storage-configuration) for detailed instructions regarding how to configure the storage class in KubeSphere.
+
+- [NFS](https://kubernetes.io/docs/concepts/storage/volumes/#nfs)
+- [GlusterFS](https://www.gluster.org/)
+- [Ceph RBD](https://ceph.com/)
+- [QingCloud Block Storage](https://docs.qingcloud.com/product/storage/volume/)
+- [QingStor NeonSAN](https://docs.qingcloud.com/product/storage/volume/super_high_performance_shared_volume/)
+
+## Add New Nodes
+
+KubeSphere Installer allows you to scale the number of nodes, see [Add New Nodes](../add-nodes).
+
+## Uninstall
+
+Uninstall will remove KubeSphere from the machines. This operation is irreversible and dangerous.
Please check [Uninstall](../uninstall).
diff --git a/content/en/docs/installing-on-linux/introduction/port-firewall.md b/content/en/docs/installing-on-linux/introduction/port-firewall.md
new file mode 100644
index 000000000..875c2e9b0
--- /dev/null
+++ b/content/en/docs/installing-on-linux/introduction/port-firewall.md
@@ -0,0 +1,33 @@
+---
+title: "Port Requirements"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: ''
+
+linkTitle: "Requirements"
+weight: 2120
+---
+
+
+KubeSphere requires certain ports for its services to communicate with each other, so you need to make sure the following ports are open for use.
+
+| Service | Protocol | Action | Start Port | End Port | Notes |
+|---|---|---|---|---|---|
+| ssh | TCP | allow | 22 | | |
+| etcd | TCP | allow | 2379 | 2380 | |
+| apiserver | TCP | allow | 6443 | | |
+| calico | TCP | allow | 9099 | 9100 | |
+| bgp | TCP | allow | 179 | | |
+| nodeport | TCP | allow | 30000 | 32767 | |
+| master | TCP | allow | 10250 | 10258 | |
+| dns | TCP | allow | 53 | | |
+| dns | UDP | allow | 53 | | |
+| local-registry | TCP | allow | 5000 | | Required for air-gapped environments |
+| local-apt | TCP | allow | 5080 | | Required for air-gapped environments |
+| rpcbind | TCP | allow | 111 | | Required when using NFS as the storage server |
+| ipip | IPIP | allow | | | The Calico network requires the IPIP protocol |
+
+**Note**
+
+Please note that when you use the Calico network plugin and run your cluster on a classic network in a cloud environment, you need to open the IPIP protocol for the source IP. For instance, the following sample on QingCloud shows how to open the IPIP protocol.
+
+![](https://pek3b.qingstor.com/kubesphere-docs/png/20200304200605.png)
diff --git a/content/en/docs/installing-on-linux/introduction/vars.md b/content/en/docs/installing-on-linux/introduction/vars.md
new file mode 100644
index 000000000..cda3aa5db
--- /dev/null
+++ b/content/en/docs/installing-on-linux/introduction/vars.md
@@ -0,0 +1,107 @@
+---
+title: "Common Configurations"
+keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'Configure cluster parameters before installing'
+
+linkTitle: "Kubernetes Cluster Configuration"
+weight: 2130
+---
+
+This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
+
+```yaml
+######################### Kubernetes #########################
+# The default k8s version to be installed
+kube_version: v1.16.7
+
+# The default etcd version to be installed
+etcd_version: v3.2.18
+
+# Configure a cron job to back up etcd data; it runs on the etcd machines.
+# Period of the etcd backup job, in minutes.
+# The default value 30 means backing up etcd every 30 minutes.
+etcd_backup_period: 30
+
+# How many backup replicas to keep.
+# The default value 5 means keeping the latest 5 backups; older ones are deleted in order.
+keep_backup_number: 5
+
+# The location to store etcd backup files on the etcd machines.
+etcd_backup_dir: "/var/backups/kube_etcd"
+
+# Add other registries (for users who need to accelerate image downloads).
+docker_registry_mirrors:
+  - https://docker.mirrors.ustc.edu.cn
+  - https://registry.docker-cn.com
+  - https://mirror.aliyuncs.com
+
+# Kubernetes network plugin. Calico is installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
+kube_network_plugin: calico
+
+# A valid CIDR range for Kubernetes services,
+# 1.
should not overlap with node subnet
+# 2. should not overlap with Kubernetes pod subnet
+kube_service_addresses: 10.233.0.0/18
+
+# A valid CIDR range for the Kubernetes pod subnet,
+# 1. should not overlap with node subnet
+# 2. should not overlap with Kubernetes services subnet
+kube_pods_subnet: 10.233.64.0/18
+
+# Kube-proxy proxyMode configuration, either ipvs or iptables
+kube_proxy_mode: ipvs
+
+# Maximum pods allowed to run on every node.
+kubelet_max_pods: 110
+
+# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
+enable_nodelocaldns: true
+
+# Highly available load balancer example config
+# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Load balancer domain name
+# loadbalancer_apiserver: # Load balancer apiserver configuration, please uncomment this line when you prepare an HA install
+#   address: 192.168.0.10 # Load balancer apiserver IP address
+#   port: 6443 # apiserver port
+
+######################### KubeSphere #########################
+
+# Version of KubeSphere
+ks_version: v2.1.0
+
+# KubeSphere console port, range 30000-32767,
+# but 30180/30280/30380 are reserved for internal services
+console_port: 30880 # KubeSphere console nodeport
+
+#CommonComponent
+mysql_volume_size: 20Gi # MySQL PVC size
+minio_volume_size: 20Gi # Minio PVC size
+etcd_volume_size: 20Gi # etcd PVC size
+openldap_volume_size: 2Gi # openldap PVC size
+redis_volume_size: 2Gi # Redis PVC size
+
+
+# Monitoring
+prometheus_replica: 2 # Prometheus replicas, 2 by default, which are responsible for monitoring different segments of the data source and also provide high availability.
+prometheus_memory_request: 400Mi # Prometheus memory request
+prometheus_volume_size: 20Gi # Prometheus PVC size
+grafana_enabled: true # Whether to enable Grafana
+
+
+## Container Engine Acceleration
+## Use NVIDIA GPU acceleration in containers
+# nvidia_accelerator_enabled: true # Whether to enable the NVIDIA GPU accelerator. Hybrid clusters with both GPU and CPU nodes are supported.
+# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
+#   - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
+```
+
+## How to Configure a GPU Node
+
+You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`; then specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented by two spaces.
+
+```yaml
+  nvidia_accelerator_enabled: true
+  nvidia_gpu_nodes:
+    - node2
+```
+
+> Note: The GPU node currently only supports Ubuntu 16.04.
\ No newline at end of file
diff --git a/content/en/docs/installing-on-linux/on-premise/_index.md b/content/en/docs/installing-on-linux/on-premise/_index.md
new file mode 100644
index 000000000..cd927f966
--- /dev/null
+++ b/content/en/docs/installing-on-linux/on-premise/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Install on Linux"
+weight: 2200
+
+_build:
+  render: false
+---
\ No newline at end of file
diff --git a/content/en/docs/installing-on-linux/on-premise/install-ks-on-linux-airgapped.md b/content/en/docs/installing-on-linux/on-premise/install-ks-on-linux-airgapped.md
new file mode 100644
index 000000000..26b3e4f04
--- /dev/null
+++ b/content/en/docs/installing-on-linux/on-premise/install-ks-on-linux-airgapped.md
@@ -0,0 +1,224 @@
+---
+title: "Air-Gapped Installation"
+keywords: 'kubernetes, kubesphere, air gapped, installation'
+description: 'How to install KubeSphere on air-gapped Linux machines'
+
+
+weight: 2240
+---
+
+The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.
+
+> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
+
+## Prerequisites
+
+- If your machine is behind a firewall, you need to open the ports; see the document [Ports Requirements](../port-firewall) for more information.
+- Installer will use `/var/lib/docker` as the default directory where all Docker related files, including the images, are stored. We recommend adding additional disks of at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
+- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare a [high availability configuration](../master-ha) before installation.
+- Since air-gapped machines cannot connect to an apt or yum source, please use a clean Linux machine to avoid dependency problems.
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
+
+- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
+- Time synchronization is required across all nodes, otherwise the installation may not succeed; a quick check is sketched below.
+- For the `Ubuntu 16.04` OS, it is recommended to select `16.04.5`.
+- If you are using `Ubuntu 18.04`, you need to use the user `root`.
+- Ensure the disk of each node is at least 100G.
+- CPU and memory in total of all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
+
+
+The following section uses an example to introduce multi-node installation. The example installs on three hosts, with the `master` node serving as the taskbox that executes the installation. The following cluster consists of one Master and two Nodes.
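+
+Before moving on, you may want to verify the time-synchronization requirement noted above. The following is a minimal sketch run from the taskbox, using the example IPs from the table below; the chrony package name assumes a CentOS host.
+
+```bash
+# Print the current time on every node; the outputs should agree
+for node in 192.168.0.1 192.168.0.2 192.168.0.3; do
+  ssh root@${node} date
+done
+
+# If clocks drift, enable NTP-based synchronization on the affected node, e.g. with chrony
+ssh root@192.168.0.2 "yum install -y chrony && systemctl enable --now chronyd"
+```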
+
+> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
+
+```bash
+curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
+&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
+```
+
+## Step 3: Configure Host Template
+
+> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
+
+Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, you need to refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of each node, and all host names should be in lowercase.
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[local-registry]
+master
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - You need to replace each node's information, such as IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
+> - Installer will use a node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
+> - The "master" node also takes the roles of master and etcd, so "master" is filled under the group `[kube-master]` and the group `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
+>
+> Parameters Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The name of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default SSH user name to use.
+> - `ansible_become_pass`: Allows you to set the privilege escalation password.
+> - `ansible_ssh_pass`: The password of the host to connect to as root.
+
+## Step 4: Enable All Components
+
+> This step is for the complete installation. You can skip it if you choose a minimal installation.
+
+Edit `conf/common.yaml` and reference the following changes, setting to `true` the values that are `false` by default.
+
+```yaml
+# LOGGING CONFIGURATION
+# Logging is an optional component when installing KubeSphere, and
+# the Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
+# Builtin logging only provides limited functions, so we recommend enabling logging.
+logging_enabled: true # Whether to install the logging system
+elasticsearch_master_replica: 1 # Total number of Elasticsearch master nodes; even numbers are not allowed
+elasticsearch_data_replica: 2 # Total number of Elasticsearch data nodes
+elasticsearch_volume_size: 20Gi # Elasticsearch volume size
+log_max_age: 7 # Log retention time in the built-in Elasticsearch, 7 days by default
+elk_prefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address. KubeSphere supports integrating with an Elasticsearch instance outside the cluster, which can reduce resource consumption.
+#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
+
+#DevOps Configuration
+devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary to image)
+jenkins_memory_lim: 8Gi # Jenkins memory limit, 8 Gi by default
+jenkins_memory_req: 4Gi # Jenkins memory request, 4 Gi by default
+jenkins_volume_size: 8Gi # Jenkins volume size, 8 Gi by default
+jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address. KubeSphere supports integrating with a SonarQube instance outside the cluster, which can reduce resource consumption.
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere,
+# and can be turned on before installation, or later by updating the value to true
+openpitrix_enabled: true # KubeSphere application store
+metrics_server_enabled: true # For KubeSphere HPA to use
+servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
+notification_enabled: true # KubeSphere notification system
+alerting_enabled: true # KubeSphere alerting system
+```
+
+## Step 5: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
+> - If you want to enable pluggable feature components, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two IP ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder, and execute `install.sh` using the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select the multi-node mode to start the installation.
The installer will ask you whether you have set up a persistent storage service. Just type `yes`, since we are going to use local volumes.
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/       2020-02-24
+################################################
+Please input an option: 2
+
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful" is returned after the `install.sh` process completes, then congratulations! You are ready to go.
+
+```bash
+successful!
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+If you have already set up a minimal installation, you can still enable pluggable components by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources in your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/en/docs/installing-on-linux/public-cloud/_index.md b/content/en/docs/installing-on-linux/public-cloud/_index.md
new file mode 100644
index 000000000..cd927f966
--- /dev/null
+++ b/content/en/docs/installing-on-linux/public-cloud/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Install on Linux"
+weight: 2200
+
+_build:
+  render: false
+---
\ No newline at end of file
diff --git a/content/en/docs/installing-on-linux/public-cloud/all-in-one.md b/content/en/docs/installing-on-linux/public-cloud/all-in-one.md
new file mode 100644
index 000000000..8214171ef
--- /dev/null
+++ b/content/en/docs/installing-on-linux/public-cloud/all-in-one.md
@@ -0,0 +1,116 @@
+---
+title: "All-in-One Installation"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'The guide for installing all-in-one KubeSphere for developing or testing'
+
+linkTitle: "All-in-One"
+weight: 2210
+---
+
+For those who are new to KubeSphere and looking for a quick way to discover the platform, the all-in-one mode is your best choice, since it provisions KubeSphere and Kubernetes on your machine in a one-click, hassle-free installation.
+
+- The following instructions are for the default installation without enabling any optional components, as we have made them pluggable since v2.1.0. If you want to enable any of them, please see the section [Enable Pluggable Components](../all-in-one#enable-pluggable-components) below.
+- If your machine has >= 8 cores and >= 16G memory, we recommend installing the full package of KubeSphere by [enabling optional components](../complete-installation).
+
+## Prerequisites
+
+If your machine is behind a firewall, you need to open the ports; see the document [Ports Requirement](../port-firewall) for more information.
+
+## Step 1: Prepare Linux Machine
+
+The following describes the requirements of hardware and operating system.
+
+- For the `Ubuntu 16.04` OS, it is recommended to select the latest `16.04.5`.
+- If you are using Ubuntu 18.04, you need to use the root user to install.
+- If the Debian system does not have the sudo command installed, you need to execute the `apt update && apt install sudo` command as root before installation.
+
+### Hardware Recommendation
+
+| System | Minimum Requirements |
+| ------- | ----------- |
+| CentOS 7.4 ~ 7.7 (64-bit) | CPU: 2 cores, Memory: 4 G, Disk Space: 100 G |
+| Ubuntu 16.04/18.04 LTS (64-bit) | CPU: 2 cores, Memory: 4 G, Disk Space: 100 G |
+| Red Hat Enterprise Linux Server 7.4 (64-bit) | CPU: 2 cores, Memory: 4 G, Disk Space: 100 G |
+| Debian Stretch 9.5 (64-bit) | CPU: 2 cores, Memory: 4 G, Disk Space: 100 G |
+
+## Step 2: Download Installer Package
+
+Execute the following commands to download Installer 2.1.1 and unpack it.
+
+```bash
+curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
+&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/scripts
+```
+
+## Step 3: Get Started with Installation
+
+You do not need to do anything except execute a single command as follows. The installer completes everything for you automatically, including installing/updating dependency packages, installing Kubernetes (default version 1.16.7), the storage service, and so on.
+
+> Note:
+>
+> - Generally speaking, do not modify any configuration.
+> - KubeSphere installs `calico` by default. If you would like to use a different network plugin, you are allowed to change the configuration in `conf/common.yaml`. You are also allowed to modify other configurations such as the storage class, pluggable components, etc.
+> - The default storage class is [OpenEBS](https://openebs.io/), which is a kind of [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) provisioning persistent storage service. OpenEBS supports [dynamically provisioned PVs](https://docs.openebs.io/docs/next/uglocalpv.html#Provision-OpenEBS-Local-PV-based-on-hostpath). It will be installed automatically for testing purposes.
+> - Please refer to [storage configurations](../storage-configuration) for the supported storage classes.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two IP ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts; see the check sketched after these notes.
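+
+To check the subnet constraint from the last note, you can list the machine's IPv4 addresses and compare them against the default CIDRs. A minimal sketch (inspection only; edit `conf/common.yaml` manually if there is a conflict):
+
+```bash
+# Show this node's IPv4 addresses; none of them may fall within
+# 10.233.0.0/18 (Cluster IPs) or 10.233.64.0/18 (Pod IPs)
+ip -4 addr show | awk '/inet /{print $2}'
+
+# Inspect the current defaults before deciding whether to change them
+# (run from the scripts folder, so conf/ is a sibling directory)
+grep -E 'kube_service_addresses|kube_pods_subnet' ../conf/common.yaml
+```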
+
+**1.** Execute the following command:
+
+```bash
+./install.sh
+```
+
+**2.** Enter `1` to select the `All-in-one` mode and type `yes` if your machine satisfies the requirements, to start the installation:
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/               2020-02-24
+################################################
+Please input an option: 1
+```
+
+**3.** Verify that KubeSphere is installed successfully:
+
+**(1).** If "Successful" is returned after the process completes, the installation succeeded. The console service is exposed through NodePort 30880 by default. You may need to bind an EIP and configure port forwarding in your environment so that outside users can access the console; make sure the related firewall rules allow this.
+
+```bash
+Successful!
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.8:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You can now use the default account and password to log in to the console and take a tour of KubeSphere.
+
+Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Dashboard](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+The guide above performs a minimal installation by default. You can execute the following command to open the ConfigMap and enable pluggable components. Make sure your cluster has enough CPU and memory in advance; see [Enable Pluggable Components](../pluggable-components).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also please read the [FAQ of installation](../../faq/faq-install).
+
+If you have any further questions, please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/en/docs/installing-on-linux/public-cloud/complete-installation.md b/content/en/docs/installing-on-linux/public-cloud/complete-installation.md
new file mode 100644
index 000000000..e0ab92099
--- /dev/null
+++ b/content/en/docs/installing-on-linux/public-cloud/complete-installation.md
@@ -0,0 +1,76 @@
+---
+title: "Install All Optional Components"
+keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix'
+description: 'Install KubeSphere with all optional components enabled on Linux machine'
+
+
+weight: 2260
+---
+
+The installer only installs the required components (i.e. a minimal installation) by default since v2.1.0. The other components are designed to be pluggable, which means you can enable any of them before or after installation. If your machines meet the following minimum requirements, we recommend you **enable all components before installation**.
A complete installation gives you the opportunity to discover the container platform comprehensively.
+
+**Minimum Requirements**
+
+- CPU: 8 cores in total across all machines
+- Memory: 16 GB in total across all machines
+
+> Note:
+>
+> - If your machines do not meet the minimum requirements of a complete installation, you can enable any of the components as needed. Please refer to [Enable Pluggable Components Installation](../pluggable-components).
+> - It works for both [All-in-One](../all-in-one) and [Multi-Node](../multi-node).
+
+This tutorial will walk you through how to enable all components of KubeSphere.
+
+## Download Installer Package
+
+If you do not have the package yet, please run the following commands to download Installer 2.1.1 and unpack it, then enter the `conf` folder.
+
+```bash
+curl -L https://kubesphere.io/download/stable/v2.1.1 > installer.tar.gz \
+&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
+```
+
+## Enable All Components
+
+Edit `conf/common.yaml` and set the following values to `true` (they are `false` by default):
+
+```yaml
+# LOGGING CONFIGURATION
+# logging is an optional component when installing KubeSphere, and
+# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
+# Builtin logging only provides limited functions, so we recommend enabling logging.
+logging_enabled: true # Whether to install the logging system
+elasticsearch_master_replica: 1 # Total number of master nodes; an even number is not allowed
+elasticsearch_data_replica: 2 # Total number of data nodes
+elasticsearch_volume_size: 20Gi # Elasticsearch volume size
+log_max_age: 7 # Log retention time in the built-in Elasticsearch, 7 days by default
+elk_prefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address; KubeSphere supports integrating with an Elasticsearch outside the cluster, which can reduce resource consumption
+#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
+
+# DevOps Configuration
+devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary to Image)
+jenkins_memory_lim: 8Gi # Jenkins memory limit, 8 Gi by default
+jenkins_memory_req: 4Gi # Jenkins memory request, 4 Gi by default
+jenkins_volume_size: 8Gi # Jenkins volume size, 8 Gi by default
+jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address; KubeSphere supports integrating with a SonarQube outside the cluster, which can reduce resource consumption
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere,
+# and can be turned on before installation or later by updating the value to true
+openpitrix_enabled: true # KubeSphere application store
+metrics_server_enabled: true # For the KubeSphere HPA to use
+servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
+notification_enabled: true # KubeSphere notification system
+alerting_enabled: true # KubeSphere alerting system
+```
+
+Save it, then you can continue the installation process.
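+
+As an optional sanity check before continuing (assuming you are still in the `conf` directory), you can list the component switches you just edited and confirm their values:
+
+```bash
+# print each component toggle key and its current value
+grep -E "^[a-z_]+_enabled" common.yaml
+```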
diff --git a/content/en/docs/installing-on-linux/public-cloud/install-ks-on-linux-airgapped.md b/content/en/docs/installing-on-linux/public-cloud/install-ks-on-linux-airgapped.md
new file mode 100644
index 000000000..26b3e4f04
--- /dev/null
+++ b/content/en/docs/installing-on-linux/public-cloud/install-ks-on-linux-airgapped.md
@@ -0,0 +1,224 @@
+---
+title: "Air-Gapped Installation"
+keywords: 'kubernetes, kubesphere, air gapped, installation'
+description: 'How to install KubeSphere on air-gapped Linux machines'
+
+
+weight: 2240
+---
+
+The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.
+
+> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
+
+## Prerequisites
+
+- If your machine is behind a firewall, you need to open the ports; see the document [Ports Requirements](../port-firewall) for details.
+- The installer uses `/var/lib/docker` as the default directory where all Docker related files, including the images, are stored. We recommend adding disks of at least 100 GB mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
+- The installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare the [high availability configuration](../master-ha) before installation.
+- Since air-gapped machines cannot connect to an apt or yum source, please use clean Linux machines to avoid dependency problems.
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
+
+- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit).
+- Time synchronization is required across all nodes, otherwise the installation may not succeed.
+- For the `Ubuntu 16.04` OS, it is recommended to select `16.04.5`.
+- If you are using `Ubuntu 18.04`, you need to use the user `root`.
+- Ensure the disk of each node is at least 100 GB.
+- CPU and memory in total across all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
+
+The following section walks through an example of multi-node installation using three hosts, with `master` serving as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports the high-availability configuration of the Master and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+| 192.168.0.1 | master | master, etcd |
+| 192.168.0.2 | node1 | node |
+| 192.168.0.3 | node2 | node |
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
+
+```bash
+curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
+&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
+```
+
+## Step 3: Configure Host Template
+
+> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
+
+Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the `root` user. The following is an example configuration for `CentOS 7.5` using the `root` user. Note: do not manually wrap any line in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of each node, and all host names should be in lowercase.
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[local-registry]
+master
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - Replace each node's information, such as the IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
+> - The installer uses one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
+> - The "master" node also takes the roles of master and etcd, so "master" is filled under the groups `[kube-master]` and `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
+>
+> Parameters Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The name of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default SSH user name to use.
+> - `ansible_become_pass`: Allows you to set the privilege escalation password.
+> - `ansible_ssh_pass`: The password used to connect to the host as root.
+
+## Step 4: Enable All Components
+
+> This step is for the complete installation. You can skip it if you choose a minimal installation.
+
+Edit `conf/common.yaml` and set the following values to `true` (they are `false` by default):
+
+```yaml
+# LOGGING CONFIGURATION
+# logging is an optional component when installing KubeSphere, and
+# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
+# Builtin logging only provides limited functions, so we recommend enabling logging.
+logging_enabled: true # Whether to install the logging system
+elasticsearch_master_replica: 1 # Total number of master nodes; an even number is not allowed
+elasticsearch_data_replica: 2 # Total number of data nodes
+elasticsearch_volume_size: 20Gi # Elasticsearch volume size
+log_max_age: 7 # Log retention time in the built-in Elasticsearch, 7 days by default
+elk_prefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address; KubeSphere supports integrating with an Elasticsearch outside the cluster, which can reduce resource consumption
+#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
+
+# DevOps Configuration
+devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary to Image)
+jenkins_memory_lim: 8Gi # Jenkins memory limit, 8 Gi by default
+jenkins_memory_req: 4Gi # Jenkins memory request, 4 Gi by default
+jenkins_volume_size: 8Gi # Jenkins volume size, 8 Gi by default
+jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address; KubeSphere supports integrating with a SonarQube outside the cluster, which can reduce resource consumption
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere,
+# and can be turned on before installation or later by updating the value to true
+openpitrix_enabled: true # KubeSphere application store
+metrics_server_enabled: true # For the KubeSphere HPA to use
+servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
+notification_enabled: true # KubeSphere notification system
+alerting_enabled: true # KubeSphere alerting system
+```
+
+## Step 5: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it starts with a minimal installation by default.
+> - If you want to enable the installation of pluggable feature components, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - The installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select the multi-node mode and start the installation. The installer will ask you if you have set up a persistent storage service or not. Just type `yes`, since we are going to use the local volume.
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/               2020-02-24
+################################################
+Please input an option: 2
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.
+
+```bash
+Successful!
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You can now use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+If you have already set up a minimal installation, you can still edit the ConfigMap of ks-installer using the following command. Please make sure there are enough resources in your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/en/docs/installing-on-linux/public-cloud/master-ha.md b/content/en/docs/installing-on-linux/public-cloud/master-ha.md
new file mode 100644
index 000000000..ee8f26203
--- /dev/null
+++ b/content/en/docs/installing-on-linux/public-cloud/master-ha.md
@@ -0,0 +1,152 @@
+---
+title: "High Availability Configuration"
+keywords: "kubesphere, kubernetes, docker, installation, HA, high availability"
+description: "The guide for installing a high-availability KubeSphere cluster"
+
+weight: 2230
+---
+
+## Introduction
+
+[Multi-node installation](../multi-node) can help you quickly set up a single-master cluster on multiple machines for development and testing. However, for production we need to consider the high availability of the cluster. Since the key components on the master node, i.e. kube-apiserver, kube-scheduler, and kube-controller-manager, run on a single master node, Kubernetes and KubeSphere become unavailable while that master is down. Therefore, we need to set up a highly available cluster by provisioning load balancers and multiple masters. You can use any cloud load balancer or any hardware load balancer (e.g. F5). In addition, Keepalived and HAProxy are also an alternative for creating such a high-availability cluster, as sketched below.
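+
+For illustration only, a minimal HAProxy configuration for load-balancing the kube-apiservers of the three masters in this guide might look like the following. It is a sketch, not part of the installer; the bind port and backend addresses are assumptions based on the sample hosts used later in this document:
+
+```
+# haproxy.cfg -- illustrative sketch for kube-apiserver load balancing
+frontend kube-apiserver
+    bind *:6443
+    mode tcp
+    default_backend kube-apiserver-backend
+
+backend kube-apiserver-backend
+    mode tcp
+    balance roundrobin
+    option tcp-check
+    server master1 192.168.0.1:6443 check
+    server master2 192.168.0.2:6443 check
+    server master3 192.168.0.3:6443 check
+```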
+
+This document walks you through an example of how to create two [QingCloud Load Balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal and external load balancer respectively, and how to configure the high availability of masters and Etcd using these load balancers.
+
+## Prerequisites
+
+- Please make sure that you have already read [Multi-Node Installation](../multi-node). This document only demonstrates how to configure load balancers.
+- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or follow the guide of any other cloud provider to create them.
+
+## Architecture
+
+This example prepares six CentOS 7.5 machines. We will create two load balancers, and deploy three masters and Etcd nodes on three of the machines. You can configure these masters and Etcd nodes in `conf/hosts.ini`.
+
+![Master and etcd node high availability architecture](https://pek3b.qingstor.com/kubesphere-docs/png/20200307215924.png)
+
+## Install HA Cluster
+
+### Step 1: Create Load Balancers
+
+This step briefly shows an example of creating a load balancer on the QingCloud platform.
+
+#### Create an Internal Load Balancer
+
+1.1. Log in to the [QingCloud Console](https://console.qingcloud.com/login) and select **Network & CDN → Load Balancers**, then click on the create button and fill in the basic information.
+
+1.2. Choose the VxNet that your machines are created within from the **Network** dropdown list; in this example it is `kube`. Other settings can be left at their default values, as follows. Click **Submit** to complete the creation.
+
+![Create Internal LB on QingCloud](https://pek3b.qingstor.com/kubesphere-docs/png/20200215224125.png)
+
+1.3. Go to the detail page of the load balancer, then create a listener that listens on port `6443` using the `TCP` protocol.
+
+- Name: Define a name for this listener
+- Listener Protocol: Select the `TCP` protocol
+- Port: `6443`
+- Load mode: `Poll`
+
+> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that port `6443` has been added to the firewall rules so that external traffic can pass through `6443`. Otherwise, the installation will fail.
+
+![Add Listener to LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225205.png)
+
+1.4. Click **Add Backend** and choose the VxNet `kube` selected earlier. Then click on the button **Advanced Search**, choose the three master nodes under the VxNet, and set the port to `6443`, which is the default secure port of the kube-apiserver.
+
+Click **Submit** when you are done.
+
+![Choose Backends](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225550.png)
+
+1.5. Click on the button **Apply Changes** to activate the configuration. At this point, you can see that the three masters have been added as backend servers of the listener behind the internal load balancer.
+
+> Please note: The status of all masters might show `Not available` after you add them as backends. This is normal, since port `6443` of the kube-apiserver is not yet active on the masters. The status will change to `Active` and the kube-apiserver port will be exposed after the installation completes, which means the internal load balancer works as expected.
+
+![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215230107.png)
+
+#### Create an External Load Balancer
+
+You need to create an EIP in advance.
+
+1.6. Similarly, create an external load balancer without joining any network, but associate the EIP that you created with this load balancer.
+
+1.7. Enter the load balancer detail page and create a listener that listens on port `30880` using the `HTTP` protocol, which is the NodePort of the KubeSphere console.
+
+> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that port `30880` has been added to the firewall rules so that external traffic can pass through `30880`. Otherwise, the installation will fail.
+
+![Create external LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232114.png)
+
+1.8. Click **Add Backend**, then choose the six machines on which we are going to install KubeSphere within the VxNet `kube`, and set the port to `30880`.
+
+Click **Submit** when you are done.
+
+1.9. Click on the button **Apply Changes** to activate the configuration. At this point, you can see that the six machines have been added as backend servers of the listener behind the external load balancer.
+
+![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232445.png)
+
+### Step 2: Modify the host.ini
+
+Go to the taskbox where you downloaded the installer by following the [Multi-node Installation](../multi-node) and complete the following configurations.
+
+| **Parameter** | **Description** |
+|--------------------------|------------------|
+| `[all]` | Node information. Use the following syntax if you run the installation as the `root` user:<br/>- `<node_name> ansible_connection=local ip=<node_ip>` (for the taskbox itself)<br/>- `<node_name> ansible_host=<node_ip> ip=<node_ip> ansible_ssh_pass=<password>`<br/>If you log in as a non-root user, use the syntax:<br/>- `<node_name> ansible_connection=local ip=<node_ip> ansible_user=<user_name> ansible_become_pass=<password>` |
+| `[kube-master]` | Master node names |
+| `[kube-node]` | Worker node names |
+| `[etcd]` | Etcd node names. The number of `etcd` nodes needs to be odd. |
+| `[k8s-cluster:children]` | Group names of `[kube-master]` and `[kube-node]` |
+
+We use **CentOS 7.5** with the `root` user to install an HA cluster. Please see the following configuration as an example:
+
+> Note:
+>
+> If the _taskbox_ cannot establish an `ssh` connection with the rest of the nodes, try the non-root user configuration.
+
+#### host.ini example
+
+```ini
+[all]
+master1 ansible_connection=local ip=192.168.0.1
+master2 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+master3 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+node1 ansible_host=192.168.0.4 ip=192.168.0.4 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.5 ip=192.168.0.5 ansible_ssh_pass=PASSWORD
+node3 ansible_host=192.168.0.6 ip=192.168.0.6 ansible_ssh_pass=PASSWORD
+
+[kube-master]
+master1
+master2
+master3
+
+[kube-node]
+node1
+node2
+node3
+
+[etcd]
+master1
+master2
+master3
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+### Step 3: Configure the Load Balancer Parameters
+
+Besides configuring `common.yaml` by following the [Multi-node Installation](../multi-node), you need to modify the load balancer information in `common.yaml`. Assuming the **VIP** address and listening port of the **internal load balancer** are `192.168.0.253` and `6443`, you can refer to the following example.
+
+> - Note that address and port should be indented by two spaces in `common.yaml`, and the address should be the VIP.
+> - The domain name of the load balancer is "lb.kubesphere.local" by default for internal access. If you need to change the domain name, please uncomment and modify it.
+
+#### The configuration sample in common.yaml
+
+```yaml
+## External LB example config
+## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
+loadbalancer_apiserver:
+  address: 192.168.0.253
+  port: 6443
+```
+
+Finally, please refer to the [guide](../storage-configuration) to configure the persistent storage service in `common.yaml`, and then start your HA cluster installation.
diff --git a/content/en/docs/installing-on-linux/public-cloud/multi-node.md b/content/en/docs/installing-on-linux/public-cloud/multi-node.md
new file mode 100644
index 000000000..d1cd790ea
--- /dev/null
+++ b/content/en/docs/installing-on-linux/public-cloud/multi-node.md
@@ -0,0 +1,176 @@
+---
+title: "Multi-node Installation"
+keywords: 'kubesphere, kubernetes, docker, kubesphere installer'
+description: 'The guide for installing KubeSphere on multiple nodes in a development or testing environment'
+
+weight: 2220
+---
+
+`Multi-Node` installation enables installing KubeSphere on multiple nodes. Typically, any one node can be used as the _taskbox_ to run the installation task. Please note that `ssh` communication must be established between the taskbox and the other nodes.
+
+- The following instructions are for the default installation without enabling any optional components, as we have made them pluggable since v2.1.0. If you want to enable any of them, please read [Enable Pluggable Components](../pluggable-components).
+- If your machines in total have >= 8 cores and >= 16 GB memory, we recommend installing the full package of KubeSphere by [enabling optional components](../complete-installation).
+- The installation time depends on your network bandwidth, your computer configuration, the number of nodes, etc.
+
+## Prerequisites
+
+If your machine is behind a firewall, you need to open the ports; see the document [Ports Requirements](../port-firewall) for details.
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the requirements of hardware and operating system.
To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements. + +- Time synchronization is required across all nodes, otherwise the installation may not succeed; +- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`; +- If you are using `Ubuntu 18.04`, you need to use the user `root`; +- If the Debian system does not have the sudo command installed, you need to execute `apt update && apt install sudo` command using root before installation. + +### Hardware Recommendation + +- KubeSphere can be installed on any cloud platform. +- The installation speed can be accelerated by increasing network bandwidth. +- If you choose air-gapped installation, ensure your disk of each node is at least 100G. + +| System | Minimum Requirements (Each node) | +| --- | --- | +| CentOS 7.4 ~ 7.7 (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:40 G | +| Ubuntu 16.04/18.04 LTS (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:40 G | +| Red Hat Enterprise Linux Server 7.4 (64 bit) | CPU:2 Core, Memory:4 G, Disk Space:40 G | +| Debian Stretch 9.5 (64 bit)| CPU:2 Core, Memory:4 G, Disk Space:40 G | + +The following section describes an example to introduce multi-node installation. This example shows three hosts installation by taking the `master` serving as the taskbox to execute the installation. The following cluster consists of one Master and two Nodes. + +> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guide. + +| Host IP | Host Name | Role | +| --- | --- | --- | +|192.168.0.1|master|master, etcd| +|192.168.0.2|node1|node| +|192.168.0.3|node2|node| + +### Cluster Architecture + +#### Single Master, Single Etcd, Two Nodes + +![Architecture](/cluster-architecture.svg) + +## Step 2: Download Installer Package + +**1.** Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`. + +```bash +curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \ +&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf +``` + +**2.** Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using root user. The following is an example configuration for `CentOS 7.5` using root user. Note do not manually wrap any line in the file. + +> Note: +> +> - If you use non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`. +> - If the `root` user of that taskbox machine cannot establish SSH connection with the rest of machines, you need to refer to the `non-root` user example at the top of the `conf/hosts.ini`, but it is recommended to switch `root` user when executing `install.sh`. +> - master, node1 and node2 are the host names of each node and all host names should be in lowercase. + +### hosts.ini + +```ini +[all] +master ansible_connection=local ip=192.168.0.1 +node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD +node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD + +[kube-master] +master + +[kube-node] +node1 +node2 + +[etcd] +master + +[k8s-cluster:children] +kube-node +kube-master +``` + +> Note: +> +> - You need to replace each node information such as IP, password with real values in the group `[all]`. The master node is the taskbox so you do not need to add password field here. 
+> - The "master" node also takes the roles of master and etcd, so "master" is filled under the groups `[kube-master]` and `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
+>
+> Parameters Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The name of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default SSH user name to use.
+> - `ansible_become_pass`: Allows you to set the privilege escalation password.
+> - `ansible_ssh_pass`: The password used to connect to the host as root.
+
+## Step 3: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it starts with a minimal installation by default.
+> - If you want to enable the installation of pluggable feature components, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - The installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select the multi-node mode and start the installation. The installer will ask you if you have set up a persistent storage service or not. Just type `yes`, since we are going to use the local volume.
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/               2020-02-24
+################################################
+Please input an option: 2
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.
+
+```bash
+Successful!
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You can now use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## FAQ
+
+The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also please read the [FAQ of installation](../../faq/faq-install).
+
+If you have any further questions, please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/en/docs/installing-on-linux/public-cloud/storage-configuration.md b/content/en/docs/installing-on-linux/public-cloud/storage-configuration.md
new file mode 100644
index 000000000..a3d8d5156
--- /dev/null
+++ b/content/en/docs/installing-on-linux/public-cloud/storage-configuration.md
@@ -0,0 +1,157 @@
+---
+title: "StorageClass Configuration"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'Instructions for Setting up StorageClass for KubeSphere'
+
+weight: 2250
+---
+
+Currently, the installer supports the following [storage classes](https://kubernetes.io/docs/concepts/storage/storage-classes/), providing persistent storage services for KubeSphere (more storage classes will be supported soon):
+
+- NFS
+- Ceph RBD
+- GlusterFS
+- QingCloud Block Storage
+- QingStor NeonSAN
+- Local Volume (for development and testing only)
+
+The versions of the storage systems and corresponding plugins listed in the table below have been well tested.
+
+| **Name** | **Version** | **Reference** |
+| ----------- | --- | --- |
+| Ceph RBD Server | v0.94.10 | For development and testing, refer to [Install Ceph Storage Server](/zh-CN/appendix/ceph-ks-install/) for details. Please refer to the [Ceph Documentation](http://docs.ceph.com/docs/master/) for production. |
+| Ceph RBD Client | v12.2.5 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [Ceph RBD](../storage-configuration/#ceph-rbd). |
+| GlusterFS Server | v3.7.6 | For development and testing, refer to [Deploying GlusterFS Storage Server](/zh-CN/appendix/glusterfs-ks-install/) for details. Please refer to the [Gluster Documentation](https://www.gluster.org/install/) or [Gluster Documentation](http://gluster.readthedocs.io/en/latest/Install-Guide/Install/) for production. Note you need to install [Heketi Manager (V3.0.0)](https://github.com/heketi/heketi/tree/master/docs/admin). |
+| GlusterFS Client | v3.12.10 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [GlusterFS](../storage-configuration/#glusterfs). |
+| NFS Client | v3.1.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared an NFS storage server. Please see [NFS Client](../storage-configuration/#nfs). |
+| QingCloud-CSI | v0.2.0.1 | You need to configure the corresponding parameters in `common.yaml` before installing KubeSphere. Please refer to [QingCloud CSI](../storage-configuration/#qingcloud-csi) for details. |
+| NeonSAN-CSI | v0.3.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared a QingStor NeonSAN storage server. Please see [NeonSAN-CSI](../storage-configuration/#neonsan-csi). |
+
+> Note: You are only allowed to set ONE default storage class in the cluster. Before specifying a default storage class, make sure no other default storage class already exists in the cluster.
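+
+For example, assuming you have `kubectl` access to the cluster, you can check which storage class is currently the default and unset it before designating another one (the class name `local` below is just an example):
+
+```bash
+# list storage classes; the default one is marked "(default)"
+kubectl get sc
+
+# unset the current default before designating a new one
+kubectl patch storageclass local -p \
+  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
+```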
+
+## Storage Configuration
+
+After preparing the storage server, refer to the parameter descriptions in the following tables, then modify the corresponding configurations in `conf/common.yaml` accordingly.
+
+The following describes the storage configuration in `common.yaml`.
+
+> Note: Local Volume is configured as the default storage class in `common.yaml` by default. If you are going to set another storage class as the default, disable the Local Volume and modify the configuration for the other storage class.
+
+### Local Volume (for development or testing only)
+
+A [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) represents a mounted local storage device such as a disk, partition or directory. Local volumes can only be used as statically created PersistentVolumes. We recommend using Local volumes for testing or development only, since they make it quick and easy to install KubeSphere without having to set up a persistent storage server. Refer to the following table for the definitions in `conf/common.yaml`.
+
+| **Local volume** | **Description** |
+| --- | --- |
+| local\_volume\_provisioner\_enabled | Whether to use Local as the persistent storage, defaults to true |
+| local\_volume\_provisioner\_storage\_class | Storage class name, default value: local |
+| local\_volume\_is\_default\_class | Whether to set Local as the default storage class, defaults to true |
+
+### NFS
+
+An NFS volume allows an existing NFS (Network File System) share to be mounted into your Pod. NFS can be configured in `conf/common.yaml`. Note you need to prepare an NFS server in advance.
+
+| **NFS** | **Description** |
+| --- | --- |
+| nfs\_client\_enable | Whether to use NFS as the persistent storage, defaults to false |
+| nfs\_client\_is\_default\_class | Whether to set NFS as the default storage class, defaults to false |
+| nfs\_server | The NFS server address, either IP or hostname |
+| nfs\_path | The NFS shared directory, which is the file directory shared on the server, see [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/volumes/#nfs) |
+| nfs\_vers3\_enabled | Specifies which version of the NFS protocol to use; defaults to false, which means v4. True means v3 |
+| nfs\_archiveOnDelete | Archive PVCs on deletion. Data is automatically removed from `oldPath` when this is set to false |
+
+### Ceph RBD
+
+The open source [Ceph RBD](https://ceph.com/) distributed storage system can be configured for use in `conf/common.yaml`. You need to prepare a Ceph storage server in advance. Please refer to the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) for more details.
+
+| **Ceph\_RBD** | **Description** |
+| --- | --- |
+| ceph\_rbd\_enabled | Whether to use Ceph RBD as the persistent storage, defaults to false |
+| ceph\_rbd\_storage\_class | Storage class name |
+| ceph\_rbd\_is\_default\_class | Whether to set Ceph RBD as the default storage class, defaults to false |
+| ceph\_rbd\_monitors | Ceph monitors, comma delimited. This parameter is required and depends on the Ceph RBD server parameters |
+| ceph\_rbd\_admin\_id | Ceph client ID that is capable of creating images in the pool. Defaults to "admin" |
+| ceph\_rbd\_admin\_secret | Secret name for "adminId". This parameter is required. The provided secret must have type "kubernetes.io/rbd" |
+| ceph\_rbd\_pool | Ceph RBD pool. Default is "rbd" |
+| ceph\_rbd\_user\_id | Ceph client ID that is used to map the RBD image. Default is the same as adminId |
+| ceph\_rbd\_user\_secret | Secret for userId; this secret must be created in the namespace that uses the RBD image |
+| ceph\_rbd\_fsType | fsType that is supported by Kubernetes. Default: "ext4" |
+| ceph\_rbd\_imageFormat | Ceph RBD image format, "1" or "2". Default is "1" |
+| ceph\_rbd\_imageFeatures | This parameter is optional and should only be used if you set imageFormat to "2". Currently, layering is the only supported feature. Default is "", and no features are turned on |
+
+> Note:
+>
+> The Ceph secrets referenced in the storage class, such as "ceph_rbd_admin_secret" and "ceph_rbd_user_secret", are retrieved with the following command on the Ceph storage server.
+
+```bash
+ceph auth get-key client.admin
+```
+
+### GlusterFS
+
+[GlusterFS](https://docs.gluster.org/en/latest/) is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. You need to prepare a GlusterFS storage server in advance. Please refer to the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs) for further information.
+
+| **GlusterFS (requires a GlusterFS cluster managed by Heketi)** | **Description** |
+| --- | --- |
+| glusterfs\_provisioner\_enabled | Whether to use GlusterFS as the persistent storage, defaults to false |
+| glusterfs\_provisioner\_storage\_class | Storage class name |
+| glusterfs\_is\_default\_class | Whether to set GlusterFS as the default storage class, defaults to false |
+| glusterfs\_provisioner\_restauthenabled | Gluster REST service authentication boolean that enables authentication to the REST server |
+| glusterfs\_provisioner\_resturl | Gluster REST service/Heketi service URL which provisions gluster volumes on demand. The general format should be "IP address:Port" and this is a mandatory parameter for the GlusterFS dynamic provisioner |
+| glusterfs\_provisioner\_clusterid | Optional. For example, 630372ccdc720a92c681fb928f27b53f is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of cluster IDs |
+| glusterfs\_provisioner\_restuser | Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool |
+| glusterfs\_provisioner\_secretName | Optional. Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. The installer will automatically create this secret in kube-system |
+| glusterfs\_provisioner\_gidMin | The minimum value of the GID range for the storage class |
+| glusterfs\_provisioner\_gidMax | The maximum value of the GID range for the storage class |
+| glusterfs\_provisioner\_volumetype | The volume type and its parameters can be configured with this optional value, for example: 'Replica volume': volumetype: replicate:3 |
+| jwt\_admin\_key | The "jwt.admin.key" field from "/etc/heketi/heketi.json" on the Heketi server |
+
+**Attention:**
+
+> Please note: `"glusterfs_provisioner_clusterid"` can be obtained from the GlusterFS server by running the following commands:
+
+```bash
+export HEKETI_CLI_SERVER=http://localhost:8080
+heketi-cli cluster list
+```
+
+### QingCloud Block Storage
+
+[QingCloud Block Storage](https://docs.qingcloud.com/product/Storage/volume/) is supported in KubeSphere as a persistent storage service. If you would like dynamic provisioning when creating volumes, we recommend using it as your persistent storage solution.
KubeSphere integrates [QingCloud-CSI](https://github.com/yunify/qingcloud-csi/blob/master/README_zh.md), which allows you to use the various block storage services of QingCloud. With simple configuration, you can quickly expand and clone PVCs, view the topology of PVCs, create/delete snapshots, and restore volumes from snapshots.
+
+The QingCloud-CSI plugin implements the standard CSI. You can easily create and manage the different types of volumes provided by QingCloud in KubeSphere. The corresponding PVCs are created with the ReadWriteOnce access mode and mounted to running Pods.
+
+QingCloud-CSI supports creating the following five types of volumes in QingCloud:
+
+- High capacity
+- Standard
+- SSD Enterprise
+- Super high performance
+- High performance
+
+| **QingCloud-CSI** | **Description** |
+| --- | --- |
+| qingcloud\_csi\_enabled | Whether to use QingCloud-CSI as the persistent storage volume, defaults to false |
+| qingcloud\_csi\_is\_default\_class | Whether to set QingCloud-CSI as the default storage class, defaults to false |
+| qingcloud\_access\_key\_id, qingcloud\_secret\_access\_key | Please obtain them from the [QingCloud Console](https://console.qingcloud.com/login) |
+| qingcloud\_zone | The zone should be the same as the zone where the Kubernetes cluster is installed; the CSI plugin will operate on the storage volumes of this zone. For example, zone can be set to values such as sh1a (Shanghai 1-A), sh1b (Shanghai 1-B), pek2 (Beijing 2), pek3a (Beijing 3-A), pek3b (Beijing 3-B), pek3c (Beijing 3-C), gd1 (Guangdong 1), gd2a (Guangdong 2-A), ap1 (Asia Pacific 1), ap2a (Asia Pacific 2-A) |
+| type | The type of volume on the QingCloud platform: 0 represents a high performance volume, 3 a super high performance volume, and 1 or 2 a high capacity volume depending on the cluster's zone, see [QingCloud Documentation](https://docs.qingcloud.com/product/api/action/volume/create_volumes.html) |
+| maxSize, minSize | Limit the range of the volume size in GiB |
+| stepSize | Set the increment of the volume size in GiB |
+| fsType | The file system of the storage volume, which supports ext3, ext4 and xfs. The default is ext4 |
+
+### QingStor NeonSAN
+
+The NeonSAN-CSI plugin supports the enterprise-level distributed storage [QingStor NeonSAN](https://www.qingcloud.com/products/qingstor-neonsan/) as the persistent storage solution. You need to prepare the NeonSAN server, then configure the NeonSAN-CSI plugin to connect to its storage server in `conf/common.yaml`. Please refer to the [NeonSAN-CSI Reference](https://github.com/wnxn/qingstor-csi/blob/master/docs/reference_zh.md#storageclass-%E5%8F%82%E6%95%B0) for further information.
+
+| **NeonSAN** | **Description** |
+| --- | --- |
+| neonsan\_csi\_enabled | Whether to use NeonSAN as the persistent storage, defaults to false |
+| neonsan\_csi\_is\_default\_class | Whether to set NeonSAN-CSI as the default storage class, defaults to false |
+| neonsan\_csi\_protocol | The transport protocol, such as TCP or RDMA. This option must be set |
+| neonsan\_server\_address | NeonSAN server address |
+| neonsan\_cluster\_name | NeonSAN server cluster name |
+| neonsan\_server\_pool | A comma-separated list of pools that the plugin manages. This option must be set; the default value is kube |
+| neonsan\_server\_replicas | NeonSAN image replica count. Default: 1 |
+| neonsan\_server\_stepSize | Set the increment of the volume size in GiB. Default: 1 |
+| neonsan\_server\_fsType | The file system to use for the volume. Default: ext4 |
diff --git a/content/en/docs/introduction/_index.md b/content/en/docs/introduction/_index.md
index e9d7be7aa..25a021201 100644
--- a/content/en/docs/introduction/_index.md
+++ b/content/en/docs/introduction/_index.md
@@ -1,9 +1,9 @@
 ---
-title: "introduction"
+title: "Introduction"
 description: "Help you to better understand KubeSphere with detailed graphics and contents"
 layout: "single"
 
-linkTitle: "introduction"
+linkTitle: "Introduction"
 weight: 1000
 
@@ -19,4 +19,4 @@ In this chapter, we will demonstrate how to use KubeKey to provision a new Kuber
 Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first.
-{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} \ No newline at end of file +{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} diff --git a/content/en/docs/multicluster-management/_index.md b/content/en/docs/multicluster-management/_index.md new file mode 100644 index 000000000..da9e078dd --- /dev/null +++ b/content/en/docs/multicluster-management/_index.md @@ -0,0 +1,22 @@ +--- +title: "Multi-cluster Management" +description: "Import a hosted or on-premise Kubernetes cluster into KubeSphere" +layout: "single" + +linkTitle: "Multi-cluster Management" + +weight: 3000 + +icon: "/images/docs/docs.svg" + +--- + +## Installing KubeSphere and Kubernetes on Linux + +In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. Kubekey can help you to quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you to easily scale the cluster and install pluggable components on existing architecture. + +## Most Popular Pages + +Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first. + +{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} diff --git a/content/en/docs/multicluster-management/release-v210.md b/content/en/docs/multicluster-management/release-v210.md new file mode 100644 index 000000000..1eb9cedb7 --- /dev/null +++ b/content/en/docs/multicluster-management/release-v210.md @@ -0,0 +1,10 @@ +--- +title: "Enable Multicluster Management" +keywords: "kubernetes, StorageClass, kubesphere, PVC" +description: "Enable Multicluster Management in KubeSphere" + +linkTitle: "Enable Multicluster Management" +weight: 200 +--- + +TBD diff --git a/content/en/docs/multicluster-management/release-v211.md b/content/en/docs/multicluster-management/release-v211.md new file mode 100644 index 000000000..66048687f --- /dev/null +++ b/content/en/docs/multicluster-management/release-v211.md @@ -0,0 +1,8 @@ +--- +title: "Kubernetes Federation in KubeSphere" +keywords: "kubernetes, multicluster, kubesphere, federation, hybridcloud" +description: "Kubernetes and KubeSphere node management" + +linkTitle: "Kubernetes Federation in KubeSphere" +weight: 100 +--- diff --git a/content/en/docs/multicluster-management/release-v300.md b/content/en/docs/multicluster-management/release-v300.md new file mode 100644 index 000000000..e52dee1e1 --- /dev/null +++ b/content/en/docs/multicluster-management/release-v300.md @@ -0,0 +1,10 @@ +--- +title: "Introduction" +keywords: "kubernetes, multicluster, kubesphere, hybridcloud" +description: "Upgrade KubeSphere" + +linkTitle: "Introduction" +weight: 50 +--- + +TBD diff --git a/content/en/docs/pluggable-components/_index.md b/content/en/docs/pluggable-components/_index.md new file mode 100644 index 000000000..ce07e09e0 --- /dev/null +++ b/content/en/docs/pluggable-components/_index.md @@ -0,0 +1,22 @@ +--- +title: "Enable Pluggable Components" +description: "Enable KubeSphere Pluggable Components" +layout: "single" + +linkTitle: "Enable Pluggable Components" + +weight: 3500 + +icon: "/images/docs/docs.svg" + 
+--- + +## Installing KubeSphere and Kubernetes on Linux + +In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. Kubekey can help you to quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you to easily scale the cluster and install pluggable components on existing architecture. + +## Most Popular Pages + +Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first. + +{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} diff --git a/content/en/docs/pluggable-components/release-v200.md b/content/en/docs/pluggable-components/release-v200.md new file mode 100644 index 000000000..ba048fe22 --- /dev/null +++ b/content/en/docs/pluggable-components/release-v200.md @@ -0,0 +1,92 @@ +--- +title: "Release Notes For 2.0.0" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "KubeSphere Release Notes For 2.0.0" + +linkTitle: "Release Notes - 2.0.0" +weight: 500 +--- + +KubeSphere 2.0.0 was released on **May 18th, 2019**. + +## What's New in 2.0.0 + +### Component Upgrades + +- Support Kubernetes [Kubernetes 1.13.5](https://github.com/kubernetes/kubernetes/releases/tag/v1.13.5) +- Integrate [QingCloud Cloud Controller](https://github.com/yunify/qingcloud-cloud-controller-manager). After installing load balancer, QingCloud load balancer can be created through KubeSphere console and the backend workload is bound automatically.  +- Integrate [QingStor CSI v0.3.0](https://github.com/yunify/qingstor-csi/tree/v0.3.0) storage plugin and support physical NeonSAN storage system. Support SAN storage service with high availability and high performance. +- Integrate [QingCloud CSI v0.2.1](https://github.com/yunify/qingcloud-csi/tree/v0.2.1) storage plugin and support many types of volume to create QingCloud block services. +- Harbor is upgraded to 1.7.5. +- GitLab is upgraded to 11.8.1. +- Prometheus is upgraded to 2.5.0. + +### Microservice Governance + +- Integrate Istio 1.1.1 and support visualization of service mesh management. +- Enable the access to the project's external websites and the application traffic governance. +- Provide built-in sample microservice [Bookinfo Application](https://istio.io/docs/examples/bookinfo/). +- Support traffic governance. +- Support traffic images. +- Provide load balancing of microservice based on Istio. +- Support canary release. +- Enable blue-green deployment. +- Enable circuit breaking. +- Enable microservice tracing. + +### DevOps (CI/CD Pipeline) + +- CI/CD pipeline provides email notification and supports the email notification during construction. +- Enhance CI/CD graphical editing pipelines, and more pipelines for common plugins and execution conditions. +- Provide source code vulnerability scanning based on SonarQube 7.4. +- Support [Source to Image](https://github.com/kubesphere/s2ioperator) feature. + +### Monitoring + +- Provide Kubernetes component independent monitoring page including etcd, kube-apiserver and kube-scheduler. +- Optimize several monitoring algorithm. +- Optimize monitoring resources. Reduce Prometheus storage and the disk usage up to 80%. + +### Logging + +- Provide unified log console in terms of tenant. +- Enable accurate and fuzzy retrieval. 
+- Support real-time and historical logs. +- Support combined log queries based on namespace, workload, Pod, container, keywords and time range. +- Provide a detail page for individual log entries, with the ability to switch between Pods and containers. +- [FluentBit Operator](https://github.com/kubesphere/fluentbit-operator) supports log collection settings: Elasticsearch, Kafka and Fluentd can be added, activated or turned off as log collectors. Before logs are sent to collectors, you can configure filtering conditions to select the logs you need. + +### Alerting and Notifications + +- Email notifications are available for cluster nodes and workload resources. +- Notification rules can combine multiple monitoring resources; different warning levels, detection cycles, push times and thresholds can be configured. +- Notification times and notifiers can be set. +- Enable notification repetition rules for different levels. + +### Security Enhancement + +- Fix the runC container escape vulnerability [runc container breakout](https://log.qingcloud.com/archives/5127) +- Fix the Alpine Docker image vulnerability [Alpine container shadow breakout](https://www.alpinelinux.org/posts/Docker-image-vulnerability-CVE-2019-5021.html) +- Support configuration items for single and multiple concurrent logins. +- A verification code is required after multiple failed logins. +- Enhance the password policy to prevent weak passwords. +- Other security enhancements. + +### Interface Optimization + +- Optimize the console user experience in many places, such as switching between DevOps projects and other projects. +- Optimize many pages in both Chinese and English. + +### Others + +- Support etcd backup and recovery. +- Support regular cleanup of Docker images. + +## Bug Fixes + +- Fix delayed updates on resource and deletion pages. +- Fix dirty data left behind after deleting an HPA workload. +- Fix incorrect Job status display. +- Correct the resource quota, Pod usage and storage metrics algorithms. +- Adjust CPU usage percentages. +- Many more bug fixes diff --git a/content/en/docs/pluggable-components/release-v201.md b/content/en/docs/pluggable-components/release-v201.md new file mode 100644 index 000000000..2407dce8a --- /dev/null +++ b/content/en/docs/pluggable-components/release-v201.md @@ -0,0 +1,19 @@ +--- +title: "Release Notes For 2.0.1" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "KubeSphere Release Notes For 2.0.1" + +linkTitle: "Release Notes - 2.0.1" +weight: 400 +--- + +KubeSphere 2.0.1 was released on **June 9th, 2019**. + +## Bug Fixes + +- Fix the issue that the CI/CD pipeline cannot correctly recognize special characters in the code branch. +- Fix the CI/CD pipeline issue of being unable to view logs. +- Fix the missing log output caused by abnormal index shard fragmentation during log queries. +- Fix prompt exceptions when searching for logs that do not exist. +- Fix the line-overlap problem in the traffic governance topology and invalid image policy application.
+- Many more bug fixes diff --git a/content/en/docs/pluggable-components/release-v202.md b/content/en/docs/pluggable-components/release-v202.md new file mode 100644 index 000000000..3c8fec965 --- /dev/null +++ b/content/en/docs/pluggable-components/release-v202.md @@ -0,0 +1,40 @@ +--- +title: "Release Notes For 2.0.2" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "KubeSphere Release Notes For 2.0.2" + +linkTitle: "Release Notes - 2.0.2" +weight: 300 +--- + +KubeSphere 2.0.2 was released on July 9, 2019, which fixes known bugs and enhances existing features. If you have installed version 1.0.x, 2.0.0 or 2.0.1, please download the KubeSphere installer v2.0.2 to upgrade. + +## What's New in 2.0.2 + +### Enhanced Features + +- [API docs](/api-reference/api-docs/) are available on the official website. +- Block brute-force attacks. +- Standardize the maximum length of resource names. +- Upgrade the project gateway (Ingress Controller) to version 0.24.1, with support for Ingress grayscale release. + +## List of Fixed Bugs + +- Fix the issue that the traffic topology displays resources outside of the project. +- Fix the extra service component issue in the traffic topology under specific circumstances. +- Fix the execution issue when "Source to Image" rebuilds images under specific circumstances. +- Fix the page display problem when a "Source to Image" job fails. +- Fix the log viewing problem when Pod status is abnormal. +- Fix the issue that the disk monitor cannot detect some types of volume mounts, such as LVM volumes. +- Fix the problem of detecting deployed applications. +- Fix incorrect status of application components. +- Fix calculation errors in the number of host nodes. +- Fix input data loss caused by switching reference configuration buttons when adding environment variables. +- Fix the issue that the Operator role cannot rerun jobs. +- Fix the UUID initialization issue in IPv4 environments. +- Fix the issue that the log detail page cannot be scrolled down to view past logs. +- Fix wrong APIServer addresses in KubeConfig files. +- Fix the issue that the DevOps project name cannot be changed. +- Fix the issue that container log queries cannot specify a time range. +- Fix the problem of saving repository secrets under certain circumstances. +- Fix the issue that the application service component creation page does not show image registry secrets. diff --git a/content/en/docs/pluggable-components/release-v210.md b/content/en/docs/pluggable-components/release-v210.md new file mode 100644 index 000000000..ae876bee6 --- /dev/null +++ b/content/en/docs/pluggable-components/release-v210.md @@ -0,0 +1,155 @@ +--- +title: "Release Notes For 2.1.0" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "KubeSphere Release Notes For 2.1.0" + +linkTitle: "Release Notes - 2.1.0" +weight: 200 +--- + +KubeSphere 2.1.0 was released on Nov 11th, 2019, which fixes known bugs, adds new features and brings some enhancements. If you have installed a 2.0.x version, please upgrade to enjoy the better user experience of v2.1.0. + +## Installer Enhancement + +- Decouple some components, making DevOps, service mesh, app store, logging, alerting and notification optional and pluggable +- Add Grafana (v5.2.4) as an optional component +- Upgrade Kubernetes to 1.15.5.
It is also compatible with 1.14.x and 1.13.x +- Upgrade [OpenPitrix](https://openpitrix.io/) to v0.4.5 +- Upgrade the log forwarder Fluent Bit to v1.3.2 +- Upgrade Jenkins to v2.176.2 +- Upgrade Istio to 1.3.3 +- Optimize the high availability of core components + +## App Store + +### Features + +Support uploading, testing, reviewing, deploying, publishing, classifying, upgrading and deleting apps, and provide nine built-in applications + +### Upgrade & Enhancement + +- The application repository configuration is moved from global to each workspace +- Support adding application repositories to share applications within a workspace + +## Storage + +### Features + +- Support Local Volumes with dynamic provisioning +- Provide real-time monitoring for QingCloud block storage + +### Upgrade & Enhancement + +QingCloud CSI is adapted to CSI 1.1.0 and supports upgrades, topology, and creating or deleting snapshots. It also supports creating PVCs based on a snapshot + +### Bug Fixes + +Fix the StorageClass list display problem + +## Observability + +### Features + +- Support collecting file logs from disk, for Pods that preserve their logs as files on disk +- Support integrating with external Elasticsearch 7.x +- Ability to search logs containing Chinese words +- Add initContainer log display +- Ability to export logs +- Support canceling notifications from alerting + +### Upgrade & Enhancement + +- Improve the performance of log search +- Refine the hints shown when the logging service is abnormal +- Optimize the information shown when a monitoring metrics request is abnormal +- Support the pod anti-affinity rule for Prometheus + +### Bug Fixes + +- Fix the mistaken highlights in log search results +- Fix log search not matching phrases correctly +- Fix the issue that logs could not be retrieved for a deleted workload when searched by workload name +- Fix the issue where results were truncated when a log is highlighted +- Fix some metrics exceptions: node `inode`, maximum pod tolerance +- Fix the issue with an incorrect number of alerting targets +- Fix the filter failure problem in multi-metric monitoring +- Fix the problem of missing logging and monitoring information on tainted nodes (adjust the toleration attributes of node-exporter and fluent-bit to deploy on all nodes by default, ignoring taints) + +## DevOps + +### Features + +- Add support for branch switching and git log export in S2I +- Add B2I: the ability to build Binary/WAR/JAR packages and release them to Kubernetes +- Support dependency caching for pipelines, S2I, and B2I +- Support deleting Kubernetes resources in the `kubernetesDeploy` step +- Multi-branch pipelines can trigger other pipelines when a branch is created or deleted + +### Upgrade & Enhancement + +- Support BitBucket in the pipeline +- Support Cron script validation in the pipeline +- Support Jenkinsfile syntax validation +- Support customizing the SonarQube link +- Support event-triggered builds in the pipeline +- Optimize the agent node selection in the pipeline +- Accelerate pipeline startup +- Use a dynamically provisioned volume as the agent's working directory in the pipeline; also contributed to Jenkins [#589](https://github.com/jenkinsci/kubernetes-plugin/pull/598) +- Optimize the Jenkins kubernetesDeploy plugin, adding more resources and versions (v1, apps/v1, extensions/v1beta1, apps/v1beta2, apps/v1beta1, autoscaling/v1, autoscaling/v2beta1, autoscaling/v2beta2, networking.k8s.io/v1, batch/v1beta1, batch/v2alpha1); also contributed to Jenkins
[#614](https://github.com/jenkinsci/kubernetes-plugin/pull/614) +- Add support for PV, PVC and Network Policy in the deploy step of the pipeline; also contributed to Jenkins [#87](https://github.com/jenkinsci/kubernetes-cd-plugin/pull/87), [#88](https://github.com/jenkinsci/kubernetes-cd-plugin/pull/88) + +### Bug Fixes + +- Fix the 400 Bad Request issue in the GitHub webhook +- Incompatible change: the DevOps webhook URL prefix is changed from `/webhook/xxx` to `/devops_webhook/xxx` + +## Authentication and Authorization + +### Features + +Support syncing and authenticating with AD accounts + +### Upgrade & Enhancement + +- Reduce the LDAP component's RAM consumption +- Add protection against brute force attacks + +### Bug Fixes + +- Fix the LDAP connection pool leak +- Fix the issue where users could not be added to a workspace +- Fix sensitive data transmission leaks + +## User Experience + +### Features + +Provide wizard-style management of projects (namespaces) that are not assigned to a workspace + +### Upgrade & Enhancement + +- Support bash-completion in web kubectl +- Optimize the host information display +- Add a connection test for the email server +- Add prompts on resource list pages +- Optimize the project overview page and project basic information +- Simplify the service creation process +- Simplify the workload creation process +- Support real-time status updates in the resource list +- Optimize YAML editing +- Support image search and image information display +- Add the pod list to the workload page +- Update the web terminal theme +- Support container switching in the container terminal +- Optimize Pod information display, and add Pod scheduling information +- More detailed workload status display + +### Bug Fixes + +- Fix the issue where the default resource requests of a project were displayed incorrectly +- Optimize the web terminal design, making it much easier to find +- Fix the Pod status update delay +- Fix the issue where hosts could not be searched based on roles +- Fix the DevOps project count error on the workspace detail page +- Fix the issue of workspace list pagination not working properly +- Fix inconsistent result ordering after queries on the workspace list page diff --git a/content/en/docs/pluggable-components/release-v211.md b/content/en/docs/pluggable-components/release-v211.md new file mode 100644 index 000000000..d8acba698 --- /dev/null +++ b/content/en/docs/pluggable-components/release-v211.md @@ -0,0 +1,122 @@ +--- +title: "Release Notes For 2.1.1" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "KubeSphere Release Notes For 2.1.1" + +linkTitle: "Release Notes - 2.1.1" +weight: 100 +--- + +KubeSphere 2.1.1 was released on Feb 23rd, 2020, which fixes known bugs and brings some enhancements. If you have installed version 2.0.x or 2.1.0, make sure to read the upgrade instructions in the user manual carefully before upgrading, and feel free to raise any questions on [GitHub](https://github.com/kubesphere/kubesphere/issues).
+ +## What's New in 2.1.1 + +## Installer + +### Upgrade & Enhancement + +- Support Kubernetes v1.14.x, v1.15.x, v1.16.x and v1.17.x, and solve the Kubernetes API compatibility issue #[1829](https://github.com/kubesphere/kubesphere/issues/1829) +- Simplify the steps of installation on existing Kubernetes: remove the step of specifying the cluster's CA certificate, and specifying the etcd certificate is no longer mandatory if users don't need etcd monitoring metrics +- Back up the configuration of CoreDNS before upgrading + +### Bug Fixes + +- Fix the issue of importing apps to the App Store + +## App Store + +### Upgrade & Enhancement + +- Upgrade OpenPitrix to v0.4.8 + +### Bug Fixes + +- Fix the latest version display issue for published apps #[1130](https://github.com/kubesphere/kubesphere/issues/1130) +- Fix the column name display issue on the app approval list page #[1498](https://github.com/kubesphere/kubesphere/issues/1498) +- Fix the issue of searching by app name/workspace #[1497](https://github.com/kubesphere/kubesphere/issues/1497) +- Fix the issue of failing to create an app with the same name as a previously deleted app #[1821](https://github.com/kubesphere/kubesphere/pull/1821) #[1564](https://github.com/kubesphere/kubesphere/issues/1564) +- Fix the issue of failing to deploy apps in some cases #[1619](https://github.com/kubesphere/kubesphere/issues/1619) #[1730](https://github.com/kubesphere/kubesphere/issues/1730) + +## Storage + +### Upgrade & Enhancement + +- Support CSI plugins for Alibaba Cloud and Tencent Cloud + +### Bug Fixes + +- Fix the paging issue on the storage class list page #[1583](https://github.com/kubesphere/kubesphere/issues/1583) #[1591](https://github.com/kubesphere/kubesphere/issues/1591) +- Fix the issue that the value of the imageFeatures parameter displays '2' when creating a Ceph storage class #[1593](https://github.com/kubesphere/kubesphere/issues/1593) +- Fix the issue that the search filter fails to work on the persistent volume list page #[1582](https://github.com/kubesphere/kubesphere/issues/1582) +- Fix the display issue for abnormal persistent volumes #[1581](https://github.com/kubesphere/kubesphere/issues/1581) +- Fix the display issue for persistent volumes whose associated storage class is deleted #[1580](https://github.com/kubesphere/kubesphere/issues/1580) #[1579](https://github.com/kubesphere/kubesphere/issues/1579) + +## Observability + +### Upgrade & Enhancement + +- Upgrade Fluent Bit to v1.3.5 #[1505](https://github.com/kubesphere/kubesphere/issues/1505) +- Upgrade Kube-state-metrics to v1.7.2 +- Upgrade Elastic Curator to v5.7.6 #[517](https://github.com/kubesphere/ks-installer/issues/517) +- Fluent Bit Operator supports dynamically detecting the location of the soft-linked Docker log folder on host machines +- Fluent Bit Operator supports managing the Fluent Bit instance through declarative configuration by updating the Operator's ConfigMap +- Fix the sort order issue on the alert list page #[1397](https://github.com/kubesphere/kubesphere/issues/1397) +- Adjust the container memory usage metric to use `container_memory_working_set_bytes` + +### Bug Fixes + +- Fix the lag issue of container logs #[1650](https://github.com/kubesphere/kubesphere/issues/1650) +- Fix the display issue that some workload replicas have no logs on the container log detail page #[1505](https://github.com/kubesphere/kubesphere/issues/1505) +- Fix the compatibility issue of Curator to support Elasticsearch 7.x #[517](https://github.com/kubesphere/ks-installer/issues/517) +- Fix the
display issue of the container log page during container initialization #[1518](https://github.com/kubesphere/kubesphere/issues/1518) +- Fix the blank node issue when nodes are resized #[1464](https://github.com/kubesphere/kubesphere/issues/1464) +- Fix the display of component status in the monitoring center to keep it up to date #[1858](https://github.com/kubesphere/kubesphere/issues/1858) +- Fix the wrong number of monitoring targets on the alert detail page #[61](https://github.com/kubesphere/console/issues/61) + +## DevOps + +### Bug Fixes + +- Fix the issue of the UNSTABLE state not being visible in the pipeline #[1428](https://github.com/kubesphere/kubesphere/issues/1428) +- Fix the format issue of KubeConfig in the DevOps pipeline #[1529](https://github.com/kubesphere/kubesphere/issues/1529) +- Fix the image registry compatibility issue in B2I, to support Alibaba Cloud image registries #[1500](https://github.com/kubesphere/kubesphere/issues/1500) +- Fix the paging issue on the DevOps pipeline branches list page #[1517](https://github.com/kubesphere/kubesphere/issues/1517) +- Fix the issue of failing to display the pipeline configuration after modifying it #[1522](https://github.com/kubesphere/kubesphere/issues/1522) +- Fix the issue of failing to download the generated artifact in an S2I job #[1547](https://github.com/kubesphere/kubesphere/issues/1547) +- Fix the issue of [data loss occasionally after restarting Jenkins](https://kubesphere.com.cn/forum/d/283-jenkins) +- Fix the issue that only 'PR-HEAD' is fetched when binding a pipeline with GitHub #[1780](https://github.com/kubesphere/kubesphere/issues/1780) +- Fix the 414 error when updating DevOps credentials #[1824](https://github.com/kubesphere/kubesphere/issues/1824) +- Fix the wrong s2ib/s2ir naming issue in B2I/S2I #[1840](https://github.com/kubesphere/kubesphere/issues/1840) +- Fix the issue of failing to drag and drop tasks on the pipeline editing page #[62](https://github.com/kubesphere/console/issues/62) + +## Authentication and Authorization + +### Upgrade & Enhancement + +- Generate client certificates through CSRs #[1449](https://github.com/kubesphere/kubesphere/issues/1449) + +### Bug Fixes + +- Fix the content loss issue in the KubeConfig token file #[1529](https://github.com/kubesphere/kubesphere/issues/1529) +- Fix the issue that users with different permissions fail to log in using the same browser #[1600](https://github.com/kubesphere/kubesphere/issues/1600) + +## User Experience + +### Upgrade & Enhancement + +- Support editing SecurityContext on the workload editing page #[1530](https://github.com/kubesphere/kubesphere/issues/1530) +- Support configuring init containers on the workload editing page #[1488](https://github.com/kubesphere/kubesphere/issues/1488) +- Add support for startupProbe, and add the periodSeconds, successThreshold and failureThreshold parameters on the probe editing page #[1487](https://github.com/kubesphere/kubesphere/issues/1487) +- Optimize the status update display of Pods #[1187](https://github.com/kubesphere/kubesphere/issues/1187) +- Optimize error message reporting on the console #[43](https://github.com/kubesphere/console/issues/43) + +### Bug Fixes + +- Fix the status display issue for Pods that are not in the Running state #[1187](https://github.com/kubesphere/kubesphere/issues/1187) +- Fix the issue that an added annotation can't be deleted when creating a QingCloud LoadBalancer service #[1395](https://github.com/kubesphere/kubesphere/issues/1395) +- Fix the display issue when selecting a workload on the service editing page
#[1596](https://github.com/kubesphere/kubesphere/issues/1596) +- Fix the issue of failing to edit the configuration file when editing a Job #[1521](https://github.com/kubesphere/kubesphere/issues/1521) +- Fix the issue of failing to update the Service of a StatefulSet #[1513](https://github.com/kubesphere/kubesphere/issues/1513) +- Fix the image search issue for QingCloud and Alibaba Cloud image registries #[1627](https://github.com/kubesphere/kubesphere/issues/1627) +- Fix the ordering issue for resources with the same creation timestamp #[1750](https://github.com/kubesphere/kubesphere/pull/1750) +- Fix the issue of failing to edit the configuration file when editing a Service #[41](https://github.com/kubesphere/console/issues/41) diff --git a/content/en/docs/pluggable-components/release-v300.md b/content/en/docs/pluggable-components/release-v300.md new file mode 100644 index 000000000..98c787c91 --- /dev/null +++ b/content/en/docs/pluggable-components/release-v300.md @@ -0,0 +1,10 @@ +--- +title: "Release Notes For 3.0.0" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "KubeSphere Release Notes For 3.0.0" + +linkTitle: "Release Notes - 3.0.0" +weight: 50 +--- + +TBD diff --git a/content/en/docs/project-user-guide/_index.md b/content/en/docs/project-user-guide/_index.md new file mode 100644 index 000000000..490cd0364 --- /dev/null +++ b/content/en/docs/project-user-guide/_index.md @@ -0,0 +1,23 @@ +--- +title: "Project User Guide" +description: "Help you to better manage resources in a KubeSphere project" +layout: "single" + +linkTitle: "Project User Guide" +weight: 4300 + +icon: "/images/docs/docs.svg" + +--- + +## Installing KubeSphere and Kubernetes on Linux + +In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey helps you quickly build a production-ready cluster architecture on a set of machines from zero to one, and it also helps you easily scale the cluster and install pluggable components on an existing architecture. + +## Most Popular Pages + +Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first. + +{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} + +{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} diff --git a/content/en/docs/project-user-guide/application-workloads/_index.md b/content/en/docs/project-user-guide/application-workloads/_index.md new file mode 100644 index 000000000..d28bdca57 --- /dev/null +++ b/content/en/docs/project-user-guide/application-workloads/_index.md @@ -0,0 +1,7 @@ +--- +linkTitle: "Application Workloads" +weight: 2200 + +_build: + render: false +--- diff --git a/content/en/docs/project-user-guide/application-workloads/app-template.md b/content/en/docs/project-user-guide/application-workloads/app-template.md new file mode 100644 index 000000000..f0d13febd --- /dev/null +++ b/content/en/docs/project-user-guide/application-workloads/app-template.md @@ -0,0 +1,44 @@ +--- +title: "Application Template" +keywords: 'kubernetes, chart, helm, KubeSphere, application' +description: 'Application Template' + +linkTitle: "Application Template" +weight: 2210 +--- + +TBD + +{{< notice note >}} +### This is a simple note.
+{{</ notice >}} + +{{< notice tip >}} +This is a simple tip. +{{</ notice >}} + +{{< notice info >}} +This is a simple info. +{{</ notice >}} + +{{< notice warning >}} +This is a simple warning. +{{</ notice >}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{</ tab >}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{</ tab >}} + +{{< tab "third" >}} +This is the third tab +{{</ tab >}} + +{{</ tabs >}} diff --git a/content/en/docs/project-user-guide/application-workloads/composing-app.md b/content/en/docs/project-user-guide/application-workloads/composing-app.md new file mode 100644 index 000000000..57e705e5c --- /dev/null +++ b/content/en/docs/project-user-guide/application-workloads/composing-app.md @@ -0,0 +1,44 @@ +--- +title: "Composing an App for Microservices" +keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix' +description: 'Composing an app for microservices' + + +weight: 2260 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{</ notice >}} + +{{< notice tip >}} +This is a simple tip. +{{</ notice >}} + +{{< notice info >}} +This is a simple info. +{{</ notice >}} + +{{< notice warning >}} +This is a simple warning. +{{</ notice >}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{</ tab >}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{</ tab >}} + +{{< tab "third" >}} +This is the third tab +{{</ tab >}} + +{{</ tabs >}} diff --git a/content/en/docs/project-user-guide/application-workloads/cronjob.md b/content/en/docs/project-user-guide/application-workloads/cronjob.md new file mode 100644 index 000000000..3a1a1d401 --- /dev/null +++ b/content/en/docs/project-user-guide/application-workloads/cronjob.md @@ -0,0 +1,44 @@ +--- +title: "CronJobs" +keywords: 'kubesphere, kubernetes, jobs, cronjobs' +description: 'Create a Kubernetes CronJob' + + +weight: 2260 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{</ notice >}} + +{{< notice tip >}} +This is a simple tip. +{{</ notice >}} + +{{< notice info >}} +This is a simple info. +{{</ notice >}} + +{{< notice warning >}} +This is a simple warning. +{{</ notice >}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{</ tab >}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{</ tab >}} + +{{< tab "third" >}} +This is the third tab +{{</ tab >}} + +{{</ tabs >}} diff --git a/content/en/docs/project-user-guide/application-workloads/daemonsets.md b/content/en/docs/project-user-guide/application-workloads/daemonsets.md new file mode 100644 index 000000000..99938c55e --- /dev/null +++ b/content/en/docs/project-user-guide/application-workloads/daemonsets.md @@ -0,0 +1,44 @@ +--- +title: "DaemonSets" +keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix' +description: 'Kubernetes DaemonSets' + + +weight: 2250 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{</ notice >}} + +{{< notice tip >}} +This is a simple tip. +{{</ notice >}} + +{{< notice info >}} +This is a simple info. +{{</ notice >}} + +{{< notice warning >}} +This is a simple warning.
+{{</ notice >}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{</ tab >}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{</ tab >}} + +{{< tab "third" >}} +This is the third tab +{{</ tab >}} + +{{</ tabs >}} diff --git a/content/en/docs/project-user-guide/application-workloads/deployments.md b/content/en/docs/project-user-guide/application-workloads/deployments.md new file mode 100644 index 000000000..ec4e7682d --- /dev/null +++ b/content/en/docs/project-user-guide/application-workloads/deployments.md @@ -0,0 +1,44 @@ +--- +title: "Deployments" +keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix' +description: 'Kubernetes Deployments' + + +weight: 2230 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{</ notice >}} + +{{< notice tip >}} +This is a simple tip. +{{</ notice >}} + +{{< notice info >}} +This is a simple info. +{{</ notice >}} + +{{< notice warning >}} +This is a simple warning. +{{</ notice >}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{</ tab >}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{</ tab >}} + +{{< tab "third" >}} +This is the third tab +{{</ tab >}} + +{{</ tabs >}} diff --git a/content/en/docs/project-user-guide/application-workloads/ingress.md b/content/en/docs/project-user-guide/application-workloads/ingress.md new file mode 100644 index 000000000..92f56c935 --- /dev/null +++ b/content/en/docs/project-user-guide/application-workloads/ingress.md @@ -0,0 +1,44 @@ +--- +title: "Ingress" +keywords: 'kubesphere, kubernetes, docker, ingress' +description: 'Create a Kubernetes Ingress' + + +weight: 2260 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{</ notice >}} + +{{< notice tip >}} +This is a simple tip. +{{</ notice >}} + +{{< notice info >}} +This is a simple info. +{{</ notice >}} + +{{< notice warning >}} +This is a simple warning. +{{</ notice >}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{</ tab >}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{</ tab >}} + +{{< tab "third" >}} +This is the third tab +{{</ tab >}} + +{{</ tabs >}} diff --git a/content/en/docs/project-user-guide/application-workloads/jobs.md b/content/en/docs/project-user-guide/application-workloads/jobs.md new file mode 100644 index 000000000..92f56c935 --- /dev/null +++ b/content/en/docs/project-user-guide/application-workloads/jobs.md @@ -0,0 +1,44 @@ +--- +title: "Jobs" +keywords: 'kubesphere, kubernetes, docker, jobs' +description: 'Create a Kubernetes Job' + + +weight: 2260 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{</ notice >}} + +{{< notice tip >}} +This is a simple tip. +{{</ notice >}} + +{{< notice info >}} +This is a simple info. +{{</ notice >}} + +{{< notice warning >}} +This is a simple warning. +{{</ notice >}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{</ tab >}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{</ tab >}} + +{{< tab "third" >}} +This is the third tab +{{</ tab >}} + +{{</ tabs >}} diff --git a/content/en/docs/project-user-guide/application-workloads/s2i-template.md b/content/en/docs/project-user-guide/application-workloads/s2i-template.md new file mode 100644 index 000000000..92f56c935 --- /dev/null +++ b/content/en/docs/project-user-guide/application-workloads/s2i-template.md @@ -0,0 +1,44 @@ +--- +title: "S2I Templates" +keywords: 'kubesphere, kubernetes, docker, s2i' +description: 'S2I templates' + + +weight: 2260 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{</ notice >}} + +{{< notice tip >}} +This is a simple tip. +{{</ notice >}} + +{{< notice info >}} +This is a simple info. +{{</ notice >}} + +{{< notice warning >}} +This is a simple warning.
+{{</ notice >}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{</ tab >}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{</ tab >}} + +{{< tab "third" >}} +This is the third tab +{{</ tab >}} + +{{</ tabs >}} diff --git a/content/en/docs/project-user-guide/application-workloads/services.md b/content/en/docs/project-user-guide/application-workloads/services.md new file mode 100644 index 000000000..92f56c935 --- /dev/null +++ b/content/en/docs/project-user-guide/application-workloads/services.md @@ -0,0 +1,44 @@ +--- +title: "Services" +keywords: 'kubesphere, kubernetes, docker, services' +description: 'Create a Kubernetes Service' + + +weight: 2260 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{</ notice >}} + +{{< notice tip >}} +This is a simple tip. +{{</ notice >}} + +{{< notice info >}} +This is a simple info. +{{</ notice >}} + +{{< notice warning >}} +This is a simple warning. +{{</ notice >}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{</ tab >}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{</ tab >}} + +{{< tab "third" >}} +This is the third tab +{{</ tab >}} + +{{</ tabs >}} diff --git a/content/en/docs/project-user-guide/application-workloads/statefulsets.md b/content/en/docs/project-user-guide/application-workloads/statefulsets.md new file mode 100644 index 000000000..034fa6a0b --- /dev/null +++ b/content/en/docs/project-user-guide/application-workloads/statefulsets.md @@ -0,0 +1,44 @@ +--- +title: "StatefulSets" +keywords: 'kubesphere, kubernetes, StatefulSets, dashboard, service' +description: 'Kubernetes StatefulSets' + + +weight: 2240 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{</ notice >}} + +{{< notice tip >}} +This is a simple tip. +{{</ notice >}} + +{{< notice info >}} +This is a simple info. +{{</ notice >}} + +{{< notice warning >}} +This is a simple warning. +{{</ notice >}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{</ tab >}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{</ tab >}} + +{{< tab "third" >}} +This is the third tab +{{</ tab >}} + +{{</ tabs >}} diff --git a/content/en/docs/project-user-guide/configuration/_index.md b/content/en/docs/project-user-guide/configuration/_index.md new file mode 100644 index 000000000..2cf101ca5 --- /dev/null +++ b/content/en/docs/project-user-guide/configuration/_index.md @@ -0,0 +1,7 @@ +--- +linkTitle: "Configuration" +weight: 2100 + +_build: + render: false +--- \ No newline at end of file diff --git a/content/en/docs/project-user-guide/configuration/configmaps.md b/content/en/docs/project-user-guide/configuration/configmaps.md new file mode 100644 index 000000000..ae6f08d5c --- /dev/null +++ b/content/en/docs/project-user-guide/configuration/configmaps.md @@ -0,0 +1,44 @@ +--- +title: "ConfigMaps" +keywords: 'kubernetes, docker, helm, ConfigMaps' +description: 'Create a Kubernetes ConfigMap' + +linkTitle: "ConfigMaps" +weight: 2110 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{</ notice >}} + +{{< notice tip >}} +This is a simple tip. +{{</ notice >}} + +{{< notice info >}} +This is a simple info. +{{</ notice >}} + +{{< notice warning >}} +This is a simple warning.
+{{</ notice >}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{</ tab >}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{</ tab >}} + +{{< tab "third" >}} +This is the third tab +{{</ tab >}} + +{{</ tabs >}} diff --git a/content/en/docs/project-user-guide/configuration/image-registry.md b/content/en/docs/project-user-guide/configuration/image-registry.md new file mode 100644 index 000000000..1e41dbbc1 --- /dev/null +++ b/content/en/docs/project-user-guide/configuration/image-registry.md @@ -0,0 +1,44 @@ +--- +title: "Image Registries" +keywords: 'KubeSphere, kubernetes, docker, image registry' +description: 'Create an image registry secret' + +linkTitle: "Image Registries" +weight: 2130 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{</ notice >}} + +{{< notice tip >}} +This is a simple tip. +{{</ notice >}} + +{{< notice info >}} +This is a simple info. +{{</ notice >}} + +{{< notice warning >}} +This is a simple warning. +{{</ notice >}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{</ tab >}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{</ tab >}} + +{{< tab "third" >}} +This is the third tab +{{</ tab >}} + +{{</ tabs >}} diff --git a/content/en/docs/project-user-guide/configuration/secrets.md b/content/en/docs/project-user-guide/configuration/secrets.md new file mode 100644 index 000000000..1e41dbbc1 --- /dev/null +++ b/content/en/docs/project-user-guide/configuration/secrets.md @@ -0,0 +1,44 @@ +--- +title: "Secrets" +keywords: 'KubeSphere, kubernetes, docker, Secrets' +description: 'Create a Kubernetes Secret' + +linkTitle: "Secrets" +weight: 2130 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{</ notice >}} + +{{< notice tip >}} +This is a simple tip. +{{</ notice >}} + +{{< notice info >}} +This is a simple info. +{{</ notice >}} + +{{< notice warning >}} +This is a simple warning. +{{</ notice >}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{</ tab >}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{</ tab >}} + +{{< tab "third" >}} +This is the third tab +{{</ tab >}} + +{{</ tabs >}} diff --git a/content/en/docs/project-user-guide/grayscale-release/_index.md b/content/en/docs/project-user-guide/grayscale-release/_index.md new file mode 100644 index 000000000..2cf101ca5 --- /dev/null +++ b/content/en/docs/project-user-guide/grayscale-release/_index.md @@ -0,0 +1,7 @@ +--- +linkTitle: "Grayscale Release" +weight: 2100 + +_build: + render: false +--- \ No newline at end of file diff --git a/content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md b/content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md new file mode 100644 index 000000000..d701b1ced --- /dev/null +++ b/content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md @@ -0,0 +1,107 @@ +--- +title: "Blue-Green Deployment" +keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'Blue-green deployment' + +linkTitle: "Blue-Green Deployment" +weight: 2130 +--- + +This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can refer to the following section to understand each parameter. + +```yaml +######################### Kubernetes ######################### +# The default k8s version will be installed +kube_version: v1.16.7 + +# The default etcd version will be installed +etcd_version: v3.2.18 + +# Configure a cron job to backup etcd data, which is running on etcd machines. +# Period of running backup etcd job, the unit is minutes. +# The default value 30 means backup etcd every 30 minutes. +etcd_backup_period: 30 + +# How many backup replicas to keep.
+# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order. +keep_backup_number: 5 + +# The location to store etcd backup files on etcd machines. +etcd_backup_dir: "/var/backups/kube_etcd" + +# Add other registries. (For users who need to accelerate image download) +docker_registry_mirrors: + - https://docker.mirrors.ustc.edu.cn + - https://registry.docker-cn.com + - https://mirror.aliyuncs.com + +# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere. +kube_network_plugin: calico + +# A valid CIDR range for Kubernetes services, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes pod subnet +kube_service_addresses: 10.233.0.0/18 + +# A valid CIDR range for Kubernetes pod subnet, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes services subnet +kube_pods_subnet: 10.233.64.0/18 + +# Kube-proxy proxyMode configuration, either ipvs or iptables +kube_proxy_mode: ipvs + +# Maximum pods allowed to run on every node. +kubelet_max_pods: 110 + +# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information +enable_nodelocaldns: true + +# Highly Available loadbalancer example config +# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name +# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install +# address: 192.168.0.10 # Loadbalancer apiserver IP address +# port: 6443 # apiserver port + +######################### KubeSphere ######################### + +# Version of KubeSphere +ks_version: v2.1.0 + +# KubeSphere console port, range 30000-32767, +# but 30180/30280/30380 are reserved for internal service +console_port: 30880 # KubeSphere console nodeport + +#CommonComponent +mysql_volume_size: 20Gi # MySQL PVC size +minio_volume_size: 20Gi # Minio PVC size +etcd_volume_size: 20Gi # etcd PVC size +openldap_volume_size: 2Gi # openldap PVC size +redis_volume_size: 2Gi # Redis PVC size + + +# Monitoring +prometheus_replica: 2 # Prometheus replicas with 2 by default which are responsible for monitoring different segments of data source and provide high availability as well. +prometheus_memory_request: 400Mi # Prometheus request memory +prometheus_volume_size: 20Gi # Prometheus PVC size +grafana_enabled: true # enable grafana or not + + +## Container Engine Acceleration +## Use nvidia gpu acceleration in containers +# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed. +# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04 +# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini +``` + +## How to Configure a GPU Node + +You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`; then specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented by two spaces. + +```yaml + nvidia_accelerator_enabled: true + nvidia_gpu_nodes: + - node2 +``` + +> Note: The GPU node now only supports Ubuntu 16.04.
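+
+For orientation, this is roughly what the matching `hosts.ini` entry for such a GPU node could look like. The fragment below is only an illustration, not the authoritative inventory format: it reuses the `master`/`node1`/`node2` host names and example IPs from the multi-node installation guide, and the group names follow the installer's Ansible-style inventory, so adjust users, addresses and credentials to your environment:
+
+```ini
+; hypothetical hosts.ini fragment -- node2 is the GPU node referenced by nvidia_gpu_nodes
+[all]
+master ansible_connection=local  ip=192.168.0.1
+node1  ansible_host=192.168.0.2  ip=192.168.0.2
+node2  ansible_host=192.168.0.3  ip=192.168.0.3
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+```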
diff --git a/content/en/docs/project-user-guide/grayscale-release/canary-release.md b/content/en/docs/project-user-guide/grayscale-release/canary-release.md new file mode 100644 index 000000000..d701b1ced --- /dev/null +++ b/content/en/docs/project-user-guide/grayscale-release/canary-release.md @@ -0,0 +1,107 @@ +--- +title: "Canary Release" +keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'Canary release' + +linkTitle: "Canary Release" +weight: 2130 +--- + +This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can refer to the following section to understand each parameter. + +```yaml +######################### Kubernetes ######################### +# The default k8s version will be installed +kube_version: v1.16.7 + +# The default etcd version will be installed +etcd_version: v3.2.18 + +# Configure a cron job to backup etcd data, which is running on etcd machines. +# Period of running backup etcd job, the unit is minutes. +# The default value 30 means backup etcd every 30 minutes. +etcd_backup_period: 30 + +# How many backup replicas to keep. +# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order. +keep_backup_number: 5 + +# The location to store etcd backup files on etcd machines. +etcd_backup_dir: "/var/backups/kube_etcd" + +# Add other registries. (For users who need to accelerate image download) +docker_registry_mirrors: + - https://docker.mirrors.ustc.edu.cn + - https://registry.docker-cn.com + - https://mirror.aliyuncs.com + +# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere. +kube_network_plugin: calico + +# A valid CIDR range for Kubernetes services, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes pod subnet +kube_service_addresses: 10.233.0.0/18 + +# A valid CIDR range for Kubernetes pod subnet, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes services subnet +kube_pods_subnet: 10.233.64.0/18 + +# Kube-proxy proxyMode configuration, either ipvs or iptables +kube_proxy_mode: ipvs + +# Maximum pods allowed to run on every node. +kubelet_max_pods: 110 + +# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information +enable_nodelocaldns: true + +# Highly Available loadbalancer example config +# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name +# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install +# address: 192.168.0.10 # Loadbalancer apiserver IP address +# port: 6443 # apiserver port + +######################### KubeSphere ######################### + +# Version of KubeSphere +ks_version: v2.1.0 + +# KubeSphere console port, range 30000-32767, +# but 30180/30280/30380 are reserved for internal service +console_port: 30880 # KubeSphere console nodeport + +#CommonComponent +mysql_volume_size: 20Gi # MySQL PVC size +minio_volume_size: 20Gi # Minio PVC size +etcd_volume_size: 20Gi # etcd PVC size +openldap_volume_size: 2Gi # openldap PVC size +redis_volume_size: 2Gi # Redis PVC size + + +# Monitoring +prometheus_replica: 2 # Prometheus replicas with 2 by default which are responsible for monitoring different segments of data source and provide high availability as well.
+prometheus_memory_request: 400Mi # Prometheus request memory +prometheus_volume_size: 20Gi # Prometheus PVC size +grafana_enabled: true # enable grafana or not + + +## Container Engine Acceleration +## Use nvidia gpu acceleration in containers +# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed. +# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04 +# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini +``` + +## How to Configure a GPU Node + +You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`; then specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented by two spaces. + +```yaml + nvidia_accelerator_enabled: true + nvidia_gpu_nodes: + - node2 +``` + +> Note: The GPU node now only supports Ubuntu 16.04. diff --git a/content/en/docs/project-user-guide/grayscale-release/overview.md b/content/en/docs/project-user-guide/grayscale-release/overview.md new file mode 100644 index 000000000..b9b129818 --- /dev/null +++ b/content/en/docs/project-user-guide/grayscale-release/overview.md @@ -0,0 +1,10 @@ +--- +title: "Overview" +keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'Grayscale release overview' + +linkTitle: "Overview" +weight: 2110 +--- + +TBD diff --git a/content/en/docs/project-user-guide/grayscale-release/traffic-mirroring.md b/content/en/docs/project-user-guide/grayscale-release/traffic-mirroring.md new file mode 100644 index 000000000..d701b1ced --- /dev/null +++ b/content/en/docs/project-user-guide/grayscale-release/traffic-mirroring.md @@ -0,0 +1,107 @@ +--- +title: "Traffic Mirroring" +keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'Traffic mirroring' + +linkTitle: "Traffic Mirroring" +weight: 2130 +--- + +This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can refer to the following section to understand each parameter. + +```yaml +######################### Kubernetes ######################### +# The default k8s version will be installed +kube_version: v1.16.7 + +# The default etcd version will be installed +etcd_version: v3.2.18 + +# Configure a cron job to backup etcd data, which is running on etcd machines. +# Period of running backup etcd job, the unit is minutes. +# The default value 30 means backup etcd every 30 minutes. +etcd_backup_period: 30 + +# How many backup replicas to keep. +# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order. +keep_backup_number: 5 + +# The location to store etcd backup files on etcd machines. +etcd_backup_dir: "/var/backups/kube_etcd" + +# Add other registries. (For users who need to accelerate image download) +docker_registry_mirrors: + - https://docker.mirrors.ustc.edu.cn + - https://registry.docker-cn.com + - https://mirror.aliyuncs.com + +# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere. +kube_network_plugin: calico + +# A valid CIDR range for Kubernetes services, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes pod subnet +kube_service_addresses: 10.233.0.0/18 + +# A valid CIDR range for Kubernetes pod subnet, +# 1. should not overlap with node subnet +# 2.
should not overlap with Kubernetes services subnet +kube_pods_subnet: 10.233.64.0/18 + +# Kube-proxy proxyMode configuration, either ipvs or iptables +kube_proxy_mode: ipvs + +# Maximum pods allowed to run on every node. +kubelet_max_pods: 110 + +# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information +enable_nodelocaldns: true + +# Highly Available loadbalancer example config +# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name +# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install +# address: 192.168.0.10 # Loadbalancer apiserver IP address +# port: 6443 # apiserver port + +######################### KubeSphere ######################### + +# Version of KubeSphere +ks_version: v2.1.0 + +# KubeSphere console port, range 30000-32767, +# but 30180/30280/30380 are reserved for internal service +console_port: 30880 # KubeSphere console nodeport + +#CommonComponent +mysql_volume_size: 20Gi # MySQL PVC size +minio_volume_size: 20Gi # Minio PVC size +etcd_volume_size: 20Gi # etcd PVC size +openldap_volume_size: 2Gi # openldap PVC size +redis_volume_size: 2Gi # Redis PVC size + + +# Monitoring +prometheus_replica: 2 # Prometheus replicas with 2 by default which are responsible for monitoring different segments of data source and provide high availability as well. +prometheus_memory_request: 400Mi # Prometheus request memory +prometheus_volume_size: 20Gi # Prometheus PVC size +grafana_enabled: true # enable grafana or not + + +## Container Engine Acceleration +## Use nvidia gpu acceleration in containers +# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed. +# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04 +# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini +``` + +## How to Configure a GPU Node + +You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`; then specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented by two spaces. + +```yaml + nvidia_accelerator_enabled: true + nvidia_gpu_nodes: + - node2 +``` + +> Note: The GPU node now only supports Ubuntu 16.04. diff --git a/content/en/docs/project-user-guide/project-administration/_index.md b/content/en/docs/project-user-guide/project-administration/_index.md new file mode 100644 index 000000000..2cf101ca5 --- /dev/null +++ b/content/en/docs/project-user-guide/project-administration/_index.md @@ -0,0 +1,7 @@ +--- +linkTitle: "Project Administration" +weight: 2100 + +_build: + render: false +--- \ No newline at end of file diff --git a/content/en/docs/project-user-guide/project-administration/project-gateway.md b/content/en/docs/project-user-guide/project-administration/project-gateway.md new file mode 100644 index 000000000..d701b1ced --- /dev/null +++ b/content/en/docs/project-user-guide/project-administration/project-gateway.md @@ -0,0 +1,107 @@ +--- +title: "Project Gateway" +keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'Project gateway' + +linkTitle: "Project Gateway" +weight: 2130 +--- + +This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can refer to the following section to understand each parameter.
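+
+Before changing any parameters, it can help to keep a copy of the shipped defaults so you can review your customizations later. A minimal sketch, assuming you run it from the root of the unpacked installer package described in this guide (the `.orig` filename is just an illustration):
+
+```bash
+cp conf/common.yaml conf/common.yaml.orig       # preserve the shipped defaults
+vim conf/common.yaml                            # customize parameters as described below
+diff -u conf/common.yaml.orig conf/common.yaml  # review your changes before installing
+```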
+ +```yaml +######################### Kubernetes ######################### +# The default k8s version will be installed +kube_version: v1.16.7 + +# The default etcd version will be installed +etcd_version: v3.2.18 + +# Configure a cron job to backup etcd data, which is running on etcd machines. +# Period of running backup etcd job, the unit is minutes. +# The default value 30 means backup etcd every 30 minutes. +etcd_backup_period: 30 + +# How many backup replicas to keep. +# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order. +keep_backup_number: 5 + +# The location to store etcd backup files on etcd machines. +etcd_backup_dir: "/var/backups/kube_etcd" + +# Add other registries. (For users who need to accelerate image download) +docker_registry_mirrors: + - https://docker.mirrors.ustc.edu.cn + - https://registry.docker-cn.com + - https://mirror.aliyuncs.com + +# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere. +kube_network_plugin: calico + +# A valid CIDR range for Kubernetes services, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes pod subnet +kube_service_addresses: 10.233.0.0/18 + +# A valid CIDR range for Kubernetes pod subnet, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes services subnet +kube_pods_subnet: 10.233.64.0/18 + +# Kube-proxy proxyMode configuration, either ipvs or iptables +kube_proxy_mode: ipvs + +# Maximum pods allowed to run on every node. +kubelet_max_pods: 110 + +# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information +enable_nodelocaldns: true + +# Highly Available loadbalancer example config +# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name +# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install +# address: 192.168.0.10 # Loadbalancer apiserver IP address +# port: 6443 # apiserver port + +######################### KubeSphere ######################### + +# Version of KubeSphere +ks_version: v2.1.0 + +# KubeSphere console port, range 30000-32767, +# but 30180/30280/30380 are reserved for internal service +console_port: 30880 # KubeSphere console nodeport + +#CommonComponent +mysql_volume_size: 20Gi # MySQL PVC size +minio_volume_size: 20Gi # Minio PVC size +etcd_volume_size: 20Gi # etcd PVC size +openldap_volume_size: 2Gi # openldap PVC size +redis_volume_size: 2Gi # Redis PVC size + + +# Monitoring +prometheus_replica: 2 # Prometheus replicas with 2 by default which are responsible for monitoring different segments of data source and provide high availability as well. +prometheus_memory_request: 400Mi # Prometheus request memory +prometheus_volume_size: 20Gi # Prometheus PVC size +grafana_enabled: true # enable grafana or not + + +## Container Engine Acceleration +## Use nvidia gpu acceleration in containers +# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed. +# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04 +# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini +``` + +## How to Configure a GPU Node + +You may want to use GPU nodes for special purposes such as machine learning.
Let's say you have a GPU node called `node2` in `hosts.ini`; then specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented by two spaces. + +```yaml + nvidia_accelerator_enabled: true + nvidia_gpu_nodes: + - node2 +``` + +> Note: The GPU node now only supports Ubuntu 16.04. diff --git a/content/en/docs/project-user-guide/project-administration/project-members.md b/content/en/docs/project-user-guide/project-administration/project-members.md new file mode 100644 index 000000000..caa49c5b2 --- /dev/null +++ b/content/en/docs/project-user-guide/project-administration/project-members.md @@ -0,0 +1,107 @@ +--- +title: "Project Members" +keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'Project members' + +linkTitle: "Project Members" +weight: 2130 +--- + +This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can refer to the following section to understand each parameter. + +```yaml +######################### Kubernetes ######################### +# The default k8s version will be installed +kube_version: v1.16.7 + +# The default etcd version will be installed +etcd_version: v3.2.18 + +# Configure a cron job to backup etcd data, which is running on etcd machines. +# Period of running backup etcd job, the unit is minutes. +# The default value 30 means backup etcd every 30 minutes. +etcd_backup_period: 30 + +# How many backup replicas to keep. +# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order. +keep_backup_number: 5 + +# The location to store etcd backup files on etcd machines. +etcd_backup_dir: "/var/backups/kube_etcd" + +# Add other registries. (For users who need to accelerate image download) +docker_registry_mirrors: + - https://docker.mirrors.ustc.edu.cn + - https://registry.docker-cn.com + - https://mirror.aliyuncs.com + +# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere. +kube_network_plugin: calico + +# A valid CIDR range for Kubernetes services, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes pod subnet +kube_service_addresses: 10.233.0.0/18 + +# A valid CIDR range for Kubernetes pod subnet, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes services subnet +kube_pods_subnet: 10.233.64.0/18 + +# Kube-proxy proxyMode configuration, either ipvs or iptables +kube_proxy_mode: ipvs + +# Maximum pods allowed to run on every node.
+kubelet_max_pods: 110 + +# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information +enable_nodelocaldns: true + +# Highly Available loadbalancer example config +# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name +# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install +# address: 192.168.0.10 # Loadbalancer apiserver IP address +# port: 6443 # apiserver port + +######################### KubeSphere ######################### + +# Version of KubeSphere +ks_version: v2.1.0 + +# KubeSphere console port, range 30000-32767, +# but 30180/30280/30380 are reserved for internal service +console_port: 30880 # KubeSphere console nodeport + +#CommonComponent +mysql_volume_size: 20Gi # MySQL PVC size +minio_volume_size: 20Gi # Minio PVC size +etcd_volume_size: 20Gi # etcd PVC size +openldap_volume_size: 2Gi # openldap PVC size +redis_volume_size: 2Gi # Redis PVC size + + +# Monitoring +prometheus_replica: 2 # Prometheus replicas with 2 by default which are responsible for monitoring different segments of data source and provide high availability as well. +prometheus_memory_request: 400Mi # Prometheus request memory +prometheus_volume_size: 20Gi # Prometheus PVC size +grafana_enabled: true # enable grafana or not + + +## Container Engine Acceleration +## Use nvidia gpu acceleration in containers +# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed. +# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04 +# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini +``` + +## How to Configure a GPU Node + +You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`; then specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented by two spaces. + +```yaml + nvidia_accelerator_enabled: true + nvidia_gpu_nodes: + - node2 +``` + +> Note: The GPU node now only supports Ubuntu 16.04. diff --git a/content/en/docs/project-user-guide/project-administration/project-quota.md b/content/en/docs/project-user-guide/project-administration/project-quota.md new file mode 100644 index 000000000..b9b129818 --- /dev/null +++ b/content/en/docs/project-user-guide/project-administration/project-quota.md @@ -0,0 +1,10 @@ +--- +title: "Project Quotas" +keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'Project quotas' + +linkTitle: "Project Quotas" +weight: 2110 +--- + +TBD diff --git a/content/en/docs/project-user-guide/project-administration/project-roles.md b/content/en/docs/project-user-guide/project-administration/project-roles.md new file mode 100644 index 000000000..d701b1ced --- /dev/null +++ b/content/en/docs/project-user-guide/project-administration/project-roles.md @@ -0,0 +1,107 @@ +--- +title: "Project Roles" +keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'Project roles' + +linkTitle: "Project Roles" +weight: 2130 +--- + +This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can refer to the following section to understand each parameter.
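+
+Among the parameters below, `etcd_backup_period`, `keep_backup_number` and `etcd_backup_dir` control the scheduled etcd backups. After installation you can sanity-check that the cron job is producing them; a small sketch, assuming the default backup directory from this file and run on an etcd machine:
+
+```bash
+# List backups newest-first; with keep_backup_number: 5 you should see
+# at most five entries once the job has run a few times.
+ls -lt /var/backups/kube_etcd
+```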
+ +```yaml +######################### Kubernetes ######################### +# The default Kubernetes version to be installed +kube_version: v1.16.7 + +# The default etcd version to be installed +etcd_version: v3.2.18 + +# Configure a cron job to back up etcd data; it runs on the etcd machines. +# Period of the etcd backup job, in minutes. +# The default value 30 means etcd is backed up every 30 minutes. +etcd_backup_period: 30 + +# How many backups to keep. +# The default value 5 means the latest 5 backups are kept; older ones are deleted in order. +keep_backup_number: 5 + +# The location where etcd backup files are stored on the etcd machines. +etcd_backup_dir: "/var/backups/kube_etcd" + +# Add other registries. (For users who need to accelerate image downloads) +docker_registry_mirrors: + - https://docker.mirrors.ustc.edu.cn + - https://registry.docker-cn.com + - https://mirror.aliyuncs.com + +# Kubernetes network plugin; Calico is installed by default. Calico and Flannel are recommended, as both are tested and verified by KubeSphere. +kube_network_plugin: calico + +# A valid CIDR range for Kubernetes services, +# 1. should not overlap with the node subnet +# 2. should not overlap with the Kubernetes pod subnet +kube_service_addresses: 10.233.0.0/18 + +# A valid CIDR range for the Kubernetes pod subnet, +# 1. should not overlap with the node subnet +# 2. should not overlap with the Kubernetes services subnet +kube_pods_subnet: 10.233.64.0/18 + +# Kube-proxy proxyMode configuration: either ipvs or iptables +kube_proxy_mode: ipvs + +# Maximum pods allowed to run on every node. +kubelet_max_pods: 110 + +# Enable NodeLocal DNSCache; see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information +enable_nodelocaldns: true + +# Example config for a highly available load balancer +# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Load balancer domain name +# loadbalancer_apiserver: # Load balancer apiserver configuration; uncomment this line when preparing an HA installation +# address: 192.168.0.10 # Load balancer apiserver IP address +# port: 6443 # apiserver port + +######################### KubeSphere ######################### + +# Version of KubeSphere +ks_version: v2.1.0 + +# KubeSphere console port, range 30000-32767, +# but 30180/30280/30380 are reserved for internal services +console_port: 30880 # KubeSphere console NodePort + +# Common components +mysql_volume_size: 20Gi # MySQL PVC size +minio_volume_size: 20Gi # Minio PVC size +etcd_volume_size: 20Gi # etcd PVC size +openldap_volume_size: 2Gi # OpenLDAP PVC size +redis_volume_size: 2Gi # Redis PVC size + + +# Monitoring +prometheus_replica: 2 # Prometheus replicas, 2 by default; each replica monitors a different segment of the data source, which also provides high availability. +prometheus_memory_request: 400Mi # Prometheus memory request +prometheus_volume_size: 20Gi # Prometheus PVC size +grafana_enabled: true # whether to enable Grafana + + +## Container Engine Acceleration +## Use NVIDIA GPU acceleration in containers +# nvidia_accelerator_enabled: true # whether to enable the NVIDIA GPU accelerator. Hybrid clusters with both GPU and CPU nodes are supported. +# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now, only Ubuntu 16.04 is supported. +# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini +``` + +## How to Configure a GPU Node + +You may want to use GPU nodes for special purposes such as machine learning.
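A GPU host is declared in `conf/hosts.ini` like any other node; only `common.yaml` marks it as a GPU node. The following is a minimal, illustrative inventory sketch (host names, IPs and credentials are placeholders, and the exact inventory layout may differ between installer versions):

```ini
# conf/hosts.ini (illustrative excerpt)
[all]
master  ansible_connection=local  ip=192.168.0.1
node1   ansible_host=192.168.0.2  ip=192.168.0.2  ansible_user=root  ansible_ssh_pass=yourpassword
node2   ansible_host=192.168.0.3  ip=192.168.0.3  ansible_user=root  ansible_ssh_pass=yourpassword
```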
Let's say you have a GPU node called `node2` in `hosts.ini`. Then, in the file `common.yaml`, specify the following configuration. Please be aware that `- node2` is indented with two spaces. + +```yaml + nvidia_accelerator_enabled: true + nvidia_gpu_nodes: + - node2 +``` + +> Note: GPU nodes currently support Ubuntu 16.04 only. diff --git a/content/en/docs/project-user-guide/storage/_index.md b/content/en/docs/project-user-guide/storage/_index.md new file mode 100644 index 000000000..2cf101ca5 --- /dev/null +++ b/content/en/docs/project-user-guide/storage/_index.md @@ -0,0 +1,7 @@ +--- +linkTitle: "Installation" +weight: 2100 + +_build: + render: false +--- \ No newline at end of file diff --git a/content/en/docs/project-user-guide/storage/volume-snapshots.md b/content/en/docs/project-user-guide/storage/volume-snapshots.md new file mode 100644 index 000000000..d701b1ced --- /dev/null +++ b/content/en/docs/project-user-guide/storage/volume-snapshots.md @@ -0,0 +1,107 @@ +--- +title: "Volume Snapshots" +keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'Volume Snapshots' + +linkTitle: "Volume Snapshots" +weight: 2130 +--- + +This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can refer to the following section to understand each parameter. + +```yaml +######################### Kubernetes ######################### +# The default Kubernetes version to be installed +kube_version: v1.16.7 + +# The default etcd version to be installed +etcd_version: v3.2.18 + +# Configure a cron job to back up etcd data; it runs on the etcd machines. +# Period of the etcd backup job, in minutes. +# The default value 30 means etcd is backed up every 30 minutes. +etcd_backup_period: 30 + +# How many backups to keep. +# The default value 5 means the latest 5 backups are kept; older ones are deleted in order. +keep_backup_number: 5 + +# The location where etcd backup files are stored on the etcd machines. +etcd_backup_dir: "/var/backups/kube_etcd" + +# Add other registries. (For users who need to accelerate image downloads) +docker_registry_mirrors: + - https://docker.mirrors.ustc.edu.cn + - https://registry.docker-cn.com + - https://mirror.aliyuncs.com + +# Kubernetes network plugin; Calico is installed by default. Calico and Flannel are recommended, as both are tested and verified by KubeSphere. +kube_network_plugin: calico + +# A valid CIDR range for Kubernetes services, +# 1. should not overlap with the node subnet +# 2. should not overlap with the Kubernetes pod subnet +kube_service_addresses: 10.233.0.0/18 + +# A valid CIDR range for the Kubernetes pod subnet, +# 1. should not overlap with the node subnet +# 2. should not overlap with the Kubernetes services subnet +kube_pods_subnet: 10.233.64.0/18 + +# Kube-proxy proxyMode configuration: either ipvs or iptables +kube_proxy_mode: ipvs + +# Maximum pods allowed to run on every node.
+kubelet_max_pods: 110 + +# Enable NodeLocal DNSCache; see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information +enable_nodelocaldns: true + +# Example config for a highly available load balancer +# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Load balancer domain name +# loadbalancer_apiserver: # Load balancer apiserver configuration; uncomment this line when preparing an HA installation +# address: 192.168.0.10 # Load balancer apiserver IP address +# port: 6443 # apiserver port + +######################### KubeSphere ######################### + +# Version of KubeSphere +ks_version: v2.1.0 + +# KubeSphere console port, range 30000-32767, +# but 30180/30280/30380 are reserved for internal services +console_port: 30880 # KubeSphere console NodePort + +# Common components +mysql_volume_size: 20Gi # MySQL PVC size +minio_volume_size: 20Gi # Minio PVC size +etcd_volume_size: 20Gi # etcd PVC size +openldap_volume_size: 2Gi # OpenLDAP PVC size +redis_volume_size: 2Gi # Redis PVC size + + +# Monitoring +prometheus_replica: 2 # Prometheus replicas, 2 by default; each replica monitors a different segment of the data source, which also provides high availability. +prometheus_memory_request: 400Mi # Prometheus memory request +prometheus_volume_size: 20Gi # Prometheus PVC size +grafana_enabled: true # whether to enable Grafana + + +## Container Engine Acceleration +## Use NVIDIA GPU acceleration in containers +# nvidia_accelerator_enabled: true # whether to enable the NVIDIA GPU accelerator. Hybrid clusters with both GPU and CPU nodes are supported. +# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now, only Ubuntu 16.04 is supported. +# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini +``` + +## How to Configure a GPU Node + +You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`. Then, in the file `common.yaml`, specify the following configuration. Please be aware that `- node2` is indented with two spaces. + +```yaml + nvidia_accelerator_enabled: true + nvidia_gpu_nodes: + - node2 +``` + +> Note: GPU nodes currently support Ubuntu 16.04 only. diff --git a/content/en/docs/project-user-guide/storage/volumes.md b/content/en/docs/project-user-guide/storage/volumes.md new file mode 100644 index 000000000..b9b129818 --- /dev/null +++ b/content/en/docs/project-user-guide/storage/volumes.md @@ -0,0 +1,10 @@ +--- +title: "Volumes" +keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'Create Volumes (PVCs)' + +linkTitle: "Volumes" +weight: 2110 +--- + +TBD diff --git a/content/en/docs/quick-start/_index.md b/content/en/docs/quick-start/_index.md index ed0563e5b..7d0a17eee 100644 --- a/content/en/docs/quick-start/_index.md +++ b/content/en/docs/quick-start/_index.md @@ -1,11 +1,11 @@ --- -title: "quick-start" +title: "Quick Start" description: "Help you to better understand KubeSphere with detailed graphics and contents" layout: "single" -linkTitle: "quick-start" +linkTitle: "Quick Start" -weight: 3000 +weight: 1500 icon: "/images/docs/docs.svg" @@ -19,4 +19,4 @@ In this chapter, we will demonstrate how to use KubeKey to provision a new Kuber Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first.
-{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} \ No newline at end of file +{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} diff --git a/content/en/docs/quick-start/admin-quick-start.md b/content/en/docs/quick-start/admin-quick-start.md deleted file mode 100644 index 4cfe593af..000000000 --- a/content/en/docs/quick-start/admin-quick-start.md -+++ /dev/null -@@ -1,170 +0,0 @@ ---- -title: "Getting Started with Multi-tenant Management" -keywords: 'kubesphere, kubernetes, docker, multi-tenant' -description: 'The guide to get familiar with KubeSphere multi-tenant management' - -linkTitle: "1" -weight: 3010 ---- - - -## Objective - -This is the first lab exercise of KubeSphere, and we strongly suggest you work through it hands-on. This guide shows how to create the workspace, roles and user accounts that are required by the following lab exercises. Moreover, you will learn how to create a project and a DevOps project within your workspace, which is where your workloads run. After this lab, you will be familiar with the KubeSphere multi-tenant management system. - -## Prerequisites - -You need to have KubeSphere installed. - -## Estimated Time - -About 15 minutes - -## Architecture - -The KubeSphere system organizes tenants into **three** hierarchical levels: cluster, workspace and project. Here a project is a Kubernetes namespace. - -As shown below, you can create multiple workspaces within a Kubernetes cluster. Under each workspace you can also create multiple projects. - -Each level provides multiple built-in roles, and you can also create roles with customized authorization. This hierarchy is appropriate for enterprise users who have different teams or groups, with different roles within each team. - -![Architecture](https://pek3b.qingstor.com/kubesphere-docs/png/20200105121616.png) - -## Hands-on Lab - -### Task 1: Create Roles and Accounts - -The first task is to create an account and a role, and assign the role to the user. This task must be done using the built-in user `admin` with the role `cluster-admin`. - -There are three built-in roles at the cluster level, as shown below. - -| Built-in Roles | Description | -| --- | --- | -| cluster-admin | Has the privilege to manage any resources in the cluster. | -| workspaces-manager | Able to manage workspaces, including creating and deleting workspaces and managing their users. | -| cluster-regular | Regular users have no authorization to manage resources before being invited to a workspace. Their access rights are decided by the roles assigned to them in specific workspaces or projects.| - -Here is an example showing you how to create a new role named `users-manager`, grant **account management** and **role management** capabilities to the role, then create a new account named `user-manager` and grant it the users-manager role. - -| Account Name | Cluster Role | Responsibility | -| --- | --- | --- | -| user-manager | users-manager | Manage cluster accounts and roles | - -1.1 Log in with the built-in user `admin` and click **Platform → Platform Roles**. You can see the role list as follows. Click **Create** to create a role that is used to manage all accounts and roles.
- -![Roles](https://pek3b.qingstor.com/kubesphere-docs/png/20190716112614.png) - -1.2. Fill in the basic information and authorization settings of the role. - -- Name: `users-manager` -- Description: Describe the role's responsibilities; here we type `Manage accounts and roles` - - -1.3. Check all the access rights under the options `Account Management` and `Role Management`, then click **Create**. - -![Authorization Settings](https://pek3b.qingstor.com/kubesphere-docs/png/20200305172551.png) - -1.4. Click **Platform → Accounts**. You can see the account list of the current cluster. Then click **Create**. - -![Account List](https://pek3b.qingstor.com/kubesphere-docs/png/20190716112945.png) - -1.5. Fill in the new user's basic information. Set the username to `user-manager`, select the role `users-manager` and fill in the other items as required. Then click **OK** to create this account. - -![Create Account](https://pek3b.qingstor.com/kubesphere-docs/png/20200105152641.png) - -1.6. Then log out and log in with the user `user-manager` to create the four accounts that will be used in the next lab exercises. Once logged in, go to **Platform → Accounts** and create the four accounts in the following table. - -| Account Name | Cluster Role | Responsibility | -| --- | --- | --- | -| ws-manager | workspaces-manager | Create and manage all workspaces | -| ws-admin | cluster-regular | Manage all resources under a specific workspace (This example uses it to invite new members to join a workspace.) | -| project-admin | cluster-regular | Create and manage projects and DevOps projects, and invite new members into them | -| project-regular | cluster-regular | A regular user who will be invited to the project and DevOps project by project-admin. We use this account to create workloads, pipelines and other resources in the specified project. | - -1.7. Verify the four accounts that we have created. - -![Verify Accounts](https://pek3b.qingstor.com/kubesphere-docs/png/20190716114245.png) - -### Task 2: Create a Workspace - -The second task is to create a workspace using the user `ws-manager` created in the previous task, which serves as a workspace manager account. - -A workspace is the basis of KubeSphere multi-tenant management, and the basic logical unit for projects, DevOps projects and organization members. - -2.1. Log in to KubeSphere as `ws-manager`, which is authorized to manage all workspaces on the platform. - -Click **Platform → Workspace** in the top left corner. You can see there is only one default workspace, **system-workspace**, listed on the page; it runs system-related components and services, and you are not allowed to delete it. - -Click **Create** on the workspace list page, name the new workspace `demo-workspace` and assign the user `ws-admin` as the workspace admin, as shown in the screenshot below: - -![Workspace List](https://pek3b.qingstor.com/kubesphere-docs/png/20190716130007.png) - -2.2. Log out and sign in as `ws-admin` after `demo-workspace` is created.
Then click **View Workspace**, select **Workspace Settings → Workspace Members** and click **Invite Member**. - -![Invite Members](https://pek3b.qingstor.com/kubesphere-docs/png/20200105155226.png) - -2.3. Invite both `project-admin` and `project-regular` and grant them the roles shown in the table below, then click **OK** to save. Now there are three members in `demo-workspace`. - -| User Name | Role in the Workspace | Responsibility | -| --- | --- | --- | -| ws-admin | workspace-admin | Manage all resources under the workspace (We use this account to invite new members into the workspace). | -| project-admin | workspace-regular | Create and manage projects and DevOps projects, and invite new members to join. | -| project-regular | workspace-viewer | Will be invited by project-admin to join the project and DevOps project. We use this account to create workloads, pipelines, etc. | - -![Workspace Members](https://pek3b.qingstor.com/kubesphere-docs/png/20190716130517.png) - -### Task 3: Create a Project - -This task shows how to create a project and perform some related operations in it using the project admin account. - -3.1. Sign in with `project-admin` created in the first task, then click **Create** and select **Create a resource project**. - -![Project List](https://pek3b.qingstor.com/kubesphere-docs/png/20190716131852.png) - -3.2. Name it `demo-project`, set the CPU limit to 1 Core and the memory limit to 1000 Mi in the Advanced Settings, then click **Create**. - -3.3. Choose **Project Settings → Project Members** and click **Invite Member**. - -![Invite Project Members](https://pek3b.qingstor.com/kubesphere-docs/png/20200105160247.png) - -3.4. Invite `project-regular` to this project and grant this user the role **operator**. - -![Built-in Projects Roles](https://pek3b.qingstor.com/kubesphere-docs/png/20190716132840.png) - -![Project Roles](https://pek3b.qingstor.com/kubesphere-docs/png/20190716132920.png) - -#### Set Gateway - -Before creating a route (a Kubernetes Ingress), you need to enable the gateway for this project. The gateway is an [Nginx ingress controller](https://github.com/kubernetes/ingress-nginx) running in the project. - -3.5. We continue to use `project-admin`. Choose **Project Settings → Advanced Settings** and click **Set Gateway**. - -![Gateway Page](https://pek3b.qingstor.com/kubesphere-docs/png/20200105161214.png) - -3.6. Choose the access method **NodePort** and click **Save**. - -![Set Gateway](https://pek3b.qingstor.com/kubesphere-docs/png/20190716134742.png) - -3.7. Now the gateway address and the HTTP and HTTPS NodePorts appear on the page. - -> Note: If you want to expose services using the LoadBalancer type, you need to use the [LoadBalancer plugin of your cloud provider](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/). If your Kubernetes cluster runs in a bare-metal environment, we recommend [Porter](https://github.com/kubesphere/porter) as the LoadBalancer plugin.
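If you have kubectl access to the cluster, you can also verify the gateway from the command line. The following is a rough sketch only; the gateway service's name and namespace are assumptions that may vary across KubeSphere versions:

```bash
# Find the project's gateway service; the "router" naming convention and
# its namespace are assumptions to verify on your cluster.
kubectl get svc --all-namespaces | grep router

# Print the HTTP/HTTPS NodePorts of the service found above
# (service and namespace names here are hypothetical).
kubectl -n kubesphere-controls-system get svc kubesphere-router-demo-project \
  -o jsonpath='{.spec.ports[*].nodePort}'
```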
- -![NodePort Gateway](https://pek3b.qingstor.com/kubesphere-docs/png/20200105161335.png) - -### Task 4: Create DevOps Project (Optional) - -> Prerequisite: You need to install the [KubeSphere DevOps system](../../installation/install-devops), a pluggable component providing CI/CD pipeline, Binary-to-Image and Source-to-Image features. - -4.1. We still use the account `project-admin` to demonstrate this task. Click **Workbench**, click the **Create** button, then select **Create a DevOps project**. - -![Workbench](https://pek3b.qingstor.com/kubesphere-docs/png/20200105162512.png) - -4.2. Fill in the basic information, e.g. name it `demo-devops`, then click the **Create** button. Initialization takes a while before the console switches to the `demo-devops` page. - -![demo-devops](https://pek3b.qingstor.com/kubesphere-docs/png/20200105162623.png) - -4.3. Similarly, navigate to **Project Management → Project Members**, then click **Invite Member** and grant `project-regular` the role of `maintainer`, which is allowed to create pipelines, credentials, etc. - -![Invite DevOps member](https://pek3b.qingstor.com/kubesphere-docs/png/20200105162710.png) - -Congratulations! You are now familiar with the KubeSphere multi-tenant management mechanism. In the next few tutorials, we will use the account `project-regular` to demonstrate how to create applications and resources under the project and the DevOps project. diff --git a/content/en/docs/quick-start/all-in-one-on-linux.md b/content/en/docs/quick-start/all-in-one-on-linux.md new file mode 100644 index 000000000..4237501c5 --- /dev/null +++ b/content/en/docs/quick-start/all-in-one-on-linux.md @@ -0,0 +1,8 @@ +--- +title: "All-in-one on Linux" +keywords: 'kubesphere, kubernetes, docker, multi-tenant' +description: 'All-in-one on Linux' + +linkTitle: "All-in-one on Linux" +weight: 3010 +--- diff --git a/content/en/docs/quick-start/app-store.md b/content/en/docs/quick-start/app-store.md deleted file mode 100644 index ab1455c0f..000000000 --- a/content/en/docs/quick-start/app-store.md -+++ /dev/null -@@ -1,162 +0,0 @@ ---- -title: "Application Store" -keywords: 'kubesphere, kubernetes, docker, helm, openpitrix, application store' -description: 'Application lifecycle management in the Helm-based application store sponsored by OpenPitrix' - - -linkTitle: "13" -weight: 3130 ---- - -KubeSphere integrates the open-source [OpenPitrix](https://github.com/openpitrix/openpitrix) to set up the App Store and app repository services, which provide full-lifecycle application management. The App Store supports three kinds of application deployment: - -> - The **global application store** provides a one-click deployment service for Helm-based applications, with nine built-in applications for testing. -> - An **application template** provides a way for developers and ISVs to share applications with users in a workspace. It also supports importing third-party application repositories within a workspace. -> - **Composing an application** means users can quickly compose multiple microservices into a complete application through the one-stop console. - -![App Store](https://pek3b.qingstor.com/kubesphere-docs/png/20200212172234.png) - -## Objective - -In this tutorial, we will use [EMQ X](https://www.emqx.io/) as a demo application to walk you through the **global application store** and **application lifecycle management**, including uploading, submitting, reviewing, testing, releasing, upgrading and deleting application templates.
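Since application templates are standard Helm charts, it can help to sanity-check a chart locally before uploading it. A minimal sketch using the Helm CLI (the chart directory name `emqx` is illustrative):

```bash
# Check the chart for structural and templating problems.
helm lint ./emqx

# Package the chart into a .tgz archive, the format the upload dialog expects.
helm package ./emqx
```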
- -## Prerequisites - -- You need to install the [Application Store (OpenPitrix)](../../installation/install-openpitrix). -- You need to create a workspace and a project; see [Get Started with Multi-tenant Management](../admin-quick-start). - -## Hands-on Lab - -### Step 1: Create Customized Role and Account - -In this step, we will create two accounts: `isv` for ISVs and `reviewer` for app technical reviewers. - -1.1. First of all, we need to create a role for app reviewers. Log in to the KubeSphere console with the account `admin`, go to **Platform → Platform Roles**, click **Create** and name the role `app-review`, choose **App Template** in the authorization settings list, then click **Create**. - -![Authorization Settings](https://pek3b.qingstor.com/kubesphere-docs/png/20200305172646.png) - -1.2. Create an account `reviewer` and grant it the role **app-review**. - -1.3. Similarly, create an account `isv` and grant it the role **cluster-regular**. - -![Create Roles](https://pek3b.qingstor.com/kubesphere-docs/png/20200212180757.png) - -1.4. Invite the accounts created above to an existing workspace such as `demo-workspace`, and grant them the role `workspace-admin`. - -### Step 2: Upload and Submit Application - -2.1. Log in to KubeSphere as `isv` and enter the workspace. We are going to upload the EMQ X app to this workspace. First, download [EMQ X chart v1.0.0](https://github.com/kubesphere/tutorial/raw/master/tutorial%205%20-%20app-store/emqx-v1.0.0-rc.1.tgz), then choose **App Templates** and click **Upload Template**. - -> Note: we will upload a newer version of EMQ X later on to demonstrate the upgrade feature. - -![App Templates](https://pek3b.qingstor.com/kubesphere-docs/png/20200212183110.png) - -2.2. Click **Upload**, then click **Upload Helm Chart Package** to upload the chart. - -![Upload Template](https://pek3b.qingstor.com/kubesphere-docs/png/20200212183634.png) - -2.3. Click **OK**. Now download the [EMQ icon](https://github.com/kubesphere/tutorial/raw/master/tutorial%205%20-%20app-store/emqx-logo.png) and click **Upload icon** to upload the app logo. Click **OK** when you are done. - -![EMQ Template](https://pek3b.qingstor.com/kubesphere-docs/png/20200212232222.png) - -2.4. At this point, you will see the status shows `draft`, which means this app is under development. The uploaded app is visible to all members of the same workspace. - -![Template List](https://pek3b.qingstor.com/kubesphere-docs/png/20200212232332.png) - -2.5. Enter the app template's detail page by clicking EMQ X in the list. You can edit the basic information of this app by clicking **Edit Info**. - -![Edit Template](https://pek3b.qingstor.com/kubesphere-docs/png/20200212232811.png) - -2.6. You can customize the app's basic information by filling in the form as shown in the following screenshot. - -![Customize Template](https://pek3b.qingstor.com/kubesphere-docs/png/20200213143953.png) - -2.7. Save your changes; you can then test this application by deploying it to Kubernetes. Click the **Test Deploy** button. - -![Save Template](https://pek3b.qingstor.com/kubesphere-docs/png/20200213152954.png) - -2.8. Select the project you want to deploy to, then click **Deploy**. - -![Deploy Template](https://pek3b.qingstor.com/kubesphere-docs/png/20200213153820.png) - -2.9. Wait for a few minutes, then switch to the tab **Deployed Instances**. You will find the EMQ X app has been deployed successfully.
- -![Template Instance](https://pek3b.qingstor.com/kubesphere-docs/png/20200213161854.png) - -2.10. At this point, you can click **Submit Review** to submit this application to `reviewer`. - -![Submit Template](https://pek3b.qingstor.com/kubesphere-docs/png/20200213162159.png) - -2.11. As shown in the following screenshot, the app status has changed to `Submitted`. Now the app reviewer can review it. - -![Template Status](https://pek3b.qingstor.com/kubesphere-docs/png/20200213162811.png) - -### Step 3: Review Application - -3.1. Log out, then log in to KubeSphere with the `reviewer` account. Navigate to **Platform → App Management → App Review**. - -![Review List](https://pek3b.qingstor.com/kubesphere-docs/png/20200213163535.png) - -3.2. Click the vertical three dots at the end of the app item in the list and select **Review**; you can then review the app's basic information, introduction, chart file and update logs in the pop-up window. - -![EMQ Info](https://pek3b.qingstor.com/kubesphere-docs/png/20200213163802.png) - -3.3. It is the reviewer's responsibility to judge whether the app satisfies the criteria of the global App Store; if it does, click **Pass**, otherwise click **Reject**. - -### Step 4: Release Application to Store - -4.1. Log out and log back in as `isv`. Now `isv` can release the EMQ X application to the global App Store, which means all users on this platform can find and deploy it. - -4.2. Enter the demo workspace and navigate to the EMQ X app in the template list. Enter the detail page, expand the version list, click **Release to Store** and choose **OK** in the pop-up window. - -![Release EMQ](https://pek3b.qingstor.com/kubesphere-docs/png/20200213171324.png) - -4.3. At this point, EMQ X has been released to the App Store. - -![Audit Records](https://pek3b.qingstor.com/kubesphere-docs/png/20200213171705.png) - -4.4. Go to **App Store** in the top menu; you will see the app in the list. - -![EMQ on Store](https://pek3b.qingstor.com/kubesphere-docs/png/20200213172436.png) - -4.5. At this point, users with any role can access the EMQ X application. Click into the application's detail page to review its basic information, and click the **Deploy** button to deploy the application to Kubernetes. - -![Deploy EMQ](https://pek3b.qingstor.com/kubesphere-docs/png/20200213172650.png) - -### Step 5: Create Application Category - -Depending on business needs, the reviewer can create multiple categories for different types of applications. A category is similar to a tag and can be used to filter applications in the App Store, e.g. Big Data, Middleware, IoT, etc. - -For the EMQ X application, we can create a category named `IOT`. First switch back to the user `reviewer` and go to **Platform → App Management → App Categories**. - -![Create Category](https://pek3b.qingstor.com/kubesphere-docs/png/20200213172046.png) - -Then click **Uncategorized**, find EMQ X, change its category to `IOT` and save it. - -> Note: the reviewer should usually create the necessary categories in advance according to the store's requirements. ISVs then categorize their applications appropriately before submitting them for review. - -![Categorize EMQ](https://pek3b.qingstor.com/kubesphere-docs/png/20200213172311.png) - -### Step 6: Add New Version - -6.1. KubeSphere supports adding new versions of existing applications so that users can upgrade quickly. Let's continue with the `isv` account and enter the EMQ X template page in the workspace.
- -![Create New Version](https://pek3b.qingstor.com/kubesphere-docs/png/20200213173325.png) - -6.2. Download [EMQ X v4.0.2](https://github.com/kubesphere/tutorial/raw/master/tutorial%205%20-%20app-store/emqx-v4.0.2.tgz), then click **New Version** on the right and upload the package you just downloaded. - -![Upload New Version](https://pek3b.qingstor.com/kubesphere-docs/png/20200213173744.png) - -6.3. Click **OK** when the upload succeeds. - -![New Version Info](https://pek3b.qingstor.com/kubesphere-docs/png/20200213174026.png) - -6.4. At this point, you can test the new version and submit it to `reviewer`. This process is similar to the one for the first version. - -![Submit New Version](https://pek3b.qingstor.com/kubesphere-docs/png/20200213174256.png) - -6.5. After you submit the new version, the rest of the review and release process is the same as demonstrated for the first version above. - -### Step 7: Upgrade - -After the new version has been released to the App Store, all users can upgrade the application. diff --git a/content/en/docs/quick-start/b2i-war.md b/content/en/docs/quick-start/b2i-war.md deleted file mode 100644 index 3be0a3c14..000000000 --- a/content/en/docs/quick-start/b2i-war.md -+++ /dev/null -@@ -1,129 +0,0 @@ ---- -title: "Binary to Image - Publish Artifacts to Kubernetes" -keywords: "kubesphere, kubernetes, docker, B2I, binary to image, jenkins" -description: "Deploy Artifacts to Kubernetes Using Binary to Image" - -linkTitle: "8" -weight: 3080 ---- - -## What is Binary to Image - -Similar to [Source to Image (S2I)](../source-to-image), Binary to Image (B2I) is a toolkit and workflow for building reproducible container images from binary executables such as JAR, WAR and binary packages. All you need to do is upload your artifact and specify the image repository, such as DockerHub or Harbor, that you want to push to. After you run a B2I process, your image is pushed to the target repository and your application is automatically deployed to Kubernetes as well. - -## How Does B2I Improve CD Efficiency - -As the introduction above shows, B2I bridges your binary executables to cloud-native services without complicated configuration or coding, which is extremely useful for legacy applications and for users who are not familiar with Docker and Kubernetes. Moreover, since the B2I tool spares you from writing a Dockerfile, it both reduces the learning cost and improves publishing efficiency, letting developers focus on the business itself. In short, B2I helps enterprises achieve continuous delivery, one of the keys to digital transformation. - -The following figure describes the step-by-step B2I process. B2I has instrumented and streamlined the steps, so it takes only a few clicks to complete in the KubeSphere console. - -![B2I Process](https://pek3b.qingstor.com/kubesphere-docs/png/20200108144952.png) - -> - ① Create a B2I job in the KubeSphere console and upload the artifact (binary package) -> - ② B2I creates a Kubernetes Job, Deployment and Service based on the uploaded binary -> - ③ The artifact is automatically packaged into a Docker image -> - ④ The image is pushed to DockerHub or Harbor -> - ⑤ The B2I Job pulls the image from the registry for the Deployment created in step ② -> - ⑥ The service is automatically published to Kubernetes -> -> Note: during this process, the B2I Job also reports status in the backend. - -In this document, we will walk you through how to use B2I in KubeSphere.
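If you prefer the command line, the sample artifacts listed in the table below can be downloaded directly; for example, fetching the first WAR:

```bash
# -L follows GitHub's redirect to the raw file; -O keeps the original file name.
curl -LO https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java8.war
```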
For further testing on your own, we provide five artifact packages that you can download from the links in the following table. - -|Artifact Package (Click to download) | GitHub Repository| -| --- | ---- | -| [b2i-war-java8.war](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java8.war)| [Spring-MVC-Showcase](https://github.com/spring-projects/spring-mvc-showcase)| -|[b2i-war-java11.war](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java11.war)| [SpringMVC5](https://github.com/kubesphere/s2i-java-container/tree/master/tomcat/examples/springmvc5) -|[b2i-binary](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-binary)| [DevOps-go-sample](https://github.com/runzexia/devops-go-sample) | -|[b2i-jar-java11.jar](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-jar-java11.jar) |[java-maven-example](https://github.com/kubesphere/s2i-java-container/tree/master/java/examples/maven) | -|[b2i-jar-java8.jar](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-jar-java8.jar) | [devops-java-sample](https://github.com/kubesphere/devops-java-sample) | - -## Prerequisites - -- You have installed the [KubeSphere DevOps System](../../installation/install-devops). -- You have created a workspace, a project and a `project-regular` account; please see [Get Started with Multi-tenant Management](../admin-quick-start). -- Set a dedicated CI node for building images; please refer to [Set CI Node for Dependency Cache](../../devops/devops-ci-node). This is not mandatory, but it is recommended for development and production environments since it caches artifact dependencies. - -## Hands-on Lab - -In this lab, we will learn how to use B2I by creating a service in KubeSphere, automatically completing the six steps described in the workflow figure above. - -### Step 1: Create a Secret - -We need to create a secret since the B2I Job pushes the image to DockerHub. If you have finished the [S2I lab](../source-to-image), the secret already exists. Otherwise, log in to KubeSphere with the account `project-regular`, go to your project and create the secret for DockerHub. Please refer to [Creating Common-used Secrets](../../configuration/secrets#create-common-used-secrets). - -### Step 2: Create a Service - -2.1. Select **Application Workloads → Services**, then click **Create** to create a new service from the artifact. - -![Create Service](https://pek3b.qingstor.com/kubesphere-docs/png/20200108170544.png) - -2.2. Scroll down to **Build a new service through the artifact** and choose **war**. We will use the [Spring-MVC-Showcase](https://github.com/spring-projects/spring-mvc-showcase) project as a sample by uploading the WAR artifact ([b2i-war-java8](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java8.war)) to KubeSphere. - -2.3. Enter the service name `b2i-war-java8` and click **Next**. - -2.4. Refer to the following instructions to fill in **Build Settings**. - -- Upload `b2i-war-java8.war` to KubeSphere. -- Choose `tomcat85-java8-centos7:latest` as the build environment. -- Enter `/` or `/` as the image name. -- Tag the image, for instance, `latest`. -- Select `dockerhub-secret` that we created in the previous step as the target image repository: . - -![Build Settings](https://pek3b.qingstor.com/kubesphere-docs/png/20200108175747.png) - -2.5.
Click **Next** to reach **Container Settings** and configure the basic information as shown in the figure below. - -![Container Settings](https://pek3b.qingstor.com/kubesphere-docs/png/20200108175907.png) - -2.6. Click **Next**, then click **Next** again to skip **Mount Volumes**. - -2.7. Check **Internet Access**, choose **NodePort**, then click **Create**. - -![Internet Access](https://pek3b.qingstor.com/kubesphere-docs/png/20200108180015.png) - -### Step 3: Verify B2I Build Status - -3.1. Choose **Image Builder** and click into **b2i-war-java8-xxx** to inspect the B2I build status. - -![Image Builder](https://pek3b.qingstor.com/kubesphere-docs/png/20200108181100.png) - -3.2. Now verify the status. You can expand the Job records to inspect the rolling logs. Normally, the build completes successfully in 2-4 minutes. - -![Job Records](https://pek3b.qingstor.com/kubesphere-docs/png/20200108181133.png) - -### Step 4: Verify the Resources Created by B2I - -#### Service - -![Service](https://pek3b.qingstor.com/kubesphere-docs/png/20200108182649.png) - -#### Deployment - -![Deployment](https://pek3b.qingstor.com/kubesphere-docs/png/20200108182707.png) - -#### Job - -![Job](https://pek3b.qingstor.com/kubesphere-docs/png/20200108183640.png) - -Alternatively, if you want to inspect those resources from the command line, you can use the web kubectl in the Toolbox at the bottom right of the console. Note that opening the tool requires a cluster admin account. - -![Web Kubectl](https://pek3b.qingstor.com/kubesphere-docs/png/20200108184829.png) - -### Step 5: Access the Service - -Click into the service **b2i-war-java8** to get its NodePort and Endpoints. You can then access the **Spring-MVC-Showcase** service via the Endpoints within the cluster, or browse the web service externally at `http://{$Node IP}:{$NodePort}/{$Binary-Package-Name}/`. - -![Resource Info](https://pek3b.qingstor.com/kubesphere-docs/png/20200108185210.png) - -For the example above, enter **http://139.198.111.111:30182/b2i-war-java8/** to access **Spring-MVC-Showcase**. Make sure traffic can pass through the NodePort. - -![Access the Service](https://pek3b.qingstor.com/kubesphere-docs/png/20200108190256.png) - -### Step 6: Verify Image in DockerHub - -Sign in to DockerHub with your account; you will find the image was successfully pushed to DockerHub with the tag `latest`. - - ![Image in DockerHub](https://pek3b.qingstor.com/kubesphere-docs/png/20200108191311.png) - -Congratulations! Now you know how to use B2I to package your artifacts into a Docker image, without having to learn Docker. diff --git a/content/en/docs/quick-start/bookinfo-canary.md b/content/en/docs/quick-start/bookinfo-canary.md deleted file mode 100644 index 780d8d35a..000000000 --- a/content/en/docs/quick-start/bookinfo-canary.md -+++ /dev/null -@@ -1,155 +0,0 @@ ---- -title: "Managing Canary Release of Microservice App based on Istio" -keywords: 'kubesphere, kubernetes, docker, istio, canary release, jaeger' -description: 'How to manage canary releases of microservices using the Istio platform' - - -linkTitle: "11" -weight: 3110 ---- - -[Istio](https://istio.io/), as an open-source service mesh, provides powerful traffic management that makes canary releases of microservices possible. A **canary release** provides canary rollouts and staged rollouts with percentage-based traffic splits. - -> The following paragraph is from the [Istio](https://istio.io/docs/concepts/traffic-management/) official website.
Istio’s traffic routing rules let you easily control the flow of traffic and API calls between services. Istio simplifies configuration of service-level properties like circuit breakers, timeouts, and retries, and makes it easy to set up important tasks like A/B testing, canary rollouts, and staged rollouts with percentage-based traffic splits. It also provides out-of-box failure recovery features that help make your application more robust against failures of dependent services or the network. - -KubeSphere provides three kinds of grayscale strategies based on Istio: blue-green deployment, canary release and traffic mirroring. - -## Objective - -In this tutorial, we are going to deploy a Bookinfo sample application composed of four separate microservices to demonstrate canary release, tracing and traffic monitoring with Istio on KubeSphere. - -## Prerequisites - -- You need to [Enable Service Mesh System](../../installation/install-servicemesh). -- You need to complete all steps in [Getting Started with Multi-tenant Management](../admin-quick-start.md). -- Log in with `project-admin` and go to your project, then navigate to **Project Settings → Advanced Settings → Set Gateway** and turn on **Application Governance**. - -### What is the Bookinfo Application - -The Bookinfo application is composed of four distributed microservices, as shown below. There are three versions of the reviews microservice. - -- The **productpage** microservice calls the details and reviews microservices to populate the page. -- The **details** microservice contains book information. -- The **reviews** microservice contains book reviews. It also calls the ratings microservice. -- The **ratings** microservice contains book ranking information that accompanies a book review. - -The end-to-end architecture of the application is shown below; see [Bookinfo Application](https://istio.io/docs/examples/bookinfo/) for more details. - -![Bookinfo Application](https://pek3b.qingstor.com/kubesphere-docs/png/20190718152533.png) - -## Hands-on Lab - -### Step 1: Deploy Bookinfo Application - -1.1. Log in with the account `project-regular`, enter the **demo-project**, navigate to **Application Workloads → Applications** and click **Deploy Sample Application**. - -![Application List](https://pek3b.qingstor.com/kubesphere-docs/png/20200210234559.png) - -1.2. Click **Create** in the pop-up window; the Bookinfo application will be deployed automatically, and the application components, routes and hostname are listed on the following page. - -![Create Bookinfo Application](https://pek3b.qingstor.com/kubesphere-docs/png/20200210235159.png) - -1.3. Now you can access the Bookinfo homepage via the **Click to visit** button, as the following screenshot shows. Click **Normal user** to enter the summary page. - -![Product Page](https://pek3b.qingstor.com/kubesphere-docs/png/20190718161448.png) - -> Note: you need to make the URL above accessible from your computer. - -1.4. Notice that at this point it only shows **- Reviewer1** and **- Reviewer2** without any stars in the Book Reviews section. This is the initial state for this step.
- -![Review Page](https://pek3b.qingstor.com/kubesphere-docs/png/20190718161819.png) - -### Step 2: Create Canary Release for Reviews Service - -2.1. Back in the KubeSphere console, choose **Grayscale Release** and click **Create Canary Release Job**. Then select **Canary Release** and click **Create Job**. - -![Grayscale Release List](https://pek3b.qingstor.com/kubesphere-docs/png/20190718162152.png) - -![Create Grayscale release](https://pek3b.qingstor.com/kubesphere-docs/png/20190718162308.png) - -2.2. Fill in the basic information, e.g. name it `canary-release`, click **Next** and select **reviews** as the canary service, then click **Next**. - -![Reviews New Version](https://pek3b.qingstor.com/kubesphere-docs/png/20190718162550.png) - -2.3. Enter `v2` as the **Grayscale Release Version Number** and fill in the new image box with `kubesphere/examples-bookinfo-reviews-v2:1.13.0`. You can simply change the version in the box's default value from `v1` to `v2`. Then click **Next**. - -![Reviews New Version Info](https://pek3b.qingstor.com/kubesphere-docs/png/20190718162840.png) - -2.4. The canary release supports **Forward by traffic ratio** and **Forward by request content**. In this tutorial we adjust the traffic ratio to manage traffic routing between v1 and v2. Drag the slider so that v1 takes about 30% of the traffic and v2 takes about 70%. - -![Policy Config](https://pek3b.qingstor.com/kubesphere-docs/png/20190718163639.png) - -2.5. Click **Create** when you have completed the configuration; you will then see that `canary-release` has been created successfully. - -![Canary Release](https://pek3b.qingstor.com/kubesphere-docs/png/20190718164216.png) - -### Step 3: Verify the Canary Release - -When you visit the Bookinfo website again and refresh your browser repeatedly, you will see the Bookinfo reviews section switch between v1 and v2 at a random rate of about 30% and 70%, respectively. - -![Verify Canary Release](https://pek3b.qingstor.com/kubesphere-docs/png/bookinfo-canary.gif) - -### Step 4: Inspect the Traffic Topology Graph - -4.1. Connect via your SSH client and use the following command to generate real traffic, simulating access to the Bookinfo application every 0.5 seconds. - -```bash -$ curl http://productpage.demo-project.192.168.0.88.nip.io:32565/productpage?u=normal -```
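The single request above can be wrapped in a small loop to produce the steady traffic (one request every 0.5 seconds) this step describes; a minimal sketch using the same URL:

```bash
# Send a request twice per second; -s -o /dev/null discards the page output.
while true; do
  curl -s -o /dev/null "http://productpage.demo-project.192.168.0.88.nip.io:32565/productpage?u=normal"
  sleep 0.5
done
```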
-4.2. From the traffic topology graph, you can easily see the service invocations and dependencies, as well as the health and performance of the different microservices. - -![Inject Traffic](https://pek3b.qingstor.com/kubesphere-docs/png/20190718170256.png) - -4.3. Click the reviews card; the traffic monitoring graph appears, including real-time data on **Success rate**, **Traffic** and **Duration**. - -![Traffic Graph](https://pek3b.qingstor.com/kubesphere-docs/png/20190718170727.png) - -### Step 5: Inspect the Tracing Details - -KubeSphere provides a distributed tracing feature based on [Jaeger](https://www.jaegertracing.io/), which is used for monitoring and troubleshooting microservices-based distributed applications. - -5.1. Choose the **Tracing** tab. You can clearly see all phases and internal calls of a request, as well as the time spent in each phase. - -![Tracing](https://pek3b.qingstor.com/kubesphere-docs/png/20190718171052.png) - -5.2. Click any item to drill down into the request details and see which machine or container processed the request. - -![Request Details](https://pek3b.qingstor.com/kubesphere-docs/png/20190718173117.png) - -### Step 6: Take Over All Traffic - -6.1. As mentioned previously, when the canary version v2 is released, a portion of the traffic can be sent to it. Publishers can test the new version online and collect user feedback. - -Switch to the **Grayscale Release** tab and click into **canary-release**. - -![Canary Release List](https://pek3b.qingstor.com/kubesphere-docs/png/20190718181326.png) - -6.2. Click **···** at **reviews v2** and select **Take Over**. Then 100% of the traffic will be sent to the new version v2. - -> Note: if anything goes wrong along the way, we can abort and roll back to the previous version v1 in no time. - -![Adjust Traffic](https://pek3b.qingstor.com/kubesphere-docs/png/20190718181413.png) - -6.3. Open the Bookinfo page again and refresh the browser several times. You will find it only shows the result of reviews v2, i.e. ratings with black stars. - -![New Traffic Result](https://pek3b.qingstor.com/kubesphere-docs/png/20190718235627.png) - -### Step 7: Take Down the Old Version - -Once the new version v2 has been released online, has taken over all the traffic successfully, and the testing results and online user feedback are confirmed to be correct, you can take down the old version and remove the resources of v1. - -Click the **Job Offline** button to take down the old version. - -![Take Down Old Version](https://pek3b.qingstor.com/kubesphere-docs/png/20190719001803.png) - -> Notice: if you take down a specific version of the component, the associated workloads and Istio-related configuration resources are removed simultaneously. As a result, v1 is replaced by v2.
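For background, the traffic splitting and takeover above are expressed as Istio routing rules. The sketch below shows the general shape of a weight-based VirtualService; it is illustrative only, as the exact resources KubeSphere generates may differ, and the host and subset names are assumptions:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews                # illustrative name
spec:
  hosts:
    - reviews                  # the in-mesh service being routed
  http:
    - route:
        - destination:
            host: reviews
            subset: v1         # subsets map to version labels via a DestinationRule
          weight: 30
        - destination:
            host: reviews
            subset: v2
          weight: 70           # "Take Over" corresponds to weight: 100 on v2
```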
- -![Canary Release Result](https://pek3b.qingstor.com/kubesphere-docs/png/20190719001945.png) diff --git a/content/en/docs/quick-start/composing-an-app.md b/content/en/docs/quick-start/composing-an-app.md new file mode 100644 index 000000000..d7705622f --- /dev/null +++ b/content/en/docs/quick-start/composing-an-app.md @@ -0,0 +1,8 @@ +--- +title: "Compose and Deploy a WordPress App" +keywords: 'kubesphere, kubernetes, docker, multi-tenant' +description: 'Compose and deploy a WordPress App' + +linkTitle: "Compose and Deploy a WordPress App" +weight: 3050 +--- diff --git a/content/en/docs/quick-start/create-workspace-and-project.md b/content/en/docs/quick-start/create-workspace-and-project.md new file mode 100644 index 000000000..954f8648d --- /dev/null +++ b/content/en/docs/quick-start/create-workspace-and-project.md @@ -0,0 +1,8 @@ +--- +title: "Create Workspace, Project, Account, Role" +keywords: 'kubesphere, kubernetes, docker, multi-tenant' +description: 'Create Workspace, Project, Account, and Role' + +linkTitle: "Create Workspace, Project, Account, Role" +weight: 3030 +--- diff --git a/content/en/docs/quick-start/deploy-bookinfo-to-k8s.md b/content/en/docs/quick-start/deploy-bookinfo-to-k8s.md new file mode 100644 index 000000000..032dac164 --- /dev/null +++ b/content/en/docs/quick-start/deploy-bookinfo-to-k8s.md @@ -0,0 +1,8 @@ +--- +title: "Deploy a Bookinfo App" +keywords: 'kubesphere, kubernetes, docker, multi-tenant' +description: 'Deploy a Bookinfo App' + +linkTitle: "Deploy a Bookinfo App" +weight: 3040 +--- diff --git a/content/en/docs/quick-start/devops-online.md b/content/en/docs/quick-start/devops-online.md deleted file mode 100644 index 9c2f992cd..000000000 --- a/content/en/docs/quick-start/devops-online.md -+++ /dev/null -@@ -1,289 +0,0 @@ ---- -title: "Create a Jenkinsfile-based Pipeline for Spring Boot Project" -keywords: 'kubesphere, kubernetes, docker, spring boot, jenkins, devops, ci/cd, pipeline' -description: 'Create a Jenkinsfile-based pipeline to deploy a Spring Boot project to Kubernetes' - -linkTitle: "9" -weight: 3090 ---- - -## Objective - -In this tutorial, we will show you how to create a pipeline based on the Jenkinsfile from a GitHub repository. Using the pipeline, we will deploy a demo application to a development environment and a production environment respectively. Meanwhile, we will demo a branch used to test the dependency caching capability: the pipeline takes a relatively long time to finish on its first run but runs much faster afterwards, which proves the cache works well, since this branch initially pulls a lot of dependencies from the internet. - -> Note: -> KubeSphere supports two kinds of pipelines, i.e. Jenkinsfile in SCM, which is introduced in this document, and [Jenkinsfile out of SCM](../jenkinsfile-out-of-scm). Jenkinsfile in SCM requires a Jenkinsfile inside Source Control Management (SCM); in other words, the Jenkinsfile is part of the SCM. The KubeSphere DevOps system automatically builds a CI/CD pipeline from the existing Jenkinsfile of the code repository. You can define workflow elements such as Stage, Step and Job in the pipeline. - -## Prerequisites - -- You need to [enable KubeSphere DevOps System](../../installation/install-devops). -- You need to have a DockerHub account and a GitHub account.
-- You need to create a workspace, a DevOps project, and a **project-regular** user account, and this account needs to be invited into the DevOps project; see [Get Started with Multi-tenant Management](../admin-quick-start). -- Set a dedicated CI node for running the pipeline; please refer to [Set CI Node for Dependency Cache](../../devops/devops-ci-node). - -## Pipeline Overview - -The pipeline we are going to demonstrate consists of eight stages, as shown below. - -![Pipeline Overview](https://pek3b.qingstor.com/kubesphere-docs/png/20190512155453.png) - -> Note: - -> - **Stage 1. Checkout SCM**: check out the source code from the GitHub repository. -> - **Stage 2. Unit test**: the pipeline continues to the next stage only after the unit tests pass. -> - **Stage 3. SonarQube analysis**: perform SonarQube code quality analysis. -> - **Stage 4. Build & push snapshot image**: build the image based on the branches selected in the behavioral strategy, and push the tag `SNAPSHOT-$BRANCH_NAME-$BUILD_NUMBER` to DockerHub, where `$BUILD_NUMBER` is the run's serial number in the pipeline's activity list. -> - **Stage 5. Push the latest image**: tag the master branch as latest and push it to DockerHub. -> - **Stage 6. Deploy to dev**: deploy the master branch to the dev environment. This stage requires manual verification. -> - **Stage 7. Push with tag**: generate a tag and release it to GitHub, then push the tag to DockerHub. -> - **Stage 8. Deploy to production**: deploy the released tag to the production environment. - -## Hands-on Lab - -### Step 1: Create Credentials - -> Note: if there are special characters in your account or password, please URL-encode them using https://www.urlencoder.org/, then paste the encoded result into the credentials below. - -1.1. Log in to KubeSphere with the account `project-regular`, enter the created DevOps project and create the following three credentials under **Project Management → Credentials**: - -|Credential ID| Type | Where to use | -| --- | --- | --- | -| dockerhub-id | Account Credentials | DockerHub | -| github-id | Account Credentials | GitHub | -| demo-kubeconfig | kubeconfig | Kubernetes | - -1.2. We need to create an additional credential `sonar-token` for the SonarQube token, which is used in stage 3 (SonarQube analysis) mentioned above. Refer to [Access SonarQube Console and Create Token](../../installation/install-sonarqube), copy the token and paste it here, then press the **OK** button. - -![sonar-token](https://pek3b.qingstor.com/kubesphere-docs/png/20200226171101.png) - -In total, we have created four credentials in this step. - -![Credentials](https://pek3b.qingstor.com/kubesphere-docs/png/20200107105153.png) - -### Step 2: Modify Jenkinsfile in Repository - -#### Fork Project - -Log in to GitHub and fork the [devops-java-sample](https://github.com/kubesphere/devops-java-sample) repository to your own GitHub account. - -![Fork Sample](https://pek3b.qingstor.com/kubesphere-docs/png/fork-repo.png) - -#### Edit Jenkinsfile - -2.1. After forking the repository to your own GitHub account, open the file **Jenkinsfile-online** in the root directory. - -![Open File](https://kubesphere-docs.pek3b.qingstor.com/png/jenkinsonline.png) - -2.2.
2.2. Click the edit icon in the GitHub UI to edit the values of the environment variables.

![Jenkinsfile](https://kubesphere-docs.pek3b.qingstor.com/png/env.png#align=left&display=inline&height=1538&originHeight=1538&originWidth=1956&search=&status=done&width=1956)

| Editing Items | Value | Description |
| :--- | :--- | :--- |
| DOCKER\_CREDENTIAL\_ID | dockerhub-id | Fill in DockerHub's credential ID to log in to your DockerHub. |
| GITHUB\_CREDENTIAL\_ID | github-id | Fill in the GitHub credential ID to push the tag to the GitHub repository. |
| KUBECONFIG\_CREDENTIAL\_ID | demo-kubeconfig | The kubeconfig credential ID is used to access the running Kubernetes cluster. |
| REGISTRY | docker.io | Defaults to `docker.io`, the registry used for pushing images. |
| DOCKERHUB\_NAMESPACE | your-dockerhub-account | Replace it with your DockerHub account name. (It can also be the Organization name under the account.) |
| GITHUB\_ACCOUNT | your-github-account | Replace it with your GitHub account name. For example, fill in `kubesphere` for `https://github.com/kubesphere/`. It can also be the account's Organization name. |
| APP\_NAME | devops-java-sample | Application name |
| SONAR\_CREDENTIAL\_ID | sonar-token | Fill in the SonarQube token credential ID for the code quality test. |

**Note: The parameter `-o` of the `mvn` command in the Jenkinsfile indicates that offline mode is on. This tutorial has downloaded the relevant dependencies in advance to save time and to adapt to network interference in certain environments. Offline mode is on by default.**

2.3. After editing the environment variables, click **Commit changes** at the top of the GitHub page to submit the updates to the master branch.

### Step 3: Create Projects

In this step, we will create two projects, i.e. `kubesphere-sample-dev` and `kubesphere-sample-prod`, which serve as the development environment and the production environment respectively.

#### Create the First Project

> Tip: The account `project-admin` should be created in advance since it is used as the reviewer of the CI/CD pipeline.

3.1. Use the account `project-admin` to log in to KubeSphere. Click the **Create** button, then choose **Create a resource project**. Fill in the basic information for the project and click **Next** when complete.

- Name: `kubesphere-sample-dev`.
- Alias: `development environment`.

3.2. Leave the default values in Advanced Settings. Click **Create**.

3.3. Now invite the `project-regular` user into `kubesphere-sample-dev`. Choose **Project Settings → Project Members**. Click **Invite Member** to invite `project-regular` and grant this account the role of `operator`.

#### Create the Second Project

Similarly, create a project named `kubesphere-sample-prod` following the steps above. This project is the production environment. Then invite `project-regular` into `kubesphere-sample-prod` and grant it the role of `operator` as well.

> Note: When the CI/CD pipeline succeeds, you will see the demo application's Deployment and Service have been deployed to `kubesphere-sample-dev` and `kubesphere-sample-prod` respectively.

![Project List](https://pek3b.qingstor.com/kubesphere-docs/png/20200107142252.png)

### Step 4: Create a Pipeline

#### Fill in Basic Information

4.1. Switch the login user to `project-regular`. Enter the DevOps project `demo-devops` and click **Create** to build a new pipeline.

![Pipeline List](https://pek3b.qingstor.com/kubesphere-docs/png/20200107142659.png)
4.2. Fill in the pipeline's basic information in the pop-up window, name it `jenkinsfile-in-scm`, and click **Code Repository**.

![New Pipeline](https://pek3b.qingstor.com/kubesphere-docs/png/20200107143247.png)

#### Add Repository

4.3. Click **Get Token** to generate a new GitHub token if you do not have one, then paste the token into the edit box.

![Get Token](https://pek3b.qingstor.com/kubesphere-docs/png/20200107143539.png)

![GitHub Token](https://pek3b.qingstor.com/kubesphere-docs/png/20200107143648.png)

4.4. Click **Confirm** and choose your account. All the code repositories related to this token will be listed on the right. Select **devops-java-sample**, click **Select this repo**, then click **Next**.

![Select Repo](https://pek3b.qingstor.com/kubesphere-docs/png/20200107143818.png)

#### Advanced Settings

Now we are on the advanced settings page.

4.5. In the behavioral strategy, the KubeSphere pipeline sets three strategies by default. Since this demo does not apply the strategy **Discover PR from Forks**, that strategy can be deleted.

![Remove Behavioral Strategy](https://pek3b.qingstor.com/kubesphere-docs/png/20200107144107.png)

4.6. The script path is **Jenkinsfile** by default. Please change it to `Jenkinsfile-online`, which is the file name of the Jenkinsfile located in the root directory of the repository.

> Note: The script path is the path of the Jenkinsfile in the code repository, relative to the repository's root directory. If the file location changes, the script path should be changed accordingly.

![Change Jenkinsfile Path](https://pek3b.qingstor.com/kubesphere-docs/png/20200107145113.png)

4.7. **Scan Repo Trigger** can be customized according to the team's development preference. We set it to `5 minutes`. Click **Create** when you complete the advanced settings.

![Advanced Settings](https://pek3b.qingstor.com/kubesphere-docs/png/20200107145528.png)

#### Run the Pipeline

Refresh the browser manually, or click `Scan Repository`; you will then find two activities triggered. You can also trigger them manually as described below.

4.8. Click **Run** on the right. According to the **Behavioral Strategy**, it will load the branches that have a Jenkinsfile. Just keep the default branch `master`. Since there is no default value in the Jenkinsfile, enter a tag number such as `v0.0.1` in **TAG_NAME**. Click **OK** to trigger a new activity.

> Note: TAG\_NAME is used to generate a release and tagged images in GitHub and DockerHub. Please note that `TAG_NAME` should not duplicate an existing `tag` name in the code repository; otherwise the pipeline cannot run.

![Run Pipeline](https://pek3b.qingstor.com/kubesphere-docs/png/20200107230822.png)

At this point, the pipeline for the master branch is running.

> Note: Click **Branch** to switch to the branch list and review which branches are running. The branches here are determined by the **Behavioral Strategy**.

![Tag Name](https://pek3b.qingstor.com/kubesphere-docs/png/20200107232100.png)

#### Review Pipeline

When the pipeline reaches an `input` step, it will pause, and you need to click **Continue** manually. Please note that three review points are defined in Jenkinsfile-online. Therefore, the pipeline will be reviewed three times, in the stages `deploy to dev`, `push with tag` and `deploy to production`.
![](https://pek3b.qingstor.com/kubesphere-docs/png/20200108001020.png)

> Note: In a real development or production scenario, someone with higher authority (e.g. a release manager) is required to review the pipeline and the image, as well as the code analysis result; they have the authority to decide whether to approve the push and deployment. In the Jenkinsfile, the `input` step allows you to specify who reviews the pipeline. If you want to specify a user `project-admin` as the reviewer, you can add a field in the Jenkinsfile. If there are multiple users, separate them with commas as follows:

```groovy
···
input(id: 'release-image-with-tag', message: 'release image with tag?', submitter: 'project-admin,project-admin1')
···
```

### Step 5: Check Pipeline Status

5.1. Click into **Activity → master → Task Status** to see the pipeline's running status. Please note that the pipeline will keep initializing for several minutes right after creation. There are eight stages in the sample pipeline, and they are defined individually in [Jenkinsfile-online](https://github.com/kubesphere/devops-java-sample/blob/master/Jenkinsfile-online).

![Pipeline stages](https://pek3b.qingstor.com/kubesphere-docs/png/20200108002652.png)

5.2. Check the pipeline's running logs by clicking **Show Logs** at the top right corner. The page shows dynamic log output, operating status, time, etc.

For each step, click the specific stage on the left to inspect its logs. The logs can be downloaded locally for further analysis.

![Pipeline Logs](https://pek3b.qingstor.com/kubesphere-docs/png/20200108003016.png)

### Step 6: Verify Pipeline Running Results

6.1. Once you have successfully executed the pipeline, click `Code Quality` to check the results from SonarQube as follows (for reference only).

![SQ Results](https://pek3b.qingstor.com/kubesphere-docs/png/20200108003257.png)

6.2. The Docker image built by the pipeline has been successfully pushed to DockerHub, since we defined the `push to DockerHub` stage in Jenkinsfile-online. In DockerHub you will find the image with the tag v0.0.1 that we configured before running the pipeline; you will also find that the images with the tags `SNAPSHOT-master-6` (SNAPSHOT-branch-serial number) and `latest` have been pushed to DockerHub.

![DockerHub Images](https://pek3b.qingstor.com/kubesphere-docs/png/20200108134653.png)

At the same time, a new tag and a new release have been generated in GitHub.

![GitHub Release](https://pek3b.qingstor.com/kubesphere-docs/png/20200108133933.png)

The sample application will be deployed to `kubesphere-sample-dev` and `kubesphere-sample-prod` as a deployment and a service.

| Environment | URL | Namespace | Deployment | Service |
| :--- | :--- | :--- | :--- | :--- |
| Dev | `http://{$NodeIP}:30861` | kubesphere-sample-dev | ks-sample-dev | ks-sample-dev |
| Production | `http://{$NodeIP}:30961` | kubesphere-sample-prod | ks-sample | ks-sample |

6.3. Enter these two projects and you will find that the application's resources have been deployed to Kubernetes successfully. For example, let's verify the Deployments and Services under the project `kubesphere-sample-dev`:

#### Deployments

![Deployments](https://pek3b.qingstor.com/kubesphere-docs/png/20200108135508.png)

#### Services

![Services](https://pek3b.qingstor.com/kubesphere-docs/png/20200108135541.png)

### Step 7: Visit Sample Service

7.1. You can switch to the `admin` account to open **web kubectl** from the **Toolbox**.
Enter the project `kubesphere-sample-dev`, select **Application Workloads → Services** and click into the `ks-sample-dev` service.

![Web Kubectl](https://pek3b.qingstor.com/kubesphere-docs/png/20200108140233.png)

7.2. Open **web kubectl** from the **Toolbox** and try to access the service as follows:

> Note: curl Endpoints or {$Virtual IP}:{$Port} or {$Node IP}:{$NodePort}

```bash
$ curl 10.233.90.9:8080
Really appreciate your star, that's the power of our life.
```

7.3. Similarly, you can test the service in the project `kubesphere-sample-prod`:

> Note: curl Endpoints or {$Virtual IP}:{$Port} or {$Node IP}:{$NodePort}

```bash
$ curl 10.233.90.17:8080
Really appreciate your star, that's the power of our life.
```

Congratulations! You are now familiar with the KubeSphere DevOps pipeline, and you can continue to learn how to build a CI/CD pipeline with a graphical panel and visualize your workflow in the next tutorial.

diff --git a/content/en/docs/quick-start/enable-pluggable-compoents.md b/content/en/docs/quick-start/enable-pluggable-compoents.md
new file mode 100644
index 000000000..390d6dd9e
--- /dev/null
+++ b/content/en/docs/quick-start/enable-pluggable-compoents.md
@@ -0,0 +1,8 @@
---
title: "Enable Pluggable Components"
keywords: 'kubesphere, kubernetes, docker, multi-tenant'
description: 'Enable Pluggable Components'

linkTitle: "Enable Pluggable Components"
weight: 3060
---

diff --git a/content/en/docs/quick-start/hpa.md b/content/en/docs/quick-start/hpa.md
deleted file mode 100644
index a299c9184..000000000
--- a/content/en/docs/quick-start/hpa.md
+++ /dev/null
@@ -1,165 +0,0 @@
---
title: "Create Horizontal Pod Autoscaler for Deployment"
keywords: 'kubesphere, kubernetes, docker, HPA, Horizontal Pod Autoscaler'
description: 'How to scale deployment replicas using horizontal Pod autoscaler'

linkTitle: "6"
weight: 3060
---

The Horizontal Pod Autoscaler (HPA) automatically scales the number of pods in a deployment based on observed CPU utilization or memory usage. The controller periodically adjusts the number of replicas in a deployment to match the observed average CPU utilization or memory usage to the target value specified by the user.

## How does the HPA work

The Horizontal Pod Autoscaler is implemented as a control loop with a default period of 30 seconds, controlled by the controller manager's HPA sync-period flag. For per-pod resource metrics like CPU, the controller fetches the metrics from the resource metrics API for each pod targeted by the Horizontal Pod Autoscaler. See [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) for more details.

![HPA Arch](https://pek3b.qingstor.com/kubesphere-docs/png/20190716214909.png#alt=)

## Objective

This document walks you through an example of configuring the Horizontal Pod Autoscaler for the hpa-example deployment. In addition, we will create a deployment that sends an infinite loop of queries to the hpa-example application, demonstrating its autoscaling function and the HPA principle.

## Estimated Time

About 25 minutes

## Prerequisites

- You need to [enable HPA](../../installation/install-metrics-server).
- You need to create a workspace, a project, and a `project-regular` user account, and this account needs to be invited into the project with the role `operator`. Please refer to [Get started with multi-tenant management](../admin-quick-start).

## Hands-on Lab

### Step 1: Create Stateless Service

1.1. Log in with the `project-regular` account.
Enter **demo-project**, then select **Application Workloads → Services**.

![Service List](https://pek3b.qingstor.com/kubesphere-docs/png/20200221075410.png)

1.2. Click **Create Service** and choose **Stateless service**, name it `hpa`, then click **Next**.

![Create Service](https://pek3b.qingstor.com/kubesphere-docs/png/20200221075509.png)

1.3. Click **Add Container Image**, then input `mirrorgooglecontainers/hpa-example` and press the Enter key. It will automatically search for and load the image information; choose `Use Default Ports`.

![Add Container](https://pek3b.qingstor.com/kubesphere-docs/png/20200221075857.png)

1.4. Click `√` to save it, then click **Next**. Skip **Mount Volumes** and **Advanced Settings**, and click **Create**. At this point, the stateless service `hpa` has been created successfully.

> Note: At the same time, the corresponding Deployment and Service have been created in KubeSphere.

![HPA Service](https://pek3b.qingstor.com/kubesphere-docs/png/20200221080648.png)

### Step 2: Configure HPA

2.1. Choose **Workloads → Deployments**. Enter `hpa` to view its detailed page.

![Deployment List](https://pek3b.qingstor.com/kubesphere-docs/png/20200221081356.png)

2.2. Choose **More → Horizontal Pod Autoscaler**.

![HPA Menu](https://pek3b.qingstor.com/kubesphere-docs/png/20200221081517.png)

2.3. Set some sample values for the HPA configuration as follows, then click **OK** to finish the configuration.

- CPU Request Target (%): `50` (represents the target percentage of CPU utilization)
- Min Replicas Number: `1`
- Max Replicas Number: `10`

> Note: After setting HPA for the Deployment, a `Horizontal Pod Autoscaler` will be created in Kubernetes for autoscaling.

![HPA Settings](https://pek3b.qingstor.com/kubesphere-docs/png/20200221083958.png)

### Step 3: Create Load-generator

3.1. In the current project, navigate to **Workloads → Deployments**. Click **Create**, fill in the basic information in the pop-up window, name it `load-generator`, and click **Next**.

3.2. Click **Add Container Image**, enter `busybox` into the Image edit box, and press the Enter key.

3.3. Scroll down to **Start command**. Add the command and parameters as follows. They are used to request the `hpa` service and create CPU load.

#### Run command

```bash
sh,-c
```

#### Parameters

> Note: The http address follows the pattern http://{$service-name}.{$project-name}.svc.cluster.local. You need to replace the following http address with the actual names of your service and project.

```bash
while true; do wget -q -O- http://hpa.demo-project.svc.cluster.local; done
```

![Load Generator configuration](https://pek3b.qingstor.com/kubesphere-docs/png/20200221090034.png)

3.4. Click the `√` button when you are done, then click **Next**. We do not use volumes in this demo, so click **Next → Create** to complete the creation.

So far, we have created two deployments, i.e. `hpa` and `load-generator`, and one service, i.e. `hpa`.

![Deployments](https://pek3b.qingstor.com/kubesphere-docs/png/20190716222833.png#alt=)

### Step 4: Verify HPA

#### View Deployment Status

Choose **Workloads → Deployments** and enter the deployment `hpa` to view its detailed page. Pay attention to the replicas, Pod status and CPU utilization, as well as the Pods monitoring graphs.
![Deployment Status](https://pek3b.qingstor.com/kubesphere-docs/png/20200221091126.png)

#### View HPA Status

When the `load-generator` Pod is working, it continuously requests the `hpa` service. As shown in the following screenshot, the CPU utilization increases significantly after refreshing the page. In this example it rises to `1012%`, and the desired and current replicas rise to `10/10`.

![HPA Status](https://pek3b.qingstor.com/kubesphere-docs/png/20200221091504.png)

After around two minutes, the CPU utilization decreases to `509%`, which demonstrates the principle of HPA.

![HPA Changed Status](https://pek3b.qingstor.com/kubesphere-docs/png/20200221092228.png)

### Step 5: Verify Monitoring

5.1. Scroll down to the Pods list and pay attention to the first Pod that we created. Generally, we can see that the CPU usage of the Pod shows a significant upward trend in the monitoring graph. When HPA starts working, the CPU usage shows an obvious downward trend and finally levels off.

![HPA Monitoring](https://pek3b.qingstor.com/kubesphere-docs/png/20200221093302.png)

#### View workloads monitoring

5.2. Switch to the **Monitoring** tab and select `Last 30 minutes` in the filter.

![Detailed Monitoring](https://pek3b.qingstor.com/kubesphere-docs/png/20200221092927.png)

#### View all replicas monitoring

5.3. Click **View all replicas** on the right of the monitoring graph to inspect the monitoring graphs of all replicas.

![Replicas Monitoring](https://pek3b.qingstor.com/kubesphere-docs/png/20200221093939.png)

### Step 6: Stop Load Generation

6.1. Go back to **Workloads → Deployments** and delete `load-generator` to stop generating load.

6.2. Inspect the status of `hpa` again. You will find that its current CPU utilization slowly drops to 10% **in a few minutes**. Eventually the HPA reduces the deployment replicas to one, which is the initial value. The trend of the monitoring curve can also help us understand the working principle of HPA.

![Stop Load Generator](https://pek3b.qingstor.com/kubesphere-docs/png/20200221095630.png)

6.3. Now, drill into the **Pod** detailed page from the Pod list, inspect the monitoring graph and review the CPU utilization and network inbound/outbound trends. We can find that the trends match this HPA example.

![HPA Result](https://pek3b.qingstor.com/kubesphere-docs/png/20200221094853.png)

6.4. Then drill into the container of this Pod; we can find that it has the same trend as the Pod.

![Pod Monitoring](https://pek3b.qingstor.com/kubesphere-docs/png/20200221095007.png)

## Modify HPA Settings

If you need to modify the settings of the HPA, go to the deployment's detailed page, click **More → Horizontal Pod Autoscaler**, and edit the settings in the pop-up window as needed.

## Cancel HPA

If you do not need HPA for the deployment, you can click **··· → Cancel**.

![Cancel HPA](https://pek3b.qingstor.com/kubesphere-docs/png/20200221095420.png)

Congratulations! You are now familiar with setting HPA for a deployment through the KubeSphere console.
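For reference, the console operations above can also be approximated from the command line. The following is a minimal sketch, assuming the deployment `hpa` in the namespace `demo-project` and the target values set in Step 2:

```bash
# A sketch of the equivalent HPA created with kubectl (assumes the deployment
# `hpa` in namespace `demo-project` and the values from Step 2 of this tutorial).
kubectl -n demo-project autoscale deployment hpa --cpu-percent=50 --min=1 --max=10

# Watch the replica count and CPU utilization change while load-generator runs.
kubectl -n demo-project get hpa hpa --watch
```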
diff --git a/content/en/docs/quick-start/ingress-canary.md b/content/en/docs/quick-start/ingress-canary.md
deleted file mode 100644
index 54afcc8f2..000000000
--- a/content/en/docs/quick-start/ingress-canary.md
+++ /dev/null
@@ -1,312 +0,0 @@
---
title: "Canary Release based on Ingress-Nginx"
keywords: "nginx, kubernetes, kubesphere, istio, canary release"
description: "Canary release on Kubernetes based on Ingress-Nginx"

linkTitle: "12"
weight: 3120
---

As we demonstrated in [Managing Canary Release of Microservice App based on Istio](../bookinfo-canary), you can use KubeSphere to implement grayscale release in your project based on Istio. However, many users are not using Istio, and most projects from these users are fairly simple, so we need to provide a lightweight solution for this case.

[Ingress-Nginx](https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.21.0) introduces a new "Canary" feature that can be used for load balancing at the gateway. The canary annotations enable an Ingress spec to act as an alternative service that requests are routed to depending on the applied rules, and to control the traffic split. The KubeSphere built-in gateway of each project supports the "Canary" feature of [Ingress-Nginx](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary).

We have elaborated on grayscale release scenarios in the Istio bookinfo guide. In this document we are going to demonstrate how to use the KubeSphere route and gateway, namely, Ingress and Ingress-Controller, to implement grayscale release.

> Note: The demo YAML files have been uploaded to [GitHub](https://github.com/kubesphere/tutorial).

## Ingress-Nginx Annotation

Based on the [Nginx Ingress Controller](https://github.com/kubernetes/ingress-nginx/#nginx-ingress-controller), KubeSphere implements a gateway in each project, namely, Kubernetes namespace, serving as the traffic entry and a reverse proxy of that project. The Nginx annotations support the following rules after `nginx.ingress.kubernetes.io/canary: "true"` is set. Please refer to [Nginx Annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary) for further explanation.

- `nginx.ingress.kubernetes.io/canary-by-header`
- `nginx.ingress.kubernetes.io/canary-by-header-value`
- `nginx.ingress.kubernetes.io/canary-weight`
- `nginx.ingress.kubernetes.io/canary-by-cookie`

> Note: Canary rules are evaluated in order of precedence. Precedence is as follows:
> `canary-by-header` > `canary-by-cookie` > `canary-weight`.

The four annotation rules above can generally be divided into the following two categories:

- Canary rules based on weight

![Weight-Based Canary](https://pek3b.qingstor.com/kubesphere-docs/png/20200229182539.png)

- Canary rules based on the user request

![User-Based Canary](https://pek3b.qingstor.com/kubesphere-docs/png/20200229182554.png)

## Prerequisites

- You need to complete all steps in [Getting Started with Multi-tenant Management](../admin-quick-start).

## Hands-on Lab

### Step 1: Create Project and Application

1.1. Use the `project-admin` account to log in to KubeSphere and create a project `ingress-demo` under the workspace `demo-workspace`. Go to **Project Settings → Advanced Settings**, click **Set Gateway**, and click **Save** to enable the gateway in this project. Note that it defaults to **NodePort**.

![Set Gateway](https://pek3b.qingstor.com/kubesphere-docs/png/20200229123307.png)
1.2. We are going to use the command line to create the resources defined by the following yaml files. Log in to KubeSphere with the `admin` account, open **Web kubectl** from the **Toolbox** at the bottom-right corner of the console, then use the following command to create the production resources `Deployment` and `Service`:

```bash
$ kubectl apply -f production.yaml -n ingress-demo
deployment.extensions/production created
service/production created
```

The file is as follows:

#### production.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: production
  labels:
    app: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: production
  template:
    metadata:
      labels:
        app: production
    spec:
      containers:
      - name: production
        image: mirrorgooglecontainers/echoserver:1.10
        ports:
        - containerPort: 8080
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP

---

apiVersion: v1
kind: Service
metadata:
  name: production
  labels:
    app: production
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: production
```

1.3. Now create a `production` Ingress.

```bash
$ kubectl apply -f production.ingress -n ingress-demo
ingress.extensions/production created
```

The file is as follows:

#### production.ingress

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: production
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: kubesphere.io
    http:
      paths:
      - backend:
          serviceName: production
          servicePort: 80
```

### Step 2: Access the Production Application

You can verify each resource by navigating to the corresponding lists in the console.

**Deployment**
![Deployment](https://pek3b.qingstor.com/kubesphere-docs/png/20200229122819.png)

**Service**
![Service](https://pek3b.qingstor.com/kubesphere-docs/png/20200229122918.png)

**Route (Ingress)**
![Ingress](https://pek3b.qingstor.com/kubesphere-docs/png/20200229122939.png)

Use the following command to access the production application.

> Note: `192.168.0.88` is the gateway address of the project and `30205` is the NodePort. You need to replace them with the actual values from the Route details page.
```bash
$ curl --resolve kubesphere.io:30205:192.168.0.88 kubesphere.io:30205

Hostname: production-6b4bb8d58d-7r889

Pod Information:
  node name:  ks-allinone
  pod name:  production-6b4bb8d58d-7r889
  pod namespace:  ingress-demo
  pod IP:  10.233.87.165

Server values:
  server_version=nginx: 1.12.2 - lua: 10010

Request Information:
  client_address=10.233.87.225
  method=GET
  real path=/
  query=
  request_version=1.1
  request_scheme=http
  request_uri=http://kubesphere.io:8080/

Request Headers:
  accept=*/*
  host=kubesphere.io:30205
  user-agent=curl/7.29.0
  x-forwarded-for=192.168.0.88
  x-forwarded-host=kubesphere.io:30205
  x-forwarded-port=80
  x-forwarded-proto=http
  x-original-uri=/
  x-real-ip=192.168.0.88
  x-request-id=9596df96e994ea05bece2ebbe689a2cc
  x-scheme=http

Request Body:
  -no body in request-
```

### Step 3: Create Canary Version of the Application

As above, refer to the yaml files that we used for **production** to create a **canary** version of the application, including a `Deployment` and a `Service`. You just need to replace the occurrences of `production` with `canary` in those yaml files.

### Step 4: Ingress-Nginx Annotation Rules

#### Set Canary Release based on Weight

A typical scenario for the weight-based rule is `blue-green` deployment. You can set the weight from `0` to `100` to implement this kind of release. At any time, only one of the environments is production. In this example, green is currently production and blue is the canary. Initially, the weight of the canary is set to `0`, which means no traffic is forwarded to this release. You can introduce a small portion of traffic to the blue version step by step, then test and verify it. If everything is OK, you can shift all requests from green to blue by setting the weight of blue to `100`, which makes blue the production release. In short, with such a canary release process, the application is upgraded smoothly.

4.1. Now create a `canary` Ingress. The following file uses the `canary-weight` annotation to route `30%` of all traffic to the canary version.

```bash
$ kubectl apply -f weighted-canary.ingress -n ingress-demo
ingress.extensions/canary created
```

The yaml file is as follows.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: canary
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "30"
spec:
  rules:
  - host: kubesphere.io
    http:
      paths:
      - backend:
          serviceName: canary
          servicePort: 80
```

4.2. Verify the weighted canary release.

> Note: Although we set `30%` of traffic to the canary, the traffic ratio may fluctuate to a small extent.

```bash
for i in $(seq 1 10); do curl -s --resolve kubesphere.io:30205:192.168.0.88 kubesphere.io:30205 | grep "Hostname"; done
```

![Canary Release based on Weight](https://pek3b.qingstor.com/kubesphere-docs/png/20200205162603.png)

#### Set Canary Release based on Request Header

4.3. Go to **Application Workloads → Routes**, click into the detailed page of the route `canary`, then go to **More → Edit Annotations**. Following the screenshot below, add an annotation `nginx.ingress.kubernetes.io/canary-by-header: canary` to the canary Ingress created above. This header notifies the Ingress to route requests to the service specified in the canary Ingress.
> Note: Canary rules are evaluated in order of precedence, as follows:
> `canary-by-header` > `canary-by-cookie` > `canary-weight`. Thus the old annotation `canary-weight` will be ignored.

![Edit annotation](https://pek3b.qingstor.com/kubesphere-docs/png/20200304220417.png)

4.4. Add different headers to the request and access the application domain name. More specifically:

- When the request header is set to `always`, the request will be routed to the canary.
- When the header is set to `never`, the request will never be routed to the canary.

> Note: For any other value, the header will be ignored and the request compared against the other canary rules by precedence.

```bash
for i in $(seq 1 10); do curl -s -H "canary: never" --resolve kubesphere.io:30205:192.168.0.88 kubesphere.io:30205 | grep "Hostname"; done
```

![Request Header](https://pek3b.qingstor.com/kubesphere-docs/png/20200205231401.png)

When we set `canary: other-value` in the header, the header is ignored and the canary Ingress with the `30%` weight takes effect instead.

```bash
for i in $(seq 1 10); do curl -s -H "canary: other-value" --resolve kubesphere.io:30205:192.168.0.88 kubesphere.io:30205 | grep "Hostname"; done
```

![Request Header](https://pek3b.qingstor.com/kubesphere-docs/png/20200205231455.png)

4.5. Now we can add a new annotation `nginx.ingress.kubernetes.io/canary-by-header-value: user-value`, which notifies the Ingress to route requests carrying this header value to the service specified in the canary Ingress.

![Canary by Header Value](https://pek3b.qingstor.com/kubesphere-docs/png/20200305093713.png)

4.6. Access the domain name as follows. When the request header is set to this value, the request will be routed to the canary version. For any other header value, the header will be ignored and the request is compared against the other canary rules by precedence.

> Note: This allows users to customize the value of the request header.

```bash
for i in $(seq 1 10); do curl -s -H "canary: user-value" --resolve kubesphere.io:30205:192.168.0.88 kubesphere.io:30205 | grep "Hostname"; done
```

![Request Header](https://pek3b.qingstor.com/kubesphere-docs/png/20200205231634.png)

#### Based on Cookie

4.7. Similar to the request header, a cookie can be used to notify the Ingress to route requests to the service specified in the canary Ingress. When the cookie value is set to `always`, requests will be routed to the canary version; when set to `never`, they will never be routed to the canary version. For any other value, the cookie is ignored and the request is compared against the other canary rules by precedence. For example, if we only allow users from London to access the canary version, we can set the annotation `nginx.ingress.kubernetes.io/canary-by-cookie: "users_from_London"`. The system then checks the user request: if the request comes from London, the cookie `users_from_London` is set to `always`, ensuring that only users from London access the canary version.

## Conclusion

Grayscale release helps ensure overall system stability: you can find problems and make adjustments during the initial grayscale stage to minimize the impact. We have demonstrated the four annotation rules of Ingress-Nginx. They are convenient and lightweight for users who want to implement grayscale release without Istio.
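For completeness, here is what the cookie-based rule from step 4.7 could look like as a full Ingress. This is an illustrative sketch only, reusing the `canary` Service, the `kubesphere.io` host and the `ingress-demo` namespace from this tutorial:

```bash
# A sketch of a cookie-based canary Ingress (assumes the `canary` Service from
# Step 3, the kubesphere.io host and the ingress-demo namespace used above).
cat <<'EOF' | kubectl apply -n ingress-demo -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: canary-by-cookie
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-cookie: "users_from_London"
spec:
  rules:
  - host: kubesphere.io
    http:
      paths:
      - backend:
          serviceName: canary
          servicePort: 80
EOF
```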
## Reference

- [NGINX Ingress Controller - Annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary)
- [canary deployment with ingress-nginx](https://www.elvinefendi.com/2018/11/25/canary-deployment-with-ingress-nginx.html)
- [Canary Deployments on Kubernetes without Service Mesh](https://medium.com/@domi.stoehr/canary-deployments-on-kubernetes-without-service-mesh-425b7e4cc862)

diff --git a/content/en/docs/quick-start/ingress-demo.md b/content/en/docs/quick-start/ingress-demo.md
deleted file mode 100644
index bcb05e09f..000000000
--- a/content/en/docs/quick-start/ingress-demo.md
+++ /dev/null
@@ -1,184 +0,0 @@
---
title: "Expose your App: Creating a Service and Ingress"
keywords: 'kubesphere, kubernetes, ingress, route'
description: 'How to expose your application through KubeSphere'

linkTitle: "2"
weight: 3020
---

In each project, namely, Kubernetes namespace, KubeSphere has pre-installed a load balancer, which is the Nginx Ingress Controller; you need to activate it before using it. As we know, an ingress and an ingress controller are used to expose services externally. The website [Kubernetes-ingress](https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example) provides an example showing how to use ingress. Let's take a demo website `https://cafe.example.com` as an example. If users access the URL `https://cafe.example.com/coffee`, it will return "Coffee Ordering System". Similarly, when users access the URL `https://cafe.example.com/tea`, it will return "Tea Ordering System".

For this demo, we will create two stateless applications, which include Deployments, Services and an Ingress, in this tutorial.

![Ingress](https://pek3b.qingstor.com/kubesphere-docs/png/20190716144703.png#alt=)

## Prerequisites

You have completed all steps in [Getting Started with Multi-tenant Management](../admin-quick-start), including [enabling the gateway](../admin-quick-start#set-gateway).

## Estimated Time

About 20 minutes

## Hands-on Lab

### Step 1: Create a Tea Service

In this section, we will create a "Tea Ordering System" service as follows.

1.1. Sign in with `project-regular`, then enter `demo-project`. Choose **Application Workloads → Services** and click **Create Service**.

![Services List](https://pek3b.qingstor.com/kubesphere-docs/png/20200105164644.png)

1.2. Choose the type `Stateless Service` as the Service Type, name it `tea-svc`, and click **Next**.

![Service Types](https://pek3b.qingstor.com/kubesphere-docs/png/20200105164821.png)

1.3. Click **Add Container Image**. Then fill in the **Image** with `nginxdemos/hello:plain-text`, press the Enter key, click **Use Default Ports** and choose `√`, then click **Next**.

![Create Tea Service](https://pek3b.qingstor.com/kubesphere-docs/png/20200105165118.png)

1.4. It is not required to mount volumes or configure advanced settings in this step. Just click **Next** to skip them, then click **Create** to complete the creation of `tea-svc`.

![Services List after creation](https://pek3b.qingstor.com/kubesphere-docs/png/20200105165745.png)

### Step 2: Create a Coffee Service

2.1. Similarly, click the **Create** button to create a "Coffee Ordering System" service.

2.2. Name it `coffee-svc` and click **Next**, then click **Add Container Image**. Fill in the **Image** with `nginxdemos/hello:plain-text`, press the Enter key, click **Use Default Ports** and choose `√`. The other steps are the same as for the creation of `tea-svc`.
![Services List](https://pek3b.qingstor.com/kubesphere-docs/png/20200105171944.png)

### Step 3: Create a TLS Certificate

Since the domain name bound to the route (Ingress) uses the HTTPS protocol, we need to create a secret to store the TLS certificate.

3.1. Choose **Configuration Center → Secrets**, then click **Create**.

![Secrets List](https://pek3b.qingstor.com/kubesphere-docs/png/20200105174409.png)

3.2. Name it `cafe-secret` and click **Next**. Select `TLS` from the Type dropdown menu, then copy and paste the Credential and Private Key as follows. Click **Create** when you are done.

#### Credential

```bash
-----BEGIN CERTIFICATE-----
MIIDLjCCAhYCCQDAOF9tLsaXWjANBgkqhkiG9w0BAQsFADBaMQswCQYDVQQGEwJV
UzELMAkGA1UECAwCQ0ExITAfBgNVBAoMGEludGVybmV0IFdpZGdpdHMgUHR5IEx0
ZDEbMBkGA1UEAwwSY2FmZS5leGFtcGxlLmNvbSAgMB4XDTE4MDkxMjE2MTUzNVoX
DTIzMDkxMTE2MTUzNVowWDELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMSEwHwYD
VQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQxGTAXBgNVBAMMEGNhZmUuZXhh
bXBsZS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCp6Kn7sy81
p0juJ/cyk+vCAmlsfjtFM2muZNK0KtecqG2fjWQb55xQ1YFA2XOSwHAYvSdwI2jZ
ruW8qXXCL2rb4CZCFxwpVECrcxdjm3teViRXVsYImmJHPPSyQgpiobs9x7DlLc6I
BA0ZjUOyl0PqG9SJexMV73WIIa5rDVSF2r4kSkbAj4Dcj7LXeFlVXH2I5XwXCptC
n67JCg42f+k8wgzcRVp8XZkZWZVjwq9RUKDXmFB2YyN1XEWdZ0ewRuKYUJlsm692
skOrKQj0vkoPn41EE/+TaVEpqLTRoUY3rzg7DkdzfdBizFO2dsPNFx2CW0jXkNLv
Ko25CZrOhXAHAgMBAAEwDQYJKoZIhvcNAQELBQADggEBAKHFCcyOjZvoHswUBMdL
RdHIb383pWFynZq/LuUovsVA58B0Cg7BEfy5vWVVrq5RIkv4lZ81N29x21d1JH6r
jSnQx+DXCO/TJEV5lSCUpIGzEUYaUPgRyjsM/NUdCJ8uHVhZJ+S6FA+CnOD9rn2i
ZBePCI5rHwEXwnnl8ywij3vvQ5zHIuyBglWr/Qyui9fjPpwWUvUm4nv5SMG9zCV7
PpuwvuatqjO1208BjfE/cZHIg8Hw9mvW9x9C+IQMIMDE7b/g6OcK7LGTLwlFxvA8
7WjEequnayIphMhKRXVf1N349eN98Ez38fOTHTPbdJjFA/PcC+Gyme+iGt5OQdFh
yRE=
-----END CERTIFICATE-----
```

#### Private Key

```bash
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAqeip+7MvNadI7if3MpPrwgJpbH47RTNprmTStCrXnKhtn41k
G+ecUNWBQNlzksBwGL0ncCNo2a7lvKl1wi9q2+AmQhccKVRAq3MXY5t7XlYkV1bG
CJpiRzz0skIKYqG7Pcew5S3OiAQNGY1DspdD6hvUiXsTFe91iCGuaw1Uhdq+JEpG
wI+A3I+y13hZVVx9iOV8FwqbQp+uyQoONn/pPMIM3EVafF2ZGVmVY8KvUVCg15hQ
dmMjdVxFnWdHsEbimFCZbJuvdrJDqykI9L5KD5+NRBP/k2lRKai00aFGN684Ow5H
c33QYsxTtnbDzRcdgltI15DS7yqNuQmazoVwBwIDAQABAoIBAQCPSdSYnQtSPyql
FfVFpTOsoOYRhf8sI+ibFxIOuRauWehhJxdm5RORpAzmCLyL5VhjtJme223gLrw2
N99EjUKb/VOmZuDsBc6oCF6QNR58dz8cnORTewcotsJR1pn1hhlnR5HqJJBJask1
ZEnUQfcXZrL94lo9JH3E+Uqjo1FFs8xxE8woPBqjZsV7pRUZgC3LhxnwLSExyFo4
cxb9SOG5OmAJozStFoQ2GJOes8rJ5qfdvytgg9xbLaQL/x0kpQ62BoFMBDdqOePW
KfP5zZ6/07/vpj48yA1Q32PzobubsBLd3Kcn32jfm1E7prtWl+JeOFiOznBQFJbN
4qPVRz5hAoGBANtWyxhNCSLu4P+XgKyckljJ6F5668fNj5CzgFRqJ09zn0TlsNro
FTLZcxDqnR3HPYM42JERh2J/qDFZynRQo3cg3oeivUdBVGY8+FI1W0qdub/L9+yu
edOZTQ5XmGGp6r6jexymcJim/OsB3ZnYOpOrlD7SPmBvzNLk4MF6gxbXAoGBAMZO
0p6HbBmcP0tjFXfcKE77ImLm0sAG4uHoUx0ePj/2qrnTnOBBNE4MvgDuTJzy+caU
k8RqmdHCbHzTe6fzYq/9it8sZ77KVN1qkbIcuc+RTxA9nNh1TjsRne74Z0j1FCLk
hHcqH0ri7PYSKHTE8FvFCxZYdbuB84CmZihvxbpRAoGAIbjqaMYPTYuklCda5S79
YSFJ1JzZe1Kja//tDw1zFcgVCKa31jAwciz0f/lSRq3HS1GGGmezhPVTiqLfeZqc
R0iKbhgbOcVVkJJ3K0yAyKwPTumxKHZ6zImZS0c0am+RY9YGq5T7YrzpzcfvpiOU
ffe3RyFT7cfCmfoOhDCtzukCgYB30oLC1RLFOrqn43vCS51zc5zoY44uBzspwwYN
TwvP/ExWMf3VJrDjBCH+T/6sysePbJEImlzM+IwytFpANfiIXEt/48Xf60Nx8gWM
uHyxZZx/NKtDw0V8vX1POnq2A5eiKa+8jRARYKJLYNdfDuwolxvG6bZhkPi/4EtT
3Y18sQKBgHtKbk+7lNJVeswXE5cUG6EDUsDe/2Ua7fXp7FcjqBEoap1LSw+6TXp0
ZgrmKE8ARzM47+EJHUviiq/nupE15g0kJW3syhpU9zZLO7ltB0KIkO9ZRcmUjo8Q
cpLlHMAqbLJ8WYGJCkhiWxyal6hYTyWY4cVkC0xtTl/hUE9IeNKo
-----END RSA PRIVATE KEY-----
```

![Create Secret](https://pek3b.qingstor.com/kubesphere-docs/png/20190716163243.png#alt=)

### Step 4: Create a Cafe Ingress

Now we are ready to expose the two services with an Ingress.

4.1. Choose **Application Workloads → Routes**, and click the **Create Route** button.

4.2. Name it `cafe-ingress`, then click **Next → Add Route Rule**.

4.3. Choose **Specify Domain** and fill in the table as follows:

- HostName: `cafe.example.com`
- Protocol: Choose `https`
- Secret Name: Choose `cafe-secret`
- Paths:

  - Input `/coffee`, then choose `coffee-svc` as the backend service and select `80` as the port
  - Click **Add Path**, input `/tea`, then choose `tea-svc` as the backend service and select `80` as the port

![Create Ingress](https://pek3b.qingstor.com/kubesphere-docs/png/20200105175539.png)

4.4. Click `√` and **Next** after you are done, then click **Create**. We can see that `cafe-ingress` has been created successfully.

![Ingress List](https://pek3b.qingstor.com/kubesphere-docs/png/20200105175641.png)

### Step 5: Access the Application Ingress

So far, we have exposed two different applications via a route and its rules. We can access the **tea** and **coffee** applications through different paths.

![Services Info](https://pek3b.qingstor.com/kubesphere-docs/png/20200105180222.png)

For example, when we visit `https://cafe.example.com:{$HTTPS_PORT}/coffee`, the back-end Pod of coffee-svc should respond to the request. We can switch to the `admin` account to log in to KubeSphere and open **web kubectl** from the **Toolbox** at the bottom right corner.

As shown in the following demo, the Server name and Server address correspond to the Pod `coffee-svc-yfhqwu-7b7bbf49f4-6c55l`. Please note that the resolve information of the curl command comes from the screenshot above; you should replace it with your real information.

```bash
$ curl --resolve cafe.example.com:30000:192.168.0.54 https://cafe.example.com:30000/coffee --insecure
Server address: 10.233.90.5:80
Server name: coffee-svc-yfhqwu-7b7bbf49f4-6c55l
Date: 05/Jan/2020:10:01:48 +0000
URI: /coffee
Request ID: 6fb79c32e0b99653d2f226eef374e798
```

![Pods](https://pek3b.qingstor.com/kubesphere-docs/png/20200105180954.png)

Similarly, when we visit `https://cafe.example.com:{$HTTPS_PORT}/tea`, the back-end Pod of tea-svc should respond to the request. As shown in the following demo, the Server name and Server address correspond to the Pod `tea-svc-9fukgs-754cbc8b9b-rfhpr`.

```bash
$ curl --resolve cafe.example.com:30000:192.168.0.54 https://cafe.example.com:30000/tea --insecure
Server address: 10.233.90.4:80
Server name: tea-svc-9fukgs-754cbc8b9b-rfhpr
Date: 05/Jan/2020:10:07:16 +0000
URI: /tea
Request ID: 2173c1565b368a5258368d15f55ca050
```

![Access Tea Service](https://pek3b.qingstor.com/kubesphere-docs/png/20200105181039.png)

## Conclusion

As we can see from the instructions above, the route has successfully forwarded different requests to the corresponding back-end services, and each service redirects traffic to one of its backend Pods.
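To recap what the console created, the following is an illustrative sketch of roughly the equivalent Ingress applied from the command line. The names come from this tutorial, but the exact object KubeSphere generates may differ:

```bash
# A sketch of roughly the Ingress created through the console above (assumes
# the demo-project namespace and the services/secret from this tutorial; the
# exact resource KubeSphere generates may differ).
cat <<'EOF' | kubectl apply -n demo-project -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
EOF
```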
diff --git a/content/en/docs/quick-start/jenkinsfile-out-of-scm.md b/content/en/docs/quick-start/jenkinsfile-out-of-scm.md
deleted file mode 100644
index 847e8eacb..000000000
--- a/content/en/docs/quick-start/jenkinsfile-out-of-scm.md
+++ /dev/null
@@ -1,343 +0,0 @@
---
title: "Graphical CI/CD Pipeline without Jenkinsfile"
keywords: 'KubeSphere, kubernetes, docker, jenkins, cicd, graphical pipeline'
description: 'Create a non-Jenkinsfile CI/CD pipeline with a graphical editing panel'

linkTitle: "10"
weight: 3100
---

We demonstrated how to create a Jenkinsfile-based pipeline for a Spring Boot project in the last tutorial. That approach requires users to be familiar with Jenkinsfile. However, Jenkinsfile is non-trivial to learn, and some people are not familiar with it at all. Therefore, unlike the last tutorial, we are going to show how to create a CI/CD pipeline without a Jenkinsfile by visually editing the workflow through the KubeSphere console.

## Objective

We will use the graphical editing panel in the KubeSphere console to create a pipeline, which automates the process and releases the sample project to the Kubernetes development environment. If you have tried the Jenkinsfile-based pipeline, the build steps in this tutorial will be easy to understand. The sample project in this tutorial is the same as the [one](https://github.com/kubesphere/devops-java-sample) that we used in the [last tutorial](../devops-online).

## Prerequisites

- You need to [enable KubeSphere DevOps System](../../installation/install-devops).
- You need to create a [DockerHub](http://www.dockerhub.com/) account.
- You need to create a workspace, a DevOps project, and a **project-regular** user account, and this account needs to be invited into the DevOps project with the role of maintainer; please refer to [Getting Started with Multi-tenant Management](../admin-quick-start).
- Configure an email server for notifications in the pipeline; please refer to [Set Email Server for KubeSphere Pipeline](../../devops/jenkins-email).
- Set a CI dedicated node for building the pipeline; please refer to [Set CI Node for Dependency Cache](../../devops/devops-ci-node).

## Pipeline Overview

The sample pipeline includes the following six stages.

![Pipeline](https://pek3b.qingstor.com/kubesphere-docs/png/20190516091714.png#align=left&display=inline&height=1278&originHeight=1278&originWidth=2190&search=&status=done&width=2190)

> To elaborate on every stage:
>
> - **Stage 1. Checkout SCM:** Pull the GitHub repository code;
> - **Stage 2. Unit test**: The pipeline will continue to the next stage only if the unit test passes;
> - **Stage 3. Code Analysis**: Configure SonarQube for static code quality check and analysis;
> - **Stage 4. Build and Push**: Build the image and push it to DockerHub with the tag `snapshot-$BUILD_NUMBER`, where `$BUILD_NUMBER` is the serial number in the pipeline's activity list;
> - **Stage 5. Artifacts**: Generate the artifact (jar package) and save it;
> - **Stage 6. Deploy to DEV**: Deploy the project to the development environment. Approval is required in this stage, and an email will be sent after the deployment is successful.

## Hands-on Lab

### Step 1: Create Credentials

We need to create **three** credentials for DockerHub, Kubernetes and SonarQube respectively. If you have finished the last lab, [Create a Jenkinsfile-based Pipeline for Spring Boot Project](../devops-online#step-1-create-credentials), you already have the credentials created.
Otherwise, please refer to [create credentials](../devops-online#step-1-create-credentials) to create the credentials used in the pipeline.

![Create Credentials](https://pek3b.qingstor.com/kubesphere-docs/png/20200221223754.png)

### Step 2: Create Project

The sample pipeline will deploy the [sample](https://github.com/kubesphere/devops-java-sample) to a Kubernetes namespace, so we need to create a project in KubeSphere. If you did not finish the last lab, please refer to this [step](../devops-online#create-the-first-project) to create a project named `kubesphere-sample-dev` using `project-admin`, then invite the account `project-regular` into this project and assign it the role of `operator`.

### Step 3: Create Pipeline

Follow the steps below to create a pipeline using the graphical editing panel.

#### Fill in the basic information

3.1. In the DevOps project, select **Pipeline** on the left and click **Create**.

![Create Pipeline](https://pek3b.qingstor.com/kubesphere-docs/png/20200221225029.png)

3.2. In the pop-up window, name it `graphical-pipeline` and click **Next**.

#### Advanced Settings

3.3. Click **Add Parameter** repeatedly to add the following **three** string parameters. These parameters will be used in the Docker commands of the pipeline. Click **Create** when you are done.

| Parameter Type | Name | Default Value | Description |
| --- | --- | --- | --- |
| String | REGISTRY | The sample repository address is `docker.io`. | Image Registry |
| String | DOCKERHUB_NAMESPACE | Fill in your DockerHub account, which can also be the Organization name under the account. | DockerHub Namespace |
| String | APP_NAME | Fill in the application name `devops-sample`. | Application Name |

![Advanced Settings](https://pek3b.qingstor.com/kubesphere-docs/png/20200222155944.png)

### Step 4: Edit the Pipeline

This pipeline consists of six stages. We will demonstrate the steps and tasks in each stage.

#### Stage I: Pull Source Code (Checkout SCM)

The graphical editing panel includes two areas, i.e., the **canvas** on the left and the **content** area on the right. The panel generates a Jenkinsfile after you create the pipeline, which is much more user-friendly for developers.

> Note: Pipelines come in two forms, `scripted pipeline` and `declarative pipeline`; the panel supports `declarative pipeline`. For pipeline syntax, see the [Jenkins Documentation](https://jenkins.io/doc/book/pipeline/syntax/).

4.1.1. Select **node** from the drop-down list of **agent type** in the content area, and input `maven` as the label, as shown below.

> Note: The agent is used to define the execution environment. The agent directive tells Jenkins where and how to execute the pipeline or a specific stage. Please refer to [Jenkins Agent](https://jenkins.io/doc/pipeline/tour/agents/) for further information.

![Select Agent](https://pek3b.qingstor.com/kubesphere-docs/png/20200303174821.png)

4.1.2. In the canvas area, click the **+** button to add a stage. Click the box titled `No Name` that encloses the **Add Step** box, and name it `Checkout SCM` in the content area on the right of the panel.

![Checkout SCM](https://pek3b.qingstor.com/kubesphere-docs/png/20200221234417.png)

4.1.3. Click **Add Step** and select **git** from the content area. Fill in the pop-up window as follows:

- Url: Input the GitHub repository URL `https://github.com/kubesphere/devops-java-sample.git`. Please replace the url with your own repository.
- Credential ID: Leave it blank; it is only needed for a private repository.
- Branch: Leave it blank. Blank defaults to the master branch.

When you are done, click **OK** to save it, and you will see the first stage created.

![GitHub repository](https://pek3b.qingstor.com/kubesphere-docs/png/20200221234935.png)

#### Stage II: Unit Test

4.2.1. Click **+** on the right of the **Checkout SCM** stage to add another stage for performing a unit test in the container, and name it `Unit Test`.

![Unit Test](https://pek3b.qingstor.com/kubesphere-docs/png/20200221235115.png)

4.2.2. Click **Add Step** and select **container**, name it `maven`, then click **OK**.

![maven](https://pek3b.qingstor.com/kubesphere-docs/png/20200221235323.png)

4.2.3. In the content area, click **Add nesting steps** in the `maven` container created above to add a nested step. Then select **shell** and enter the following command in the pop-up window:

```bash
mvn clean -o -gs `pwd`/configuration/settings.xml test
```

Click **OK** to save it.

![maven container](https://pek3b.qingstor.com/kubesphere-docs/png/20200221235629.png)

#### Stage III: Code Analysis

4.3.1. As above, click **+** on the right of the **Unit Test** stage to add a stage for configuring SonarQube, which performs static code quality analysis in the container, and name it `Code Analysis`.

![Code Analysis](https://pek3b.qingstor.com/kubesphere-docs/png/20200222000007.png)

4.3.2. Click **Add Step** in **Code Analysis** and select **container**, name it `maven`, then click **OK**.

![Code Analysis](https://pek3b.qingstor.com/kubesphere-docs/png/20200222000204.png)

4.3.3. Click **Add nesting steps** in the `maven` container created above to add a nested step and select **withCredentials**. Select the previously created credential ID `sonar-token`, input `SONAR_TOKEN` as the text variable, then click **OK**.

![withCredentials](https://pek3b.qingstor.com/kubesphere-docs/png/20200222000531.png)

4.3.4. In the **withCredentials** task on the right, click **Add nesting steps** (the first one), then select **withSonarQubeEnv**, leave the default name `sonar`, and click **OK** to save it.

![Code Analysis](https://pek3b.qingstor.com/kubesphere-docs/png/20200222000743.png)

![withSonarQubeEnv](https://pek3b.qingstor.com/kubesphere-docs/png/20200222000936.png)

4.3.5. Click **Add nesting steps** (the first one) in **withSonarQubeEnv**. Then select **shell** on the right, enter the following command for the SonarQube branch and authentication in the pop-up window, and click **OK** to save the information.

```shell
mvn sonar:sonar -o -gs `pwd`/configuration/settings.xml -Dsonar.branch=$BRANCH_NAME -Dsonar.login=$SONAR_TOKEN
```

![SonarQube branch](https://pek3b.qingstor.com/kubesphere-docs/png/20200222161853.png)

4.3.6. Click **Add nesting steps** (the third one) on the right and select **timeout**. Input `1` for the time and select `Hours` as the unit.

Click **OK** to save it.

![SonarQube timeout](https://pek3b.qingstor.com/kubesphere-docs/png/20200222001544.png)

4.3.7. Inside the `timeout` step, click **Add nesting steps** (the first one). Then select **waitforSonarQubeGate** and keep the default `Start the follow-up task after inspection` in the pop-up window.

Click **OK** to save it.

![waitforSonarQubeGate](https://pek3b.qingstor.com/kubesphere-docs/png/20200222001847.png)

#### Stage IV: Build and Push the Image
4.4.1. Similarly, click **+** on the right of the **Code Analysis** stage to add another stage to build and push images to DockerHub, and name it `Build and Push`.

4.4.2. Click **Add Step** and select **container**, name it `maven`, then click **OK**.

![maven container](https://pek3b.qingstor.com/kubesphere-docs/png/20200222112517.png)

4.4.3. Click **Add nesting steps** in the container `maven`, select **shell** on the right, and enter the following command in the pop-up window:

```shell
mvn -o -Dmaven.test.skip=true -gs `pwd`/configuration/settings.xml clean package
```

4.4.4. Then continue to click **Add nesting steps** on the right, select **shell** in the pop-up window, and enter the following command to build a Docker image based on the [Dockerfile](https://github.com/kubesphere/devops-java-sample/blob/master/Dockerfile-online):

> Please DO NOT miss the dot `.` at the end of the command.

```shell
docker build -f Dockerfile-online -t $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:SNAPSHOT-$BUILD_NUMBER .
```

![Build Docker image](https://pek3b.qingstor.com/kubesphere-docs/png/20200222113131.png)

Click **OK** to save it.

4.4.5. Similarly, click **Add nesting steps** again and select **withCredentials** on the right. Fill in the pop-up window as follows:

> Note: For security, the account information must not be exposed in plaintext in the script.

- Credential ID: Select the DockerHub credential you created, e.g. `dockerhub-id`
- Password variable: Enter `DOCKER_PASSWORD`
- Username variable: Enter `DOCKER_USERNAME`

Click **OK** to save it.

![DockerHub credentials](https://pek3b.qingstor.com/kubesphere-docs/png/20200222113442.png)

4.4.6. Click **Add nesting steps** (the first one) in the **withCredentials** step created above, select **shell** and enter the following command, which is used to log in to DockerHub, in the pop-up window:

```shell
echo "$DOCKER_PASSWORD" | docker login $REGISTRY -u "$DOCKER_USERNAME" --password-stdin
```

Click **OK** to save it.

![docker login](https://pek3b.qingstor.com/kubesphere-docs/png/20200222114937.png)

4.4.7. As above, click **Add nesting steps** in the **withCredentials** step again, choose **shell** and enter the following command to push the SNAPSHOT image to DockerHub:

```shell
docker push $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:SNAPSHOT-$BUILD_NUMBER
```

![docker push](https://pek3b.qingstor.com/kubesphere-docs/png/20200222120214.png)

#### Stage V: Generate Artifact

4.5.1. Click **+** on the right of the **Build and Push** stage to add another stage for saving artifacts. This example uses the jar package; name the stage `Artifacts`.

![Save Artifacts](https://pek3b.qingstor.com/kubesphere-docs/png/20200222120540.png)

4.5.2. Click **Add Step** in the **Artifacts** stage and select **archiveArtifacts**. Enter `target/*.jar` in the pop-up window, which sets the archive path of the artifact in Jenkins.

Click **OK** to save it.

![Artifacts](https://pek3b.qingstor.com/kubesphere-docs/png/20200222121035.png)

#### Stage VI: Deploy to Dev

4.6.1. Click **+** on the right of the **Artifacts** stage to add the last stage, and name it `Deploy to Dev`. This stage is used to deploy resources to the development environment, namely the project `kubesphere-sample-dev`.

4.6.2. Click **Add Step** in **Deploy to Dev**, select **input** and enter `@project-admin` in the pop-up window, assigning the account `project-admin` to review this pipeline.
-
-#### Stage V: Generate Artifact
-
-4.5.1. Click **+** on the right of the **Build and Push** stage to add another stage that saves artifacts, and name it `Artifacts`. This example uses the jar package as the artifact.
-
-![Save Artifacts](https://pek3b.qingstor.com/kubesphere-docs/png/20200222120540.png)
-
-4.5.2. Click **Add Step** in the **Artifacts** stage and select **archiveArtifacts**. Enter `target/*.jar` in the pop-up window, which sets the archive path of the artifact in Jenkins.
-
-Click **OK** to save it.
-
-![Artifacts](https://pek3b.qingstor.com/kubesphere-docs/png/20200222121035.png)
-
-#### Stage VI: Deploy to Dev
-
-4.6.1. Click **+** on the right of the stage **Artifacts** to add the last stage, and name it `Deploy to Dev`. This stage deploys resources to the development environment, namely the project `kubesphere-sample-dev`.
-
-4.6.2. Click **Add Step** in **Deploy to Dev**, select **input**, and enter `@project-admin` in the pop-up window, assigning the account `project-admin` to review this pipeline.
-
-Click **OK** to save it.
-
-4.6.3. Click **Add Step** on the right and select **kubernetesDeploy**. Fill in the pop-up window as below and click **Confirm** to save the information:
-
-- Kubeconfig: Select `demo-kubeconfig`
-- Configuration file path: Enter `deploy/no-branch-dev/**`, which is the relative path of the Kubernetes [yaml](https://github.com/kubesphere/devops-java-sample/tree/master/deploy/no-branch-dev) files.
-
-Click **OK** to save it.
-
-![Deploy to Kubernetes](https://pek3b.qingstor.com/kubesphere-docs/png/20200222153404.png)
-
-4.6.4. Similarly, click **Add Step** to send an email notification to the user after the pipeline runs successfully: select **mail** and fill in the information.
-
-> Note: Make sure you have [configured the email server](../../devops/jenkins-email) in `ks-jenkins`. If not yet, skip this step; you can still run this pipeline.
-
-At this point, all six stages of the pipeline have been edited. Click **Confirm → Save**, and the Jenkinsfile will be generated as well.
-
-![Complete Pipeline](https://pek3b.qingstor.com/kubesphere-docs/png/20200222154407.png)
-
-### Step 5: Run Pipeline
-
-5.1. A pipeline created with the graphical editing panel needs to be run manually. Click **Run** and you will see the three string parameters defined in the third step. Click **OK** to start this pipeline.
-
-![Run Pipeline](https://pek3b.qingstor.com/kubesphere-docs/png/20200222160330.png)
-
-5.2. You can see the status of the pipeline in the **Activity** list. Click **Activity** to view the detailed running status.
-
-5.3. Enter the first activity to view its detail page.
-
-![View detailed page](https://pek3b.qingstor.com/kubesphere-docs/png/20200222163341.png)
-
-> Note: If the previous steps were performed correctly, you will see the pipeline successfully run to the last stage in a few minutes. Since we set a review step and specified the account `project-admin` as the reviewer, we need to switch to `project-admin` to manually review and approve it.
-
-5.4. Log out, and log in with the account `project-admin`. Enter the pipeline `graphical-pipeline` of the DevOps project used above. Drill into **Activity** to view the running status. You can see the pipeline has run to the **Deploy to Dev** stage. Click **Proceed** to approve it.
-
-![Activity](https://pek3b.qingstor.com/kubesphere-docs/png/20200222170334.png)
-
-### Step 6: View Pipeline
-
-6.1. Log back in with the account `project-regular`. After a few minutes, the pipeline runs successfully. Click the **Activity** list in the pipeline to view the serial number of the current run. This page shows the running status of each stage in the pipeline.
-
-![View Pipeline](https://pek3b.qingstor.com/kubesphere-docs/png/20200222182230.png)
-
-6.2. Click **Show Logs** on the top right of the current page to inspect the logs. The pop-up window shows the specific logs, running status and time of each stage. Click a specific stage to expand its log on the right. You can debug any problems based on the logs, which can also be downloaded to a local file for further analysis.
-
-![Show Logs](https://pek3b.qingstor.com/kubesphere-docs/png/20200222171027.png)
-
-### Step 7: Check Code Quality
-
-Back on the **Activity** page, click **Code Quality** to check the code quality analysis of the demo project, which is provided by SonarQube. The sample code is simple and shows no bugs or vulnerabilities. Click the SonarQube icon on the right to access SonarQube; please refer to [Access SonarQube](../../installation/sonarqube-jenkins) to log in.
-
-![Check Code Quality](https://pek3b.qingstor.com/kubesphere-docs/png/20200222171426.png)
-
-#### View the Quality Report at SonarQube
-
-![Quality report](https://pek3b.qingstor.com/kubesphere-docs/png/20200222171539.png)
-
-### Step 8: Download Artifacts
-
-Enter the first activity and select **Artifacts**. You can find the jar artifact generated by the pipeline, and you can download it by clicking the icon.
-
-![Download Artifacts](https://pek3b.qingstor.com/kubesphere-docs/png/20200222172157.png)
-
-### Step 9: Verify the Kubernetes Resource
-
-If every stage of the pipeline runs successfully, the Docker image is automatically built and pushed to your DockerHub account, and the project is automatically deployed to Kubernetes with a deployment and a service.
-
-9.1. Enter the project `kubesphere-sample-dev` and click **Application Workloads → Workloads** to see that `ks-sample-dev` has been created successfully.
-
-| Environment | Address | Namespace | Deployment | Service |
-| --- | --- | --- | --- | --- |
-| Dev | `http://{$Virtual IP}:8080` or `http://{$Intranet/Public IP}:30861` | kubesphere-sample-dev | ks-sample-dev | ks-sample-dev |
-
-#### View Deployment
-
-![View Deployment](https://pek3b.qingstor.com/kubesphere-docs/png/20200222173254.png)
-
-9.2. Navigate to the **Service** list; you can find that the corresponding service has been created. The NodePort exposed by the service is `30861` in this example.
-
-#### View Service
-
-![View Service](https://pek3b.qingstor.com/kubesphere-docs/png/20200222173213.png)
-
-9.3. Now verify the images pushed to DockerHub. You can see that `devops-sample` is the value of **APP_NAME**, while the tag is the value of `SNAPSHOT-$BUILD_NUMBER`, where `$BUILD_NUMBER` is the serial number of the activity within the pipeline. This tag is also used in the deployment `ks-sample-dev`.
-
-![View DockerHub](https://pek3b.qingstor.com/kubesphere-docs/png/20200222173907.png)
-
-![View DockerHub](https://pek3b.qingstor.com/kubesphere-docs/png/20200222173802.png)
-
-9.4. Since we set an email notification in the pipeline, we can also verify the email in the mailbox.
-
-![Email notification](https://pek3b.qingstor.com/kubesphere-docs/png/20200222173444.png)
-
-### Step 10: Access the Sample Service
-
-We can access the sample service from the command line or in a browser. For example, you can use the web kubectl with the account `admin` as follows:
-
-```bash
-# curl {$Virtual IP}:{$Port} or curl {$Node IP}:{$NodePort}
-curl 10.233.4.154:8080
-Really appreciate your star, that's the power of our life.
-```
-
-Congratulations! You are now familiar with using the graphical editing panel to visualize your CI/CD workflow.
diff --git a/content/en/docs/quick-start/job-quick-start.md b/content/en/docs/quick-start/job-quick-start.md
deleted file mode 100644
index 450dcb4b3..000000000
--- a/content/en/docs/quick-start/job-quick-start.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-title: "Create a Job to Compute π to 2000 Places"
-keywords: 'kubesphere, kubernetes, docker, job'
-description: 'How to create a Kubernetes Job in KubeSphere'
-
-
-linkTitle: "5"
-weight: 3050
----
-
-A Job creates one or more Pods and ensures that a specified number of them successfully terminate. You can also use a Job to run multiple Pods in parallel. For example, we can use a Kubernetes Job to process and analyze data in batches.
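-
-For comparison, an equivalent Job can also be created with a single kubectl command. The following is only a sketch, not part of the console walkthrough below; the Job name `pi-demo` and the namespace `demo-project` are illustrative.
-
-```bash
-# Create an equivalent Job imperatively (name and namespace are examples)
-kubectl -n demo-project create job pi-demo \
-  --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'
-
-# Watch its Pods and read the result once one of them completes
-kubectl -n demo-project get pods -l job-name=pi-demo
-kubectl -n demo-project logs -l job-name=pi-demo
-```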
-
-## Objective
-
-This tutorial describes the basic features of a Job by creating a parallel Job that computes π to 2000 places and prints it out.
-
-## Prerequisites
-
-- You need to create a workspace, a project and the `project-regular` account. Please refer to [Getting Started with Multi-tenant Management](../admin-quick-start) if not yet.
-- You need to sign in with the `project-admin` account and invite `project-regular` to the corresponding project if not yet. Please refer to [Invite Member](../admin-quick-start#task-3-create-a-project).
-
-## Estimated Time
-
-About 15 minutes
-
-## Hands-on Lab
-
-### Create a Job
-
-#### Step 1: Fill in Basic Information
-
-Log in to the KubeSphere console with the `project-regular` account, enter a project, navigate to **Application Workloads → Jobs** and click **Create Job**. Then fill in the basic information, e.g. `job-demo` as its name, and choose **Next**.
-
-![Job List](https://pek3b.qingstor.com/kubesphere-docs/png/20200205204716.png)
-
-#### Step 2: Configure Job Settings
-
-Set the four configuration parameters of the Job spec as shown below:
-
-- Back Off Limit: The number of retries before the Job is marked as failed; set it to `5`.
-- Completions: The expected number of successfully completed Pods; change the value from the default 1 to `4`.
-- Parallelism: The maximum number of Pods running in parallel; change the value from the default 1 to `2`.
-- Active Deadline Seconds: The timeout of a running Job. Once a Job reaches this value, all of its running Pods are terminated and the Job status becomes "Failed". Set it to `300`.
-
-Then click **Next** when you are done.
-
-![Job Settings](https://pek3b.qingstor.com/kubesphere-docs/png/20200205211021.png)
-
-#### Step 3: Set the Job Template
-
-Leave the [RestartPolicy](https://kubernetes.io/docs/concepts/workloads/Pods/pod-lifecycle/#restart-policy) as **Never**, then click **Add Container Image**.
-
-> - Never: The Job creates a new Pod when an error occurs, and the failed Pod does not disappear.
-> - OnFailure: The Job restarts the container in place when an error occurs, instead of creating a new Pod.
-
-Enter `perl` in the image name and press the return key, then scroll down to **Start Command**.
-
-![Job Container](https://pek3b.qingstor.com/kubesphere-docs/png/20200205225230.png)
-
-Check **Start Command** and add the following command, which performs a simple calculation and outputs π to 2000 places. Then click **√** to save it and choose **Next** to finish this step.
-
-```bash
-perl,-Mbignum=bpi,-wle,print bpi(2000)
-```
-
-![Job Start Command](https://pek3b.qingstor.com/kubesphere-docs/png/20200205225435.png)
-
-Click **Next** to skip **Mount Volumes**. Click **Create** to complete the job creation.
-
-![Job Demo](https://pek3b.qingstor.com/kubesphere-docs/png/20200205225718.png)
-
-## Verify the Job Result
-
-1. Enter `job-demo` and inspect the execution records. You can see it displays "completed". There are four completed Pods since Completions was set to `4` in Step 2.
-
-![Job Records](https://pek3b.qingstor.com/kubesphere-docs/png/20200205230222.png)
-
-2. In **Resource Status**, you can inspect the Pod status. Since Parallelism was set to 2, two Pods are created in the first batch, then two more, until four Pods have been created at the end of the Job. You can also confirm these numbers from the command line, as sketched below.
-
-![Job Resources](https://pek3b.qingstor.com/kubesphere-docs/png/20200205230003.png)
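-
-A sketch of the command-line check, assuming the Job `job-demo` lives in a project (namespace) named `demo-project`:
-
-```bash
-# Print completions, parallelism and the number of succeeded Pods
-kubectl -n demo-project get job job-demo \
-  -o jsonpath='{.spec.completions} {.spec.parallelism} {.status.succeeded}{"\n"}'
-
-# The COMPLETIONS column should eventually read 4/4
-kubectl -n demo-project get job job-demo
-```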
-
-> Tip: Since the creation of the containers may time out, if the Job fails, click **··· → Rerun** from the list to rerun this Job.
-
-![Rerun Job](https://pek3b.qingstor.com/kubesphere-docs/png/20200205230541.png)
-
-3. In the **Resource Status** tab, expand one of the Pods, then click into **Container Logs** to inspect the container logs, which display the calculation result, i.e. π to 2000 places.
-
-![Container Logs Entry](https://pek3b.qingstor.com/kubesphere-docs/png/20200205230919.png)
-
-![Container Logs](https://pek3b.qingstor.com/kubesphere-docs/png/20190716213657.png#alt=)
-
-Congratulations! You have learned the Job's basic functions. For further details, please refer to [Jobs - Run to Completion](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/).
diff --git a/content/en/docs/quick-start/minimal-kubesphere-on-k8s.md b/content/en/docs/quick-start/minimal-kubesphere-on-k8s.md
new file mode 100644
index 000000000..36fd2ce80
--- /dev/null
+++ b/content/en/docs/quick-start/minimal-kubesphere-on-k8s.md
@@ -0,0 +1,8 @@
+---
+title: "Minimal KubeSphere on Kubernetes"
+keywords: 'kubesphere, kubernetes, docker, multi-tenant'
+description: 'Install a Minimal KubeSphere on Kubernetes'
+
+linkTitle: "Minimal KubeSphere on Kubernetes"
+weight: 3020
+---
diff --git a/content/en/docs/quick-start/mysql-deployment.md b/content/en/docs/quick-start/mysql-deployment.md
deleted file mode 100644
index beab4a692..000000000
--- a/content/en/docs/quick-start/mysql-deployment.md
+++ /dev/null
@@ -1,124 +0,0 @@
----
-title: 'Deploying a MySQL Stateful Application'
-keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
-description: ''
-
-_build:
-  render: false
----
-
-## Objective
-
-This tutorial takes a StatefulSet as an example, presenting how to use the image `mysql:5.6` to set up a stateful MySQL application as the backend of a [WordPress](https://wordpress.org/) website. The initial MySQL password will be created and saved as a [Secret](../configuration/secrets/). This guide only demonstrates the process; for a detailed explanation of the relevant parameters and fields, please refer to [Secrets](../configuration/secrets/) and [StatefulSets](../workload/statefulsets/).
-
-## Prerequisites
-
-- The workspace, project and the general user account `project-regular` should be created. If not, please refer to the [Quick Start Guide of Multi-tenant Management](../quick-start/admin-quick-start/).
-- Use `project-admin` to invite `project-regular` to the project and grant it the role of `operator`. Please refer to [Quick Start Guide of Multi-tenant Management - Inviting Members](../quick-start/admin-quick-start/).
-
-## Estimated Time
-
-- About 10 minutes
-
-## Hands-on Lab
-
-## Deploy MySQL
-
-### Step 1: Create the Password
-
-MySQL's environment variable `MYSQL_ROOT_PASSWORD`, namely the root user's password, is private information that should not appear in plaintext in the workload settings. Therefore, we create a Secret to hold it instead; the Secret will be referenced as an environment variable when setting up the MySQL containers.
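-
-The console steps below produce an ordinary Kubernetes Secret. For readers who want to see what is created under the hood, the following sketch builds an equivalent Secret with kubectl; the namespace is an illustrative assumption.
-
-```bash
-# Equivalent Secret from the command line (namespace is an example)
-kubectl -n demo-project create secret generic mysql-secret \
-  --from-literal=MYSQL_ROOT_PASSWORD=123456
-```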
-
-1.1. Log in to KubeSphere as `project-regular`. Select **Secrets** in **Configuration Center → Secrets**, then click **Create**.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20190716180335.png#alt=)
-
-1.2. Fill in the Secret's basic information, then click **Next**.
-
-- Name: A custom name for the Secret referenced by the MySQL container, such as `mysql-secret`.
-- Nickname: A nickname can be a mix of characters that helps you differentiate resources, such as `MySQL Secret`.
-- Information Description: Briefly introduce the Secret, such as `MySQL initial password`.
-
-1.3. Fill in the following information on the Secret settings page, then click **Create**.
-
-- Type: Select `default` (Opaque).
-- Data: Fill in `MYSQL_ROOT_PASSWORD` and `123456` as the data key-value pair.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20190716180525.png#alt=)
-
-### Step 2: Create a StatefulSet
-
-Navigate to **Workload → StatefulSets**, then click **Create StatefulSet**.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20190716180714.png#alt=)
-
-### Step 3: Fill in Basic Information
-
-Fill in the following information and then click **Next**.
-
-- Name: (Required) A simple name helps with browsing and searching, such as `wordpress-mysql`.
-- Nickname: (Optional) A nickname helps with better resource differentiation, such as `MySQL Database`.
-- Information Description: Briefly introduce the workload for users' understanding.
-
-### Step 4: Container Group Template
-
-4.1. Click **Add Container** to fill in the container settings. The name is customizable. Fill in `mysql:5.6` as the image (a specific image tag is required). CPU and memory can be left unset; in that case the default request values defined when the project was created are used.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20190716193052.png#alt=)
-
-4.2. Set up the **Service Settings** and the **Environment Variables**, leave the others unchanged, then click **Save**.
-
-- Port: It can be named `port`. Select the `TCP` protocol and fill in `3306` as MySQL's container port.
-- Environment Variables: Check the box and click **Reference Configuration Center**. Enter `MYSQL_ROOT_PASSWORD` as the name, and select the Secret created in the first step, `mysql-secret`, with the key `MYSQL_ROOT_PASSWORD`.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20190716193727.png#alt=)
-
-4.3. Click **Save** and then click **Next**.
-
-### Step 5: Add Storage Volume Template
-
-Complete the container group template, then click **Next** and click **Add Storage Volume Template**. Stateful data should be saved in a persistent storage volume, so you need to add a storage volume to achieve data persistence. Fill in the storage volume information as follows:
-
-- Volume Name: `mysql-pvc`
-- Storage Type: Select an existing storage type, such as `Local`.
-- Capacity: Set `10 Gi` by default and keep the access mode as `ReadWriteOnce`.
-- Mount Path: The storage volume's mount path in the container. Select `Read and Write` and set the path to `/var/lib/mysql`.
-
-Click **Save** when you're done, then click **Next**.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20190716194134.png#alt=)
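-
-Once the StatefulSet is created, the volume claim template above materializes as one PVC per replica, conventionally named `<volumeName>-<statefulSetName>-<ordinal>`. A sketch of verifying it, with an illustrative namespace:
-
-```bash
-# The claim created from the template for replica 0 should appear as:
-kubectl -n demo-project get pvc mysql-pvc-wordpress-mysql-0
-```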
-
-### Step 6: Service Configuration
-
-If you need to expose the MySQL application to other applications and services, you need to create a service for it. Complete the parameter settings by referring to the picture below, then click **Next**.
-
-- Service Name: `mysql-service` (Attention: the service name will be referenced by WordPress, so use this name when you add its environment variables.)
-- Session affinity: None by default.
-- Ports: The name is customizable. Select the TCP protocol and fill in `3306` for both the MySQL service port and the target port. The first is the service port that needs to be exposed; the second (target port) is the container port.
-
-> Note: If session affinity is required, you can select "ClientIP" in the drop-down box, or set the value of service.spec.sessionAffinity to "ClientIP" ("None" by default) in code mode. This configuration forwards requests from the same client IP address to the same backend Pod.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20190716194331.png#alt=)
-
-### Step 7: Tag Setting
-
-Keep the tag as the default setting `app: wordpress-mysql`. The node selector on the next page can be used to schedule the container group onto an expected node; do not set it for now. Click **Create**.
-
-### Inspect the MySQL Application
-
-You can see that the MySQL StatefulSet displays "Updating", since this process requires a series of operations such as pulling the Docker image, creating the container and initializing the database, and the Pod shows `ContainerCreating`. Normally, it changes to "Running" in about one minute. Click the StatefulSet to access its detail page, including Resource Status, Version Control, Monitoring, Environment Variables and Events.
-
-**Resource Status**
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20190716195604.png#alt=)
-
-**Monitoring Data**
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20190716195732.png#alt=)
-
-**Events List**
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20190716200230.png#alt=)
-
-So far, the MySQL stateful application has been created successfully; it will serve as the backend database of the WordPress application.
-
-It's recommended to continue with the [Quick Start - WordPress Deployment Guide](../wordpress-deployment) to deploy the blog website, after which you will be able to access the web service.
diff --git a/content/en/docs/quick-start/one-click-deploy.md b/content/en/docs/quick-start/one-click-deploy.md
deleted file mode 100644
index aebc02483..000000000
--- a/content/en/docs/quick-start/one-click-deploy.md
+++ /dev/null
@@ -1,91 +0,0 @@
----
-title: "Deploy Grafana App to Kubernetes using Application Template"
-keywords: "kubesphere, kubernetes, docker, helm, Grafana, application store"
-description: "How to deploy applications using templates based on OpenPitrix"
-
-linkTitle: "4"
-weight: 3040
----
-
-## Objective
-
-This tutorial shows you how to quickly deploy a [Grafana](https://grafana.com/) application using templates from the KubeSphere application store, which is powered by [OpenPitrix](https://github.com/openpitrix/openpitrix). The demonstration includes importing an application repository, and sharing and deploying apps within a workspace.
-
-## Prerequisites
-
-- You have enabled the [KubeSphere Application Store](../../installation/install-openpitrix)
-- You have completed the tutorial in [Getting Started with Multi-tenant Management](../admin-quick-start)
-
-## Hands-on Lab
-
-### Step 1: Add an Application Repository
-
-> Note: The application repository can be hosted either by object storage, e.g. [QingStor Object Storage](https://www.qingcloud.com/products/qingstor/) or [AWS S3](https://aws.amazon.com/what-is-cloud-object-storage/), or by a [GitHub repository](https://github.com/). The packages are composed of the Helm chart template files of the applications. Therefore, before adding an application repository to KubeSphere, you need to create an object storage bucket and upload the Helm packages in advance. This tutorial prepares a demo repository based on QingStor Object Storage.
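-
-Since an app repository is just a standard Helm chart repository, you can also inspect it with the Helm CLI before importing it. A sketch (the `helm search` form shown is Helm v2 syntax; on Helm v3 use `helm search repo`):
-
-```bash
-# Add and inspect the demo repository used below
-helm repo add demo-repo https://helm-chart-repo.pek3a.qingstor.com/kubernetes-charts/
-helm repo update
-helm search demo-repo/grafana   # Helm v3: helm search repo demo-repo/grafana
-```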
-
-1.1. Sign in with the `ws-admin` account, click **View Workspace** and navigate to **Workspace Settings → App Repos**, then click **Create App Repository**.
-
-![App Repo List](https://pek3b.qingstor.com/kubesphere-docs/png/20200106143904.png)
-
-1.2. Fill in the basic information: name it `demo-repo` and enter the URL `https://helm-chart-repo.pek3a.qingstor.com/kubernetes-charts/`. You can validate whether this URL is available, and choose **OK** when you are done.
-
-> Note: All of the applications in the Helm repository will be imported into KubeSphere automatically. You can browse those app templates in each project.
-
-![Add App Repo](https://pek3b.qingstor.com/kubesphere-docs/png/20200106144105.png)
-
-### Step 2: Browse App Templates
-
-2.1. Log in with the `project-regular` account, then enter `demo-project`.
-
-2.2. Click **Application Workloads → Applications**, then click **Deploy New Application**.
-
-![App List](https://pek3b.qingstor.com/kubesphere-docs/png/20200106161804.png)
-
-2.3. Choose **From App Templates** and select `demo-repo` from the dropdown list.
-
-![App Templates](https://pek3b.qingstor.com/kubesphere-docs/png/20200106162219.png)
-
-2.4. Search `Grafana` and click into the Grafana app. We will demonstrate deploying Grafana to Kubernetes as an example.
-
-> Note: The applications of this demo repository are synchronized from the Google Helm repo. Some applications may not deploy successfully, since the Helm charts are maintained by different organizations.
-
-### Step 3: Deploy Grafana Application
-
-3.1. Click **Deploy** on the right. Generally you do not need to change any configuration; just click **Deploy**.
-
-![View Grafana](https://pek3b.qingstor.com/kubesphere-docs/png/20200106171747.png)
-
-3.2. Wait for about two minutes, and you will see the application `grafana` showing `active` in the application list.
-
-![Deploy Grafana](https://pek3b.qingstor.com/kubesphere-docs/png/20200106172151.png)
-
-### Step 4: Expose Grafana Service
-
-4.1. Click into the Grafana application, then enter its service page.
-
-![View Grafana Detail](https://pek3b.qingstor.com/kubesphere-docs/png/20200106172416.png)
-
-4.2. On this page, make sure its deployment and Pod are running, then click **More → Edit Internet Access**, select **NodePort** in the dropdown list, and click **OK** to save it.
-
-![Edit Internet Access for Grafana Service](https://pek3b.qingstor.com/kubesphere-docs/png/20200106172532.png)
-
-4.3. At this point, you will be able to access the Grafana service from outside the cluster.
-
-![Grafana Service Endpoint](https://pek3b.qingstor.com/kubesphere-docs/png/20200106172837.png)
-
-### Step 5: Access the Grafana Service
-
-In this step, we can access the Grafana service using `${Node IP}:${NODEPORT}`, e.g. `http://192.168.0.54:31407`, or click the button **Click to visit** to access the Grafana dashboard.
-
-5.1. Note that you have to obtain the account and password from the grafana secret in advance. Navigate to **Configuration Center → Secrets** and click into **grafana-l47bmc** with type Default.
-
-![Grafana Secret](https://pek3b.qingstor.com/kubesphere-docs/png/20200106173434.png)
-
-5.2. Click the eye button to display the secret information, then copy the values of **admin-user** and **admin-password** respectively.
-
-![Grafana Credentials](https://pek3b.qingstor.com/kubesphere-docs/png/20200106173531.png)
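-
-Alternatively, the same credentials can be read with kubectl. A sketch, assuming the generated secret name `grafana-l47bmc` from this example and the project `demo-project`:
-
-```bash
-# Decode the admin user and password from the generated secret
-kubectl -n demo-project get secret grafana-l47bmc \
-  -o jsonpath='{.data.admin-user}' | base64 -d; echo
-kubectl -n demo-project get secret grafana-l47bmc \
-  -o jsonpath='{.data.admin-password}' | base64 -d; echo
-```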
-
-5.3. Open the Grafana login page and sign in with the **admin** account.
-
-![Grafana Login Page](https://pek3b.qingstor.com/kubesphere-docs/png/20190717152831.png#alt=)
-
-![Grafana Dashboard](https://pek3b.qingstor.com/kubesphere-docs/png/20190717152929.png#alt=)
diff --git a/content/en/docs/quick-start/pipeline-git-harbor.md b/content/en/docs/quick-start/pipeline-git-harbor.md
deleted file mode 100644
index 79f3b0f1f..000000000
--- a/content/en/docs/quick-start/pipeline-git-harbor.md
+++ /dev/null
@@ -1,253 +0,0 @@
----
-title: "Building a Pipeline based on GitLab and Harbor"
-keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
-description: ''
-
-_build:
-  render: false
---
-
-The KubeSphere Installer integrates Harbor and GitLab. The built-in Harbor and GitLab are optional components that need to be enabled before installation; teams can decide whether to install them according to their projects' requirements, which makes it easier to manage the projects' images and code. This guide can be used to build offline pipelines.
-
-## Objective
-
-This guide will show how to create a pipeline through a Jenkinsfile kept in a GitLab repository. There are 8 stages in the pipeline. First, images are built from the source code in GitLab; then the images are pushed to Harbor's private repository; finally, a "Hello,World!" page is deployed to the Dev and Production environments in the KubeSphere cluster, where it is accessible through the public network. The two environments are resource-isolated through Kubernetes namespaces.
-
-## Prerequisites
-
-This guide takes GitLab and Harbor as examples. Please confirm the installation of the [built-in Harbor](../../installation/harbor-installation/) and the built-in GitLab, and prepare the base image `java:openjdk-8-jre-alpine`.
-
-## Hands-on Lab
-
-### Pipeline Overview
-
-The flowchart below illustrates the pipeline's entire workflow:
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20190512155453.png)
-
-> Workflow:
-> - **Stage 1. Checkout SCM**: Pull the GitLab repository code.
-> - **Stage 2. Unit test**: Proceed to the following stages only after the unit test has passed.
-> - **Stage 3. SonarQube analysis**: SonarQube code quality check.
-> - **Stage 4. Build & push snapshot image**: Build an image for the branches selected by the behavioral strategy, tag it as `SNAPSHOT-$BRANCH_NAME-$BUILD_NUMBER` and push it to Harbor (`$BUILD_NUMBER` is the serial number in the pipeline's activity list).
-> - **Stage 5. Push latest image**: Tag the master branch as latest and push it to Harbor.
-> - **Stage 6. Deploy to dev**: Deploy the master branch to the Dev environment. This stage requires review.
-> - **Stage 7. Push with tag**: Generate a tag, release it on GitLab and push the tag to Harbor.
-> - **Stage 8. Deploy to production**: Deploy the published tag to the Production environment.
-
-## Basic Settings
-
-### Step 1: Edit CoreDNS Settings
-
-Configure the DNS service of the KubeSphere cluster through the hosts plug-in of CoreDNS, so that external services can be accessed through their hostnames inside the cluster.
-
-### Step 2: Upload Base Images to Harbor
-
-Then upload the base image `java:openjdk-8-jre-alpine` to Harbor.
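-
-A sketch of what the upload amounts to, assuming the built-in Harbor address `harbor.devops.kubesphere.local:30280` and the default `library` project used later in this guide:
-
-```bash
-# Pull the base image, retag it for the private Harbor, and push it
-docker pull java:openjdk-8-jre-alpine
-docker tag java:openjdk-8-jre-alpine \
-  harbor.devops.kubesphere.local:30280/library/java:openjdk-8-jre-alpine
-docker login harbor.devops.kubesphere.local:30280
-docker push harbor.devops.kubesphere.local:30280/library/java:openjdk-8-jre-alpine
-```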
-
-## Create Credentials
-
-Log in to KubeSphere as the general project user `project-regular`, enter the created DevOps project, and start creating credentials.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20191018195927.png)
-
-1. The Jenkinsfile in this sample repository requires three credentials, for Harbor, GitLab and Kubernetes (a kubeconfig used to access the running Kubernetes cluster). Create these three credentials one by one.
-
-2. Then visit SonarQube, create a token for Java, and copy it.
-
-3. Finally, in the DevOps project `devops-demo` in KubeSphere, similar to the steps above, click **Create** under **Credentials** to create a `secret text` credential. Name the credential ID **sonar-token** and use the token copied in the previous step as the secret. Click **Confirm** when it's done.
-
-At this point, 4 credentials have been created. The next step is to replace the four credential IDs in the sample repository's Jenkinsfile with the credential IDs you created.
-
-![ce](https://kubesphere-docs.pek3b.qingstor.com/png/ce.png)
-
-## Edit Jenkinsfile
-
-### Step 1: Enter the Project
-
-1. According to the prerequisites, the GitHub repository [`devops-java-sample`](https://github.com/kubesphere/devops-java-sample) should already be imported into GitLab.
-
-> Note: If you cannot import it from GitHub due to network restrictions, please clone it to another server and then upload it to the GitLab repository. The name of the repository should stay consistent.
-
-![gitlab](https://kubesphere-docs.pek3b.qingstor.com/png/gitlab-succ.png)
-
-2. Click into the project.
-
-### Step 2: Edit Jenkinsfile
-
-1. Enter **Jenkinsfile-on-prem** from the **Root Directory**.
-
-![jenkins](https://kubesphere-docs.pek3b.qingstor.com/png/jenkins.png)
-
-2. Click `Edit` in the GitLab UI. You need to edit the following environment variables:
-
-![edit](https://kubesphere-docs.pek3b.qingstor.com/png/edit.png)
-
-| Editing Item | Value | Description |
-| :--- | :--- | :--- |
-| HARBOR\_CREDENTIAL\_ID | harbor-id | Fill in the Harbor credential ID from the credential-creation step, used to log in to your Harbor repository |
-| GITLAB\_CREDENTIAL\_ID | gitlab-id | Fill in the GitLab credential ID from the credential-creation step, used to push tags to the GitLab repository |
-| KUBECONFIG\_CREDENTIAL\_ID | demo-kubeconfig | The kubeconfig credential ID for accessing the running Kubernetes cluster |
-| REGISTRY | harbor.devops.kubesphere.local:30280 | The Harbor domain name used by default for pushing images |
-| HARBOR_NAMESPACE | library | Defaults to the `library` project under Harbor; change the project name according to the actual situation |
-| GITLAB_ACCOUNT | admin1 | The GitLab user, `admin1` by default |
-| APP_NAME | devops-docs-sample | Application name |
-| SONAR\_CREDENTIAL\_ID | sonar-token | Fill in the SonarQube token credential ID from the credential-creation step, used for the code quality check |
-
-## Create Two Projects
-
-The CI/CD pipeline will eventually deploy the example to the Dev and Production (namespace) environments based on the sample project's [yaml templates](https://github.com/kubesphere/devops-java-sample/tree/master/deploy).
-
-The projects' names are `kubesphere-sample-dev` and `kubesphere-sample-prod`. These two projects need to be created in advance; refer to [Building a Pipeline based on the Spring Boot Project - Create a Project](../devops-online) to create them.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20191018200218.png)
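-
-If you only need the underlying namespaces for a quick test, they can also be created with kubectl, as sketched below. Note that namespaces created this way start out unbound to any workspace, which is why the console flow referenced above is recommended:
-
-```bash
-# Namespaces backing the two environments (the console flow is preferred)
-kubectl create namespace kubesphere-sample-dev
-kubectl create namespace kubesphere-sample-prod
-```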
-
-## Create Pipelines
-
-### Step 1: Fill in Basic Information
-
-1. Enter the created DevOps project, select **Pipelines** in the left menu bar, and click **Create**.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20191018200350.png)
-
-2. In the pop-up window, enter the pipeline's basic information:
-
-- Name: Give it a name that is easy to understand and search.
-- Description: Briefly describe the pipeline's main features to help others understand what it does.
-- Code Repository: Select the code repository; it must contain the Jenkinsfile.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20191018200519.png)
-
-### Step 2: Add Git Repository
-
-1. Click the code repository. This example uses a GitLab repository.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20191018200627.png)
-
-2. Enter the repository URL, e.g. `http://gitlab.devops.kubesphere.local:30080/admin1/devops-java-sample.git`.
-
-> Note: The HTTP and SSH URLs displayed in GitLab are incorrect. The HTTP URL needs port 30080 added manually, and the SSH URL needs the `ssh://` protocol and port 30090 added manually.
-
-Create the credential `gitlab-id` before selecting it as the authentication.
-
-Click **Save** to proceed.
-
-### Step 3: Advanced Settings
-
-After completing the code repository settings, you enter the advanced settings page. Advanced settings support customizing the pipeline's build records, behavioral strategies, periodic scans and so on. The following briefly defines the relevant configuration:
-
-1. In the build settings, check **Discard old builds**. Here, **days to keep builds** and **maximum number of builds to keep** both default to -1.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20191018201108.png)
-
-2. The default script path is Jenkinsfile; modify it to `Jenkinsfile-on-prem`.
-
-> Note: The path is the Jenkinsfile's path in the code repository; the default indicates the root of the sample repository. If the file location changes, you need to modify the script path accordingly.
-
-3. In **Scan Repo Trigger**, check `If there is no automatic scanning, scan regularly`. The scanning interval can be customized according to your team's preference; this example sets it to `5 minutes`.
-
-> Note: Periodic scanning sets a fixed interval at which the pipeline scans the remote repository and checks, according to the **Behavioral Strategy**, whether there are code updates or new PRs.
->
-> Webhook Push:
-> A webhook is an efficient way to detect changes in the remote repository and trigger a new run automatically, and it should be the primary way for GitHub and Git (e.g. GitLab) repositories to trigger Jenkins scans. Refer to the interval setting in the previous step. In this example you can run the pipeline manually; if you need remote branches to be scanned and triggered automatically, please refer to [Set Automatic Trigger Scanning - GitHub SCM](../../devops/auto-trigger).
-
-Complete the advanced settings and click **Create**.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20191018201244.png)
-
-### Step 4: Run the Pipeline
-
-After the pipeline is created, click the browser's **Refresh** button to see a record of the automatically triggered remote-branch scan.
-
-1. Click **Run** on the right. Branches are scanned automatically from the code repository according to the **Behavioral Strategy**. Select the `master` branch to build in the pop-up window; the system will load Jenkinsfile-on-prem according to the selected branch (the default script path is Jenkinsfile).
-
-2. Since `TAG_NAME: defaultValue` has no default value in Jenkinsfile-on-prem, enter a tag number such as `v0.0.1` in `TAG_NAME`.
-
-3. Click **Confirm** to start a new pipeline activity.
-
-> Note: The tag is used to generate a release and images with that tag. `TAG_NAME` should not duplicate an existing tag name in the code repository; if it does, the pipeline cannot run.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20191018201430.png)
-
-For now, the pipeline has been created and has started running.
-
-> Note: Click **Branch** to switch to the branch list and see which branches are running. The branches here depend on the discovery strategy configured in the **Behavioral Strategy**.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20191018201509.png)
-
-### Step 5: Review the Pipeline
-
-For the convenience of the demonstration, the current account is used as the reviewer by default. When the pipeline reaches an `input` step, its status is suspended until you manually click **Continue**. Note that three stages are defined in Jenkinsfile-on-prem for deploying to the Dev and Production environments and pushing the tag, so the pipeline has to be reviewed three times, at `deploy to dev`, `push with tag` and `deploy to production`. If you do not review it, or click **Terminate**, the pipeline will not continue to run.
-
-> Note: In real development and production scenarios, an administrator or operator with higher authority may be required to review the pipeline and the images, and to decide whether they can be pushed to the code or image repository and deployed to the development or production environment. The `input` step of a Jenkinsfile supports specifying particular users to review the pipeline. To specify a user named project-admin as the reviewer, append a `submitter` field to the input step of the Jenkinsfile; separate multiple users with commas, as shown below:
-
-```groovy
-···
-input(id: 'release-image-with-tag', message: 'release image with tag?', submitter: 'project-admin,project-admin1')
-···
-```
-
-## Check the Pipeline
-
-1. Click the serial number of the currently running record in the pipeline's **Activity** list. The page shows the running status of each step in the pipeline. Right after creation, the pipeline is still initializing, so only the log window may be displayed; after about one minute the stages appear, with the stage names marked in the black boxes. The 8 stages of the sample pipeline are defined in [Jenkinsfile-on-prem](https://github.com/kubesphere/devops-java-sample/blob/master/Jenkinsfile-on-prem).
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20191018201731.png)
-
-2. Click `Check the logs` at the top right corner to view the pipeline's run log. The page shows the specific log, running status and time of each step; click a specific stage on the left to expand its log. The logs can be downloaded locally, which makes it easier to analyze and locate problems when errors occur.
-
-![](https://pek3b.qingstor.com/kubesphere-docs/png/20191018201809.png)
-
-## Check Results
-
-1. If the pipeline executes successfully, click **Code Quality** under the pipeline to see the code quality result provided by SonarQube, as shown below (for reference only).
-
-2. The Docker image finally built by the pipeline will also be pushed to Harbor, which we have already configured in Jenkinsfile-on-prem. Log in to Harbor to check the push results: you can see that the images tagged snapshot, TAG_NAME (master-1) and latest have been pushed to Harbor, and a new tag and release have been generated in GitLab as well. The sample website is eventually deployed to KubeSphere's `kubesphere-sample-dev` and `kubesphere-sample-prod` projects as a deployment and a service.
-
-| Environment | URL | Namespace | Deployment | Service |
-| :--- | :--- | :--- | :--- | :--- |
-| Dev | Public Network IP : 30861 (`${EIP}:${NODEPORT}`) | kubesphere-sample-dev | ks-sample-dev | ks-sample-dev |
-| Production | Public Network IP : 30961 (`${EIP}:${NODEPORT}`) | kubesphere-sample-prod | ks-sample | ks-sample |
-
-3. You can go back to the project list in KubeSphere and view the status of the deployments and services in the two projects you created. For example, to check the deployment under the `kubesphere-sample-prod` project, enter the project and click **Workload → Deployment** in the left menu bar. You can see that ks-sample has been created successfully; under normal circumstances, the status of the deployment shows **Running**.
-
-4. Select **Network & Services → Services** to check the corresponding service. The NodePort exposed by the service is `30961`.
-
-5. Check the images pushed to your personal Harbor. You can see that devops-java-sample is the value of APP_NAME, and the tags are the ones defined in Jenkinsfile-on-prem.
-
-## Visit the Sample Service
-
-Access the services deployed to the KubeSphere Dev and Production environments in a browser or from the command line:
-
-**Dev Environment**
-
-For example, when a browser accesses `http://192.168.0.20:30861/` (namely `http://IP:NodePort/`), you can see the `Hello,World!` page; alternatively, verify it from the command line:
-
-```bash
-curl http://192.168.0.20:30861
-Hello,World!
-```
-
-**Production Environment**
-
-Similarly, you can also access `http://192.168.0.20:30961/` (namely `http://IP:NodePort/`).
-
-At this point, you have completed creating a Jenkinsfile-in-SCM pipeline based on GitLab and Harbor in an offline environment.
\ No newline at end of file
diff --git a/content/en/docs/quick-start/source-to-image.md b/content/en/docs/quick-start/source-to-image.md
deleted file mode 100644
index 4028e1d7f..000000000
--- a/content/en/docs/quick-start/source-to-image.md
+++ /dev/null
@@ -1,130 +0,0 @@
----
-title: "Source to Image: Publish Your App without Dockerfile"
-keywords: 'kubesphere, kubernetes, docker, jenkins, s2i, source to image'
-description: 'Publish your application using source to image'
-
-linkTitle: "7"
-weight: 3070
----
-
-## What is Source to Image
-
-As [Features and Benefits](../../introduction/features) elaborates, Source-to-Image (S2I) is a toolkit and workflow for building reproducible container images from source code. S2I produces ready-to-run images by injecting source code into a container image and letting the container prepare that source code for execution. KubeSphere integrates S2I to automatically build images and publish them to Kubernetes without writing a Dockerfile.
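-
-For background, the workflow KubeSphere automates is the same one offered by the upstream `s2i` command-line tool. A sketch, assuming `s2i` is installed locally; the builder image and output image names are illustrative, not necessarily the ones KubeSphere uses:
-
-```bash
-# s2i build <source> <builder-image> <output-image>
-s2i build https://github.com/kubesphere/devops-java-sample \
-  kubesphere/java-8-centos7 your-namespace/devops-sample
-docker push your-namespace/devops-sample
-```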
-
-## Objective
-
-This tutorial will use S2I to import the source code of a Java sample project into KubeSphere, build a Docker image and push it to a target registry, and finally publish it to Kubernetes and expose the service to the outside.
-
-![S2I Process](https://pek3b.qingstor.com/kubesphere-docs/png/20200207162613.png)
-
-## Prerequisites
-
-- You need to enable the [KubeSphere DevOps system](../../installation/install-devops).
-- You need [GitHub](https://github.com/) and [DockerHub](http://www.dockerhub.com/) accounts. GitLab and Harbor are also supported; we will use GitHub and DockerHub in this tutorial.
-- You need to create a workspace, a project and a `project-regular` account with the role of operator; see [Getting Started with Multi-tenant Management](/../../quick-start/admin-quick-start).
-- Set a dedicated CI node for building images; please refer to [Set CI Node for Dependency Cache](../../devops/devops-ci-node). This is not mandatory, but it is recommended for development and production environments since it caches code dependencies.
-
-## Estimated Time
-
-20-30 minutes
-
-## Hands-on Lab
-
-### Step 1: Create Secrets
-
-Log in to KubeSphere with the account `project-regular`. Go to your project and create the secrets for DockerHub and GitHub. Please refer to [Creating Common-used Secrets](../../configuration/secrets#create-common-used-secrets).
-
-> Note: You may not need to create the GitHub secret if your forked project below is public.
-
-### Step 2: Fork Project
-
-Log in to GitHub and fork the GitHub repository [devops-java-sample](https://github.com/kubesphere/devops-java-sample) to your personal GitHub account.
-
-![Fork Project](https://pek3b.qingstor.com/kubesphere-docs/png/20200210174640.png)
-
-### Step 3: Create Service
-
-#### Fill in Basic Information
-
-3.1. Navigate to **Application Workloads → Services** and click **Create Service**.
-
-![Create Service](https://pek3b.qingstor.com/kubesphere-docs/png/20200210180908.png)
-
-3.2. Choose **Java** under **Build a new service from source code repository**, then name it `s2i-demo` and click **Next**.
-
-#### Build Settings
-
-3.3. Now we need to go to GitHub and copy the URL of the forked repository first.
-
-![GitHub](https://pek3b.qingstor.com/kubesphere-docs/png/20200210215006.png)
-
-3.4. Paste the URL into **Code URL** and enter an image name in the form `namespace/image-name` into **imageName**, e.g. `pengfeizhou/s2i-sample` in this demo. As for **secret** and **Target image repository**, choose the secrets created in Step 1, say `github-id` and `dockerhub-id` respectively.
-
-> Note: KubeSphere has built-in common S2I templates for Java, Node.js and Python. It also allows you to [customize S2I templates](../../developer/s2i-template) for other languages.
-
-![Build Settings](https://pek3b.qingstor.com/kubesphere-docs/png/20200210220057.png)
-
-3.5. Click **Next** to go to the **Container Settings** tab. In the **Service Settings** part, name the port `http-port` for example. **Container Port** and **Service Port** are both `8080`.
-
-![Container Settings](https://pek3b.qingstor.com/kubesphere-docs/png/20200226173052.png)
-
-3.6. Scroll down to **Health Checker**, check it and click `Add Container ready check`, then fill in the contents as follows:
-
-- Port: Enter `8080`; it maps to the service port that we need to check.
-- Initial Delay(s): `30`; the number of seconds after the container has started before liveness or readiness probes are initiated.
-- Timeout(s): `10`; the number of seconds after which the probe times out (the default is 1 second).
-
-![Health Checker](https://pek3b.qingstor.com/kubesphere-docs/png/20200210223047.png)
-
-Then click `√` to save it when you are done, and click **Next**.
-
-#### Create S2I Deployment
-
-Click **Next** again to skip **Mount Volumes**. Check **Internet Access** and choose **NodePort** to expose the S2I service through `{$Node IP}:{$NodePort}`. Now click **Create** to start the S2I process.
-
-![Internet Access](https://pek3b.qingstor.com/kubesphere-docs/png/20200210223251.png)
-
-### Step 4: Verify Build Progress
-
-Choose **Image Builder** and drill into the newly generated S2I build.
-
-![Build Progress](https://pek3b.qingstor.com/kubesphere-docs/png/20200210224618.png)
-
-You will be able to inspect the logs by expanding **Job Records**. Normally you can see it output "Build completed successfully" in the end.
-
-![Build Logs](https://pek3b.qingstor.com/kubesphere-docs/png/20200210225006.png)
-
-So far, this S2I build has created a corresponding Job, Deployment and Service. We can verify each resource object as follows.
-
-#### Job
-
-![Job](https://pek3b.qingstor.com/kubesphere-docs/png/20200210230158.png)
-
-#### Deployment
-
-![Deployment](https://pek3b.qingstor.com/kubesphere-docs/png/20200210230217.png)
-
-#### Service
-
-![Service](https://pek3b.qingstor.com/kubesphere-docs/png/20200210230239.png)
-
-### Step 5: Access S2I Service
-
-Go into the S2I service's detail page and access the service through `Endpoints`, `{$Virtual IP}:{$Port}` or `{$Node IP}:{$NodePort}`.
-
-![Access Service](https://pek3b.qingstor.com/kubesphere-docs/png/20200210230444.png)
-
-```bash
-$ curl 10.233.90.126:8080
-Really appreciate your star, that is the power of our life.
-```
-
-> Tip: If you need to access this service externally, make sure the traffic can pass through the NodePort. You may need to configure the firewall and port forwarding according to your environment.
-
-### Step 6: Verify Image Registry
-
-Since you set DockerHub as the target registry, you can log in to your personal DockerHub to check whether the sample image has been pushed by the S2I job.
-
-![Image in DockerHub](https://pek3b.qingstor.com/kubesphere-docs/png/20200210231552.png)
-
-Congratulations! You are now familiar with the S2I tool.
diff --git a/content/en/docs/quick-start/wordpress-deployment.md b/content/en/docs/quick-start/wordpress-deployment.md
deleted file mode 100644
index 2a39e539b..000000000
--- a/content/en/docs/quick-start/wordpress-deployment.md
+++ /dev/null
@@ -1,153 +0,0 @@
----
-title: "Publish WordPress App to Kubernetes"
-keywords: 'kubesphere, kubernetes, docker, wordpress'
-description: 'How to deploy WordPress into Kubernetes on KubeSphere'
-
-
-linkTitle: "3"
-weight: 3030
----
-
-## WordPress Introduction
-
-WordPress is an online, open-source website creation tool written in PHP, with a back-end MySQL database and a front-end component. We can deploy WordPress to Kubernetes using Kubernetes resource objects.
-
-![WordPress](https://pek3b.qingstor.com/kubesphere-docs/png/20200105181908.png)
-
-## Objective
-
-In this tutorial we will create a WordPress application as an example, demonstrating how to deploy an application with multiple components to Kubernetes through the KubeSphere console.
-
-## Estimated Time
-
-About 15 minutes
-
-## Hands-on Lab
-
-### Step 1: Create Secrets
-
-#### Create a MySQL Secret
-
-The environment variable `WORDPRESS_DB_PASSWORD` is the password WordPress uses to connect to the database. In this step, we create a Secret to store the environment variable that is used in the MySQL Pod template.
-
-1.1. Log in to the KubeSphere console using the account `project-regular`. Enter `demo-project`, navigate to **Configuration Center → Secrets**, then click **Create**.
-
-![Secrets List](https://pek3b.qingstor.com/kubesphere-docs/png/20200105182525.png)
-
-1.2. Fill in the basic information, e.g. name it `mysql-secret`, then click **Next**. Click **Add data** and fill in the secret settings as shown in the following screenshot, save it and click **Create**.
-
-- Key: `MYSQL_ROOT_PASSWORD`
-- Value: `123456`
-
-![Create MySQL Secret](https://pek3b.qingstor.com/kubesphere-docs/png/20200105182805.png)
-
-#### Create a WordPress Secret
-
-With the same steps as above, create a WordPress secret `wordpress-secret` with the key `WORDPRESS_DB_PASSWORD` and the value `123456`.
-
-![Create WordPress Secret](https://pek3b.qingstor.com/kubesphere-docs/png/20200105183314.png)
-
-### Step 2: Create a Volume
-
-Choose **Volumes** and click **Create**. Name it `wordpress-pvc` and click **Next** to go to Volume Settings, where you need to choose an available `Storage Class`, with `ReadWriteOnce` as the access mode and 10 Gi of storage size. Click **Next** to Advanced Settings; nothing needs to be configured on that page, so click **Create** to finish the volume creation.
-
-![Create Volume](https://pek3b.qingstor.com/kubesphere-docs/png/20200106000543.png)
-
-### Step 3: Create an Application
-
-#### Add MySQL back-end component
-
-In this step, we choose **Composing App** to create a complete application of multiple components.
-
-3.1. Select **Application Workloads → Applications → Deploy New Application**, and choose **Composing App**.
-
-![New Application](https://pek3b.qingstor.com/kubesphere-docs/png/20200106000851.png)
-
-3.2. Fill in the pop-up table as follows:
-
-- Application Name: `wordpress`
-- Then click **Add Component**
-- Name: `mysql`
-- Component Version: `v1`
-- Workload Type: Stateful service (StatefulSet)
-
-![Compose Application](https://pek3b.qingstor.com/kubesphere-docs/png/20200106001425.png)
-
-3.3. Scroll down and click **Add Container Image**, enter `mysql:5.6` into the image edit box, press the return key and click `Use Default Ports`.
-
-![Fill Application Info](https://pek3b.qingstor.com/kubesphere-docs/png/20200106002012.png)
-
-3.4. Scroll down to the environment variables, check **Environment Variable** and click **Use ConfigMap or Secret**, then enter the name `MYSQL_ROOT_PASSWORD` and choose the resource `mysql-secret` with the key `MYSQL_ROOT_PASSWORD` we created in the previous step.
-
-Click `√` to save it when you have finished.
-
-![Fill More Application Info](https://pek3b.qingstor.com/kubesphere-docs/png/20200106002450.png)
-
-3.5. Continue scrolling down and click **Add Volume Template** to create a PVC for MySQL according to the following screenshot.
-
-![Add Volume to Application](https://pek3b.qingstor.com/kubesphere-docs/png/20200106003738.png)
-
-3.6. Click `√` to save it. At this point you have added the MySQL component.
-
-![Save Application info](https://pek3b.qingstor.com/kubesphere-docs/png/20200106004012.png)
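-
-The MySQL component just added is reachable inside the project through a Service named `mysql`, which is why the WordPress container can point `WORDPRESS_DB_HOST` at the plain name `mysql` in the steps below. A sketch of verifying that DNS resolution, assuming the project is `demo-project`:
-
-```bash
-# Resolve the mysql Service name from a throwaway Pod in the same namespace
-kubectl -n demo-project run dns-test --rm -it --restart=Never \
-  --image=busybox -- nslookup mysql
-```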
-
-#### Add WordPress front-end component
-
-3.7. Click **Add Component** again, and fill in the name and component version referring to the following screenshot:
-
-![Add Front End](https://pek3b.qingstor.com/kubesphere-docs/png/20200106004302.png)
-
-3.8. Click **Add Container Image**, enter `wordpress:4.8-apache` into the image edit box, press the return key and click `Use Default Ports`.
-
-![Choose Container Image](https://pek3b.qingstor.com/kubesphere-docs/png/20200106004543.png)
-
-3.9. Scroll down to the environment variables, check **Environment Variable** and click **Use ConfigMap or Secret**, then enter the values according to the following screenshot:
-
-- `WORDPRESS_DB_PASSWORD`: choose `wordpress-secret` and `WORDPRESS_DB_PASSWORD`
-- Click **Add Environment Variable**, then fill in its key and value with `WORDPRESS_DB_HOST` and `mysql`.
-
-![Add Env Variables](https://pek3b.qingstor.com/kubesphere-docs/png/20200106004841.png)
-
-3.10. Click `√` to save it.
-
-3.11. Continue scrolling down and click **Add Volume** to attach the existing volume to WordPress.
-
-![Add Volume](https://pek3b.qingstor.com/kubesphere-docs/png/20200106005242.png)
-
-3.12. Select the `wordpress-pvc` that we created in the previous step, select `ReadAndWrite`, then enter `/var/www/html` as its mount path. Click `√` to save it.
-
-![Fill Volume Info](https://pek3b.qingstor.com/kubesphere-docs/png/20200106005431.png)
-
-3.13. Again, click `√` to save it. Ensure both the mysql and wordpress application components have been added to the table, then click **Create**.
-
-![Save Application](https://pek3b.qingstor.com/kubesphere-docs/png/20200106005705.png)
-
-![Application List](https://pek3b.qingstor.com/kubesphere-docs/png/20200106010011.png)
-
-### Step 4: Verify the Resources
-
-#### Deployment
-
-![WordPress Deployment](https://pek3b.qingstor.com/kubesphere-docs/png/20200106010223.png)
-
-#### StatefulSet
-
-![WordPress StatefulSet](https://pek3b.qingstor.com/kubesphere-docs/png/20200106010244.png)
-
-#### Services
-
-![WordPress Services](https://pek3b.qingstor.com/kubesphere-docs/png/20200106010312.png)
-
-### Step 5: Access the WordPress Application
-
-5.1. Enter the `wordpress` service and click **Edit Internet Access**.
-
-![WordPress Internet Access](https://pek3b.qingstor.com/kubesphere-docs/png/20200106010404.png)
-
-5.2. Choose `NodePort` as its service type.
-
-![Service Status](https://pek3b.qingstor.com/kubesphere-docs/png/20200106010644.png)
-
-At this point, WordPress is exposed to the outside through the service, and we can access the application in a browser via `{$Node IP}:{$NodePort}`, for example `http://192.168.0.88:30048`, since we selected the HTTP protocol previously.
-
-![WordPress Page](https://pek3b.qingstor.com/kubesphere-docs/png/20190716205640.png#alt=)
diff --git a/content/en/docs/release/_index.md b/content/en/docs/release/_index.md
index 80565f789..ee376ec42 100644
--- a/content/en/docs/release/_index.md
+++ b/content/en/docs/release/_index.md
@@ -1,9 +1,9 @@
 ---
-title: "release"
+title: "Release Notes"
 description: "Help you to better understand KubeSphere with detailed graphics and contents"
 layout: "single"
 
-linkTitle: "release"
+linkTitle: "Release Notes"
 
 weight: 1
 
@@ -19,4 +19,4 @@ In this chapter, we will demonstrate how to use KubeKey to provision a new Kuber
 Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first.
-{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
\ No newline at end of file
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
diff --git a/content/en/docs/release/release-v300.md b/content/en/docs/release/release-v300.md
new file mode 100644
index 000000000..98c787c91
--- /dev/null
+++ b/content/en/docs/release/release-v300.md
@@ -0,0 +1,10 @@
+---
+title: "Release Notes For 3.0.0"
+keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
+description: "KubeSphere Release Notes For 3.0.0"
+
+linkTitle: "Release Notes - 3.0.0"
+weight: 50
+---
+
+TBD
diff --git a/content/en/docs/upgrade/_index.md b/content/en/docs/upgrade/_index.md
new file mode 100644
index 000000000..6ffe04694
--- /dev/null
+++ b/content/en/docs/upgrade/_index.md
@@ -0,0 +1,22 @@
+---
+title: "Upgrade"
+description: "Upgrade KubeSphere and Kubernetes"
+layout: "single"
+
+linkTitle: "Upgrade"
+
+weight: 4000
+
+icon: "/images/docs/docs.svg"
+
+---
+
+## Upgrading KubeSphere and Kubernetes
+
+In this chapter, we will demonstrate how to upgrade KubeSphere and Kubernetes, covering both upgrading KubeSphere alone on an existing Kubernetes cluster and upgrading KubeSphere together with Kubernetes on Linux machines.
+
+## Most Popular Pages
+
+Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first.
+
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
diff --git a/content/en/docs/upgrade/release-v210.md b/content/en/docs/upgrade/release-v210.md
new file mode 100644
index 000000000..5df5e5d44
--- /dev/null
+++ b/content/en/docs/upgrade/release-v210.md
@@ -0,0 +1,155 @@
+---
+title: "Upgrade KubeSphere Only"
+keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
+description: "Upgrade KubeSphere without Kubernetes"
+
+linkTitle: "Upgrade KubeSphere Only"
+weight: 200
+---
+
+KubeSphere 2.1.0 was released on Nov 11th, 2019. It fixes known bugs, adds some new features and brings a number of enhancements. If you have installed a 2.0.x version, please upgrade and enjoy the better user experience of v2.1.0.
+
+## Installer Enhancement
+
+- Decouple some components, making DevOps, service mesh, app store, logging, alerting and notification optional and pluggable
+- Add Grafana (v5.2.4) as an optional component
It is also compatible with 1.14.x and 1.13.x
+- Upgrade [OpenPitrix](https://openpitrix.io/) to v0.4.5
+- Upgrade the log forwarder Fluent Bit to v1.3.2
+- Upgrade Jenkins to v2.176.2
+- Upgrade Istio to 1.3.3
+- Optimize the high availability of core components
+
+## App Store
+
+### Features
+
+Support uploading, testing, reviewing, publishing, classifying, upgrading, deploying and deleting apps, and provide nine built-in applications
+
+### Upgrade & Enhancement
+
+- The application repository configuration is moved from global to each workspace
+- Support adding application repositories to share applications in a workspace
+
+## Storage
+
+### Features
+
+- Support Local Volume with dynamic provisioning
+- Provide real-time monitoring for QingCloud block storage
+
+### Upgrade & Enhancement
+
+QingCloud CSI is adapted to CSI 1.1.0 and supports upgrade, topology, and creating or deleting a snapshot. It also supports creating a PVC based on a snapshot
+
+### Bug Fixes
+
+Fix the StorageClass list display problem
+
+## Observability
+
+### Features
+
+- Support collecting file logs from the disk, for Pods that preserve their logs as files on disk
+- Support integrating with external Elasticsearch 7.x
+- Ability to search logs containing Chinese words
+- Add initContainer log display
+- Ability to export logs
+- Support canceling notifications from alerting
+
+### Upgrade & Enhancement
+
+- Improve the performance of log search
+- Refine the hints shown when the logging service is abnormal
+- Optimize the information shown when a monitoring metrics request is abnormal
+- Support the pod anti-affinity rule for Prometheus
+
+### Bug Fixes
+
+- Fix the mistaken highlights in log search results
+- Fix log search not matching phrases correctly
+- Fix the issue where logs could not be retrieved for a deleted workload when searching by workload name
+- Fix the issue where results were truncated when the log is highlighted
+- Fix some metrics exceptions: node `inode`, maximum pod tolerance
+- Fix the incorrect number of alerting targets
+- Fix the filter failure problem of multi-metric monitoring
+- Fix the problem of missing logging and monitoring information on tainted nodes (adjust the toleration attributes of node-exporter and fluent-bit to deploy on all nodes by default, ignoring taints)
+
+## DevOps
+
+### Features
+
+- Add support for branch exchange and git log export in S2I
+- Add B2I: the ability to build a Binary/WAR/JAR package and release it to Kubernetes
+- Support dependency caching for the pipeline, S2I, and B2I
+- Support the delete Kubernetes resource action in the `kubernetesDeploy` step
+- Multi-branch pipelines support triggering other pipelines when a branch is created or deleted
+
+### Upgrade & Enhancement
+
+- Support BitBucket in the pipeline
+- Support Cron script validation in the pipeline
+- Support Jenkinsfile syntax validation
+- Support customizing the SonarQube link
+- Support event-triggered builds in the pipeline
+- Optimize agent node selection in the pipeline
+- Accelerate pipeline start-up
+- Use a dynamically provisioned volume as the work directory of the agent in the pipeline, also contributed to Jenkins [#589](https://github.com/jenkinsci/kubernetes-plugin/pull/598)
+- Optimize the Jenkins kubernetesDeploy plugin, adding more resources and versions (v1, apps/v1, extensions/v1beta1, apps/v1beta2, apps/v1beta1, autoscaling/v1, autoscaling/v2beta1, autoscaling/v2beta2, networking.k8s.io/v1, batch/v1beta1, batch/v2alpha1), also contributed to Jenkins
[#614](https://github.com/jenkinsci/kubernetes-plugin/pull/614)
+- Add support for PV, PVC and Network Policy in the deploy step of the pipeline, also contributed to Jenkins [#87](https://github.com/jenkinsci/kubernetes-cd-plugin/pull/87), [#88](https://github.com/jenkinsci/kubernetes-cd-plugin/pull/88)
+
+### Bug Fixes
+
+- Fix the 400 Bad Request issue in the GitHub webhook
+- Incompatible change: the DevOps webhook URL prefix is changed from `/webhook/xxx` to `/devops_webhook/xxx`
+
+## Authentication and Authorization
+
+### Features
+
+Support syncing and authenticating with AD accounts
+
+### Upgrade & Enhancement
+
+- Reduce the LDAP component's RAM consumption
+- Add protection against brute-force attacks
+
+### Bug Fixes
+
+- Fix the LDAP connection pool leak
+- Fix the issue where users could not be added in the workspace
+- Fix sensitive data transmission leaks
+
+## User Experience
+
+### Features
+
+Add a wizard for managing projects (namespaces) that are not assigned to any workspace
+
+### Upgrade & Enhancement
+
+- Support bash-completion in web kubectl
+- Optimize the host information display
+- Add a connection test for the email server
+- Add prompts on resource list pages
+- Optimize the project overview page and project basic information
+- Simplify the service creation process
+- Simplify the workload creation process
+- Support real-time status updates in the resource list
+- Optimize YAML editing
+- Support image search and image information display
+- Add the pod list to the workload page
+- Update the web terminal theme
+- Support container switching in the container terminal
+- Optimize Pod information display, and add Pod scheduling information
+- More detailed workload status display
+
+### Bug Fixes
+
+- Fix the issue where the default request resource of the project is displayed incorrectly
+- Optimize the web terminal design, making it much easier to find
+- Fix the Pod status update delay
+- Fix the issue where a host could not be searched based on roles
+- Fix the DevOps project count error on the workspace detail page
+- Fix the issue where workspace list pages did not paginate properly
+- Fix inconsistent result ordering after querying on the workspace list page
diff --git a/content/en/docs/upgrade/release-v211.md b/content/en/docs/upgrade/release-v211.md
new file mode 100644
index 000000000..34f244b9b
--- /dev/null
+++ b/content/en/docs/upgrade/release-v211.md
@@ -0,0 +1,122 @@
+---
+title: "Upgrade KubeSphere and Kubernetes"
+keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
+description: "Upgrade KubeSphere and Kubernetes in Linux machines"
+
+linkTitle: "Upgrade KubeSphere and Kubernetes"
+weight: 100
+---
+
+KubeSphere 2.1.1 was released on Feb 23rd, 2020; it fixes known bugs and brings some enhancements. If you have installed version 2.0.x or 2.1.0, make sure to read the upgrade instructions carefully before upgrading, and feel free to raise any questions on [GitHub](https://github.com/kubesphere/kubesphere/issues).
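+
+Before upgrading, it can help to confirm which KubeSphere version is currently running. Below is a minimal sketch, assuming `kubectl` access and a default installation in which the `ks-apiserver` Deployment runs in the `kubesphere-system` namespace; adjust the names if your setup differs.
+
+```bash
+# Print the image tag of ks-apiserver, which reflects the installed
+# KubeSphere version (deployment and namespace names assume defaults).
+kubectl get deployment ks-apiserver -n kubesphere-system \
+  -o jsonpath='{.spec.template.spec.containers[0].image}'
+```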
+
+## What's New in 2.1.1
+
+## Installer
+
+### UPGRADE & ENHANCEMENT
+
+- Support Kubernetes v1.14.x, v1.15.x, v1.16.x and v1.17.x, and solve the Kubernetes API compatibility issue #[1829](https://github.com/kubesphere/kubesphere/issues/1829)
+- Simplify installation on existing Kubernetes: the step of specifying the cluster's CA certificate is removed, and specifying the Etcd certificate is no longer a mandatory step if you do not need Etcd monitoring metrics
+- Back up the configuration of CoreDNS before upgrading
+
+### BUG FIXES
+
+- Fix the issue of importing apps to the App Store
+
+## App Store
+
+### UPGRADE & ENHANCEMENT
+
+- Upgrade OpenPitrix to v0.4.8
+
+### BUG FIXES
+
+- Fix the latest version display issue for published apps #[1130](https://github.com/kubesphere/kubesphere/issues/1130)
+- Fix the column name display issue on the app approval list page #[1498](https://github.com/kubesphere/kubesphere/issues/1498)
+- Fix searching by app name/workspace #[1497](https://github.com/kubesphere/kubesphere/issues/1497)
+- Fix the failure to create an app with the same name as a previously deleted app #[1821](https://github.com/kubesphere/kubesphere/pull/1821) #[1564](https://github.com/kubesphere/kubesphere/issues/1564)
+- Fix the failure to deploy apps in some cases #[1619](https://github.com/kubesphere/kubesphere/issues/1619) #[1730](https://github.com/kubesphere/kubesphere/issues/1730)
+
+## Storage
+
+### UPGRADE & ENHANCEMENT
+
+- Support the CSI plugins of Alibaba Cloud and Tencent Cloud
+
+### BUG FIXES
+
+- Fix the paging issue on the storage class list page #[1583](https://github.com/kubesphere/kubesphere/issues/1583) #[1591](https://github.com/kubesphere/kubesphere/issues/1591)
+- Fix the issue where the value of the `imageFeatures` parameter displays '2' when creating a Ceph storage class #[1593](https://github.com/kubesphere/kubesphere/issues/1593)
+- Fix the issue where the search filter fails to work on the persistent volume list page #[1582](https://github.com/kubesphere/kubesphere/issues/1582)
+- Fix the display issue for abnormal persistent volumes #[1581](https://github.com/kubesphere/kubesphere/issues/1581)
+- Fix the display issue for persistent volumes whose associated storage class has been deleted #[1580](https://github.com/kubesphere/kubesphere/issues/1580) #[1579](https://github.com/kubesphere/kubesphere/issues/1579)
+
+## Observability
+
+### UPGRADE & ENHANCEMENT
+
+- Upgrade Fluent Bit to v1.3.5 #[1505](https://github.com/kubesphere/kubesphere/issues/1505)
+- Upgrade kube-state-metrics to v1.7.2
+- Upgrade Elastic Curator to v5.7.6 #[517](https://github.com/kubesphere/ks-installer/issues/517)
+- Fluent Bit Operator supports dynamically detecting the location of the soft-linked Docker log folder on host machines
+- Fluent Bit Operator supports managing the Fluent Bit instance through declarative configuration, by updating the Operator's ConfigMap
+- Fix the sort order issue on the alert list page #[1397](https://github.com/kubesphere/kubesphere/issues/1397)
+- Adjust the container memory usage metric to use 'container_memory_working_set_bytes'
+
+### BUG FIXES
+
+- Fix the lag issue of container logs #[1650](https://github.com/kubesphere/kubesphere/issues/1650)
+- Fix the display issue where some replicas of a workload have no logs on the container log detail page #[1505](https://github.com/kubesphere/kubesphere/issues/1505)
+- Fix the compatibility issue of Curator to support Elasticsearch 7.x #[517](https://github.com/kubesphere/ks-installer/issues/517)
+- Fix the
display issue of the container log page during container initialization #[1518](https://github.com/kubesphere/kubesphere/issues/1518)
+- Fix the blank node issue when nodes are resized #[1464](https://github.com/kubesphere/kubesphere/issues/1464)
+- Fix the display of component status in the monitoring center, keeping it up to date #[1858](https://github.com/kubesphere/kubesphere/issues/1858)
+- Fix the wrong number of monitoring targets on the alert detail page #[61](https://github.com/kubesphere/console/issues/61)
+
+## DevOps
+
+### BUG FIXES
+
+- Fix the issue where the UNSTABLE state is not visible in the pipeline #[1428](https://github.com/kubesphere/kubesphere/issues/1428)
+- Fix the format issue of KubeConfig in the DevOps pipeline #[1529](https://github.com/kubesphere/kubesphere/issues/1529)
+- Fix the image repository compatibility issue in B2I, to support the Alibaba Cloud image repository #[1500](https://github.com/kubesphere/kubesphere/issues/1500)
+- Fix the paging issue on the DevOps pipeline branch list page #[1517](https://github.com/kubesphere/kubesphere/issues/1517)
+- Fix the failure to display the pipeline configuration after modifying it #[1522](https://github.com/kubesphere/kubesphere/issues/1522)
+- Fix the failure to download generated artifacts in S2I jobs #[1547](https://github.com/kubesphere/kubesphere/issues/1547)
+- Fix the issue of [occasional data loss after restarting Jenkins](https://kubesphere.com.cn/forum/d/283-jenkins)
+- Fix the issue where only 'PR-HEAD' is fetched when binding a pipeline to GitHub #[1780](https://github.com/kubesphere/kubesphere/issues/1780)
+- Fix the 414 error when updating DevOps credentials #[1824](https://github.com/kubesphere/kubesphere/issues/1824)
+- Fix the wrong s2ib/s2ir naming from B2I/S2I #[1840](https://github.com/kubesphere/kubesphere/issues/1840)
+- Fix the failure to drag and drop tasks on the pipeline editing page #[62](https://github.com/kubesphere/console/issues/62)
+
+## Authentication and Authorization
+
+### UPGRADE & ENHANCEMENT
+
+- Generate client certificates through CSR #[1449](https://github.com/kubesphere/kubesphere/issues/1449)
+
+### BUG FIXES
+
+- Fix the content loss issue in the KubeConfig token file #[1529](https://github.com/kubesphere/kubesphere/issues/1529)
+- Fix the issue where users with different permissions fail to log in using the same browser #[1600](https://github.com/kubesphere/kubesphere/issues/1600)
+
+## User Experience
+
+### UPGRADE & ENHANCEMENT
+
+- Support editing the SecurityContext on the workload editing page #[1530](https://github.com/kubesphere/kubesphere/issues/1530)
+- Support configuring init containers on the workload editing page #[1488](https://github.com/kubesphere/kubesphere/issues/1488)
+- Add support for startupProbe, and add the periodSeconds, successThreshold and failureThreshold parameters to the probe editing page #[1487](https://github.com/kubesphere/kubesphere/issues/1487)
+- Optimize the status update display of Pods #[1187](https://github.com/kubesphere/kubesphere/issues/1187)
+- Optimize error message reporting on the console #[43](https://github.com/kubesphere/console/issues/43)
+
+### BUG FIXES
+
+- Fix the status display issue for Pods that are not in the Running state #[1187](https://github.com/kubesphere/kubesphere/issues/1187)
+- Fix the issue where an added annotation cannot be deleted when creating a QingCloud LoadBalancer service #[1395](https://github.com/kubesphere/kubesphere/issues/1395)
+- Fix the display issue when selecting a workload on the service editing page
#[1596](https://github.com/kubesphere/kubesphere/issues/1596)
+- Fix the failure to edit the configuration file when editing a 'Job' #[1521](https://github.com/kubesphere/kubesphere/issues/1521)
+- Fix the failure to update the service of a 'StatefulSet' #[1513](https://github.com/kubesphere/kubesphere/issues/1513)
+- Fix image searching for QingCloud and Alibaba Cloud image repositories #[1627](https://github.com/kubesphere/kubesphere/issues/1627)
+- Fix the resource ordering issue for resources with the same creation timestamp #[1750](https://github.com/kubesphere/kubesphere/pull/1750)
+- Fix the failure to edit the configuration file when editing a service #[41](https://github.com/kubesphere/console/issues/41)
diff --git a/content/en/docs/upgrade/release-v300.md b/content/en/docs/upgrade/release-v300.md
new file mode 100644
index 000000000..7a1cb4647
--- /dev/null
+++ b/content/en/docs/upgrade/release-v300.md
@@ -0,0 +1,10 @@
+---
+title: "Overview"
+keywords: "kubernetes, upgrade, kubesphere, v3.0.0"
+description: "Upgrade KubeSphere"
+
+linkTitle: "Overview"
+weight: 50
+---
+
+TBD
diff --git a/content/en/docs/workspaces-administration/_index.md b/content/en/docs/workspaces-administration/_index.md
new file mode 100644
index 000000000..45396647b
--- /dev/null
+++ b/content/en/docs/workspaces-administration/_index.md
@@ -0,0 +1,22 @@
+---
+title: "Workspace Administration"
+description: "Help you to better manage KubeSphere workspaces"
+layout: "single"
+
+linkTitle: "Workspace Administration"
+
+weight: 4200
+
+icon: "/images/docs/docs.svg"
+
+---
+
+## Installing KubeSphere and Kubernetes on Linux
+
+In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. KubeKey can help you quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you easily scale the cluster and install pluggable components on an existing architecture.
+
+## Most Popular Pages
+
+Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
+
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
diff --git a/content/en/docs/workspaces-administration/release-v210.md b/content/en/docs/workspaces-administration/release-v210.md
new file mode 100644
index 000000000..9442d12ca
--- /dev/null
+++ b/content/en/docs/workspaces-administration/release-v210.md
@@ -0,0 +1,10 @@
+---
+title: "Role and Member Management"
+keywords: "kubernetes, workspace, kubesphere, multitenancy"
+description: "Role and Member Management in a Workspace"
+
+linkTitle: "Role and Member Management"
+weight: 200
+---
+
+TBD
diff --git a/content/en/docs/workspaces-administration/release-v211.md b/content/en/docs/workspaces-administration/release-v211.md
new file mode 100644
index 000000000..d74285d36
--- /dev/null
+++ b/content/en/docs/workspaces-administration/release-v211.md
@@ -0,0 +1,10 @@
+---
+title: "Import Helm Repository"
+keywords: "kubernetes, helm, kubesphere, application"
+description: "Import Helm Repository into KubeSphere"
+
+linkTitle: "Import Helm Repository"
+weight: 100
+---
+
+TBD
diff --git a/content/en/docs/workspaces-administration/release-v300.md b/content/en/docs/workspaces-administration/release-v300.md
new file mode 100644
index 000000000..dae816590
--- /dev/null
+++ b/content/en/docs/workspaces-administration/release-v300.md
@@ -0,0 +1,10 @@
+---
+title: "Upload Helm-based Application"
+keywords: "kubernetes, helm, kubesphere, openpitrix, application"
+description: "Upload Helm-based Application"
+
+linkTitle: "Upload Helm-based Application"
+weight: 50
+---
+
+TBD
diff --git a/content/zh/docs/application-store/_index.md b/content/zh/docs/application-store/_index.md
new file mode 100644
index 000000000..bc9c43c71
--- /dev/null
+++ b/content/zh/docs/application-store/_index.md
@@ -0,0 +1,23 @@
+---
+title: "Application Store"
+description: "Getting started with the KubeSphere Application Store"
+layout: "single"
+
+linkTitle: "Application Store"
+weight: 4500
+
+icon: "/images/docs/docs.svg"
+
+---
+
+## Installing KubeSphere and Kubernetes on Linux
+
+In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. KubeKey can help you quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you easily scale the cluster and install pluggable components on an existing architecture.
+
+## Most Popular Pages
+
+Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
+
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
+
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
diff --git a/content/zh/docs/application-store/app-developer-guide/_index.md b/content/zh/docs/application-store/app-developer-guide/_index.md
new file mode 100644
index 000000000..bb7d8edd9
--- /dev/null
+++ b/content/zh/docs/application-store/app-developer-guide/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Application Developer Guide"
+weight: 2200
+
+_build:
+  render: false
+---
diff --git a/content/zh/docs/application-store/app-developer-guide/helm-developer-guide.md b/content/zh/docs/application-store/app-developer-guide/helm-developer-guide.md
new file mode 100644
index 000000000..26b3e4f04
--- /dev/null
+++ b/content/zh/docs/application-store/app-developer-guide/helm-developer-guide.md
@@ -0,0 +1,224 @@
+---
+title: "Air-Gapped Installation"
+keywords: 'kubernetes, kubesphere, air gapped, installation'
+description: 'How to install KubeSphere on air-gapped Linux machines'
+
+
+weight: 2240
+---
+
+The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.
+
+> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
+
+## Prerequisites
+
+- If your machine is behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for more information.
+- The installer will use `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend you add additional disks of at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
+- The installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage with dynamic provisioning, which is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare a [high availability configuration](../master-ha) before installation.
+- Since air-gapped machines cannot connect to apt or yum sources, please use a clean Linux machine to avoid dependency problems.
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the hardware and operating system requirements. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
+
+- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
+- Time synchronization is required across all nodes; otherwise the installation may not succeed.
+- For `Ubuntu 16.04`, it is recommended to select `16.04.5`.
+- If you are using `Ubuntu 18.04`, you need to use the user `root`.
+- Ensure the disk of each node is at least 100G.
+- Total CPU and memory of all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
+
+
+The following section uses an example to introduce multi-node installation. The example installs on three hosts, with the `master` serving as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports the high-availability configuration of the Master and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
+
+```bash
+curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
+&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
+```
+
+## Step 3: Configure Host Template
+
+> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
+
+Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of each node, and all host names should be in lowercase.
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[local-registry]
+master
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - You need to replace each node's information, such as the IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
+> - The installer will use one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
+> - The "master" node also takes the roles of master and etcd, so "master" is filled under the group `[kube-master]` and the group `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
+>
+> Parameters Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The name of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default SSH user name to use.
+> - `ansible_become_pass`: Allows you to set the privilege escalation password.
+> - `ansible_ssh_pass`: The root password of the host to connect to.
+
+## Step 4: Enable All Components
+
+> This step is for the complete installation. You can skip it if you choose a minimal installation.
+
+Edit `conf/common.yaml` and refer to the following changes, setting to `true` the values that are `false` by default.
+
+```yaml
+# LOGGING CONFIGURATION
+# logging is an optional component when installing KubeSphere, and
+# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
+# Builtin logging only provides limited functions, so enabling logging is recommended.
+logging_enabled: true # Whether to install the logging system
+elasticsearch_master_replica: 1 # total number of master nodes; an even number is not allowed
+elasticsearch_data_replica: 2 # total number of data nodes
+elasticsearch_volume_size: 20Gi # Elasticsearch volume size
+log_max_age: 7 # Log retention time in the built-in Elasticsearch, 7 days by default
+elk_prefix: logstash # the string making up index names; the index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address; KubeSphere supports integrating with Elasticsearch outside the cluster, which can reduce resource consumption
+#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
+
+#DevOps Configuration
+devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary-to-image)
+jenkins_memory_lim: 8Gi # Jenkins memory limit, 8 Gi by default
+jenkins_memory_req: 4Gi # Jenkins memory request, 4 Gi by default
+jenkins_volume_size: 8Gi # Jenkins volume size, 8 Gi by default
+jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address; KubeSphere supports integrating with SonarQube outside the cluster, which can reduce resource consumption
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere,
+# and can be turned on before installation, or later by updating the value to true
+openpitrix_enabled: true # KubeSphere application store
+metrics_server_enabled: true # For KubeSphere HPA to use
+servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
+notification_enabled: true # KubeSphere notification system
+alerting_enabled: true # KubeSphere alerting system
+```
+
+## Step 5: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
+> - If you want to enable the pluggable feature components, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - The installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not overlap with these two ranges.
You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select the multi-node mode to start the installation. The installer will ask you whether you have set up a persistent storage service. Just type `yes`, since we are going to use the local volume.
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/       2020-02-24
+################################################
+Please input an option: 2
+
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.
+
+```bash
+Successful!
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd

+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+If you have already set up a minimal installation, you can still edit the ConfigMap of ks-installer using the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/zh/docs/application-store/app-developer-guide/helm-specification.md b/content/zh/docs/application-store/app-developer-guide/helm-specification.md
new file mode 100644
index 000000000..26b3e4f04
--- /dev/null
+++ b/content/zh/docs/application-store/app-developer-guide/helm-specification.md
@@ -0,0 +1,224 @@
+---
+title: "Air-Gapped Installation"
+keywords: 'kubernetes, kubesphere, air gapped, installation'
+description: 'How to install KubeSphere on air-gapped Linux machines'
+
+
+weight: 2240
+---
+
+The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.
+
+> Note: The dependencies in different operating systems may cause unexpected problems.
If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
+
+## Prerequisites
+
+- If your machine is behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for more information.
+- The installer will use `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend you add additional disks of at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
+- The installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage with dynamic provisioning, which is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare a [high availability configuration](../master-ha) before installation.
+- Since air-gapped machines cannot connect to apt or yum sources, please use a clean Linux machine to avoid dependency problems.
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the hardware and operating system requirements. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
+
+- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
+- Time synchronization is required across all nodes; otherwise the installation may not succeed.
+- For `Ubuntu 16.04`, it is recommended to select `16.04.5`.
+- If you are using `Ubuntu 18.04`, you need to use the user `root`.
+- Ensure the disk of each node is at least 100G.
+- Total CPU and memory of all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
+
+
+The following section uses an example to introduce multi-node installation. The example installs on three hosts, with the `master` serving as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports the high-availability configuration of the Master and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
+
+```bash
+curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
+&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
+```
+
+## Step 3: Configure Host Template
+
+> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
+
+Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
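+
+If you prefer password-less root SSH from the taskbox over the `ansible_ssh_pass` fields shown in the sample below, a typical setup looks like the following sketch. The IPs follow the sample cluster above and the key path is the OpenSSH default; adjust both to your environment.
+
+```bash
+# Generate a key pair on the taskbox (skip if one already exists),
+# then copy the public key to each node so the installer can connect as root.
+ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
+ssh-copy-id root@192.168.0.2
+ssh-copy-id root@192.168.0.3
+# Verify that the connection works without a password prompt.
+ssh root@192.168.0.2 hostname
+```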
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of each node, and all host names should be in lowercase.
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[local-registry]
+master
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - You need to replace each node's information, such as the IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
+> - The installer will use one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
+> - The "master" node also takes the roles of master and etcd, so "master" is filled under the group `[kube-master]` and the group `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
+>
+> Parameters Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The name of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default SSH user name to use.
+> - `ansible_become_pass`: Allows you to set the privilege escalation password.
+> - `ansible_ssh_pass`: The root password of the host to connect to.
+
+## Step 4: Enable All Components
+
+> This step is for the complete installation. You can skip it if you choose a minimal installation.
+
+Edit `conf/common.yaml` and refer to the following changes, setting to `true` the values that are `false` by default.
+
+```yaml
+# LOGGING CONFIGURATION
+# logging is an optional component when installing KubeSphere, and
+# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
+# Builtin logging only provides limited functions, so enabling logging is recommended.
+logging_enabled: true # Whether to install the logging system
+elasticsearch_master_replica: 1 # total number of master nodes; an even number is not allowed
+elasticsearch_data_replica: 2 # total number of data nodes
+elasticsearch_volume_size: 20Gi # Elasticsearch volume size
+log_max_age: 7 # Log retention time in the built-in Elasticsearch, 7 days by default
+elk_prefix: logstash # the string making up index names; the index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address; KubeSphere supports integrating with Elasticsearch outside the cluster, which can reduce resource consumption
+#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
+
+#DevOps Configuration
+devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary-to-image)
+jenkins_memory_lim: 8Gi # Jenkins memory limit, 8 Gi by default
+jenkins_memory_req: 4Gi # Jenkins memory request, 4 Gi by default
+jenkins_volume_size: 8Gi # Jenkins volume size, 8 Gi by default
+jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address; KubeSphere supports integrating with SonarQube outside the cluster, which can reduce resource consumption
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere,
+# and can be turned on before installation, or later by updating the value to true
+openpitrix_enabled: true # KubeSphere application store
+metrics_server_enabled: true # For KubeSphere HPA to use
+servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
+notification_enabled: true # KubeSphere notification system
+alerting_enabled: true # KubeSphere alerting system
+```
+
+## Step 5: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
+> - If you want to enable the pluggable feature components, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - The installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not overlap with these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select the multi-node mode to start the installation. The installer will ask you whether you have set up a persistent storage service. Just type `yes`, since we are going to use the local volume.
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/       2020-02-24
+################################################
+Please input an option: 2
+
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.
+
+```bash
+Successful!
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+If you have already set up a minimal installation, you can still edit the ConfigMap of ks-installer using the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/zh/docs/application-store/built-in-apps/_index.md b/content/zh/docs/application-store/built-in-apps/_index.md
new file mode 100644
index 000000000..0f2ce8a6d
--- /dev/null
+++ b/content/zh/docs/application-store/built-in-apps/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Built-in Applications"
+weight: 2200
+
+_build:
+  render: false
+---
diff --git a/content/zh/docs/application-store/built-in-apps/all-in-one.md b/content/zh/docs/application-store/built-in-apps/all-in-one.md
new file mode 100644
index 000000000..8214171ef
--- /dev/null
+++ b/content/zh/docs/application-store/built-in-apps/all-in-one.md
@@ -0,0 +1,116 @@
+---
+title: "All-in-One Installation"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'The guide for installing all-in-one KubeSphere for developing or testing'
+
+linkTitle: "All-in-One"
+weight: 2210
+---
+
+For those who are new to KubeSphere and looking for a quick way to discover the platform, the all-in-one mode is the best choice: a one-click, hassle-free installation that provisions KubeSphere and Kubernetes on your machine.
+
+- The following instructions are for the default installation without enabling any optional components, as they have been made pluggable since v2.1.0. If you want to enable any of them, please see the section [Enable Pluggable Components](../all-in-one#enable-pluggable-components) below.
+- If your machine has >= 8 cores and >= 16G memory, we recommend you install the full package of KubeSphere by [enabling optional components](../complete-installation).
+
+## Prerequisites
+
+If your machine is behind a firewall, you need to open the required ports; see [Ports Requirement](../port-firewall) for more information.
+
+## Step 1: Prepare Linux Machine
+
+The following describes the hardware and operating system requirements. (You can quickly check your machine against them with the sketch after this list.)
+
+- For `Ubuntu 16.04`, it is recommended to select the latest `16.04.5`.
+- If you are using Ubuntu 18.04, you need to use the root user to install.
+- If the Debian system does not have the sudo command installed, you need to execute the `apt update && apt install sudo` command as root before installation.
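+
+A small sketch using standard Linux commands to check a machine against the OS and hardware requirements; nothing here is KubeSphere-specific.
+
+```bash
+# Print the OS name and version to compare against the supported OS list.
+grep -E '^(NAME|VERSION)=' /etc/os-release
+# Check CPU cores, memory and root disk space against the hardware table below.
+nproc
+free -h
+df -h /
+```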
+
+### Hardware Recommendation
+
+| System | Minimum Requirements |
+| ------- | ----------- |
+| CentOS 7.4 ~ 7.7 (64-bit) | CPU: 2 cores, Memory: 4 GB, Disk Space: 100 GB |
+| Ubuntu 16.04/18.04 LTS (64-bit) | CPU: 2 cores, Memory: 4 GB, Disk Space: 100 GB |
+| Red Hat Enterprise Linux Server 7.4 (64-bit) | CPU: 2 cores, Memory: 4 GB, Disk Space: 100 GB |
+| Debian Stretch 9.5 (64-bit) | CPU: 2 cores, Memory: 4 GB, Disk Space: 100 GB |
+
+## Step 2: Download Installer Package
+
+Execute the following commands to download Installer 2.1.1 and unpack it.
+
+```bash
+curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
+&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/scripts
+```
+
+## Step 3: Get Started with Installation
+
+You do not need to do anything except execute one command, as follows. The installer completes everything for you automatically, including installing/updating dependency packages, installing Kubernetes (default version 1.16.7), the storage service, and so on.
+
+> Note:
+>
+> - Generally speaking, do not modify any configuration.
+> - KubeSphere installs `calico` by default. If you would like to use a different network plugin, you can change the configuration in `conf/common.yaml`. You can also modify other configurations such as the storage class and pluggable components.
+> - The default storage class is [OpenEBS](https://openebs.io/), a kind of [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) that provisions persistent storage. OpenEBS supports [dynamically provisioning PVs](https://docs.openebs.io/docs/next/uglocalpv.html#Provision-OpenEBS-Local-PV-based-on-hostpath) and will be installed automatically for testing purposes.
+> - Please refer to [storage configurations](../storage-configuration) for the supported storage classes.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not overlap with these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Execute the following command:
+
+```bash
+./install.sh
+```
+
+**2.** Enter `1` to select the `All-in-one` mode and type `yes` if your machine satisfies the requirements, to start:
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/       2020-02-24
+################################################
+Please input an option: 1
+```
+
+**3.** Verify whether KubeSphere is installed successfully:
+
+**(1).** If "Successful" is returned after the process completes, the installation succeeded. The console service is exposed through NodePort 30880 by default. You may need to bind an EIP and configure port forwarding in your environment for outside users to access it; also make sure the relevant firewall port is open.
+
+```bash
+Successful!
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.8:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
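+
+If the console is unreachable from outside, the NodePort may be blocked by a local firewall. The commands below are a hedged example of opening port 30880; which tool applies depends on your distribution (`firewalld` on CentOS/RHEL, `ufw` on Ubuntu).
+
+```bash
+# CentOS / RHEL with firewalld: permanently open the console NodePort.
+firewall-cmd --zone=public --add-port=30880/tcp --permanent
+firewall-cmd --reload
+
+# Ubuntu with ufw: allow the same port.
+ufw allow 30880/tcp
+```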
+
+**(2).** You will be able to use the default account and password to log in to the console to take a tour of KubeSphere.
+
+Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Dashboard](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+The guide above performs a minimal installation by default. You can execute the following command to open the ConfigMap and enable pluggable components. Make sure your cluster has enough CPU and memory in advance; see [Enable Pluggable Components](../pluggable-components).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details, and also read the [FAQ of installation](../../faq/faq-install).
+
+If you have any further questions, please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/zh/docs/application-store/built-in-apps/complete-installation.md b/content/zh/docs/application-store/built-in-apps/complete-installation.md
new file mode 100644
index 000000000..e0ab92099
--- /dev/null
+++ b/content/zh/docs/application-store/built-in-apps/complete-installation.md
@@ -0,0 +1,76 @@
+---
+title: "Install All Optional Components"
+keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix'
+description: 'Install KubeSphere with all optional components enabled on Linux machine'
+
+
+weight: 2260
+---
+
+Since v2.1.0, the installer only installs required components (i.e. a minimal installation) by default. Other components are designed to be pluggable, which means you can enable any of them before or after installation. If your machine meets the following minimum requirements, we recommend you **enable all components before installation**. A complete installation lets you discover the container platform comprehensively.
+
+
+Minimum Requirements
+
+- CPU: 8 cores in total across all machines
+- Memory: 16 GB in total across all machines
+
+
+
+> Note:
+>
+> - If your machines do not meet the minimum requirements for a complete installation, you can still enable any of the components as you wish. Please refer to [Enable Pluggable Components Installation](../pluggable-components).
+> - This works for both [All-in-One](../all-in-one) and [Multi-Node](../multi-node).
+
+This tutorial walks you through how to enable all components of KubeSphere.
+
+## Download Installer Package
+
+If you do not have the package yet, please run the following commands to download Installer 2.1.1 and unpack it, then enter the `conf` folder.
+
+```bash
+$ curl -L https://kubesphere.io/download/stable/v2.1.1 > installer.tar.gz \
+&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
+```
+
+## Enable All Components
+
+Edit `conf/common.yaml` and refer to the following changes, setting to `true` the values that are `false` by default.
+
+```yaml
+# LOGGING CONFIGURATION
+# logging is an optional component when installing KubeSphere, and
+# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
+# Builtin logging only provides limited functions, so enabling logging is recommended.
+logging_enabled: true # Whether to install the logging system
+elasticsearch_master_replica: 1 # total number of master nodes; an even number is not allowed
+elasticsearch_data_replica: 2 # total number of data nodes
+elasticsearch_volume_size: 20Gi # Elasticsearch volume size
+log_max_age: 7 # Log retention time in the built-in Elasticsearch, 7 days by default
+elk_prefix: logstash # the string making up index names; the index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address; KubeSphere supports integrating with Elasticsearch outside the cluster, which can reduce resource consumption
+#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
+
+#DevOps Configuration
+devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary-to-image)
+jenkins_memory_lim: 8Gi # Jenkins memory limit, 8 Gi by default
+jenkins_memory_req: 4Gi # Jenkins memory request, 4 Gi by default
+jenkins_volume_size: 8Gi # Jenkins volume size, 8 Gi by default
+jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address; KubeSphere supports integrating with SonarQube outside the cluster, which can reduce resource consumption
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere,
+# and can be turned on before installation, or later by updating the value to true
+openpitrix_enabled: true # KubeSphere application store
+metrics_server_enabled: true # For KubeSphere HPA to use
+servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
+notification_enabled: true # KubeSphere notification system
+alerting_enabled: true # KubeSphere alerting system
+```
+
+Save it, and then you can continue the installation process.
diff --git a/content/zh/docs/application-store/built-in-apps/install-ks-on-linux-airgapped.md b/content/zh/docs/application-store/built-in-apps/install-ks-on-linux-airgapped.md
new file mode 100644
index 000000000..26b3e4f04
--- /dev/null
+++ b/content/zh/docs/application-store/built-in-apps/install-ks-on-linux-airgapped.md
@@ -0,0 +1,224 @@
+---
+title: "Air-Gapped Installation"
+keywords: 'kubernetes, kubesphere, air gapped, installation'
+description: 'How to install KubeSphere on air-gapped Linux machines'
+
+
+weight: 2240
+---
+
+The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.
+
+> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
+
+## Prerequisites
+
+- If your machine is behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for more information.
+- The installer will use `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored.
We recommend you add additional disks of at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
+- The installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage with dynamic provisioning, which is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare a [high availability configuration](../master-ha) before installation.
+- Since air-gapped machines cannot connect to apt or yum sources, please use a clean Linux machine to avoid dependency problems.
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the hardware and operating system requirements. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
+
+- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
+- Time synchronization is required across all nodes; otherwise the installation may not succeed.
+- For `Ubuntu 16.04`, it is recommended to select `16.04.5`.
+- If you are using `Ubuntu 18.04`, you need to use the user `root`.
+- Ensure the disk of each node is at least 100G.
+- Total CPU and memory of all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
+
+
+The following section uses an example to introduce multi-node installation. The example installs on three hosts, with the `master` serving as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports the high-availability configuration of the Master and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
+
+```bash
+curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
+&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
+```
+
+## Step 3: Configure Host Template
+
+> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
+
+Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of each node, and all host names should be in lowercase (see the sketch below for setting them).
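+
+Since all host names must be lowercase, you may want to set them explicitly on each node before filling in `hosts.ini`. A minimal sketch using `hostnamectl`; the host names follow the sample cluster in this guide.
+
+```bash
+# Run the matching command on each node; host names must be lowercase.
+hostnamectl set-hostname master   # on 192.168.0.1
+hostnamectl set-hostname node1    # on 192.168.0.2
+hostnamectl set-hostname node2    # on 192.168.0.3
+# Confirm the change took effect.
+hostname
+```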
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[local-registry]
+master
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - You need to replace each node's information, such as IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
+> - Installer will use one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
+> - The "master" node also takes the roles of master and etcd, so "master" is filled under the group `[kube-master]` and the group `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
+>
+> Parameters Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The address of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default SSH user name to use.
+> - `ansible_become_pass`: Allows you to set the privilege escalation password.
+> - `ansible_ssh_pass`: The password of the host to connect to as root.
+
+## Step 4: Enable All Components
+
+> This step is for the complete installation. You can skip it if you choose a minimal installation.
+
+Edit `conf/common.yaml` and apply the following changes, setting to `true` the values that are `false` by default.
+
+```yaml
+# LOGGING CONFIGURATION
+# Logging is an optional component when installing KubeSphere, and the
+# Kubernetes built-in logging APIs will be used if logging_enabled is set to false.
+# Built-in logging only provides limited functions, so enabling the logging system is recommended.
+logging_enabled: true # Whether to install the logging system
+elasticsearch_master_replica: 1 # Total number of Elasticsearch master nodes; an even number is not allowed
+elasticsearch_data_replica: 2 # Total number of Elasticsearch data nodes
+elasticsearch_volume_size: 20Gi # Elasticsearch volume size
+log_max_age: 7 # Log retention time in the built-in Elasticsearch, 7 days by default
+elk_prefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address. KubeSphere supports integrating with an external Elasticsearch, which can reduce resource consumption.
+#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
+
+#DevOps Configuration
+devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source-to-Image and Binary-to-Image)
+jenkins_memory_lim: 8Gi # Jenkins memory limit, 8 Gi by default
+jenkins_memory_req: 4Gi # Jenkins memory request, 4 Gi by default
+jenkins_volume_size: 8Gi # Jenkins volume size, 8 Gi by default
+jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address. KubeSphere supports integrating with an external SonarQube, which can reduce resource consumption.
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere.
+# They can be turned on before installation, or later, by updating their values to true.
+openpitrix_enabled: true # KubeSphere application store
+metrics_server_enabled: true # For KubeSphere HPA to use
+servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
+notification_enabled: true # KubeSphere notification system
+alerting_enabled: true # KubeSphere alerting system
+```
+
+## Step 5: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
+> - If you want to enable the installation of pluggable components, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environments, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select the multi-node mode to start the installation. The installer will ask whether you have set up a persistent storage service; just type `yes`, since we are going to use local volumes.
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+*   1) All-in-one
+*   2) Multi-node
+*   3) Quit
+################################################
+https://kubesphere.io/              2020-02-24
+################################################
+Please input an option: 2
+
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.
+
+```bash
+Successful!
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
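+
+While waiting, you can also watch the components come up from the command line. This is a sketch assuming `kubectl` has been configured on the taskbox (the master node):
+
+```bash
+# Watch all pods across namespaces; KubeSphere components run mainly in
+# the kubesphere-system namespace. Press Ctrl+C to stop watching.
+kubectl get pods --all-namespaces -w
+```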
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+If you have already set up a minimal installation, you can still enable the pluggable components by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/zh/docs/application-store/built-in-apps/master-ha.md b/content/zh/docs/application-store/built-in-apps/master-ha.md
new file mode 100644
index 000000000..ee8f26203
--- /dev/null
+++ b/content/zh/docs/application-store/built-in-apps/master-ha.md
@@ -0,0 +1,152 @@
+---
+title: "High Availability Configuration"
+keywords: "kubesphere, kubernetes, docker, installation, HA, high availability"
+description: "The guide for installing a high-availability KubeSphere cluster"
+
+weight: 2230
+---
+
+## Introduction
+
+[Multi-node installation](../multi-node) can help you quickly set up a single-master cluster on multiple machines for development and testing. However, for production we need to consider the high availability of the cluster. Since the key components, i.e. kube-apiserver, kube-scheduler, and kube-controller-manager, run on a single master node, Kubernetes and KubeSphere will be unavailable if that master goes down. Therefore we need to set up a high-availability cluster by provisioning load balancers and multiple masters. You can use any cloud load balancer or any hardware load balancer (e.g. F5). In addition, Keepalived and HAProxy are also an alternative for creating such a high-availability cluster.
+
+This document walks you through an example of how to create two [QingCloud Load Balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal and external load balancer respectively, and how to configure the high availability of the masters and Etcd using these load balancers.
+
+## Prerequisites
+
+- Please make sure that you have already read [Multi-Node Installation](../multi-node). This document only demonstrates how to configure load balancers.
+- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or follow the guide of any other cloud provider to create them.
+
+## Architecture
+
+This example prepares six CentOS 7.5 machines. We will create two load balancers, and deploy three masters and Etcd nodes on three of the machines. You can configure these masters and Etcd nodes in `conf/hosts.ini`.
+
+![Master and etcd node high availability architecture](https://pek3b.qingstor.com/kubesphere-docs/png/20200307215924.png)
+
+## Install HA Cluster
+
+### Step 1: Create Load Balancers
+
+This step briefly shows an example of creating a load balancer on the QingCloud platform.
+
+#### Create an Internal Load Balancer
+
+1.1. Log in to the [QingCloud Console](https://console.qingcloud.com/login) and select **Network & CDN → Load Balancers**, then click on the create button and fill in the basic information.
+
+1.2. From the **Network** dropdown list, choose the VxNet in which your machines were created; here it is `kube`. Other settings can keep the default values as follows. Click **Submit** to complete the creation.
+
+![Create Internal LB on QingCloud](https://pek3b.qingstor.com/kubesphere-docs/png/20200215224125.png)
+
+1.3. Go to the detail page of the load balancer, then create a listener that listens on port `6443` over the `TCP` protocol.
+
+- Name: Define a name for this listener
+- Listener Protocol: Select the `TCP` protocol
+- Port: `6443`
+- Load mode: `Poll`
+
+> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that port `6443` has been added to the firewall rules and that external traffic can pass through `6443`. Otherwise, the installation will fail.
+
+![Add Listener to LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225205.png)
+
+1.4. Click **Add Backend** and choose the VxNet `kube` selected earlier. Then click the **Advanced Search** button, choose the three master nodes under the VxNet, and set the port to `6443`, which is the default secure port of the api-server.
+
+Click **Submit** when you are done.
+
+![Choose Backends](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225550.png)
+
+1.5. Click on the button **Apply Changes** to activate the configuration. At this point, you can find that the three masters have been added as backend servers of the listener behind the internal load balancer.
+
+> Please note: The status of all masters might show `Not available` after you add them as backends. This is normal, since port `6443` of the api-server is not active on the masters yet. The status will change to `Active` and the api-server port will be exposed once the installation completes, which means the internal load balancer you configured works as expected.
+
+![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215230107.png)
+
+#### Create an External Load Balancer
+
+You need to create an EIP in advance.
+
+1.6. Similarly, create an external load balancer without joining any network, but associate the EIP that you created with this load balancer.
+
+1.7. Enter the load balancer detail page and create a listener that listens on port `30880` over the `HTTP` protocol, which is the NodePort of the KubeSphere console.
+
+> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that port `30880` has been added to the firewall rules and that external traffic can pass through `30880`. Otherwise, the installation will fail.
+
+![Create external LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232114.png)
+
+1.8. Click **Add Backend**, then choose the six machines within the VxNet `kube` on which we are going to install KubeSphere, and set the port to `30880`.
+
+Click **Submit** when you are done.
+
+1.9. Click on the button **Apply Changes** to activate the configuration. At this point, you can find that the six machines have been added as backend servers of the listener behind the external load balancer.
+
+![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232445.png)
+
+### Step 2: Modify hosts.ini
+
+Go to the taskbox where you downloaded the installer by following the [Multi-node Installation](../multi-node) guide and complete the following configurations.
+
+| **Parameter** | **Description** |
+| --- | --- |
+| `[all]` | Node information. The placeholders `<hostname>`, `<ip>`, `<username>` and `<password>` below stand for your real values. Use the following syntax if you run the installation as the `root` user: <br> - `<hostname> ansible_connection=local ip=<ip>` <br> - `<hostname> ansible_host=<ip> ip=<ip> ansible_ssh_pass=<password>` <br> If you log in as a non-root user, use the syntax: <br> - `<hostname> ansible_connection=<connection_type> ip=<ip> ansible_user=<username> ansible_become_pass=<password>` |
+| `[kube-master]` | Master node names |
+| `[kube-node]` | Worker node names |
+| `[etcd]` | Etcd node names. The number of `etcd` nodes needs to be odd. |
+| `[k8s-cluster:children]` | Group names of `[kube-master]` and `[kube-node]` |
+
+We use **CentOS 7.5** with the `root` user to install an HA cluster. Please see the following configuration as an example:
+
+> Note:
+>
+> If the _taskbox_ cannot establish an `ssh` connection with the rest of the nodes, try the non-root user configuration.
+
+#### hosts.ini example
+
+```ini
+[all]
+master1 ansible_connection=local ip=192.168.0.1
+master2 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+master3 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+node1 ansible_host=192.168.0.4 ip=192.168.0.4 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.5 ip=192.168.0.5 ansible_ssh_pass=PASSWORD
+node3 ansible_host=192.168.0.6 ip=192.168.0.6 ansible_ssh_pass=PASSWORD
+
+[kube-master]
+master1
+master2
+master3
+
+[kube-node]
+node1
+node2
+node3
+
+[etcd]
+master1
+master2
+master3
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+### Step 3: Configure the Load Balancer Parameters
+
+Besides configuring `common.yaml` by following the [Multi-node Installation](../multi-node), you need to modify the load balancer information in `common.yaml`. Assuming the **VIP** address and listening port of the **internal load balancer** are `192.168.0.253` and `6443`, you can refer to the following example.
+
+> - Note that the address and port should be indented by two spaces in `common.yaml`, and the address should be the VIP.
+> - The domain name of the load balancer is "lb.kubesphere.local" by default for internal access. If you need to change the domain name, please uncomment and modify it.
+
+#### The configuration sample in common.yaml
+
+```yaml
+## External LB example config
+## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
+loadbalancer_apiserver:
+  address: 192.168.0.253
+  port: 6443
+```
+
+Finally, please refer to the [guide](../storage-configuration) to configure the persistent storage service in `common.yaml`. You are then ready to start the installation of your high-availability KubeSphere cluster.
diff --git a/content/zh/docs/application-store/built-in-apps/multi-node.md b/content/zh/docs/application-store/built-in-apps/multi-node.md
new file mode 100644
index 000000000..d1cd790ea
--- /dev/null
+++ b/content/zh/docs/application-store/built-in-apps/multi-node.md
@@ -0,0 +1,176 @@
+---
+title: "Multi-node Installation"
+keywords: 'kubesphere, kubernetes, docker, kubesphere installer'
+description: 'The guide for installing KubeSphere on multiple nodes in development or testing environments'
+
+weight: 2220
+---
+
+`Multi-Node` installation enables installing KubeSphere on multiple nodes. Typically, any one node is used as the _taskbox_ to run the installation task. Please note that `ssh` communication must be established between the taskbox and the other nodes.
+
+- The following instructions are for the default installation without enabling any optional components, as we have made them pluggable since v2.1.0. If you want to enable any of them, please read [Enable Pluggable Components](../pluggable-components).
+- If your machines in total have >= 8 cores and >= 16G memory, we recommend you install the full package of KubeSphere by [Enabling Optional Components](../complete-installation).
+- The installation time depends on your network bandwidth, your computer configuration, the number of nodes, etc.
+
+## Prerequisites
+
+If your machine is behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for details.
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least three hosts according to the following requirements.
+
+- Time synchronization is required across all nodes, otherwise the installation may not succeed.
+- For `Ubuntu 16.04`, it is recommended to select `16.04.5`.
+- If you are using `Ubuntu 18.04`, you need to use the `root` user.
+- If the Debian system does not have the sudo command installed, you need to execute `apt update && apt install sudo` as root before installation.
+
+### Hardware Recommendation
+
+- KubeSphere can be installed on any cloud platform.
+- The installation speed can be accelerated by increasing network bandwidth.
+- If you choose the air-gapped installation, ensure the disk of each node is at least 100G.
+
+| System | Minimum Requirements (Each node) |
+| --- | --- |
+| CentOS 7.4 ~ 7.7 (64-bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+| Ubuntu 16.04/18.04 LTS (64-bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+| Red Hat Enterprise Linux Server 7.4 (64-bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+| Debian Stretch 9.5 (64-bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+
+The following example introduces multi-node installation on three hosts, with the `master` node serving as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports the high-availability configuration of the Master and Etcd nodes. Please refer to [Creating a High Availability KubeSphere Cluster](../master-ha) for guidance.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+**1.** Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
+
+```bash
+curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
+&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
+```
+
+**2.** Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note that you must not manually wrap any line in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of each node, and all host names should be in lowercase.
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - You need to replace each node's information, such as IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
+> - The "master" node also takes the roles of master and etcd, so "master" is filled under the group `[kube-master]` and the group `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
+>
+> Parameters Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The address of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default SSH user name to use.
+> - `ansible_become_pass`: Allows you to set the privilege escalation password.
+> - `ansible_ssh_pass`: The password of the host to connect to as root.
+
+## Step 3: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
+> - If you want to enable the installation of pluggable components, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environments, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select the multi-node mode to start the installation. The installer will ask whether you have set up a persistent storage service; just type `yes`, since we are going to use local volumes.
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+*   1) All-in-one
+*   2) Multi-node
+*   3) Quit
+################################################
+https://kubesphere.io/              2020-02-24
+################################################
+Please input an option: 2
+
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.
+
+```bash
+Successful!
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## FAQ
+
+The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud, and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also, please read the [FAQ of installation](../../faq/faq-install).
+
+If you have any further questions, please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/zh/docs/application-store/built-in-apps/storage-configuration.md b/content/zh/docs/application-store/built-in-apps/storage-configuration.md
new file mode 100644
index 000000000..a3d8d5156
--- /dev/null
+++ b/content/zh/docs/application-store/built-in-apps/storage-configuration.md
@@ -0,0 +1,157 @@
+---
+title: "StorageClass Configuration"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'Instructions for Setting up StorageClass for KubeSphere'
+
+weight: 2250
+---
+
+Currently, Installer supports the following [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/), providing persistent storage services for KubeSphere (more storage classes will be supported soon):
+
+- NFS
+- Ceph RBD
+- GlusterFS
+- QingCloud Block Storage
+- QingStor NeonSAN
+- Local Volume (for development and test only)
+
+The versions of the storage systems and corresponding CSI plugins listed in the table below have been well tested.
+
+| **Name** | **Version** | **Reference** |
+| --- | --- | --- |
+| Ceph RBD Server | v0.94.10 | For development and testing, refer to [Install Ceph Storage Server](/zh-CN/appendix/ceph-ks-install/) for details. Please refer to the [Ceph Documentation](http://docs.ceph.com/docs/master/) for production. |
+| Ceph RBD Client | v12.2.5 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [Ceph RBD](../storage-configuration/#ceph-rbd) |
+| GlusterFS Server | v3.7.6 | For development and testing, refer to [Deploying GlusterFS Storage Server](/zh-CN/appendix/glusterfs-ks-install/) for details. Please refer to the [Gluster Documentation](https://www.gluster.org/install/) or the [Gluster Install Guide](http://gluster.readthedocs.io/en/latest/Install-Guide/Install/) for production. Note that you need to install [Heketi Manager (V3.0.0)](https://github.com/heketi/heketi/tree/master/docs/admin). |
+| GlusterFS Client | v3.12.10 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [GlusterFS](../storage-configuration/#glusterfs) |
+| NFS Client | v3.1.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared the NFS storage server. Please see [NFS Client](../storage-configuration/#nfs) |
+| QingCloud-CSI | v0.2.0.1 | You need to configure the corresponding parameters in `common.yaml` before installing KubeSphere. Please refer to [QingCloud CSI](../storage-configuration/#qingcloud-csi) for details |
+| NeonSAN-CSI | v0.3.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared the QingStor NeonSAN storage server. Please see [NeonSAN-CSI](../storage-configuration/#neonsan-csi) |
+
+> Note: You are only allowed to set ONE default storage class in the cluster. To specify a default storage class, make sure no default storage class already exists in the cluster.
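+
+A quick way to verify this from the command line is sketched below; `<sc-name>` is a placeholder for whatever storage class is currently marked as default, and the annotation used is the standard Kubernetes one:
+
+```bash
+# List storage classes; the current default is marked with "(default)".
+kubectl get sc
+
+# Unset an existing default before designating a new one.
+kubectl patch storageclass <sc-name> \
+  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
+```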
+
+## Storage Configuration
+
+After preparing the storage server, refer to the parameter descriptions in the following tables and modify the corresponding configurations in `conf/common.yaml` accordingly.
+
+The following describes the storage configuration in `common.yaml`.
+
+> Note: Local Volume is configured as the default storage class in `common.yaml` by default. If you are going to set another storage class as the default, disable Local Volume and modify the configuration for that storage class.
+
+### Local Volume (For development or testing only)
+
+A [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) represents a mounted local storage device such as a disk, partition or directory. Local volumes can only be used as a statically created PersistentVolume. We recommend using Local volumes for testing or development only, since they make it quick and easy to install KubeSphere without the struggle of setting up a persistent storage server. Refer to the following table for the definitions in `conf/common.yaml`.
+
+| **Local volume** | **Description** |
+| --- | --- |
+| local\_volume\_provisioner\_enabled | Whether to use Local volumes as the persistent storage, defaults to true |
+| local\_volume\_provisioner\_storage\_class | Storage class name, default value: local |
+| local\_volume\_is\_default\_class | Whether to set Local as the default storage class, defaults to true |
+
+### NFS
+
+An NFS volume allows an existing NFS (Network File System) share to be mounted into your Pod. NFS can be configured in `conf/common.yaml`. Note that you need to prepare the NFS server in advance.
+
+| **NFS** | **Description** |
+| --- | --- |
+| nfs\_client\_enable | Whether to use NFS as the persistent storage, defaults to false |
+| nfs\_client\_is\_default\_class | Whether to set NFS as the default storage class, defaults to false |
+| nfs\_server | The NFS server address, either IP or hostname |
+| nfs\_path | The NFS shared directory, which is the file directory shared on the server, see the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/volumes/#nfs) |
+| nfs\_vers3\_enabled | Specifies which version of the NFS protocol to use. Defaults to false, which means v4; true means v3 |
+| nfs\_archiveOnDelete | Whether to archive the PVC data on deletion. When set to false, the data will be removed from `oldPath` automatically |
+
+### Ceph RBD
+
+The open-source [Ceph RBD](https://ceph.com/) distributed storage system can be configured for use in `conf/common.yaml`. You need to prepare the Ceph storage server in advance. Please refer to the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) for more details.
+
+| **Ceph\_RBD** | **Description** |
+| --- | --- |
+| ceph\_rbd\_enabled | Whether to use Ceph RBD as the persistent storage, defaults to false |
+| ceph\_rbd\_storage\_class | Storage class name |
+| ceph\_rbd\_is\_default\_class | Whether to set Ceph RBD as the default storage class, defaults to false |
+| ceph\_rbd\_monitors | Ceph monitors, comma delimited. This parameter is required and depends on the Ceph RBD server parameters |
+| ceph\_rbd\_admin\_id | Ceph client ID that is capable of creating images in the pool. Defaults to "admin" |
+| ceph\_rbd\_admin\_secret | Secret name for "adminId". This parameter is required. The provided secret must have type "kubernetes.io/rbd" |
+| ceph\_rbd\_pool | Ceph RBD pool. Default is "rbd" |
+| ceph\_rbd\_user\_id | Ceph client ID that is used to map the RBD image. Default is the same as adminId |
+| ceph\_rbd\_user\_secret | Secret for userId; this secret must be created in the namespace which uses the RBD image |
+| ceph\_rbd\_fsType | fsType that is supported by Kubernetes. Default: "ext4" |
+| ceph\_rbd\_imageFormat | Ceph RBD image format, "1" or "2". Default is "1" |
+| ceph\_rbd\_imageFeatures | This parameter is optional and should only be used if you set imageFormat to "2". The only feature currently supported is layering. Default is "", and no features are turned on |
+
+> Note:
+>
+> The Ceph secret used in the storage class, such as "ceph_rbd_admin_secret" and "ceph_rbd_user_secret", is retrieved with the following command on the Ceph storage server.
+
+```bash
+ceph auth get-key client.admin
+```
+
+### GlusterFS
+
+[GlusterFS](https://docs.gluster.org/en/latest/) is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. You need to prepare the GlusterFS storage server in advance. Please refer to the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs) for further information.
+
+| **GlusterFS (requires a glusterfs cluster managed by heketi)** | **Description** |
+| --- | --- |
+| glusterfs\_provisioner\_enabled | Whether to use GlusterFS as the persistent storage, defaults to false |
+| glusterfs\_provisioner\_storage\_class | Storage class name |
+| glusterfs\_is\_default\_class | Whether to set GlusterFS as the default storage class, defaults to false |
+| glusterfs\_provisioner\_restauthenabled | Gluster REST service authentication boolean that enables authentication to the REST server |
+| glusterfs\_provisioner\_resturl | Gluster REST service/Heketi service URL which provisions gluster volumes on demand. The general format should be "IP address:Port", and this is a mandatory parameter for the GlusterFS dynamic provisioner |
+| glusterfs\_provisioner\_clusterid | Optional. For example, 630372ccdc720a92c681fb928f27b53f is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of cluster IDs |
+| glusterfs\_provisioner\_restuser | Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool |
+| glusterfs\_provisioner\_secretName | Optional. Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. Installer will automatically create this secret in kube-system |
+| glusterfs\_provisioner\_gidMin | The minimum value of the GID range for the storage class |
+| glusterfs\_provisioner\_gidMax | The maximum value of the GID range for the storage class |
+| glusterfs\_provisioner\_volumetype | The volume type and its parameters can be configured with this optional value, for example: 'Replica volume': volumetype: replicate:3 |
+| jwt\_admin\_key | The "jwt.admin.key" field from "/etc/heketi/heketi.json" on the Heketi server |
+
+**Attention:**
+
+> Please note: `"glusterfs_provisioner_clusterid"` can be retrieved from the glusterfs server by running the following commands:
+
+```bash
+export HEKETI_CLI_SERVER=http://localhost:8080
+heketi-cli cluster list
+```
+
+### QingCloud Block Storage
+
+[QingCloud Block Storage](https://docs.qingcloud.com/product/Storage/volume/) is supported in KubeSphere as the persistent storage service. If you would like dynamic provisioning when creating volumes, we recommend using it as your persistent storage solution.
KubeSphere integrates [QingCloud-CSI](https://github.com/yunify/qingcloud-csi/blob/master/README_zh.md), allowing you to use the various block storage services of QingCloud. With simple configuration, you can quickly expand and clone PVCs, view the topology of PVCs, create and delete snapshots, and restore volumes from snapshots.
+
+The QingCloud-CSI plugin implements the standard CSI. You can easily create and manage the different types of volumes provided by QingCloud in KubeSphere. The corresponding PVCs will be created with the ReadWriteOnce access mode and mounted to running Pods.
+
+QingCloud-CSI supports creating the following five types of volumes in QingCloud:
+
+- High capacity
+- Standard
+- SSD Enterprise
+- Super high performance
+- High performance
+
+| **QingCloud-CSI** | **Description** |
+| --- | --- |
+| qingcloud\_csi\_enabled | Whether to use QingCloud-CSI as the persistent storage volume, defaults to false |
+| qingcloud\_csi\_is\_default\_class | Whether to set QingCloud-CSI as the default storage class, defaults to false |
+| qingcloud\_access\_key\_id, qingcloud\_secret\_access\_key | Please obtain them from the [QingCloud Console](https://console.qingcloud.com/login) |
+| qingcloud\_zone | The zone should be the same as the zone where the Kubernetes cluster is installed; the CSI plugin will operate on the storage volumes in this zone. For example, the zone can be set to values such as sh1a (Shanghai 1-A), sh1b (Shanghai 1-B), pek2 (Beijing 2), pek3a (Beijing 3-A), pek3b (Beijing 3-B), pek3c (Beijing 3-C), gd1 (Guangdong 1), gd2a (Guangdong 2-A), ap1 (Asia Pacific 1), ap2a (Asia Pacific 2-A) |
+| type | The type of volume on the QingCloud platform: 0 represents a high performance volume, 3 a super high performance volume, and 1 or 2 a high capacity volume depending on the cluster's zone; see the [QingCloud Documentation](https://docs.qingcloud.com/product/api/action/volume/create_volumes.html) |
+| maxSize, minSize | Limit the range of the volume size in GiB |
+| stepSize | Set the increment of the volume size in GiB |
+| fsType | The file system of the storage volume, which supports ext3, ext4 and xfs. The default is ext4 |
+
+### QingStor NeonSAN
+
+The NeonSAN-CSI plugin supports the enterprise-level distributed storage [QingStor NeonSAN](https://www.qingcloud.com/products/qingstor-neonsan/) as the persistent storage solution. You need to prepare the NeonSAN server, then configure the NeonSAN-CSI plugin to connect to its storage server in `conf/common.yaml`. Please refer to the [NeonSAN-CSI Reference](https://github.com/wnxn/qingstor-csi/blob/master/docs/reference_zh.md#storageclass-%E5%8F%82%E6%95%B0) for further information.
+
+| **NeonSAN** | **Description** |
+| --- | --- |
+| neonsan\_csi\_enabled | Whether to use NeonSAN as the persistent storage, defaults to false |
+| neonsan\_csi\_is\_default\_class | Whether to set NeonSAN-CSI as the default storage class, defaults to false |
+| neonsan\_csi\_protocol | Transport protocol; this option must be set by the user, e.g. TCP or RDMA |
+| neonsan\_server\_address | NeonSAN server address |
+| neonsan\_cluster\_name | NeonSAN server cluster name |
+| neonsan\_server\_pool | A comma-separated list of pools that the plugin should manage. This option must be set; the default value is kube |
+| neonsan\_server\_replicas | NeonSAN image replica count. Default: 1 |
+| neonsan\_server\_stepSize | Set the increment of the volume size in GiB. Default: 1 |
+| neonsan\_server\_fsType | The file system to use for the volume. Default: ext4 |
diff --git a/content/zh/docs/cluster-administration/_index.md b/content/zh/docs/cluster-administration/_index.md
new file mode 100644
index 000000000..ebb2b9400
--- /dev/null
+++ b/content/zh/docs/cluster-administration/_index.md
@@ -0,0 +1,22 @@
+---
+title: "Cluster Administration"
+description: "Help you to better understand KubeSphere with detailed graphics and contents"
+layout: "single"
+
+linkTitle: "Cluster Administration"
+
+weight: 4100
+
+icon: "/images/docs/docs.svg"
+
+---
+
+## Installing KubeSphere and Kubernetes on Linux
+
+In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. KubeKey can help you quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you easily scale the cluster and install pluggable components on the existing architecture.
+
+## Most Popular Pages
+
+Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
+ +{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} diff --git a/content/zh/docs/cluster-administration/nodes.md b/content/zh/docs/cluster-administration/nodes.md new file mode 100644 index 000000000..4bed011c5 --- /dev/null +++ b/content/zh/docs/cluster-administration/nodes.md @@ -0,0 +1,10 @@ +--- +title: "Nodes" +keywords: "kubernetes, StorageClass, kubesphere, PVC" +description: "Kubernetes Nodes Management" + +linkTitle: "Nodes" +weight: 200 +--- + +TBD diff --git a/content/zh/docs/cluster-administration/platform-settings/_index.md b/content/zh/docs/cluster-administration/platform-settings/_index.md new file mode 100644 index 000000000..d3af6d02b --- /dev/null +++ b/content/zh/docs/cluster-administration/platform-settings/_index.md @@ -0,0 +1,7 @@ +--- +linkTitle: "DevOps Administration" +weight: 2200 + +_build: + render: false +--- diff --git a/content/zh/docs/cluster-administration/platform-settings/customize-basic-information.md b/content/zh/docs/cluster-administration/platform-settings/customize-basic-information.md new file mode 100644 index 000000000..52a968785 --- /dev/null +++ b/content/zh/docs/cluster-administration/platform-settings/customize-basic-information.md @@ -0,0 +1,224 @@ +--- +title: "Role and Member Management" +keywords: 'kubernetes, kubesphere, air gapped, installation' +description: 'Role and Member Management' + + +weight: 2240 +--- + +The air-gapped installation is almost the same as the online installation except it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes on air-gapped environment. + +> Note: The dependencies in different operating systems may cause upexpected problems. If you encounter any installation problems on air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues). + +## Prerequisites + +- If your machine is behind a firewall, you need to open the ports by following the document [Ports Requirements](../port-firewall) for more information. +> - Installer will use `/var/lib/docker` as the default directory where all Docker related files, including the images, are stored. We recommend you to add additional storage to a disk with at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively, use the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference. +- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure supported persistent storage service](../storage-configuration) and prepare [high availability configuration](../master-ha) before installation. +- Since the air-gapped machines cannot connect to apt or yum source, please use clean Linux machine to avoid this problem. + +## Step 1: Prepare Linux Hosts + +The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements. 
+ +- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit) +- Time synchronization is required across all nodes, otherwise the installation may not succeed; +- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`; +- If you are using `Ubuntu 18.04`, you need to use the user `root`. +- Ensure your disk of each node is at least 100G. +- CPU and memory in total of all machines: 2 cores and 4 GB for minimal installation; 8 cores and 16 GB for complete installation. + + +The following section describes an example to introduce multi-node installation. This example shows three hosts installation by taking the `master` serving as the taskbox to execute the installation. The following cluster consists of one Master and two Nodes. + +> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guide. + +| Host IP | Host Name | Role | +| --- | --- | --- | +|192.168.0.1|master|master, etcd| +|192.168.0.2|node1|node| +|192.168.0.3|node2|node| + +### Cluster Architecture + +#### Single Master, Single Etcd, Two Nodes + +![Architecture](/cluster-architecture.svg) + +## Step 2: Download Installer Package + +Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`. + +```bash +curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \ +&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf +``` + +## Step 3: Configure Host Template + +> This step is only for multi-node installation, you can skip this step if you choose all-in-one installation. + +Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using root user. The following is an example configuration for `CentOS 7.5` using root user. Note do not manually wrap any line in the file. + +> Note: +> +> - If you use non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`. +> - If the `root` user of that taskbox machine cannot establish SSH connection with the rest of machines, you need to refer to the `non-root` user example at the top of the `conf/hosts.ini`, but it is recommended to switch `root` user when executing `install.sh`. +> - master, node1 and node2 are the host names of each node and all host names should be in lowercase. + +### hosts.ini + +```ini +[all] +master ansible_connection=local ip=192.168.0.1 +node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD +node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD + +[local-registry] +master + +[kube-master] +master + +[kube-node] +node1 +node2 + +[etcd] +master + +[k8s-cluster:children] +kube-node +kube-master +``` + +> Note: +> +> - You need to replace each node information such as IP, password with real values in the group `[all]`. The master node is the taskbox so you do not need to add password field here. +> - Installer will use a node as the local registry for docker images, defaults to "master" in the group `[local-registry]`. +> - The "master" node also takes the role of master and etcd, so "master" is filled under the group`[kube-master]` and the group `[etcd]` respectively. +> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`. 
+>
+> Parameters Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The address of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default SSH user name to use.
+> - `ansible_become_pass`: Allows you to set the privilege escalation password.
+> - `ansible_ssh_pass`: The password of the host to connect to as root.
+
+## Step 4: Enable All Components
+
+> This step is for the complete installation. You can skip it if you choose a minimal installation.
+
+Edit `conf/common.yaml` and apply the following changes, setting to `true` the values that are `false` by default.
+
+```yaml
+# LOGGING CONFIGURATION
+# Logging is an optional component when installing KubeSphere, and the
+# Kubernetes built-in logging APIs will be used if logging_enabled is set to false.
+# Built-in logging only provides limited functions, so enabling the logging system is recommended.
+logging_enabled: true # Whether to install the logging system
+elasticsearch_master_replica: 1 # Total number of Elasticsearch master nodes; an even number is not allowed
+elasticsearch_data_replica: 2 # Total number of Elasticsearch data nodes
+elasticsearch_volume_size: 20Gi # Elasticsearch volume size
+log_max_age: 7 # Log retention time in the built-in Elasticsearch, 7 days by default
+elk_prefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address. KubeSphere supports integrating with an external Elasticsearch, which can reduce resource consumption.
+#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
+
+#DevOps Configuration
+devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source-to-Image and Binary-to-Image)
+jenkins_memory_lim: 8Gi # Jenkins memory limit, 8 Gi by default
+jenkins_memory_req: 4Gi # Jenkins memory request, 4 Gi by default
+jenkins_volume_size: 8Gi # Jenkins volume size, 8 Gi by default
+jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address. KubeSphere supports integrating with an external SonarQube, which can reduce resource consumption.
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere.
+# They can be turned on before installation, or later, by updating their values to true.
+openpitrix_enabled: true # KubeSphere application store
+metrics_server_enabled: true # For KubeSphere HPA to use
+servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
+notification_enabled: true # KubeSphere notification system
+alerting_enabled: true # KubeSphere alerting system
+```
+
+## Step 5: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
+> - If you want to enable the installation of pluggable components, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environments, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select the multi-node mode to start the installation. The installer will ask whether you have set up a persistent storage service; just type `yes`, since we are going to use local volumes.
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+*   1) All-in-one
+*   2) Multi-node
+*   3) Quit
+################################################
+https://kubesphere.io/              2020-02-24
+################################################
+Please input an option: 2
+
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.
+
+```bash
+Successful!
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+If you have already set up a minimal installation, you can still enable the pluggable components by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/zh/docs/cluster-administration/storageclass.md b/content/zh/docs/cluster-administration/storageclass.md new file mode 100644 index 000000000..db100ea30 --- /dev/null +++ b/content/zh/docs/cluster-administration/storageclass.md @@ -0,0 +1,8 @@ +--- +title: "StorageClass" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "Kubernetes and KubeSphere node management" + +linkTitle: "StorageClass" +weight: 100 +--- diff --git a/content/zh/docs/devops-user-guide/_index.md b/content/zh/docs/devops-user-guide/_index.md new file mode 100644 index 000000000..7cbaba6b1 --- /dev/null +++ b/content/zh/docs/devops-user-guide/_index.md @@ -0,0 +1,23 @@ +--- +title: "DevOps User Guide" +description: "Getting started with KubeSphere DevOps project" +layout: "single" + +linkTitle: "DevOps User Guide" +weight: 4400 + +icon: "/images/docs/docs.svg" + +--- + +## Installing KubeSphere and Kubernetes on Linux + +In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. Kubekey can help you to quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you to easily scale the cluster and install pluggable components on existing architecture. + +## Most Popular Pages + +Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first. + +{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} + +{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} diff --git a/content/zh/docs/devops-user-guide/devops-administration/_index.md b/content/zh/docs/devops-user-guide/devops-administration/_index.md new file mode 100644 index 000000000..d3af6d02b --- /dev/null +++ b/content/zh/docs/devops-user-guide/devops-administration/_index.md @@ -0,0 +1,7 @@ +--- +linkTitle: "DevOps Administration" +weight: 2200 + +_build: + render: false +--- diff --git a/content/zh/docs/devops-user-guide/devops-administration/role-and-member-management.md b/content/zh/docs/devops-user-guide/devops-administration/role-and-member-management.md new file mode 100644 index 000000000..52a968785 --- /dev/null +++ b/content/zh/docs/devops-user-guide/devops-administration/role-and-member-management.md @@ -0,0 +1,224 @@ +--- +title: "Role and Member Management" +keywords: 'kubernetes, kubesphere, air gapped, installation' +description: 'Role and Member Management' + + +weight: 2240 +--- + +The air-gapped installation is almost the same as the online installation except it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes on air-gapped environment. + +> Note: The dependencies in different operating systems may cause upexpected problems. If you encounter any installation problems on air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues). + +## Prerequisites + +- If your machine is behind a firewall, you need to open the ports by following the document [Ports Requirements](../port-firewall) for more information. 
+- Installer will use `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend that you attach disks of at least 100 GB mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
+- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare a [high availability configuration](../master-ha) before installation.
+- Since air-gapped machines cannot connect to an apt or yum source, please use clean Linux machines to avoid dependency problems.
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
+
+- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit).
+- Time synchronization is required across all nodes; otherwise the installation may not succeed.
+- For `Ubuntu 16.04`, it is recommended to select `16.04.5`.
+- If you are using `Ubuntu 18.04`, you need to use the user `root`.
+- Ensure the disk of each node is at least 100 GB.
+- CPU and memory in total across all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
+
+The following section walks through a multi-node installation example. It installs on three hosts, with the `master` node serving as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports the high-availability configuration of the Master and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and enter the `conf` folder.
+
+```bash
+curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
+&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
+```
+
+## Step 3: Configure Host Template
+
+> This step is only for multi-node installation; you can skip it if you choose all-in-one installation.
+
+Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the `root` user; the following is an example configuration for `CentOS 7.5` using the `root` user. Note: do not manually wrap any line in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`; it is still recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of the nodes; all host names should be in lowercase.
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[local-registry]
+master
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - In the group `[all]`, you need to replace the node information, such as IPs and passwords, with real values. The master node is the taskbox, so you do not need to add a password field for it.
+> - Installer will use one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
+> - The "master" node also takes the roles of master and etcd, so "master" is filled in under the groups `[kube-master]` and `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are filled in under the group `[kube-node]`.
+>
+> Parameter Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The address of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default SSH user name to use.
+> - `ansible_become_pass`: The privilege escalation password.
+> - `ansible_ssh_pass`: The SSH password of the host, used when connecting as root.
+
+## Step 4: Enable All Components
+
+> This step is for a complete installation. You can skip it if you choose a minimal installation.
+
+Edit `conf/common.yaml` and set the following values to `true` (they are `false` by default).
+
+```yaml
+# LOGGING CONFIGURATION
+# Logging is an optional component when installing KubeSphere, and the
+# Kubernetes built-in logging APIs will be used if logging_enabled is set to false.
+# Built-in logging only provides limited functions, so it is recommended to enable logging.
+logging_enabled: true             # Whether to install the logging system
+elasticsearch_master_replica: 1   # Total number of Elasticsearch master nodes; even numbers are not allowed
+elasticsearch_data_replica: 2     # Total number of Elasticsearch data nodes
+elasticsearch_volume_size: 20Gi   # Elasticsearch volume size
+log_max_age: 7                    # Log retention time in the built-in Elasticsearch, 7 days by default
+elk_prefix: logstash              # The string making up index names; the index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false             # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED   # External Elasticsearch address; KubeSphere supports integrating with an Elasticsearch outside the cluster, which can reduce resource consumption.
+#external_es_port: SHOULD_BE_REPLACED  # External Elasticsearch service port
+
+# DevOps Configuration
+devops_enabled: true              # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary-to-Image)
+jenkins_memory_lim: 8Gi           # Jenkins memory limit, 8 Gi by default
+jenkins_memory_req: 4Gi           # Jenkins memory request, 4 Gi by default
+jenkins_volume_size: 8Gi          # Jenkins volume size, 8 Gi by default
+jenkinsJavaOpts_Xms: 3g           # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true           # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED   # External SonarQube address; KubeSphere supports integrating with a SonarQube outside the cluster, which can reduce resource consumption.
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere and
+# can be turned on before installation, or later by updating their values to true
+openpitrix_enabled: true          # KubeSphere application store
+metrics_server_enabled: true      # For KubeSphere HPA to use
+servicemesh_enabled: true         # KubeSphere service mesh system (Istio-based)
+notification_enabled: true        # KubeSphere notification system
+alerting_enabled: true            # KubeSphere alerting system
+```
+
+## Step 5: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
+> - If you want to enable pluggable feature components, modify `common.yaml` and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environments, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select multi-node mode and start the installation. The installer will ask whether you have set up a persistent storage service. Just type `yes`, since we are going to use local volumes.
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/       2020-02-24
+################################################
+Please input an option: 2
+
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.
+
+```bash
+successsful!
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE:Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You can now use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of the service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+If you have already set up a minimal installation, you can still edit the ConfigMap of ks-installer using the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/zh/docs/devops-user-guide/introduction/_index.md b/content/zh/docs/devops-user-guide/introduction/_index.md
new file mode 100644
index 000000000..f7bc936a3
--- /dev/null
+++ b/content/zh/docs/devops-user-guide/introduction/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "DevOps Project Introduction"
+weight: 2100
+
+_build:
+  render: false
+---
diff --git a/content/zh/docs/devops-user-guide/introduction/credential.md b/content/zh/docs/devops-user-guide/introduction/credential.md
new file mode 100644
index 000000000..a176c3255
--- /dev/null
+++ b/content/zh/docs/devops-user-guide/introduction/credential.md
@@ -0,0 +1,93 @@
+---
+title: "Introduction"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'KubeSphere Installation Overview'
+
+linkTitle: "Introduction"
+weight: 2110
+---
+
+[KubeSphere](https://kubesphere.io/) is an enterprise-grade multi-tenant container platform built on [Kubernetes](https://kubernetes.io). It provides an easy-to-use UI for users to manage application workloads and computing resources with a few clicks, which greatly reduces the learning curve and the complexity of daily work such as development, testing, operation and maintenance. KubeSphere aims to alleviate the pain points of Kubernetes, including storage, networking, security and ease of use.
+
+KubeSphere supports installation on both cloud-hosted and on-premises Kubernetes clusters, e.g. native Kubernetes, GKE, EKS and RKE. It also supports installation on Linux hosts, including virtual machines and bare metal, provisioning a fresh Kubernetes cluster in the process. Both methods make it easy to install KubeSphere. Meanwhile, KubeSphere offers not only an online installer, but also an air-gapped installer for environments without internet access.
+
+KubeSphere is an open-source project on [GitHub](https://github.com/kubesphere). Thousands of users are using KubeSphere, and many of them run their production workloads on it.
+
+In summary, there are several installation options to choose from. Please note that the options are not all mutually exclusive; for instance, you can deploy KubeSphere with minimal packages on an existing multi-node Kubernetes cluster in an air-gapped environment.
+The decision tree in the following graph may serve as a reference for your own situation.
+
+- [All-in-One](../all-in-one): Install KubeSphere on a single node. It is only for users who want to quickly get familiar with KubeSphere.
+- [Multi-Node](../multi-node): Install KubeSphere on multiple nodes. It is for testing or development.
+- [Install KubeSphere on Air-Gapped Linux](../install-ks-on-linux-airgapped): All KubeSphere images have been encapsulated into one package, which makes air-gapped installation on Linux machines convenient.
+- [High Availability Multi-Node](../master-ha): Install highly available KubeSphere on multiple nodes, intended for production environments.
+- [KubeSphere on Existing K8s](../install-on-k8s): Deploy KubeSphere on your Kubernetes cluster, including cloud-hosted services such as GKE and EKS.
+- [KubeSphere on Air-Gapped K8s](../install-on-k8s-airgapped): Install KubeSphere on a disconnected Kubernetes cluster.
+- Minimal Packages: Only install the minimal required system components of KubeSphere. The minimum resource requirement is as low as 1 core and 2 GB memory.
+- [Full Packages](../complete-installation): Install all available system components of KubeSphere, including DevOps, service mesh, application store, etc.
+
+![Installer Options](https://pek3b.qingstor.com/kubesphere-docs/png/20200305093158.png)
+
+## Before Installation
+
+- As the installation pulls images and updates the operating system from the internet, your environment must have internet access. If it does not, you need to use the air-gapped installer instead.
+- For all-in-one installation, the single node acts as both the master and the worker.
+- For multi-node installation, you are asked to specify the node roles in the configuration file before installation.
+- Your Linux host must have OpenSSH Server installed.
+- Please check the [ports requirements](../port-firewall) before installation.
+
+## Quick Install For Development and Testing
+
+KubeSphere has decoupled some components since v2.1.0. The installer only installs the required components by default, which brings the benefits of fast installation and minimal resource consumption. If you want to install any optional component, please check the section [Pluggable Components Overview](../intro#pluggable-components-overview) for details.
+
+The quick install of KubeSphere is only for development or testing, since it uses local volumes for storage by default. If you want a production install, please refer to the section [High Availability Installation for Production Environment](../intro#high-availability-installation-for-production-environment).
+
+### 1. Install KubeSphere on Linux
+
+- [All-in-One](../all-in-one): A hassle-free single-node installation with one-click configuration.
+- [Multi-Node](../multi-node): Install KubeSphere on multiple instances using local volumes, which means you are not required to set up a storage server such as Ceph or GlusterFS.
+
+> Note: For air-gapped installation, please refer to [Install KubeSphere on Air Gapped Linux Machines](../install-ks-on-linux-airgapped).
+
+### 2. Install KubeSphere on Existing Kubernetes
+
+You can install KubeSphere on your existing Kubernetes cluster. Please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
+
+## High Availability Installation for Production Environment
+
+### 1. Install HA KubeSphere on Linux
+
+KubeSphere installer supports installing a highly available cluster for production, with the prerequisites being a load balancer and a persistent storage service set up in advance.
+
+- [Persistent Service Configuration](../storage-configuration): By default, KubeSphere Installer uses [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning in the Kubernetes cluster. That is convenient for quickly installing a testing environment, but a production environment must have a storage server set up. Please refer to [Persistent Service Configuration](../storage-configuration) for details.
+- [Load Balancer Configuration for HA install](../master-ha): Before you get started with multi-node installation in a production environment, you need to configure a load balancer. Either a cloud LB or `HAProxy + Keepalived` works for the installation.
+
+### 2. Install HA KubeSphere on Existing Kubernetes
+
+Before you install KubeSphere on existing Kubernetes, please check the prerequisites of the installation on Linux described above and verify whether the existing Kubernetes cluster satisfies them, i.e., a load balancer and a persistent storage service.
+
+If your Kubernetes cluster is ready, please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
+
+> You can also install KubeSphere on a cloud-hosted Kubernetes service; see, for example, [Installing KubeSphere on GKE cluster](../install-on-gke).
+
+## Pluggable Components Overview
+
+KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable any of them before or after installation. The installer does not install the pluggable components by default. Please check the guide [Enable Pluggable Components Installation](../pluggable-components) for your requirements.
+
+![Pluggable Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191207140846.png)
+
+## Storage Configuration Instruction
+
+The following links explain how to configure different types of persistent storage services. Please refer to [Storage Configuration Instruction](../storage-configuration) for detailed instructions on how to configure the storage class in KubeSphere.
+
+- [NFS](https://kubernetes.io/docs/concepts/storage/volumes/#nfs)
+- [GlusterFS](https://www.gluster.org/)
+- [Ceph RBD](https://ceph.com/)
+- [QingCloud Block Storage](https://docs.qingcloud.com/product/storage/volume/)
+- [QingStor NeonSAN](https://docs.qingcloud.com/product/storage/volume/super_high_performance_shared_volume/)
+
+## Add New Nodes
+
+KubeSphere Installer allows you to scale the number of nodes; see [Add New Nodes](../add-nodes).
+
+## Uninstall
+
+Uninstall removes KubeSphere from the machines. This operation is irreversible and dangerous. Please check [Uninstall](../uninstall).
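+As a rough sketch — assuming the installer package layout used elsewhere in this guide (`kubesphere-all-v2.1.1`) and that an uninstall script ships alongside `install.sh` in the `scripts` folder — the operation looks like the following. Verify the exact script name on the [Uninstall](../uninstall) page first, since this wipes KubeSphere from all nodes:
+
+```bash
+cd kubesphere-all-v2.1.1/scripts
+./uninstall.sh   # irreversible: removes KubeSphere from all machines
+```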
diff --git a/content/zh/docs/devops-user-guide/introduction/pipeline.md b/content/zh/docs/devops-user-guide/introduction/pipeline.md
new file mode 100644
index 000000000..a176c3255
--- /dev/null
+++ b/content/zh/docs/devops-user-guide/introduction/pipeline.md
@@ -0,0 +1,93 @@
+---
+title: "Introduction"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'KubeSphere Installation Overview'
+
+linkTitle: "Introduction"
+weight: 2110
+---
+
+[KubeSphere](https://kubesphere.io/) is an enterprise-grade multi-tenant container platform built on [Kubernetes](https://kubernetes.io). It provides an easy-to-use UI for users to manage application workloads and computing resources with a few clicks, which greatly reduces the learning curve and the complexity of daily work such as development, testing, operation and maintenance. KubeSphere aims to alleviate the pain points of Kubernetes, including storage, networking, security and ease of use.
+
+KubeSphere supports installation on both cloud-hosted and on-premises Kubernetes clusters, e.g. native Kubernetes, GKE, EKS and RKE. It also supports installation on Linux hosts, including virtual machines and bare metal, provisioning a fresh Kubernetes cluster in the process. Both methods make it easy to install KubeSphere. Meanwhile, KubeSphere offers not only an online installer, but also an air-gapped installer for environments without internet access.
+
+KubeSphere is an open-source project on [GitHub](https://github.com/kubesphere). Thousands of users are using KubeSphere, and many of them run their production workloads on it.
+
+In summary, there are several installation options to choose from. Please note that the options are not all mutually exclusive; for instance, you can deploy KubeSphere with minimal packages on an existing multi-node Kubernetes cluster in an air-gapped environment. The decision tree in the following graph may serve as a reference for your own situation.
+
+- [All-in-One](../all-in-one): Install KubeSphere on a single node. It is only for users who want to quickly get familiar with KubeSphere.
+- [Multi-Node](../multi-node): Install KubeSphere on multiple nodes. It is for testing or development.
+- [Install KubeSphere on Air-Gapped Linux](../install-ks-on-linux-airgapped): All KubeSphere images have been encapsulated into one package, which makes air-gapped installation on Linux machines convenient.
+- [High Availability Multi-Node](../master-ha): Install highly available KubeSphere on multiple nodes, intended for production environments.
+- [KubeSphere on Existing K8s](../install-on-k8s): Deploy KubeSphere on your Kubernetes cluster, including cloud-hosted services such as GKE and EKS.
+- [KubeSphere on Air-Gapped K8s](../install-on-k8s-airgapped): Install KubeSphere on a disconnected Kubernetes cluster.
+- Minimal Packages: Only install the minimal required system components of KubeSphere. The minimum resource requirement is as low as 1 core and 2 GB memory.
+- [Full Packages](../complete-installation): Install all available system components of KubeSphere, including DevOps, service mesh, application store, etc.
+
+![Installer Options](https://pek3b.qingstor.com/kubesphere-docs/png/20200305093158.png)
+
+## Before Installation
+
+- As the installation pulls images and updates the operating system from the internet, your environment must have internet access. If it does not, you need to use the air-gapped installer instead.
+- For all-in-one installation, the single node acts as both the master and the worker.
+- For multi-node installation, you are asked to specify the node roles in the configuration file before installation.
+- Your Linux host must have OpenSSH Server installed.
+- Please check the [ports requirements](../port-firewall) before installation.
+
+## Quick Install For Development and Testing
+
+KubeSphere has decoupled some components since v2.1.0. The installer only installs the required components by default, which brings the benefits of fast installation and minimal resource consumption. If you want to install any optional component, please check the section [Pluggable Components Overview](../intro#pluggable-components-overview) for details.
+
+The quick install of KubeSphere is only for development or testing, since it uses local volumes for storage by default. If you want a production install, please refer to the section [High Availability Installation for Production Environment](../intro#high-availability-installation-for-production-environment).
+
+### 1. Install KubeSphere on Linux
+
+- [All-in-One](../all-in-one): A hassle-free single-node installation with one-click configuration.
+- [Multi-Node](../multi-node): Install KubeSphere on multiple instances using local volumes, which means you are not required to set up a storage server such as Ceph or GlusterFS.
+
+> Note: For air-gapped installation, please refer to [Install KubeSphere on Air Gapped Linux Machines](../install-ks-on-linux-airgapped).
+
+### 2. Install KubeSphere on Existing Kubernetes
+
+You can install KubeSphere on your existing Kubernetes cluster. Please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
+
+## High Availability Installation for Production Environment
+
+### 1. Install HA KubeSphere on Linux
+
+KubeSphere installer supports installing a highly available cluster for production, with the prerequisites being a load balancer and a persistent storage service set up in advance.
+
+- [Persistent Service Configuration](../storage-configuration): By default, KubeSphere Installer uses [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning in the Kubernetes cluster. That is convenient for quickly installing a testing environment, but a production environment must have a storage server set up. Please refer to [Persistent Service Configuration](../storage-configuration) for details.
+- [Load Balancer Configuration for HA install](../master-ha): Before you get started with multi-node installation in a production environment, you need to configure a load balancer. Either a cloud LB or `HAProxy + Keepalived` works for the installation.
+
+### 2. Install HA KubeSphere on Existing Kubernetes
+
+Before you install KubeSphere on existing Kubernetes, please check the prerequisites of the installation on Linux described above and verify whether the existing Kubernetes cluster satisfies them, i.e., a load balancer and a persistent storage service.
+
+If your Kubernetes cluster is ready, please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
+
+> You can also install KubeSphere on a cloud-hosted Kubernetes service; see, for example, [Installing KubeSphere on GKE cluster](../install-on-gke).
+
+## Pluggable Components Overview
+
+KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable any of them before or after installation.
+The installer does not install the pluggable components by default. Please check the guide [Enable Pluggable Components Installation](../pluggable-components) for your requirements.
+
+![Pluggable Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191207140846.png)
+
+## Storage Configuration Instruction
+
+The following links explain how to configure different types of persistent storage services. Please refer to [Storage Configuration Instruction](../storage-configuration) for detailed instructions on how to configure the storage class in KubeSphere.
+
+- [NFS](https://kubernetes.io/docs/concepts/storage/volumes/#nfs)
+- [GlusterFS](https://www.gluster.org/)
+- [Ceph RBD](https://ceph.com/)
+- [QingCloud Block Storage](https://docs.qingcloud.com/product/storage/volume/)
+- [QingStor NeonSAN](https://docs.qingcloud.com/product/storage/volume/super_high_performance_shared_volume/)
+
+## Add New Nodes
+
+KubeSphere Installer allows you to scale the number of nodes; see [Add New Nodes](../add-nodes).
+
+## Uninstall
+
+Uninstall removes KubeSphere from the machines. This operation is irreversible and dangerous. Please check [Uninstall](../uninstall).
diff --git a/content/zh/docs/installing-on-kubernetes/_index.md b/content/zh/docs/installing-on-kubernetes/_index.md
new file mode 100644
index 000000000..51adfedde
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/_index.md
@@ -0,0 +1,23 @@
+---
+title: "Installing on Kubernetes"
+description: "Help you to better understand KubeSphere with detailed graphics and contents"
+layout: "single"
+
+linkTitle: "Installing on Kubernetes"
+weight: 2500
+
+icon: "/images/docs/docs.svg"
+
+---
+
+## Installing KubeSphere and Kubernetes on Linux
+
+In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey helps you quickly build a production-ready cluster architecture on a set of machines from zero to one, and also makes it easy to scale the cluster and install pluggable components on an existing architecture.
+
+## Most Popular Pages
+
+Below you will find some of the most common and helpful pages from this chapter. We highly recommend that you review them first.
+
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
+
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/_index.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/_index.md
new file mode 100644
index 000000000..cd927f966
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Install on Linux"
+weight: 2200
+
+_build:
+  render: false
+---
\ No newline at end of file
diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/all-in-one.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/all-in-one.md
new file mode 100644
index 000000000..8214171ef
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/all-in-one.md
@@ -0,0 +1,116 @@
+---
+title: "All-in-One Installation"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'The guide for installing all-in-one KubeSphere for development or testing'
+
+linkTitle: "All-in-One"
+weight: 2210
+---
+
+For those who are new to KubeSphere and looking for a quick way to discover the platform, the all-in-one mode is your best choice: a one-click, hassle-free installation that provisions KubeSphere and Kubernetes on a single machine.
+
+- The following instructions perform a default installation without enabling any optional components, as these have been made pluggable since v2.1.0. If you want to enable any of them, please see the section [Enable Pluggable Components](../all-in-one#enable-pluggable-components) below.
+- If your machine has >= 8 cores and >= 16 GB memory, we recommend that you install the full package of KubeSphere by [enabling optional components](../complete-installation).
+
+## Prerequisites
+
+If your machine is behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for more information.
+
+## Step 1: Prepare Linux Machine
+
+The following describes the requirements of hardware and operating system.
+
+- For `Ubuntu 16.04`, it is recommended to select the latest `16.04.5`.
+- If you are using Ubuntu 18.04, you need to use the root user to install.
+- If a Debian system does not have the sudo command installed, you need to execute `apt update && apt install sudo` as root before installation.
+
+### Hardware Recommendation
+
+| System | Minimum Requirements |
+| ------- | ----------- |
+| CentOS 7.4 ~ 7.7 (64-bit) | CPU: 2 cores, Memory: 4 GB, Disk space: 100 GB |
+| Ubuntu 16.04/18.04 LTS (64-bit) | CPU: 2 cores, Memory: 4 GB, Disk space: 100 GB |
+| Red Hat Enterprise Linux Server 7.4 (64-bit) | CPU: 2 cores, Memory: 4 GB, Disk space: 100 GB |
+| Debian Stretch 9.5 (64-bit) | CPU: 2 cores, Memory: 4 GB, Disk space: 100 GB |
+
+## Step 2: Download Installer Package
+
+Execute the following commands to download Installer 2.1.1 and unpack it.
+
+```bash
+curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
+&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/scripts
+```
+
+## Step 3: Get Started with Installation
+
+You do not need to do anything except execute a single command.
+The installer completes everything for you automatically, including installing/updating dependency packages, installing Kubernetes (version 1.16.7 by default), setting up the storage service, and so on.
+
+> Note:
+>
+> - Generally speaking, do not modify any configuration.
+> - KubeSphere installs `calico` by default. If you would like to use a different network plugin, you can change the configuration in `conf/common.yaml`. You can also modify other configurations there, such as the storage class and pluggable components.
+> - The default storage class is [OpenEBS](https://openebs.io/), a kind of [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) that provisions persistent storage. OpenEBS supports [dynamically provisioned PVs](https://docs.openebs.io/docs/next/uglocalpv.html#Provision-OpenEBS-Local-PV-based-on-hostpath). It will be installed automatically for testing purposes.
+> - Please refer to [storage configurations](../storage-configuration) for the supported storage classes.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Execute the following command:
+
+```bash
+./install.sh
+```
+
+**2.** Enter `1` to select `All-in-one` mode and type `yes` if your machine satisfies the requirements:
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/       2020-02-24
+################################################
+Please input an option: 1
+```
+
+**3.** Verify whether KubeSphere was installed successfully:
+
+**(1).** If you see "Successful" returned after the installation completes, it means the installation succeeded. The console service is exposed through NodePort 30880 by default. You may need to bind an EIP and configure port forwarding in your environment for outside users to access it, and make sure the related port is open in your firewall.
+
+```bash
+successsful!
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.8:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE:Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You can now use the default account and password to log in to the console and take a tour of KubeSphere.
+
+Note: After logging in to the console, please verify the monitoring status of the service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Dashboard](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+The guide above performs a minimal installation by default. You can execute the following command to open the ConfigMap and enable pluggable components. Make sure your cluster has enough CPU and memory in advance; see [Enable Pluggable Components](../pluggable-components).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud and Tencent Cloud; please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also please read the [FAQ of installation](../../faq/faq-install).
+
+If you have any further questions, please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/complete-installation.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/complete-installation.md
new file mode 100644
index 000000000..e0ab92099
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/complete-installation.md
@@ -0,0 +1,76 @@
+---
+title: "Install All Optional Components"
+keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix'
+description: 'Install KubeSphere with all optional components enabled on Linux machine'
+
+
+weight: 2260
+---
+
+The installer only installs the required components (i.e. a minimal installation) by default since v2.1.0. The other components are designed to be pluggable, which means you can enable any of them before or after installation. If your machines meet the following minimum requirements, we recommend that you **enable all components before installation**. A complete installation gives you the opportunity to discover the container platform comprehensively.
+
+Minimum Requirements
+
+- CPU: 8 cores in total across all machines
+- Memory: 16 GB in total across all machines
+
+> Note:
+>
+> - If your machines do not meet the minimum requirements of a complete installation, you can still enable any subset of the components; please refer to [Enable Pluggable Components Installation](../pluggable-components).
+> - This works for both [All-in-One](../all-in-one) and [Multi-Node](../multi-node).
+
+This tutorial walks you through how to enable all components of KubeSphere.
+
+## Download Installer Package
+
+If you do not have the package yet, please run the following commands to download Installer 2.1.1 and unpack it, then enter the `conf` folder.
+
+```bash
+curl -L https://kubesphere.io/download/stable/v2.1.1 > installer.tar.gz \
+&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
+```
+
+## Enable All Components
+
+Edit `conf/common.yaml` and set the following values to `true` (they are `false` by default).
+
+```yaml
+# LOGGING CONFIGURATION
+# Logging is an optional component when installing KubeSphere, and the
+# Kubernetes built-in logging APIs will be used if logging_enabled is set to false.
+# Built-in logging only provides limited functions, so it is recommended to enable logging.
+logging_enabled: true             # Whether to install the logging system
+elasticsearch_master_replica: 1   # Total number of Elasticsearch master nodes; even numbers are not allowed
+elasticsearch_data_replica: 2     # Total number of Elasticsearch data nodes
+elasticsearch_volume_size: 20Gi   # Elasticsearch volume size
+log_max_age: 7                    # Log retention time in the built-in Elasticsearch, 7 days by default
+elk_prefix: logstash              # The string making up index names; the index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false             # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED   # External Elasticsearch address; KubeSphere supports integrating with an Elasticsearch outside the cluster, which can reduce resource consumption.
+#external_es_port: SHOULD_BE_REPLACED  # External Elasticsearch service port
+
+# DevOps Configuration
+devops_enabled: true              # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary-to-Image)
+jenkins_memory_lim: 8Gi           # Jenkins memory limit, 8 Gi by default
+jenkins_memory_req: 4Gi           # Jenkins memory request, 4 Gi by default
+jenkins_volume_size: 8Gi          # Jenkins volume size, 8 Gi by default
+jenkinsJavaOpts_Xms: 3g           # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true           # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED   # External SonarQube address; KubeSphere supports integrating with a SonarQube outside the cluster, which can reduce resource consumption.
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere and
+# can be turned on before installation, or later by updating their values to true
+openpitrix_enabled: true          # KubeSphere application store
+metrics_server_enabled: true      # For KubeSphere HPA to use
+servicemesh_enabled: true         # KubeSphere service mesh system (Istio-based)
+notification_enabled: true        # KubeSphere notification system
+alerting_enabled: true            # KubeSphere alerting system
+```
+
+Save it, then continue the installation process.
diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-linux-airgapped.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-linux-airgapped.md
new file mode 100644
index 000000000..26b3e4f04
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/install-ks-on-linux-airgapped.md
@@ -0,0 +1,224 @@
+---
+title: "Air-Gapped Installation"
+keywords: 'kubernetes, kubesphere, air gapped, installation'
+description: 'How to install KubeSphere on air-gapped Linux machines'
+
+
+weight: 2240
+---
+
+The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.
+
+> Note: The dependencies of different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
+
+## Prerequisites
+
+- If your machine is behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for details.
+- Installer will use `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend that you attach disks of at least 100 GB mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference, and the disk-preparation sketch after this list.
+- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare a [high availability configuration](../master-ha) before installation.
+- Since air-gapped machines cannot connect to an apt or yum source, please use clean Linux machines to avoid dependency problems.
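+For reference, here is a minimal sketch of preparing the two mount points. It assumes two spare block devices (`/dev/vdb` and `/dev/vdc` are hypothetical names — substitute your own) formatted with XFS; adapt the partitioning and filesystem choices to your environment:
+
+```bash
+mkfs.xfs /dev/vdb && mkfs.xfs /dev/vdc    # format the spare disks
+mkdir -p /var/lib/docker /mnt/registry    # create the mount points
+mount /dev/vdb /var/lib/docker            # Docker data directory
+mount /dev/vdc /mnt/registry              # local image registry storage
+# persist the mounts across reboots
+echo '/dev/vdb /var/lib/docker xfs defaults 0 0' >> /etc/fstab
+echo '/dev/vdc /mnt/registry xfs defaults 0 0' >> /etc/fstab
+```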
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
+
+- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit).
+- Time synchronization is required across all nodes; otherwise the installation may not succeed.
+- For `Ubuntu 16.04`, it is recommended to select `16.04.5`.
+- If you are using `Ubuntu 18.04`, you need to use the user `root`.
+- Ensure the disk of each node is at least 100 GB.
+- CPU and memory in total across all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.
+
+The following section walks through a multi-node installation example. It installs on three hosts, with the `master` node serving as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports the high-availability configuration of the Master and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and enter the `conf` folder.
+
+```bash
+curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
+&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
+```
+
+## Step 3: Configure Host Template
+
+> This step is only for multi-node installation; you can skip it if you choose all-in-one installation.
+
+Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the `root` user; the following is an example configuration for `CentOS 7.5` using the `root` user. Note: do not manually wrap any line in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`; it is still recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of the nodes; all host names should be in lowercase.
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[local-registry]
+master
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - In the group `[all]`, you need to replace the node information, such as IPs and passwords, with real values. The master node is the taskbox, so you do not need to add a password field for it.
+> - Installer will use one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
+> - The "master" node also takes the roles of master and etcd, so "master" is filled in under the groups `[kube-master]` and `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are filled in under the group `[kube-node]`.
+>
+> Parameter Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The address of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default SSH user name to use.
+> - `ansible_become_pass`: The privilege escalation password.
+> - `ansible_ssh_pass`: The SSH password of the host, used when connecting as root.
+
+## Step 4: Enable All Components
+
+> This step is for a complete installation. You can skip it if you choose a minimal installation.
+
+Edit `conf/common.yaml` and set the following values to `true` (they are `false` by default).
+
+```yaml
+# LOGGING CONFIGURATION
+# Logging is an optional component when installing KubeSphere, and the
+# Kubernetes built-in logging APIs will be used if logging_enabled is set to false.
+# Built-in logging only provides limited functions, so it is recommended to enable logging.
+logging_enabled: true             # Whether to install the logging system
+elasticsearch_master_replica: 1   # Total number of Elasticsearch master nodes; even numbers are not allowed
+elasticsearch_data_replica: 2     # Total number of Elasticsearch data nodes
+elasticsearch_volume_size: 20Gi   # Elasticsearch volume size
+log_max_age: 7                    # Log retention time in the built-in Elasticsearch, 7 days by default
+elk_prefix: logstash              # The string making up index names; the index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false             # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED   # External Elasticsearch address; KubeSphere supports integrating with an Elasticsearch outside the cluster, which can reduce resource consumption.
+#external_es_port: SHOULD_BE_REPLACED  # External Elasticsearch service port
+
+# DevOps Configuration
+devops_enabled: true              # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary-to-Image)
+jenkins_memory_lim: 8Gi           # Jenkins memory limit, 8 Gi by default
+jenkins_memory_req: 4Gi           # Jenkins memory request, 4 Gi by default
+jenkins_volume_size: 8Gi          # Jenkins volume size, 8 Gi by default
+jenkinsJavaOpts_Xms: 3g           # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true           # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED   # External SonarQube address; KubeSphere supports integrating with a SonarQube outside the cluster, which can reduce resource consumption.
+#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
+
+# The following components are all optional for KubeSphere and
+# can be turned on before installation, or later by updating their values to true
+openpitrix_enabled: true          # KubeSphere application store
+metrics_server_enabled: true      # For KubeSphere HPA to use
+servicemesh_enabled: true         # KubeSphere service mesh system (Istio-based)
+notification_enabled: true        # KubeSphere notification system
+alerting_enabled: true            # KubeSphere alerting system
+```
+
+## Step 5: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
+> - If you want to enable pluggable feature components, modify `common.yaml` and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environments, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select multi-node mode and start the installation. The installer will ask whether you have set up a persistent storage service. Just type `yes`, since we are going to use local volumes.
+
+```bash
+################################################
+         KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/       2020-02-24
+################################################
+Please input an option: 2
+
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.
+
+```bash
+successsful!
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE:Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You can now use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of the service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+If you have already set up a minimal installation, you can still edit the ConfigMap of ks-installer using the following command; a sketch for watching the installer's progress is shown at the end of this page. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
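+As a supplement to the "Enable Pluggable Components" section above: after you save the ConfigMap, ks-installer reconciles the cluster to roll out the newly enabled components. A sketch for watching its progress, assuming the installer pod carries the `app=ks-install` label used by KubeSphere releases of this era:
+
+```bash
+# follow the ks-installer logs until the enabled components report success
+kubectl logs -n kubesphere-system \
+  "$(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}')" -f
+```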
diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/master-ha.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/master-ha.md
new file mode 100644
index 000000000..ee8f26203
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/master-ha.md
@@ -0,0 +1,152 @@
+---
+title: "High Availability Configuration"
+keywords: "kubesphere, kubernetes, docker, installation, HA, high availability"
+description: "The guide for installing a highly available KubeSphere cluster"
+
+weight: 2230
+---
+
+## Introduction
+
+[Multi-node installation](../multi-node) can help you quickly set up a single-master cluster on multiple machines for development and testing. For production, however, we need to consider the high availability of the cluster: because the key components on the master node, i.e. kube-apiserver, kube-scheduler and kube-controller-manager, run on a single master node, Kubernetes and KubeSphere will be unavailable while that master is down. Therefore we need to set up a highly available cluster by provisioning load balancers and multiple masters. You can use any cloud load balancer or any hardware load balancer (e.g. F5); Keepalived and HAProxy are also an alternative for creating such a high-availability cluster.
+
+This document walks you through an example of how to create two [QingCloud Load Balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal and external load balancers respectively, and how to configure the high availability of the masters and Etcd using them.
+
+## Prerequisites
+
+- Please make sure that you have already read [Multi-Node installation](../multi-node); this document only demonstrates how to configure the load balancers.
+- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or you can follow the guide of any other cloud provider.
+
+## Architecture
+
+This example prepares six CentOS 7.5 machines. We will create two load balancers, and deploy three masters and Etcd nodes on three of the machines. You configure these masters and Etcd nodes in `conf/hosts.ini`.
+
+![Master and etcd node high availability architecture](https://pek3b.qingstor.com/kubesphere-docs/png/20200307215924.png)
+
+## Install HA Cluster
+
+### Step 1: Create Load Balancers
+
+This step briefly shows an example of creating a load balancer on the QingCloud platform.
+
+#### Create an Internal Load Balancer
+
+1.1. Log in to the [QingCloud Console](https://console.qingcloud.com/login), select **Network & CDN → Load Balancers**, then click the create button and fill in the basic information.
+
+1.2. Choose the VxNet that your machines were created in from the **Network** dropdown list; here it is `kube`. The other settings can be left at their default values, as follows. Click **Submit** to complete the creation.
+
+![Create Internal LB on QingCloud](https://pek3b.qingstor.com/kubesphere-docs/png/20200215224125.png)
+
+1.3. Drill into the detail page of the load balancer, then create a listener that listens on port `6443` over the `TCP` protocol.
+
+- Name: Define a name for this listener
+- Listener Protocol: Select the `TCP` protocol
+- Port: `6443`
+- Load mode: `Poll`
+
+> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that port `6443` has been added to the firewall rules and that external traffic can pass through `6443`. Otherwise, the installation will fail.
+
+![Add Listener to LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225205.png)
+
+1.4. Click **Add Backend** and choose the VxNet `kube` that we chose earlier. Then click the **Advanced Search** button, choose the three master nodes under that VxNet, and set the port to `6443`, which is the default secure port of the kube-apiserver.
+
+Click **Submit** when you are done.
+
+![Choose Backends](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225550.png)
+
+1.5. Click the **Apply Changes** button to activate the configuration. At this point, you can find that the three masters have been added as the backend servers of the listener behind the internal load balancer.
+
+> Please note: The status of all masters might show `Not available` after you add them as backends. This is normal, since port `6443` of the kube-apiserver is not yet active on the masters. The status will change to `Active` and the kube-apiserver port will be exposed after the installation completes, which means the internal load balancer you configured works as expected.
+
+![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215230107.png)
+
+#### Create an External Load Balancer
+
+You need to create an EIP in advance.
+
+1.6. Similarly, create an external load balancer without joining any network, but associate the EIP that you created with this load balancer.
+
+1.7. Enter the load balancer detail page and create a listener that listens on port `30880` (the NodePort of the KubeSphere console) over the `HTTP` protocol.
+
+> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that port `30880` has been added to the firewall rules and that external traffic can pass through `30880`. Otherwise, the installation will fail.
+
+![Create external LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232114.png)
+
+1.8. Click **Add Backend**, then choose the `six` machines on which we are going to install KubeSphere within the VxNet `kube`, and set the port to `30880`.
+
+Click **Submit** when you are done.
+
+1.9. Click the **Apply Changes** button to activate the configuration. At this point, you can find that the six machines have been added as the backend servers of the listener behind the external load balancer.
+
+![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232445.png)
+
+### Step 2: Modify the host.ini
+
+Go to the taskbox where you downloaded the installer by following the [Multi-node Installation](../multi-node) guide and complete the following configuration.
+
+| **Parameter** | **Description** |
+|--------------------------|------------------|
+| `[all]` | Node information. Use the following syntax if you run the installation as the `root` user:<br>- `<hostname> ansible_connection=local ip=<ip address>`<br>- `<hostname> ansible_host=<ip address> ip=<ip address> ansible_ssh_pass=<password>`<br>If you log in as a non-root user, use the syntax:<br>- `<hostname> ansible_connection=local ip=<ip address> ansible_user=<username> ansible_become_pass=<password>` |
+| `[kube-master]` | Master node names |
+| `[kube-node]` | Worker node names |
+| `[etcd]` | Etcd node names. The number of `etcd` nodes needs to be odd. |
+| `[k8s-cluster:children]` | Group names of `[kube-master]` and `[kube-node]` |
+
+We use **CentOS 7.5** with the `root` user to install an HA cluster. Please see the following configuration as an example:
+
+> Note:
+>
+> If the _taskbox_ cannot establish an `ssh` connection with the rest of the nodes, try to use the non-root user configuration.
+
+#### hosts.ini example
+
+```ini
+[all]
+master1 ansible_connection=local ip=192.168.0.1
+master2 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+master3 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+node1 ansible_host=192.168.0.4 ip=192.168.0.4 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.5 ip=192.168.0.5 ansible_ssh_pass=PASSWORD
+node3 ansible_host=192.168.0.6 ip=192.168.0.6 ansible_ssh_pass=PASSWORD
+
+[kube-master]
+master1
+master2
+master3
+
+[kube-node]
+node1
+node2
+node3
+
+[etcd]
+master1
+master2
+master3
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+### Step 3: Configure the Load Balancer Parameters
+
+Besides configuring `common.yaml` by following the [Multi-node Installation](../multi-node) guide, you need to modify the load balancer information in `common.yaml`. Assuming the **VIP** address and listening port of the **internal load balancer** are `192.168.0.253` and `6443`, you can refer to the following example.
+
+> - Note that address and port should be indented by two spaces in `common.yaml`, and the address should be the VIP.
+> - The domain name of the load balancer is "lb.kubesphere.local" by default for internal access. If you need to change the domain name, please uncomment and modify it.
+
+#### The configuration sample in common.yaml
+
+```yaml
+## External LB example config
+## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
+loadbalancer_apiserver:
+  address: 192.168.0.253
+  port: 6443
+```
+
+Finally, please refer to the [guide](../storage-configuration) to configure the persistent storage service in `common.yaml` and start your HA cluster installation. You are then ready to install the highly available KubeSphere cluster.
diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/multi-node.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/multi-node.md
new file mode 100644
index 000000000..d1cd790ea
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/multi-node.md
@@ -0,0 +1,176 @@
+---
+title: "Multi-node Installation"
+keywords: 'kubesphere, kubernetes, docker, kubesphere installer'
+description: 'The guide for installing KubeSphere on multiple nodes in a development or testing environment'
+
+weight: 2220
+---
+
+`Multi-Node` installation means installing KubeSphere on multiple nodes. Typically, any one node is used as the _taskbox_ to run the installation task. Please note that `ssh` communication must be established between the taskbox and the other nodes.
+
+- The following instructions are for the default installation without enabling any optional components, as we have made them pluggable since v2.1.0. If you want to enable any of them, please read [Enable Pluggable Components](../pluggable-components).
+- If your machines have >= 8 cores and >= 16G memory in total, we recommend installing the full package of KubeSphere by [Enabling Optional Components](../complete-installation).
+- The installation time depends on your network bandwidth, your computer configuration, the number of nodes, etc.
+
+## Prerequisites
+
+If your machine is behind a firewall, you need to open the ports listed in the document [Port Requirements](../port-firewall).
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the requirements of hardware and operating system.
+To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
+
+- Time synchronization is required across all nodes, otherwise the installation may not succeed;
+- For the `Ubuntu 16.04` OS, it is recommended to select `16.04.5`;
+- If you are using `Ubuntu 18.04`, you need to use the user `root`;
+- If the Debian system does not have the sudo command installed, you need to execute the `apt update && apt install sudo` command as root before installation.
+
+### Hardware Recommendation
+
+- KubeSphere can be installed on any cloud platform.
+- The installation speed can be accelerated by increasing network bandwidth.
+- If you choose air-gapped installation, ensure the disk of each node is at least 100G.
+
+| System | Minimum Requirements (Each node) |
+| --- | --- |
+| CentOS 7.4 ~ 7.7 (64-bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+| Ubuntu 16.04/18.04 LTS (64-bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+| Red Hat Enterprise Linux Server 7.4 (64-bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+| Debian Stretch 9.5 (64-bit) | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+
+The following section introduces multi-node installation with an example. In this example, three hosts are used, with the `master` node serving as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating a High Availability KubeSphere Cluster](../master-ha) for a guide.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+**1.** Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
+
+```bash
+curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
+&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
+```
+
+**2.** Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, you need to refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of the nodes, and all host names should be in lowercase.
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - You need to replace the node information in the group `[all]`, such as IPs and passwords, with real values. The master node is the taskbox, so you do not need to add a password field for it.
+> - The "master" node also takes the role of master and etcd, so "master" is filled under the group`[kube-master]` and the group `[etcd]` respectively. +> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`. +> +> Parameters Specification: +> +> - `ansible_connection`: Connection type to the host, "local" in the example above means local connection. +> - `ansible_host`: The name of the host to be connected. +> - `ip`: The ip of the host to be connected. +> - `ansible_user`: The default ssh user name to use. +> - `ansible_become_pass`: Allows you to set the privilege escalation password. +> - `ansible_ssh_pass`: The password of the host to be connected using root. + +## Step 3: Install KubeSphere to Linux Machines + +> Note: +> +> - Generally, you can install KubeSphere without any modification, it will start with minimal installation by default. +> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions. +> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [openEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For production environment, please [configure supported persistent storage service](../storage-configuration) before installation. +> - Since the default subnet for Cluster IPs is 10.233.0.0/18, and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not use the two IP range. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts. + +**1.** Enter `scripts` folder, and execute `install.sh` using `root` user: + +```bash +cd ../cripts +./install.sh +``` + +**2.** Type `2` to select multi-node mode to start the installation. The installer will ask you if you have set up persistent storage service or not. Just type `yes` since we are going to use local volume. + +```bash +################################################ + KubeSphere Installer Menu +################################################ +* 1) All-in-one +* 2) Multi-node +* 3) Quit +################################################ +https://kubesphere.io/ 2020-02-24 +################################################ +Please input an option: 2 + +``` + +**3.** Verify the multi-node installation: + +**(1).** If "Successful" it returned after `install.sh` process completed, then congratulation! you are ready to go. + +```bash +successsful! +##################################################### +### Welcome to KubeSphere! ### +##################################################### + +Console: http://192.168.0.1:30880 +Account: admin +Password: P@88w0rd + +NOTE:Please modify the default password after login. +##################################################### +``` + +> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components). + +**(2).** You will be able to use default account and password `admin / P@88w0rd` to log in the console `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in. + +![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png) + +Note: After log in console, please verify the monitoring status of service components in the "Cluster Status". If any service is not ready, please wait patiently untill all components get running up. 
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## FAQ
+
+The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Please also read the [FAQ of installation](../../faq/faq-install).
+
+If you have any further questions, please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/storage-configuration.md b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/storage-configuration.md
new file mode 100644
index 000000000..a3d8d5156
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/hosted-kubernetes/storage-configuration.md
@@ -0,0 +1,157 @@
+---
+title: "StorageClass Configuration"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'Instructions for Setting up StorageClass for KubeSphere'
+
+weight: 2250
+---
+
+Currently, Installer supports the following [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/), providing persistent storage service for KubeSphere (more storage classes will be supported soon).
+
+- NFS
+- Ceph RBD
+- GlusterFS
+- QingCloud Block Storage
+- QingStor NeonSAN
+- Local Volume (for development and testing only)
+
+The versions of the storage systems and the corresponding CSI plugins listed in the table below have been well tested.
+
+| **Name** | **Version** | **Reference** |
+| --- | --- | --- |
+| Ceph RBD Server | v0.94.10 | For development and testing, refer to [Install Ceph Storage Server](/zh-CN/appendix/ceph-ks-install/) for details. Please refer to the [Ceph Documentation](http://docs.ceph.com/docs/master/) for production. |
+| Ceph RBD Client | v12.2.5 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [Ceph RBD](../storage-configuration/#ceph-rbd) |
+| GlusterFS Server | v3.7.6 | For development and testing, refer to [Deploying GlusterFS Storage Server](/zh-CN/appendix/glusterfs-ks-install/) for details. Please refer to the [Gluster Documentation](https://www.gluster.org/install/) or the [Gluster Documentation](http://gluster.readthedocs.io/en/latest/Install-Guide/Install/) for production. Note that you need to install [Heketi Manager (V3.0.0)](https://github.com/heketi/heketi/tree/master/docs/admin). |
+| GlusterFS Client | v3.12.10 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [GlusterFS](../storage-configuration/#glusterfs) |
+| NFS Client | v3.1.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared the NFS storage server. Please see [NFS Client](../storage-configuration/#nfs) |
+| QingCloud-CSI | v0.2.0.1 | You need to configure the corresponding parameters in `common.yaml` before installing KubeSphere. Please refer to [QingCloud CSI](../storage-configuration/#qingcloud-csi) for details |
+| NeonSAN-CSI | v0.3.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared the QingStor NeonSAN storage server. Please see [NeonSAN-CSI](../storage-configuration/#neonsan-csi) |
+
+> Note: You are only allowed to set ONE default storage class in the cluster. To specify a default storage class, make sure no default storage class already exists in the cluster.
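+A quick way to confirm which storage class, if any, is currently marked as default is shown below. This is a minimal sketch, assuming you have `kubectl` access to the cluster:
+
+```bash
+# The default storage class is flagged with "(default)" next to its name
+kubectl get storageclass
+
+# Or inspect the annotation that marks the default class directly
+kubectl get sc -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}{"\n"}{end}'
+```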
+## Storage Configuration
+
+After preparing the storage server, refer to the parameter descriptions in the following tables, then modify the corresponding configuration in `conf/common.yaml` accordingly.
+
+The following describes the storage configuration in `common.yaml`.
+
+> Note: Local Volume is configured as the default storage class in `common.yaml` by default. If you are going to set another storage class as the default, disable Local Volume and modify the configuration for that storage class.
+
+### Local Volume (For development or testing only)
+
+A [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) represents a mounted local storage device such as a disk, partition or directory. Local volumes can only be used as a statically created PersistentVolume. We recommend using Local volume for testing or development only, since it makes KubeSphere quick and easy to install without the struggle of setting up a persistent storage server. Refer to the following table for the definitions in `conf/common.yaml`.
+
+| **Local volume** | **Description** |
+| --- | --- |
+| local\_volume\_provisioner\_enabled | Whether to use Local as the persistent storage, defaults to true |
+| local\_volume\_provisioner\_storage\_class | Storage class name, default value: local |
+| local\_volume\_is\_default\_class | Whether to set Local as the default storage class, defaults to true |
+
+### NFS
+
+An NFS volume allows an existing NFS (Network File System) share to be mounted into your Pod. NFS can be configured in `conf/common.yaml`. Note that you need to prepare the NFS server in advance.
+
+| **NFS** | **Description** |
+| --- | --- |
+| nfs\_client\_enable | Whether to use NFS as the persistent storage, defaults to false |
+| nfs\_client\_is\_default\_class | Whether to set NFS as the default storage class, defaults to false |
+| nfs\_server | The NFS server address, either an IP or a hostname |
+| nfs\_path | The NFS shared directory, which is the file directory shared on the server, see the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/volumes/#nfs) |
+| nfs\_vers3\_enabled | Specifies which version of the NFS protocol to use; defaults to false, which means v4. True means v3 |
+| nfs\_archiveOnDelete | Archive the PVC when deleting. Data will be automatically removed from `oldPath` when this is set to false |
+
+### Ceph RBD
+
+The open source [Ceph RBD](https://ceph.com/) distributed storage system can be configured for use in `conf/common.yaml`. You need to prepare the Ceph storage server in advance. Please refer to the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) for more details.
+
+| **Ceph\_RBD** | **Description** |
+| --- | --- |
+| ceph\_rbd\_enabled | Whether to use Ceph RBD as the persistent storage, defaults to false |
+| ceph\_rbd\_storage\_class | Storage class name |
+| ceph\_rbd\_is\_default\_class | Whether to set Ceph RBD as the default storage class, defaults to false |
+| ceph\_rbd\_monitors | Ceph monitors, comma delimited. This parameter is required and depends on the Ceph RBD server parameters |
+| ceph\_rbd\_admin\_id | Ceph client ID that is capable of creating images in the pool. Defaults to "admin" |
+| ceph\_rbd\_admin\_secret | The secret name for "adminId". This parameter is required. The provided secret must have type "kubernetes.io/rbd" |
+| ceph\_rbd\_pool | Ceph RBD pool. Default is "rbd" |
+| ceph\_rbd\_user\_id | Ceph client ID that is used to map the RBD image. Default is the same as adminId |
+| ceph\_rbd\_user\_secret | The secret for userId. This secret must be created in the namespace that uses the RBD image |
+| ceph\_rbd\_fsType | fsType that is supported by Kubernetes. Default: "ext4" |
+| ceph\_rbd\_imageFormat | Ceph RBD image format, "1" or "2". Default is "1" |
+| ceph\_rbd\_imageFeatures | This parameter is optional and should only be used if you set imageFormat to "2". Currently supported features are layering only. Default is "", and no features are turned on |
+
+> Note:
+>
+> The Ceph secrets used in the storage class, such as "ceph_rbd_admin_secret" and "ceph_rbd_user_secret", can be retrieved with the following command on the Ceph storage server.
+
+```bash
+ceph auth get-key client.admin
+```
+
+### GlusterFS
+
+[GlusterFS](https://docs.gluster.org/en/latest/) is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. You need to prepare the GlusterFS storage server in advance. Please refer to the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs) for further information.
+
+| **GlusterFS (requires a glusterfs cluster managed by Heketi)** | **Description** |
+| --- | --- |
+| glusterfs\_provisioner\_enabled | Whether to use GlusterFS as the persistent storage, defaults to false |
+| glusterfs\_provisioner\_storage\_class | Storage class name |
+| glusterfs\_is\_default\_class | Whether to set GlusterFS as the default storage class, defaults to false |
+| glusterfs\_provisioner\_restauthenabled | Gluster REST service authentication boolean that enables authentication to the REST server |
+| glusterfs\_provisioner\_resturl | Gluster REST service/Heketi service URL that provisions gluster volumes on demand. The general format should be "IP address:Port", and this is a mandatory parameter for the GlusterFS dynamic provisioner |
+| glusterfs\_provisioner\_clusterid | Optional. For example, 630372ccdc720a92c681fb928f27b53f is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of cluster IDs |
+| glusterfs\_provisioner\_restuser | Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool |
+| glusterfs\_provisioner\_secretName | Optional. Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. Installer will automatically create this secret in kube-system |
+| glusterfs\_provisioner\_gidMin | The minimum value of the GID range for the storage class |
+| glusterfs\_provisioner\_gidMax | The maximum value of the GID range for the storage class |
+| glusterfs\_provisioner\_volumetype | The volume type and its parameters can be configured with this optional value, for example: 'Replica volume': volumetype: replicate:3 |
+| jwt\_admin\_key | The "jwt.admin.key" field from "/etc/heketi/heketi.json" on the Heketi server |
+
+**Attention:**
+
+> Please note: the `glusterfs_provisioner_clusterid` can be retrieved from the GlusterFS server by running the following command:
+
+```bash
+export HEKETI_CLI_SERVER=http://localhost:8080
+heketi-cli cluster list
+```
+
+### QingCloud Block Storage
+
+[QingCloud Block Storage](https://docs.qingcloud.com/product/Storage/volume/) is supported in KubeSphere as the persistent storage service. If you would like to experience dynamic provisioning when creating volumes, we recommend using it as your persistent storage solution.
+KubeSphere integrates [QingCloud-CSI](https://github.com/yunify/qingcloud-csi/blob/master/README_zh.md) and allows you to use the various block storage services of QingCloud. With simple configuration, you can quickly expand and clone PVCs, view the topology of PVCs, create and delete snapshots, and restore volumes from snapshots.
+
+The QingCloud-CSI plugin implements the standard CSI. You can easily create and manage the different types of volumes provided by QingCloud in KubeSphere. The corresponding PVCs will be created with the ReadWriteOnce access mode and mounted to running Pods.
+
+QingCloud-CSI supports creating the following five types of volumes in QingCloud:
+
+- High capacity
+- Standard
+- SSD Enterprise
+- Super high performance
+- High performance
+
+| **QingCloud-CSI** | **Description** |
+| --- | --- |
+| qingcloud\_csi\_enabled | Whether to use QingCloud-CSI as the persistent storage volume, defaults to false |
+| qingcloud\_csi\_is\_default\_class | Whether to set QingCloud-CSI as the default storage class, defaults to false |
+| qingcloud\_access\_key\_id,<br>qingcloud\_secret\_access\_key | Please obtain them from the [QingCloud Console](https://console.qingcloud.com/login) |
+| qingcloud\_zone | The zone should be the same as the zone where the Kubernetes cluster is installed, and the CSI plugin will operate on the storage volumes in this zone. For example, zone can be set to values such as sh1a (Shanghai 1-A), sh1b (Shanghai 1-B), pek2 (Beijing 2), pek3a (Beijing 3-A), pek3b (Beijing 3-B), pek3c (Beijing 3-C), gd1 (Guangdong 1), gd2a (Guangdong 2-A), ap1 (Asia Pacific 1), ap2a (Asia Pacific 2-A) |
+| type | The type of volume on the QingCloud platform. 0 represents a high performance volume. 3 represents a super high performance volume. 1 or 2 represents a high capacity volume, depending on the cluster's zone, see the [QingCloud Documentation](https://docs.qingcloud.com/product/api/action/volume/create_volumes.html) |
+| maxSize, minSize | Limit the range of the volume size in GiB |
+| stepSize | Set the increment of the volume size in GiB |
+| fsType | The file system of the storage volume, which supports ext3, ext4 and xfs. The default is ext4 |
+
+### QingStor NeonSAN
+
+The NeonSAN-CSI plugin supports the enterprise-level distributed storage [QingStor NeonSAN](https://www.qingcloud.com/products/qingstor-neonsan/) as the persistent storage solution. You need to prepare the NeonSAN server first, then configure the NeonSAN-CSI plugin to connect to it in `conf/common.yaml`. Please refer to the [NeonSAN-CSI Reference](https://github.com/wnxn/qingstor-csi/blob/master/docs/reference_zh.md#storageclass-%E5%8F%82%E6%95%B0) for further information.
+
+| **NeonSAN** | **Description** |
+| --- | --- |
+| neonsan\_csi\_enabled | Whether to use NeonSAN as the persistent storage, defaults to false |
+| neonsan\_csi\_is\_default\_class | Whether to set NeonSAN-CSI as the default storage class, defaults to false |
+| neonsan\_csi\_protocol | Transport protocol, such as TCP or RDMA. The user must set this option |
+| neonsan\_server\_address | NeonSAN server address |
+| neonsan\_cluster\_name | NeonSAN server cluster name |
+| neonsan\_server\_pool | A comma-separated list of pools that the plugin manages. The user must set this option; the default value is kube |
+| neonsan\_server\_replicas | NeonSAN image replica count. Default: 1 |
+| neonsan\_server\_stepSize | Set the increment of the volume size in GiB. Default: 1 |
+| neonsan\_server\_fsType | The file system to use for the volume. Default: ext4 |
diff --git a/content/zh/docs/installing-on-kubernetes/introduction/_index.md b/content/zh/docs/installing-on-kubernetes/introduction/_index.md
new file mode 100644
index 000000000..2cf101ca5
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/introduction/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Installation"
+weight: 2100
+
+_build:
+  render: false
+---
\ No newline at end of file
diff --git a/content/zh/docs/installing-on-kubernetes/introduction/intro.md b/content/zh/docs/installing-on-kubernetes/introduction/intro.md
new file mode 100644
index 000000000..a176c3255
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/introduction/intro.md
@@ -0,0 +1,93 @@
+---
+title: "Introduction"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'KubeSphere Installation Overview'
+
+linkTitle: "Introduction"
+weight: 2110
+---
+
+[KubeSphere](https://kubesphere.io/) is an enterprise-grade multi-tenant container platform built on [Kubernetes](https://kubernetes.io).
+It provides an easy-to-use UI for users to manage application workloads and computing resources with a few clicks, which greatly reduces the learning curve and the complexity of daily work such as development, testing, operation and maintenance. KubeSphere aims to alleviate the pain points of Kubernetes, including storage, network, security and ease of use.
+
+KubeSphere supports installation on cloud-hosted and on-premises Kubernetes clusters, e.g. native K8s, GKE, EKS, RKE, etc. It also supports installation on Linux hosts, including virtual machines and bare metal, provisioning a fresh Kubernetes cluster in the process. Both methods make it easy to install KubeSphere. Meanwhile, KubeSphere offers not only an online installer, but also an air-gapped installer for environments with no access to the internet.
+
+KubeSphere is an open source project on [GitHub](https://github.com/kubesphere). Thousands of users are using KubeSphere, and many of them are running KubeSphere for their production workloads.
+
+In summary, there are several installation options you can choose from. Please note that not all options are mutually exclusive. For instance, you can deploy KubeSphere with minimal packages on an existing multi-node K8s cluster in an air-gapped environment. The decision tree shown in the following graph may help you choose the right option for your own situation.
+
+- [All-in-One](../all-in-one): Install KubeSphere on a single node. It is only for users to quickly get familiar with KubeSphere.
+- [Multi-Node](../multi-node): Install KubeSphere on multiple nodes. It is for testing or development.
+- [Install KubeSphere on Air-Gapped Linux](../install-ks-on-linux-airgapped): All images of KubeSphere have been encapsulated into a package, which is convenient for air-gapped installation on Linux machines.
+- [High Availability Multi-Node](../master-ha): Install highly available KubeSphere on multiple nodes, for production environments.
+- [KubeSphere on Existing K8s](../install-on-k8s): Deploy KubeSphere on your Kubernetes cluster, including cloud-hosted services such as GKE, EKS, etc.
+- [KubeSphere on Air-Gapped K8s](../install-on-k8s-airgapped): Install KubeSphere on a disconnected Kubernetes cluster.
+- Minimal Packages: Only install the minimal required system components of KubeSphere. The minimum resource requirement is down to 1 core and 2G memory.
+- [Full Packages](../complete-installation): Install all available system components of KubeSphere, including DevOps, service mesh, application store, etc.
+
+![Installer Options](https://pek3b.qingstor.com/kubesphere-docs/png/20200305093158.png)
+
+## Before Installation
+
+- As the installation will pull images and update the operating system from the internet, your environment must have internet access. If not, you need to use the air-gapped installer instead.
+- For all-in-one installation, the only node is both the master and the worker.
+- For multi-node installation, you are asked to specify the node roles in the configuration file before installation.
+- Your Linux host must have OpenSSH Server installed.
+- Please check the [port requirements](../port-firewall) before installation.
+
+## Quick Install For Development and Testing
+
+KubeSphere has decoupled some components since v2.1.0. The installer only installs required components by default, which brings the benefits of fast installation and minimal resource consumption.
+If you want to install any optional component, please check the section [Pluggable Components Overview](../intro#pluggable-components-overview) for details.
+
+The quick install of KubeSphere is only for development or testing, since it uses local volume for storage by default. If you want a production installation, please refer to the section [High Availability Installation for Production Environment](../intro#high-availability-installation-for-production-environment).
+
+### 1. Install KubeSphere on Linux
+
+- [All-in-One](../all-in-one): A hassle-free, one-click installation on a single node.
+- [Multi-Node](../multi-node): It allows you to install KubeSphere on multiple instances using local volume, which means you are not required to install a storage server such as Ceph or GlusterFS.
+
+> Note: With regard to air-gapped installation, please refer to [Install KubeSphere on Air-Gapped Linux Machines](../install-ks-on-linux-airgapped).
+
+### 2. Install KubeSphere on Existing Kubernetes
+
+You can install KubeSphere on your existing Kubernetes cluster. Please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
+
+## High Availability Installation for Production Environment
+
+### 1. Install HA KubeSphere on Linux
+
+The KubeSphere installer supports installing a highly available cluster for production, with the prerequisites being a load balancer and a persistent storage service set up in advance.
+
+- [Persistent Service Configuration](../storage-configuration): By default, KubeSphere Installer uses a [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning in the Kubernetes cluster. It is convenient for a quick install in a testing environment. A production environment must have a storage server set up. Please refer to [Persistent Service Configuration](../storage-configuration) for details.
+- [Load Balancer Configuration for HA install](../master-ha): Before you get started with multi-node installation in a production environment, you need to configure a load balancer. Either a cloud LB or `HAproxy + keepalived` works for the installation.
+
+### 2. Install HA KubeSphere on Existing Kubernetes
+
+Before you install KubeSphere on existing Kubernetes, please check the prerequisites of the installation on Linux described above, and verify whether the existing Kubernetes satisfies them, i.e., a load balancer and a persistent storage service.
+
+If your Kubernetes is ready, please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.
+
+> You can install KubeSphere on a cloud Kubernetes service, such as [Installing KubeSphere on a GKE cluster](../install-on-gke).
+
+## Pluggable Components Overview
+
+KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable any of them before or after installation. The installer does not install the pluggable components by default. Please check the guide [Enable Pluggable Components Installation](../pluggable-components) for your requirements.
+
+![Pluggable Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191207140846.png)
+
+## Storage Configuration Instruction
+
+The following links explain how to configure different types of persistent storage services.
+Please refer to the [Storage Configuration Instruction](../storage-configuration) for detailed instructions on how to configure the storage class in KubeSphere.
+
+- [NFS](https://kubernetes.io/docs/concepts/storage/volumes/#nfs)
+- [GlusterFS](https://www.gluster.org/)
+- [Ceph RBD](https://ceph.com/)
+- [QingCloud Block Storage](https://docs.qingcloud.com/product/storage/volume/)
+- [QingStor NeonSAN](https://docs.qingcloud.com/product/storage/volume/super_high_performance_shared_volume/)
+
+## Add New Nodes
+
+KubeSphere Installer allows you to scale the number of nodes, see [Add New Nodes](../add-nodes).
+
+## Uninstall
+
+Uninstall will remove KubeSphere from the machines. This operation is irreversible and dangerous. Please check [Uninstall](../uninstall).
diff --git a/content/zh/docs/installing-on-kubernetes/introduction/port-firewall.md b/content/zh/docs/installing-on-kubernetes/introduction/port-firewall.md
new file mode 100644
index 000000000..875c2e9b0
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/introduction/port-firewall.md
@@ -0,0 +1,33 @@
+---
+title: "Port Requirements"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: ''
+
+linkTitle: "Requirements"
+weight: 2120
+---
+
+KubeSphere requires certain ports for its services to communicate, so you need to make sure the following ports are open for use.
+
+| Service | Protocol | Action | Start Port | End Port | Notes |
+|---|---|---|---|---|---|
+| ssh | TCP | allow | 22 | | |
+| etcd | TCP | allow | 2379 | 2380 | |
+| apiserver | TCP | allow | 6443 | | |
+| calico | TCP | allow | 9099 | 9100 | |
+| bgp | TCP | allow | 179 | | |
+| nodeport | TCP | allow | 30000 | 32767 | |
+| master | TCP | allow | 10250 | 10258 | |
+| dns | TCP | allow | 53 | | |
+| dns | UDP | allow | 53 | | |
+| local-registry | TCP | allow | 5000 | | Required for air-gapped environments |
+| local-apt | TCP | allow | 5080 | | Required for air-gapped environments |
+| rpcbind | TCP | allow | 111 | | When using NFS as the storage server |
+| ipip | IPIP | allow | | | The Calico network requires the IPIP protocol |
+
+**Note**
+
+Please note that when you use the Calico network plugin and run your cluster in a classic network in a cloud environment, you need to open the IPIP protocol for the source IP. For instance, the following sample on QingCloud shows how to open the IPIP protocol.
+
+![](https://pek3b.qingstor.com/kubesphere-docs/png/20200304200605.png)
diff --git a/content/zh/docs/installing-on-kubernetes/introduction/vars.md b/content/zh/docs/installing-on-kubernetes/introduction/vars.md
new file mode 100644
index 000000000..cda3aa5db
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/introduction/vars.md
@@ -0,0 +1,107 @@
+---
+title: "Common Configurations"
+keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'Configure cluster parameters before installing'
+
+linkTitle: "Kubernetes Cluster Configuration"
+weight: 2130
+---
+
+This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following sections to understand each parameter.
+
+```yaml
+######################### Kubernetes #########################
+# The default k8s version that will be installed
+kube_version: v1.16.7
+
+# The default etcd version that will be installed
+etcd_version: v3.2.18
+
+# Configure a cron job, running on the etcd machines, to back up etcd data.
+# Period of the etcd backup job, in minutes.
+# The default value 30 means backup etcd every 30 minutes.
+etcd_backup_period: 30
+
+# How many backup replicas to keep.
+# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order.
+keep_backup_number: 5
+
+# The location to store etcd backup files on the etcd machines.
+etcd_backup_dir: "/var/backups/kube_etcd"
+
+# Add other registries (for users who need to accelerate image downloads).
+docker_registry_mirrors:
+  - https://docker.mirrors.ustc.edu.cn
+  - https://registry.docker-cn.com
+  - https://mirror.aliyuncs.com
+
+# Kubernetes network plugin; Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere.
+kube_network_plugin: calico
+
+# A valid CIDR range for Kubernetes services,
+# 1. should not overlap with the node subnet
+# 2. should not overlap with the Kubernetes pod subnet
+kube_service_addresses: 10.233.0.0/18
+
+# A valid CIDR range for the Kubernetes pod subnet,
+# 1. should not overlap with the node subnet
+# 2. should not overlap with the Kubernetes services subnet
+kube_pods_subnet: 10.233.64.0/18
+
+# Kube-proxy proxyMode configuration, either ipvs or iptables
+kube_proxy_mode: ipvs
+
+# Maximum pods allowed to run on every node.
+kubelet_max_pods: 110
+
+# Enable the nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
+enable_nodelocaldns: true
+
+# Highly available load balancer example config
+# apiserver_loadbalancer_domain_name: "lb.kubesphere.local"  # Load balancer domain name
+# loadbalancer_apiserver:    # Load balancer apiserver configuration; please uncomment this line when you prepare an HA install
+#   address: 192.168.0.10    # Load balancer apiserver IP address
+#   port: 6443               # apiserver port
+
+######################### KubeSphere #########################
+
+# Version of KubeSphere
+ks_version: v2.1.0
+
+# KubeSphere console port, range 30000-32767,
+# but 30180/30280/30380 are reserved for internal services
+console_port: 30880        # KubeSphere console nodeport
+
+# Common Components
+mysql_volume_size: 20Gi        # MySQL PVC size
+minio_volume_size: 20Gi        # Minio PVC size
+etcd_volume_size: 20Gi         # etcd PVC size
+openldap_volume_size: 2Gi      # openldap PVC size
+redis_volume_size: 2Gi         # Redis PVC size
+
+
+# Monitoring
+prometheus_replica: 2             # Prometheus replicas, 2 by default; they are responsible for monitoring different segments of the data source and provide high availability as well.
+prometheus_memory_request: 400Mi  # Prometheus memory request
+prometheus_volume_size: 20Gi      # Prometheus PVC size
+grafana_enabled: true             # Enable Grafana or not
+
+
+## Container Engine Acceleration
+## Use nvidia gpu acceleration in containers
+# nvidia_accelerator_enabled: true   # Enable the Nvidia GPU accelerator or not. It supports hybrid clusters with both GPU and CPU nodes.
+# nvidia_gpu_nodes:                  # The GPU nodes specified in hosts.ini. For now, we only support Ubuntu 16.04
+#   - kube-gpu-001                   # The host name of the GPU node specified in hosts.ini
+```
+
+## How to Configure a GPU Node
+
+You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`; then specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented by two spaces.
+
+```yaml
+  nvidia_accelerator_enabled: true
+  nvidia_gpu_nodes:
+  - node2
+```
+
+> Note: The GPU node now only supports Ubuntu 16.04.
\ No newline at end of file
diff --git a/content/zh/docs/installing-on-kubernetes/on-prem-kubernetes/_index.md b/content/zh/docs/installing-on-kubernetes/on-prem-kubernetes/_index.md
new file mode 100644
index 000000000..cd927f966
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/on-prem-kubernetes/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Install on Linux"
+weight: 2200
+
+_build:
+  render: false
+---
\ No newline at end of file
diff --git a/content/zh/docs/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md b/content/zh/docs/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md
new file mode 100644
index 000000000..26b3e4f04
--- /dev/null
+++ b/content/zh/docs/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md
@@ -0,0 +1,224 @@
+---
+title: "Air-Gapped Installation"
+keywords: 'kubernetes, kubesphere, air gapped, installation'
+description: 'How to install KubeSphere on air-gapped Linux machines'
+
+
+weight: 2240
+---
+
+The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.
+
+> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
+
+## Prerequisites
+
+- If your machine is behind a firewall, you need to open the ports listed in the document [Port Requirements](../port-firewall).
+- Installer will use `/var/lib/docker` as the default directory where all Docker related files, including the images, are stored. We recommend adding additional storage disks of at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
+- Installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare the [high availability configuration](../master-ha) before installation.
+- Since air-gapped machines cannot connect to an apt or yum source, please use a clean Linux machine to avoid dependency problems.
+
+## Step 1: Prepare Linux Hosts
+
+The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
+
+- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
+- Time synchronization is required across all nodes, otherwise the installation may not succeed;
+- For the `Ubuntu 16.04` OS, it is recommended to select `16.04.5`;
+- If you are using `Ubuntu 18.04`, you need to use the user `root`.
+- Ensure the disk of each node is at least 100G.
+- CPU and memory in total of all machines: 2 cores and 4 GB for minimal installation; 8 cores and 16 GB for complete installation.
+
+The following section introduces multi-node installation with an example.
+In this example, three hosts are used, with the `master` node serving as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating a High Availability KubeSphere Cluster](../master-ha) for a guide.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
+
+```bash
+curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
+&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
+```
+
+## Step 3: Configure Host Template
+
+> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
+
+Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, you need to refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of the nodes, and all host names should be in lowercase.
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[local-registry]
+master
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - You need to replace the node information in the group `[all]`, such as IPs and passwords, with real values. The master node is the taskbox, so you do not need to add a password field for it.
+> - Installer will use one node as the local registry for docker images; this defaults to "master" in the group `[local-registry]`.
+> - The "master" node also takes the roles of master and etcd, so "master" is filled in under the group `[kube-master]` and the group `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are filled in under the group `[kube-node]`.
+>
+> Parameters Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The name of the host to be connected.
+> - `ip`: The IP of the host to be connected.
+> - `ansible_user`: The default ssh user name to use.
+> - `ansible_become_pass`: Allows you to set the privilege escalation password.
+> - `ansible_ssh_pass`: The password of the host to be connected as root.
+
+## Step 4: Enable All Components
+
+> This step is for the complete installation. You can skip it if you choose a minimal installation.
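+Before editing, you can list the relevant switches and their current values. This is a minimal sketch, run from the unpacked installer directory; the pattern simply matches the `*_enabled` flags shown below:
+
+```bash
+# Show the pluggable-component switches currently set in conf/common.yaml
+grep -nE "(logging|devops|sonarqube|openpitrix|metrics_server|servicemesh|notification|alerting)_enabled" conf/common.yaml
+```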
+Edit `conf/common.yaml` and refer to the following changes, where the values are set to `true` (they are `false` by default).
+
+```yaml
+# LOGGING CONFIGURATION
+# logging is an optional component when installing KubeSphere, and
+# the Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
+# Builtin logging only provides limited functions, so it is recommended to enable logging.
+logging_enabled: true              # Whether to install the logging system
+elasticsearch_master_replica: 1    # Total number of master nodes; an even number is not allowed
+elasticsearch_data_replica: 2      # Total number of data nodes
+elasticsearch_volume_size: 20Gi    # Elasticsearch volume size
+log_max_age: 7                     # Log retention time in the built-in Elasticsearch, 7 days by default.
+elk_prefix: logstash               # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
+kibana_enabled: false              # Whether to install the built-in Kibana
+#external_es_url: SHOULD_BE_REPLACED   # External Elasticsearch address; KubeSphere supports integrating with an Elasticsearch outside the cluster, which can reduce resource consumption.
+#external_es_port: SHOULD_BE_REPLACED  # External Elasticsearch service port
+
+# DevOps Configuration
+devops_enabled: true         # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary to image)
+jenkins_memory_lim: 8Gi      # Jenkins memory limit, 8 Gi by default
+jenkins_memory_req: 4Gi      # Jenkins memory request, 4 Gi by default
+jenkins_volume_size: 8Gi     # Jenkins volume size, 8 Gi by default
+jenkinsJavaOpts_Xms: 3g      # The following three are JVM parameters
+jenkinsJavaOpts_Xmx: 6g
+jenkinsJavaOpts_MaxRAM: 8g
+sonarqube_enabled: true      # Whether to install the built-in SonarQube
+#sonar_server_url: SHOULD_BE_REPLACED    # External SonarQube address; KubeSphere supports integrating with a SonarQube outside the cluster, which can reduce resource consumption.
+#sonar_server_token: SHOULD_BE_REPLACED  # SonarQube token
+
+# The following components are all optional for KubeSphere,
+# and can be turned on before installation, or later by updating their values to true
+openpitrix_enabled: true         # KubeSphere application store
+metrics_server_enabled: true     # For KubeSphere HPA to use
+servicemesh_enabled: true        # KubeSphere service mesh system (Istio-based)
+notification_enabled: true       # KubeSphere notification system
+alerting_enabled: true           # KubeSphere alerting system
+```
+
+## Step 5: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
+> - If you want to enable the installation of pluggable feature components, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - Installer uses a [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall in these two IP ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
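+As a rough pre-flight check, you can make sure none of your node IPs fall into the default subnets. This is a coarse sketch using the example node IPs from `hosts.ini`; the pattern flags everything in 10.233.0.0/16, a superset of the two default /18 subnets:
+
+```bash
+# Flag node IPs that could collide with the default service/pod subnets
+for ip in 192.168.0.1 192.168.0.2 192.168.0.3; do
+  case "$ip" in
+    10.233.*) echo "possible conflict: $ip" ;;
+    *)        echo "ok: $ip" ;;
+  esac
+done
+```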
+**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select the multi-node mode to start the installation. The installer will ask you whether you have set up a persistent storage service or not. Just type `yes`, since we are going to use local volume.
+
+```bash
+################################################
+       KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/       2020-02-24
+################################################
+Please input an option: 2
+
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "Successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.
+
+```bash
+Successful!
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of service components in the "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## Enable Pluggable Components
+
+If you have already set up a minimal installation, you can still edit the ConfigMap of ks-installer using the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
+
+```bash
+kubectl edit cm -n kubesphere-system ks-installer
+```
+
+## FAQ
+
+If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
diff --git a/content/zh/docs/installing-on-linux/_index.md b/content/zh/docs/installing-on-linux/_index.md
new file mode 100644
index 000000000..2442646b9
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/_index.md
@@ -0,0 +1,23 @@
+---
+title: "Installing on Linux"
+description: "Help you to better understand KubeSphere with detailed graphics and contents"
+layout: "single"
+
+linkTitle: "Installing on Linux"
+weight: 2000
+
+icon: "/images/docs/docs.svg"
+
+---
+
+## Installing KubeSphere and Kubernetes on Linux
+
+In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey can help you quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you easily scale the cluster and install pluggable components on an existing architecture.
+
+## Most Popular Pages
+
+Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
+
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
diff --git a/content/zh/docs/installing-on-linux/introduction/_index.md b/content/zh/docs/installing-on-linux/introduction/_index.md
new file mode 100644
index 000000000..2cf101ca5
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/introduction/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Installation"
+weight: 2100
+
+_build:
+  render: false
+---
\ No newline at end of file
diff --git a/content/zh/docs/installing-on-linux/introduction/intro.md b/content/zh/docs/installing-on-linux/introduction/intro.md
new file mode 100644
index 000000000..a176c3255
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/introduction/intro.md
@@ -0,0 +1,93 @@
+---
+title: "Introduction"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'KubeSphere Installation Overview'
+
+linkTitle: "Introduction"
+weight: 2110
+---
+
+[KubeSphere](https://kubesphere.io/) is an enterprise-grade multi-tenant container platform built on [Kubernetes](https://kubernetes.io). It provides an easy-to-use UI for users to manage application workloads and computing resources with a few clicks, which greatly reduces the learning curve and the complexity of daily work such as development, testing, operation and maintenance. KubeSphere aims to alleviate the pain points of Kubernetes, including storage, network, security and ease of use.
+
+KubeSphere supports installation on cloud-hosted and on-premises Kubernetes clusters, e.g. native K8s, GKE, EKS, RKE, etc. It also supports installation on Linux hosts, including virtual machines and bare metal, provisioning a fresh Kubernetes cluster in the process. Both methods make it easy to install KubeSphere. Meanwhile, KubeSphere offers not only an online installer, but also an air-gapped installer for environments with no access to the internet.
+
+KubeSphere is an open source project on [GitHub](https://github.com/kubesphere). Thousands of users are using KubeSphere, and many of them are running KubeSphere for their production workloads.
+
+In summary, there are several installation options you can choose from. Please note that not all options are mutually exclusive. For instance, you can deploy KubeSphere with minimal packages on an existing multi-node K8s cluster in an air-gapped environment. The decision tree shown in the following graph may help you choose the right option for your own situation.
+
+- [All-in-One](../all-in-one): Install KubeSphere on a single node. It is only for users to quickly get familiar with KubeSphere.
+- [Multi-Node](../multi-node): Install KubeSphere on multiple nodes. It is for testing or development.
+- [Install KubeSphere on Air-Gapped Linux](../install-ks-on-linux-airgapped): All images of KubeSphere have been encapsulated into a package, which is convenient for air-gapped installation on Linux machines.
+- [High Availability Multi-Node](../master-ha): Install highly available KubeSphere on multiple nodes, for production environments.
+- [KubeSphere on Existing K8s](../install-on-k8s): Deploy KubeSphere on your Kubernetes cluster, including cloud-hosted services such as GKE, EKS, etc.
- [KubeSphere on Air-Gapped K8s](../install-on-k8s-airgapped): Install KubeSphere on a disconnected Kubernetes cluster.
- Minimal Packages: Only install the minimal required system components of KubeSphere. The resource requirement is as low as 1 core and 2 GB of memory.
- [Full Packages](../complete-installation): Install all available system components of KubeSphere, including DevOps, service mesh, the application store, etc.

![Installer Options](https://pek3b.qingstor.com/kubesphere-docs/png/20200305093158.png)

## Before Installation

- As the installation pulls images and updates the operating system from the internet, your environment must have internet access. If it does not, you need to use the air-gapped installer instead.
- For all-in-one installation, the single node acts as both the master and the worker.
- For multi-node installation, you are asked to specify the node roles in the configuration file before installation.
- Your Linux hosts must have OpenSSH Server installed.
- Please check the [port requirements](../port-firewall) before installation.

## Quick Install For Development and Testing

KubeSphere has decoupled some components since v2.1.0. The installer only installs the required components by default, which brings the benefits of fast installation and minimal resource consumption. If you want to install any optional component, please check the following section [Pluggable Components Overview](../intro#pluggable-components-overview) for details.

The quick install of KubeSphere is only for development or testing, since it uses local volumes for storage by default. If you want a production install, please refer to the section [High Availability Installation for Production Environment](../intro#high-availability-installation-for-production-environment).

### 1. Install KubeSphere on Linux

- [All-in-One](../all-in-one): A one-click, hassle-free installation that provisions KubeSphere on a single node.
- [Multi-Node](../multi-node): Allows you to install KubeSphere on multiple instances using local volumes, which means you are not required to install a storage server such as Ceph or GlusterFS.

> Note: For air-gapped installation, please refer to [Install KubeSphere on Air-Gapped Linux Machines](../install-ks-on-linux-airgapped).

### 2. Install KubeSphere on Existing Kubernetes

You can install KubeSphere on your existing Kubernetes cluster. Please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.

## High Availability Installation for Production Environment

### 1. Install HA KubeSphere on Linux

The KubeSphere installer supports installing a highly available cluster for production, with the prerequisites being a load balancer and a persistent storage service set up in advance.

- [Persistent Service Configuration](../storage-configuration): By default, the KubeSphere installer uses [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning in the Kubernetes cluster. It is convenient for a quick install in a testing environment. A production environment must have a storage server set up; please refer to [Persistent Service Configuration](../storage-configuration) for details.
- [Load Balancer Configuration for HA install](../master-ha): Before you get started with multi-node installation in a production environment, you need to configure a load balancer. Either a cloud load balancer or `HAProxy + Keepalived` works for the installation; a minimal HAProxy sketch is shown below.
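
The following is a minimal sketch of the self-hosted option, assuming HAProxy runs on a dedicated machine and the three masters sit at 192.168.0.1 ~ 192.168.0.3 (illustrative addresses, not part of the installer):

```bash
# Sketch: append a TCP pass-through for kube-apiserver to the HAProxy config
# on the load balancer machine, then restart HAProxy. Addresses are examples.
cat <<'EOF' >> /etc/haproxy/haproxy.cfg
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    server master1 192.168.0.1:6443 check
    server master2 192.168.0.2:6443 check
    server master3 192.168.0.3:6443 check
EOF
systemctl restart haproxy
```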

### 2. Install HA KubeSphere on Existing Kubernetes

Before you install KubeSphere on existing Kubernetes, please check the prerequisites of the installation on Linux described above, and verify that your existing Kubernetes cluster satisfies them, i.e., a load balancer and a persistent storage service.

If your Kubernetes cluster is ready, please refer to [Install KubeSphere on Kubernetes](../install-on-k8s) for instructions.

> You can also install KubeSphere on a cloud-hosted Kubernetes service, for example [Installing KubeSphere on a GKE cluster](../install-on-gke).

## Pluggable Components Overview

KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable, which means you can enable any of them before or after installation. The installer does not install the pluggable components by default. Please check the guide [Enable Pluggable Components Installation](../pluggable-components) for your requirements.

![Pluggable Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191207140846.png)

## Storage Configuration Instruction

The following links explain how to configure different types of persistent storage services. Please refer to [Storage Configuration Instruction](../storage-configuration) for detailed instructions on how to configure the storage class in KubeSphere.

- [NFS](https://kubernetes.io/docs/concepts/storage/volumes/#nfs)
- [GlusterFS](https://www.gluster.org/)
- [Ceph RBD](https://ceph.com/)
- [QingCloud Block Storage](https://docs.qingcloud.com/product/storage/volume/)
- [QingStor NeonSAN](https://docs.qingcloud.com/product/storage/volume/super_high_performance_shared_volume/)

## Add New Nodes

The KubeSphere installer allows you to scale the number of nodes; see [Add New Nodes](../add-nodes).

## Uninstall

Uninstalling removes KubeSphere from the machines. This operation is irreversible and dangerous. Please check [Uninstall](../uninstall).

diff --git a/content/zh/docs/installing-on-linux/introduction/port-firewall.md b/content/zh/docs/installing-on-linux/introduction/port-firewall.md
new file mode 100644
index 000000000..875c2e9b0
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/introduction/port-firewall.md
@@ -0,0 +1,33 @@
---
title: "Port Requirements"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: ''

linkTitle: "Requirements"
weight: 2120
---

KubeSphere requires certain ports for its services to communicate with each other, so you need to make sure the following ports are open for use; a firewalld sketch follows the table.

| Service | Protocol | Action | Start Port | End Port | Notes |
|---|---|---|---|---|---|
| ssh | TCP | allow | 22 | | |
| etcd | TCP | allow | 2379 | 2380 | |
| apiserver | TCP | allow | 6443 | | |
| calico | TCP | allow | 9099 | 9100 | |
| bgp | TCP | allow | 179 | | |
| nodeport | TCP | allow | 30000 | 32767 | |
| master | TCP | allow | 10250 | 10258 | |
| dns | TCP | allow | 53 | | |
| dns | UDP | allow | 53 | | |
| local-registry | TCP | allow | 5000 | | Required for air-gapped environments |
| local-apt | TCP | allow | 5080 | | Required for air-gapped environments |
| rpcbind | TCP | allow | 111 | | Required when using NFS as the storage server |
| ipip | IPIP | allow | | | The Calico network requires the IPIP protocol |
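
As a sketch of what this table means in practice, the TCP/UDP ports above could be opened with firewalld on CentOS nodes as follows (an assumption for illustration; on a cloud platform you would typically adjust the security group instead):

```bash
# Open the required ports on every node (firewalld assumed; adapt as needed)
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=179/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=9099-9100/tcp
firewall-cmd --permanent --add-port=10250-10258/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=53/tcp
firewall-cmd --permanent --add-port=53/udp
# Air-gapped environments additionally need 5000/tcp and 5080/tcp; NFS needs 111/tcp
firewall-cmd --reload
```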

**Note**

Please note that when you use the Calico network plugin and run your cluster on a classic network in a cloud environment, you need to open the IPIP protocol for the source IP. For instance, the following sample on QingCloud shows how to open the IPIP protocol.

![](https://pek3b.qingstor.com/kubesphere-docs/png/20200304200605.png)

diff --git a/content/zh/docs/installing-on-linux/introduction/vars.md b/content/zh/docs/installing-on-linux/introduction/vars.md
new file mode 100644
index 000000000..cda3aa5db
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/introduction/vars.md
@@ -0,0 +1,107 @@
---
title: "Common Configurations"
keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Configure cluster parameters before installing'

linkTitle: "Kubernetes Cluster Configuration"
weight: 2130
---

This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.

```yaml
######################### Kubernetes #########################
# The default k8s version that will be installed
kube_version: v1.16.7

# The default etcd version that will be installed
etcd_version: v3.2.18

# Configure a cron job to back up etcd data, which runs on the etcd machines.
# Period of the etcd backup job, in minutes.
# The default value 30 means etcd is backed up every 30 minutes.
etcd_backup_period: 30

# How many backup replicas to keep.
# The default value 5 means the latest 5 backups are kept; older ones are deleted in order.
keep_backup_number: 5

# The location where etcd backup files are stored on the etcd machines.
etcd_backup_dir: "/var/backups/kube_etcd"

# Add other registries. (For users who need to accelerate image downloads)
docker_registry_mirrors:
  - https://docker.mirrors.ustc.edu.cn
  - https://registry.docker-cn.com
  - https://mirror.aliyuncs.com

# Kubernetes network plugin. Calico is installed by default. Note that Calico and Flannel are recommended, as they are tested and verified by KubeSphere.
kube_network_plugin: calico

# A valid CIDR range for Kubernetes services,
# 1. should not overlap with the node subnet
# 2. should not overlap with the Kubernetes pod subnet
kube_service_addresses: 10.233.0.0/18

# A valid CIDR range for the Kubernetes pod subnet,
# 1. should not overlap with the node subnet
# 2. should not overlap with the Kubernetes services subnet
kube_pods_subnet: 10.233.64.0/18

# Kube-proxy proxyMode configuration, either ipvs or iptables
kube_proxy_mode: ipvs

# Maximum pods allowed to run on every node.
kubelet_max_pods: 110

# Enable nodelocal dns cache; see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
enable_nodelocaldns: true

# Highly available load balancer example config
# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Load balancer domain name
# loadbalancer_apiserver:   # Load balancer apiserver configuration; please uncomment this line when you prepare an HA install
#   address: 192.168.0.10   # Load balancer apiserver IP address
#   port: 6443              # apiserver port

######################### KubeSphere #########################

# Version of KubeSphere
ks_version: v2.1.0

# KubeSphere console port, range 30000-32767,
# but 30180/30280/30380 are reserved for internal services
console_port: 30880 # KubeSphere console nodeport

#CommonComponent
mysql_volume_size: 20Gi # MySQL PVC size
minio_volume_size: 20Gi # Minio PVC size
etcd_volume_size: 20Gi # etcd PVC size
openldap_volume_size: 2Gi # openldap PVC size
redis_volume_size: 2Gi # Redis PVC size


# Monitoring
prometheus_replica: 2 # Prometheus replicas, 2 by default, which are responsible for monitoring different segments of the data source and also provide high availability.
prometheus_memory_request: 400Mi # Prometheus memory request
prometheus_volume_size: 20Gi # Prometheus PVC size
grafana_enabled: true # whether to enable Grafana


## Container Engine Acceleration
## Use NVIDIA GPU acceleration in containers
# nvidia_accelerator_enabled: true # Whether to enable the NVIDIA GPU accelerator. Hybrid clusters with both GPU and CPU nodes are supported.
# nvidia_gpu_nodes:   # The GPU nodes specified in hosts.ini. For now only Ubuntu 16.04 is supported.
#   - kube-gpu-001    # The host name of the GPU node specified in hosts.ini
```

## How to Configure a GPU Node

You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`; then specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented by two spaces.

```yaml
  nvidia_accelerator_enabled: true
  nvidia_gpu_nodes:
  - node2
```

> Note: The GPU node currently only supports Ubuntu 16.04.
\ No newline at end of file

diff --git a/content/zh/docs/installing-on-linux/on-premise/_index.md b/content/zh/docs/installing-on-linux/on-premise/_index.md
new file mode 100644
index 000000000..cd927f966
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/on-premise/_index.md
@@ -0,0 +1,7 @@
---
linkTitle: "Install on Linux"
weight: 2200

_build:
  render: false
---
\ No newline at end of file

diff --git a/content/zh/docs/installing-on-linux/on-premise/install-ks-on-linux-airgapped.md b/content/zh/docs/installing-on-linux/on-premise/install-ks-on-linux-airgapped.md
new file mode 100644
index 000000000..26b3e4f04
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/on-premise/install-ks-on-linux-airgapped.md
@@ -0,0 +1,224 @@
---
title: "Air-Gapped Installation"
keywords: 'kubernetes, kubesphere, air gapped, installation'
description: 'How to install KubeSphere on air-gapped Linux machines'


weight: 2240
---

The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.

> Note: The dependencies in different operating systems may cause unexpected problems.
> If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).

## Prerequisites

- If your machine is behind a firewall, you need to open the ports by following the document [Port Requirements](../port-firewall).
- The installer uses `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend adding additional disks of at least 100G, mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
- The installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare the [high availability configuration](../master-ha) before installation.
- Since air-gapped machines cannot connect to an apt or yum source, please use clean Linux machines to avoid dependency problems.

## Step 1: Prepare Linux Hosts

The following describes the hardware and operating system requirements. To get started with multi-node installation, you need to prepare at least three hosts according to the following requirements.

- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
- Time synchronization is required across all nodes; otherwise the installation may not succeed.
- For the `Ubuntu 16.04` OS, it is recommended to select `16.04.5`.
- If you are using `Ubuntu 18.04`, you need to use the user `root`.
- Ensure the disk of each node is at least 100G.
- CPU and memory in total across all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.

The following section introduces multi-node installation with a three-host example, with the `master` node serving as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.

> Note: KubeSphere supports the high-availability configuration of the Masters and etcd nodes. Please refer to [Creating a High Availability KubeSphere Cluster](../master-ha) for guidance.

| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|

### Cluster Architecture

#### Single Master, Single Etcd, Two Nodes

![Architecture](/cluster-architecture.svg)

## Step 2: Download Installer Package

Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.

```bash
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
```

## Step 3: Configure Host Template

> This step is only for multi-node installation; you can skip it if you choose all-in-one installation.

Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.

> Note:
>
> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of each node, and all host names should be in lowercase.

### hosts.ini

```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD

[local-registry]
master

[kube-master]
master

[kube-node]
node1
node2

[etcd]
master

[k8s-cluster:children]
kube-node
kube-master
```

> Note:
>
> - You need to replace each node's information, such as the IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
> - The installer uses one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
> - The "master" node also takes the roles of master and etcd, so "master" is filled under the group `[kube-master]` and the group `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
> - `ansible_host`: The name of the host to connect to.
> - `ip`: The IP of the host to connect to.
> - `ansible_user`: The default SSH user name to use.
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
> - `ansible_ssh_pass`: The password of the host to connect to as root.

## Step 4: Enable All Components

> This step is for a complete installation. You can skip it if you choose a minimal installation.

Edit `conf/common.yaml` and refer to the following changes, setting to `true` the values that are `false` by default.

```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# the Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Builtin logging only provides limited functions, so enabling logging is recommended.
logging_enabled: true # Whether to install the logging system
elasticsearch_master_replica: 1 # total number of master nodes; an even number is not allowed
elasticsearch_data_replica: 2 # total number of data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in the built-in Elasticsearch, 7 days by default
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false # Whether to install the built-in Kibana
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address. KubeSphere supports integrating with an Elasticsearch outside the cluster, which can reduce resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port

#DevOps Configuration
devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary to image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, 8 Gi by default
jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install the built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address. KubeSphere supports integrating with a SonarQube outside the cluster, which can reduce resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token

# The following components are all optional for KubeSphere.
# They can be turned on before installation, or later by updating their values to true
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # For KubeSphere HPA to use
servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```

## Step 5: Install KubeSphere to Linux Machines

> Note:
>
> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
> - If you want to enable the pluggable feature components, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - The installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two IP ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.

**1.** Enter the `scripts` folder, and execute `install.sh` as the `root` user:

```bash
cd ../scripts
./install.sh
```

**2.** Type `2` to select the multi-node mode and start the installation. The installer will ask you whether you have set up a persistent storage service. Just type `yes`, since we are going to use the local volume.

```bash
################################################
         KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/       2020-02-24
################################################
Please input an option: 2
```

**3.** Verify the multi-node installation:

**(1).** If "successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.

```bash
successful!
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd

NOTE: Please modify the default password after login.
#####################################################
```

> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).

**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.

![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)

Note: After logging in to the console, please verify the monitoring status of the service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.

![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)

## Enable Pluggable Components

If you have already set up a minimal installation, you can still enable the pluggable components by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources in your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).

```bash
kubectl edit cm -n kubesphere-system ks-installer
```

## FAQ

If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).

diff --git a/content/zh/docs/installing-on-linux/public-cloud/_index.md b/content/zh/docs/installing-on-linux/public-cloud/_index.md
new file mode 100644
index 000000000..cd927f966
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/public-cloud/_index.md
@@ -0,0 +1,7 @@
---
linkTitle: "Install on Linux"
weight: 2200

_build:
  render: false
---
\ No newline at end of file

diff --git a/content/zh/docs/installing-on-linux/public-cloud/all-in-one.md b/content/zh/docs/installing-on-linux/public-cloud/all-in-one.md
new file mode 100644
index 000000000..8214171ef
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/public-cloud/all-in-one.md
@@ -0,0 +1,116 @@
---
title: "All-in-One Installation"
keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'The guide for installing all-in-one KubeSphere for development or testing'

linkTitle: "All-in-One"
weight: 2210
---

For those who are new to KubeSphere and looking for a quick way to discover the platform, the all-in-one mode is your best choice: a one-click, hassle-free installation that provisions KubeSphere and Kubernetes on your machine.

- The following instructions are for the default installation without enabling any optional components, as we have made them pluggable since v2.1.0. If you want to enable any of them, please see the section [Enable Pluggable Components](../all-in-one#enable-pluggable-components) below.
- If your machine has >= 8 cores and >= 16G memory, we recommend you install the full package of KubeSphere by [enabling optional components](../complete-installation).

## Prerequisites

If your machine is behind a firewall, you need to open the ports by following the document [Port Requirements](../port-firewall).

## Step 1: Prepare Linux Machine

The following describes the hardware and operating system requirements (a preparation sketch follows the list).

- For the `Ubuntu 16.04` OS, it is recommended to select the latest `16.04.5`.
- If you are using Ubuntu 18.04, you need to install as the root user.
- If your Debian system does not have the sudo command installed, you need to execute the `apt update && apt install sudo` command as root before installation.
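
As a quick sketch, the preparation above boils down to the following (the commands assume a Debian/Ubuntu machine; they do not apply to CentOS):

```bash
# Confirm the OS release is one of the supported versions listed above
cat /etc/os-release

# On a fresh Debian image that ships without sudo, install it as root first
apt update && apt install sudo
```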

### Hardware Recommendation

| System | Minimum Requirements |
| ------- | ----------- |
| CentOS 7.4 ~ 7.7 (64 bit) | CPU: 2 cores, Memory: 4 G, Disk Space: 100 G |
| Ubuntu 16.04/18.04 LTS (64 bit) | CPU: 2 cores, Memory: 4 G, Disk Space: 100 G |
| Red Hat Enterprise Linux Server 7.4 (64 bit) | CPU: 2 cores, Memory: 4 G, Disk Space: 100 G |
| Debian Stretch 9.5 (64 bit)| CPU: 2 cores, Memory: 4 G, Disk Space: 100 G |

## Step 2: Download Installer Package

Execute the following commands to download Installer 2.1.1 and unpack it.

```bash
curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/scripts
```

## Step 3: Get Started with Installation

You do not need to do anything except execute one command, as follows. The installer completes everything for you automatically, including installing/updating dependency packages, installing Kubernetes (default version 1.16.7), the storage service, and so on.

> Note:
>
> - Generally speaking, do not modify any configuration.
> - KubeSphere installs `calico` by default. If you would like to use a different network plugin, you can change the configuration in `conf/common.yaml`. You can also modify other configurations such as the storage class, pluggable components, etc.
> - The default storage class is [OpenEBS](https://openebs.io/), a kind of [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) that provisions persistent storage service. OpenEBS supports [dynamically provisioning PVs](https://docs.openebs.io/docs/next/uglocalpv.html#Provision-OpenEBS-Local-PV-based-on-hostpath). It will be installed automatically for your testing purposes.
> - Please refer to [storage configurations](../storage-configuration) for the supported storage classes.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two IP ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.

**1.** Execute the following command:

```bash
./install.sh
```

**2.** Enter `1` to select the `All-in-one` mode, and type `yes` if your machine satisfies the requirements, to start:

```bash
################################################
         KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/       2020-02-24
################################################
Please input an option: 1
```

**3.** Verify whether KubeSphere was installed successfully:

**(1).** If you see "successful" returned after completion, the installation succeeded. The console service is exposed through NodePort 30880 by default. You may need to bind an EIP and configure port forwarding in your environment for outside users to access the console. Also make sure the port is not blocked by a firewall.

```bash
successful!
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.8:30880
Account: admin
Password: P@88w0rd

NOTE: Please modify the default password after login.
#####################################################
```

> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).

**(2).** You will be able to use the default account and password to log in to the console and take a tour of KubeSphere.

Note: After logging in to the console, please verify the monitoring status of the service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.

![Dashboard](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)

## Enable Pluggable Components

The guide above performs a minimal installation by default. You can execute the following command to open the ConfigMap and enable the pluggable components. Make sure your cluster has enough CPU and memory in advance; see [Enable Pluggable Components](../pluggable-components).

```bash
kubectl edit cm -n kubesphere-system ks-installer
```

## FAQ

The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud, and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details. Also, please read the [installation FAQ](../../faq/faq-install).

If you have any further questions, please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).

diff --git a/content/zh/docs/installing-on-linux/public-cloud/complete-installation.md b/content/zh/docs/installing-on-linux/public-cloud/complete-installation.md
new file mode 100644
index 000000000..e0ab92099
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/public-cloud/complete-installation.md
@@ -0,0 +1,76 @@
---
title: "Install All Optional Components"
keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix'
description: 'Install KubeSphere with all optional components enabled on Linux machines'


weight: 2260
---

The installer only installs the required components (i.e. a minimal installation) by default since v2.1.0. The other components are designed to be pluggable, which means you can enable any of them before or after installation. If your machines meet the following minimum requirements, we recommend you **enable all components before installation**. A complete installation gives you the opportunity to discover the container platform comprehensively.

Minimum Requirements

- CPU: 8 cores in total across all machines
- Memory: 16 GB in total across all machines

> Note:
>
> - If your machines do not meet the minimum requirements for a complete installation, you can still enable any of the components at will. Please refer to [Enable Pluggable Components Installation](../pluggable-components).
> - This guide works for both [All-in-One](../all-in-one) and [Multi-Node](../multi-node).

This tutorial walks you through how to enable all components of KubeSphere.

## Download Installer Package

If you do not have the package yet, please run the following commands to download Installer 2.1.1 and unpack it, then enter the `conf` folder.

```bash
curl -L https://kubesphere.io/download/stable/v2.1.1 > installer.tar.gz \
&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
```

## Enable All Components

Edit `conf/common.yaml` and refer to the following changes, setting to `true` the values that are `false` by default.

```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# the Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Builtin logging only provides limited functions, so enabling logging is recommended.
logging_enabled: true # Whether to install the logging system
elasticsearch_master_replica: 1 # total number of master nodes; an even number is not allowed
elasticsearch_data_replica: 2 # total number of data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in the built-in Elasticsearch, 7 days by default
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false # Whether to install the built-in Kibana
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address. KubeSphere supports integrating with an Elasticsearch outside the cluster, which can reduce resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port

#DevOps Configuration
devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary to image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, 8 Gi by default
jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install the built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address. KubeSphere supports integrating with a SonarQube outside the cluster, which can reduce resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token

# The following components are all optional for KubeSphere.
# They can be turned on before installation, or later by updating their values to true
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # For KubeSphere HPA to use
servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```

Save it; then you can continue the installation process.

diff --git a/content/zh/docs/installing-on-linux/public-cloud/install-ks-on-linux-airgapped.md b/content/zh/docs/installing-on-linux/public-cloud/install-ks-on-linux-airgapped.md
new file mode 100644
index 000000000..26b3e4f04
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/public-cloud/install-ks-on-linux-airgapped.md
@@ -0,0 +1,224 @@
---
title: "Air-Gapped Installation"
keywords: 'kubernetes, kubesphere, air gapped, installation'
description: 'How to install KubeSphere on air-gapped Linux machines'


weight: 2240
---

The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.

> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).

## Prerequisites

- If your machine is behind a firewall, you need to open the ports by following the document [Port Requirements](../port-firewall).
- The installer uses `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored.
We recommend adding additional disks of at least 100G, mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
- The installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure a supported persistent storage service](../storage-configuration) and prepare the [high availability configuration](../master-ha) before installation.
- Since air-gapped machines cannot connect to an apt or yum source, please use clean Linux machines to avoid dependency problems.

## Step 1: Prepare Linux Hosts

The following describes the hardware and operating system requirements. To get started with multi-node installation, you need to prepare at least three hosts according to the following requirements.

- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
- Time synchronization is required across all nodes; otherwise the installation may not succeed.
- For the `Ubuntu 16.04` OS, it is recommended to select `16.04.5`.
- If you are using `Ubuntu 18.04`, you need to use the user `root`.
- Ensure the disk of each node is at least 100G.
- CPU and memory in total across all machines: 2 cores and 4 GB for a minimal installation; 8 cores and 16 GB for a complete installation.

The following section introduces multi-node installation with a three-host example, with the `master` node serving as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.

> Note: KubeSphere supports the high-availability configuration of the Masters and etcd nodes. Please refer to [Creating a High Availability KubeSphere Cluster](../master-ha) for guidance.

| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|

### Cluster Architecture

#### Single Master, Single Etcd, Two Nodes

![Architecture](/cluster-architecture.svg)

## Step 2: Download Installer Package

Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.

```bash
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
```

## Step 3: Configure Host Template

> This step is only for multi-node installation; you can skip it if you choose all-in-one installation.

Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.

> Note:
>
> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini` (a sketch of the non-root syntax is shown below).
> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of each node, and all host names should be in lowercase.
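
For reference, a non-root entry would look something like the sketch below; the user name is hypothetical, and the authoritative example is the commented block at the top of `conf/hosts.ini`.

```ini
; Hypothetical non-root sketch (the user name is illustrative); see the
; commented example at the top of conf/hosts.ini for the authoritative syntax
master ansible_connection=local ip=192.168.0.1 ansible_user=ubuntu ansible_become_pass=PASSWORD
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_user=ubuntu ansible_become_pass=PASSWORD
```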

### hosts.ini

```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD

[local-registry]
master

[kube-master]
master

[kube-node]
node1
node2

[etcd]
master

[k8s-cluster:children]
kube-node
kube-master
```

> Note:
>
> - You need to replace each node's information, such as the IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
> - The installer uses one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
> - The "master" node also takes the roles of master and etcd, so "master" is filled under the group `[kube-master]` and the group `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
> - `ansible_host`: The name of the host to connect to.
> - `ip`: The IP of the host to connect to.
> - `ansible_user`: The default SSH user name to use.
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
> - `ansible_ssh_pass`: The password of the host to connect to as root.

## Step 4: Enable All Components

> This step is for a complete installation. You can skip it if you choose a minimal installation.

Edit `conf/common.yaml` and refer to the following changes, setting to `true` the values that are `false` by default.

```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# the Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Builtin logging only provides limited functions, so enabling logging is recommended.
logging_enabled: true # Whether to install the logging system
elasticsearch_master_replica: 1 # total number of master nodes; an even number is not allowed
elasticsearch_data_replica: 2 # total number of data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in the built-in Elasticsearch, 7 days by default
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false # Whether to install the built-in Kibana
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address. KubeSphere supports integrating with an Elasticsearch outside the cluster, which can reduce resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port

#DevOps Configuration
devops_enabled: true # Whether to install the built-in DevOps system (supports CI/CD pipelines, Source/Binary to image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, 8 Gi by default
jenkinsJavaOpts_Xms: 3g # The following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install the built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address. KubeSphere supports integrating with a SonarQube outside the cluster, which can reduce resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token

# The following components are all optional for KubeSphere.
# They can be turned on before installation, or later by updating their values to true
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # For KubeSphere HPA to use
servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```

## Step 5: Install KubeSphere to Linux Machines

> Note:
>
> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
> - If you want to enable the pluggable feature components, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - The installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two IP ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.

**1.** Enter the `scripts` folder, and execute `install.sh` as the `root` user:

```bash
cd ../scripts
./install.sh
```

**2.** Type `2` to select the multi-node mode and start the installation. The installer will ask you whether you have set up a persistent storage service. Just type `yes`, since we are going to use the local volume.

```bash
################################################
         KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/       2020-02-24
################################################
Please input an option: 2
```

**3.** Verify the multi-node installation:

**(1).** If "successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.

```bash
successful!
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd

NOTE: Please modify the default password after login.
#####################################################
```

> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).

**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.

![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)

Note: After logging in to the console, please verify the monitoring status of the service components on the "Cluster Status" page. If any service is not ready, please wait patiently until all components are up and running.

![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)

## Enable Pluggable Components

If you have already set up a minimal installation, you can still enable the pluggable components by editing the ConfigMap of ks-installer with the following command. Please make sure there are enough resources in your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).

```bash
kubectl edit cm -n kubesphere-system ks-installer
```

## FAQ

If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).

diff --git a/content/zh/docs/installing-on-linux/public-cloud/master-ha.md b/content/zh/docs/installing-on-linux/public-cloud/master-ha.md
new file mode 100644
index 000000000..ee8f26203
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/public-cloud/master-ha.md
@@ -0,0 +1,152 @@
---
title: "High Availability Configuration"
keywords: "kubesphere, kubernetes, docker, installation, HA, high availability"
description: "The guide for installing a highly available KubeSphere cluster"

weight: 2230
---

## Introduction

[Multi-node installation](../multi-node) can help you quickly set up a single-master cluster on multiple machines for development and testing. However, for production we need to consider the high availability of the cluster. Since the key components on the master node, i.e. kube-apiserver, kube-scheduler, and kube-controller-manager, run on a single master node, Kubernetes and KubeSphere will be unavailable while that master is down. Therefore, we need to set up a high-availability cluster by provisioning load balancers and multiple masters. You can use any cloud load balancer or any hardware load balancer (e.g. F5). In addition, Keepalived and HAProxy are also an alternative for creating such a high-availability cluster.

This document walks you through an example of how to create two [QingCloud Load Balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal and external load balancer respectively, and how to configure the high availability of the masters and etcd using these load balancers.

## Prerequisites

- Please make sure that you have already read [Multi-Node Installation](../multi-node). This document only demonstrates how to configure load balancers.
- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or you can follow the guide of any other cloud provider to create them.

## Architecture

This example prepares six CentOS 7.5 machines. We will create two load balancers, and deploy three masters and etcd nodes on three of the machines. You can configure these masters and etcd nodes in `conf/hosts.ini`.

![Master and etcd node high availability architecture](https://pek3b.qingstor.com/kubesphere-docs/png/20200307215924.png)

## Install HA Cluster

### Step 1: Create Load Balancers

This step briefly shows an example of creating a load balancer on the QingCloud platform.

#### Create an Internal Load Balancer

1.1. Log in to the [QingCloud Console](https://console.qingcloud.com/login), select **Network & CDN → Load Balancers**, then click the create button and fill in the basic information.

1.2. Choose the VxNet that your machines were created in from the **Network** dropdown list, `kube` in this example. The other settings can keep their default values, as follows. Click **Submit** to complete the creation.

![Create Internal LB on QingCloud](https://pek3b.qingstor.com/kubesphere-docs/png/20200215224125.png)

1.3. Drill into the detail page of the load balancer, then create a listener that listens on port `6443` with the `TCP` protocol.

- Name: Define a name for this listener
- Listener Protocol: Select the `TCP` protocol
- Port: `6443`
- Load mode: `Poll`

> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that port `6443` has been added to the firewall rules and that external traffic can pass through `6443`. Otherwise, the installation will fail.

![Add Listener to LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225205.png)

1.4. Click **Add Backend** and choose the VxNet `kube` selected earlier. Then click the **Advanced Search** button, choose the three master nodes under the VxNet, and set the port to `6443`, which is the default secure port of the kube-apiserver.

Click **Submit** when you are done.

![Choose Backends](https://pek3b.qingstor.com/kubesphere-docs/png/20200215225550.png)

1.5. Click the **Apply Changes** button to activate the configuration. At this point, you can find that the three masters have been added as backend servers of the listener behind the internal load balancer.

> Please note: The status of all masters may show `Not available` after you add them as backends. This is normal, since port `6443` of the kube-apiserver is not active on the masters yet. The status will change to `Active` and the kube-apiserver port will be exposed after the installation completes, which means the internal load balancer you configured works as expected.

![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215230107.png)

#### Create an External Load Balancer

You need to create an EIP in advance.

1.6. Similarly, create an external load balancer without joining any network, but associate the EIP that you created with this load balancer.

1.7. Enter the load balancer detail page and create a listener that listens on port `30880` with the `HTTP` protocol, which is the NodePort of the KubeSphere console.

> Note: After creating the listener, please check the firewall rules of the load balancer. Make sure that port `30880` has been added to the firewall rules and that external traffic can pass through `30880`. Otherwise, the installation will fail.

![Create external LB](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232114.png)

1.8. Click **Add Backend**, then choose the six machines on which we are going to install KubeSphere within the VxNet `kube`, and set the port to `30880`.

Click **Submit** when you are done.

1.9. Click the **Apply Changes** button to activate the configuration. At this point, you can find that the six machines have been added as backend servers of the listener behind the external load balancer.

![Apply Changes](https://pek3b.qingstor.com/kubesphere-docs/png/20200215232445.png)

### Step 2: Modify the host.ini

Go to the taskbox where you downloaded the installer by following [Multi-Node Installation](../multi-node) and complete the following configurations.

| **Parameter** | **Description** |
|--------------------------|-----------------|
| `[all]` | Node information. Use the following syntax if you run the installation as the `root` user:<br>- `<node_name> ansible_connection=local ip=<node_ip>`<br>- `<node_name> ansible_host=<node_ip> ip=<node_ip> ansible_ssh_pass=<password>`<br>If you log in as a non-root user, use the syntax:<br>- `<node_name> ansible_connection=<connection_type> ip=<node_ip> ansible_user=<user_name> ansible_become_pass=<password>` |
| `[kube-master]` | master node names |
| `[kube-node]` | worker node names |
| `[etcd]` | etcd node names. The number of `etcd` nodes needs to be odd. |
| `[k8s-cluster:children]` | group names of `[kube-master]` and `[kube-node]` |

We use **CentOS 7.5** with the `root` user to install an HA cluster. Please see the following configuration as an example:

> Note:
>
> If the _taskbox_ cannot establish an `ssh` connection with the rest of the nodes, try the non-root user configuration.

#### host.ini example

```ini
[all]
master1 ansible_connection=local ip=192.168.0.1
master2 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
master3 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
node1 ansible_host=192.168.0.4 ip=192.168.0.4 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.5 ip=192.168.0.5 ansible_ssh_pass=PASSWORD
node3 ansible_host=192.168.0.6 ip=192.168.0.6 ansible_ssh_pass=PASSWORD

[kube-master]
master1
master2
master3

[kube-node]
node1
node2
node3

[etcd]
master1
master2
master3

[k8s-cluster:children]
kube-node
kube-master
```

### Step 3: Configure the Load Balancer Parameters

Besides configuring `common.yaml` by following [Multi-Node Installation](../multi-node), you need to modify the load balancer information in `common.yaml`. Assume the **VIP** address and listening port of the **internal load balancer** are `192.168.0.253` and `6443`; then you can refer to the following example.

> - Note that the address and port should be indented by two spaces in `common.yaml`, and the address should be the VIP.
> - The domain name of the load balancer is "lb.kubesphere.local" by default for internal access. If you need to change the domain name, please uncomment and modify it.

#### The configuration sample in common.yaml

```yaml
## External LB example config
## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
loadbalancer_apiserver:
  address: 192.168.0.253
  port: 6443
```

Finally, please refer to the [guide](../storage-configuration) to configure the persistent storage service in `common.yaml`. You are then ready to start the highly available KubeSphere cluster installation.

diff --git a/content/zh/docs/installing-on-linux/public-cloud/multi-node.md b/content/zh/docs/installing-on-linux/public-cloud/multi-node.md
new file mode 100644
index 000000000..d1cd790ea
--- /dev/null
+++ b/content/zh/docs/installing-on-linux/public-cloud/multi-node.md
@@ -0,0 +1,176 @@
---
title: "Multi-node Installation"
keywords: 'kubesphere, kubernetes, docker, kubesphere installer'
description: 'The guide for installing KubeSphere on multiple nodes in a development or testing environment'

weight: 2220
---

`Multi-Node` installation enables installing KubeSphere on multiple nodes. Typically, any one node is used as the _taskbox_ to run the installation task. Please note that `ssh` communication must be established between the taskbox and the other nodes.

- The following instructions are for the default installation without enabling any optional components, as we have made them pluggable since v2.1.0. If you want to enable any of them, please read [Enable Pluggable Components](../pluggable-components).
- If your machines in total have >= 8 cores and >= 16G memory, we recommend you install the full package of KubeSphere by [enabling optional components](../complete-installation).
- The installation time depends on your network bandwidth, your machine configuration, the number of nodes, etc.

## Prerequisites

If your machine is behind a firewall, you need to open the ports by following the document [Port Requirements](../port-firewall).

## Step 1: Prepare Linux Hosts

The following describes the hardware and operating system requirements.
To get started, you need to prepare at least `three` hosts according to the following requirements.
+
+- Time synchronization is required across all nodes, otherwise the installation may not succeed;
+- For the `Ubuntu 16.04` OS, it is recommended to select `16.04.5`;
+- If you are using `Ubuntu 18.04`, you need to use the `root` user;
+- If the Debian system does not have the sudo command installed, you need to execute `apt update && apt install sudo` as root before installation.
+
+### Hardware Recommendation
+
+- KubeSphere can be installed on any cloud platform.
+- The installation can be accelerated by increasing network bandwidth.
+- If you choose the air-gapped installation, ensure the disk of each node is at least 100 GB.
+
+| System | Minimum Requirements (Each node) |
+| --- | --- |
+| CentOS 7.4 ~ 7.7 (64-bit) | CPU: 2 cores, Memory: 4 GB, Disk Space: 40 GB |
+| Ubuntu 16.04/18.04 LTS (64-bit) | CPU: 2 cores, Memory: 4 GB, Disk Space: 40 GB |
+| Red Hat Enterprise Linux Server 7.4 (64-bit) | CPU: 2 cores, Memory: 4 GB, Disk Space: 40 GB |
+| Debian Stretch 9.5 (64-bit) | CPU: 2 cores, Memory: 4 GB, Disk Space: 40 GB |
+
+The following section walks through a multi-node installation example with three hosts, where the `master` node serves as the taskbox that executes the installation. The cluster consists of one Master and two Nodes.
+
+> Note: KubeSphere supports the high-availability configuration of the Master and etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guidance.
+
+| Host IP | Host Name | Role |
+| --- | --- | --- |
+|192.168.0.1|master|master, etcd|
+|192.168.0.2|node1|node|
+|192.168.0.3|node2|node|
+
+### Cluster Architecture
+
+#### Single Master, Single Etcd, Two Nodes
+
+![Architecture](/cluster-architecture.svg)
+
+## Step 2: Download Installer Package
+
+**1.** Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
+
+```bash
+curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz \
+&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.1/conf
+```
+
+**2.** Refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
+
+> Note:
+>
+> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
+> - If the `root` user of the taskbox machine cannot establish an SSH connection with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is still recommended to switch to the `root` user when executing `install.sh`.
+> - master, node1 and node2 are the host names of the nodes, and all host names should be in lowercase.
+
+### hosts.ini
+
+```ini
+[all]
+master ansible_connection=local ip=192.168.0.1
+node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
+node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
+
+[kube-master]
+master
+
+[kube-node]
+node1
+node2
+
+[etcd]
+master
+
+[k8s-cluster:children]
+kube-node
+kube-master
+```
+
+> Note:
+>
+> - Replace the node information (IP, password) in the group `[all]` with your real values. The master node is the taskbox, so you do not need to add a password field for it.
+> - The "master" node takes the roles of both master and etcd, so "master" is listed under the groups `[kube-master]` and `[etcd]` respectively.
+> - "node1" and "node2" both serve the role of `Node`, so they are listed under the group `[kube-node]`.
+>
+> Parameters Specification:
+>
+> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
+> - `ansible_host`: The name of the host to connect to.
+> - `ip`: The IP of the host to connect to.
+> - `ansible_user`: The default SSH user name to use.
+> - `ansible_become_pass`: Allows you to set the privilege escalation password.
+> - `ansible_ssh_pass`: The password of the host to connect to as root.
+
+## Step 3: Install KubeSphere to Linux Machines
+
+> Note:
+>
+> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
+> - If you want to enable pluggable components, modify `common.yaml` and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
+> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
+> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
+
+**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
+
+```bash
+cd ../scripts
+./install.sh
+```
+
+**2.** Type `2` to select multi-node mode and start the installation. The installer will ask whether you have set up a persistent storage service. Just type `yes`, since we are going to use the local volume.
+
+```bash
+################################################
+        KubeSphere Installer Menu
+################################################
+* 1) All-in-one
+* 2) Multi-node
+* 3) Quit
+################################################
+https://kubesphere.io/               2020-02-24
+################################################
+Please input an option: 2
+```
+
+**3.** Verify the multi-node installation:
+
+**(1).** If "successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.
+
+```bash
+successful!
+#####################################################
+###              Welcome to KubeSphere!           ###
+#####################################################
+
+Console: http://192.168.0.1:30880
+Account: admin
+Password: P@88w0rd
+
+NOTE: Please modify the default password after login.
+#####################################################
+```
+
+> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
+
+**(2).** You can use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
+
+![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
+
+Note: After logging in to the console, please verify the monitoring status of the service components on the "Cluster Status" page. If any service is not ready, please wait until all components are up and running.
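+
+If you prefer the command line, you can also watch the system workloads come up with `kubectl` from the taskbox. This is only a quick sanity check, and the exact namespace names may vary slightly between releases:
+
+```bash
+# List all Pods in the KubeSphere and core system namespaces;
+# wait until every Pod reports Running or Completed.
+kubectl get pod --all-namespaces | grep -E 'kubesphere|kube-system'
+
+# If a Pod is stuck in a non-Running state, describe it to find the root cause
+# (replace <pod-name> with the actual Pod name).
+kubectl describe pod <pod-name> -n kubesphere-system
+```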
+
+![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
+
+## FAQ
+
+The installer has been tested on Aliyun, AWS, Huawei Cloud, QingCloud and Tencent Cloud. Please check the [results](https://github.com/kubesphere/ks-installer/issues/23) for details, and also read the [FAQ of installation](../../faq/faq-install).
+
+If you have any further questions, please do not hesitate to file issues on [GitHub](https://github.com/kubesphere/kubesphere/issues). diff --git a/content/zh/docs/installing-on-linux/public-cloud/storage-configuration.md b/content/zh/docs/installing-on-linux/public-cloud/storage-configuration.md new file mode 100644 index 000000000..a3d8d5156 --- /dev/null +++ b/content/zh/docs/installing-on-linux/public-cloud/storage-configuration.md @@ -0,0 +1,157 @@ +--- +title: "StorageClass Configuration" +keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'Instructions for Setting up StorageClass for KubeSphere' + +weight: 2250 +---
+
+Currently, Installer supports the following [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) to provide persistent storage services for KubeSphere (more storage classes will be supported soon).
+
+- NFS
+- Ceph RBD
+- GlusterFS
+- QingCloud Block Storage
+- QingStor NeonSAN
+- Local Volume (for development and testing only)
+
+The versions of the storage systems and corresponding CSI plugins listed in the table below have been well tested.
+
+| **Name** | **Version** | **Reference** |
+| --- | --- | --- |
+| Ceph RBD Server | v0.94.10 | For development and testing, refer to [Install Ceph Storage Server](/zh-CN/appendix/ceph-ks-install/) for details. Please refer to the [Ceph Documentation](http://docs.ceph.com/docs/master/) for production. |
+| Ceph RBD Client | v12.2.5 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [Ceph RBD](../storage-configuration/#ceph-rbd) |
+| GlusterFS Server | v3.7.6 | For development and testing, refer to [Deploying GlusterFS Storage Server](/zh-CN/appendix/glusterfs-ks-install/) for details. Please refer to the [Gluster Documentation](https://www.gluster.org/install/) or the [Gluster Install Guide](http://gluster.readthedocs.io/en/latest/Install-Guide/Install/) for production. Note that you need to install [Heketi Manager (V3.0.0)](https://github.com/heketi/heketi/tree/master/docs/admin). |
+| GlusterFS Client | v3.12.10 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Please refer to [GlusterFS](../storage-configuration/#glusterfs) |
+| NFS Client | v3.1.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared an NFS storage server. Please see [NFS Client](../storage-configuration/#nfs) |
+| QingCloud-CSI | v0.2.0.1 | You need to configure the corresponding parameters in `common.yaml` before installing KubeSphere. Please refer to [QingCloud CSI](../storage-configuration/#qingcloud-csi) for details |
+| NeonSAN-CSI | v0.3.0 | Before installing KubeSphere, you need to configure the corresponding parameters in `common.yaml`. Make sure you have prepared a QingStor NeonSAN storage server. Please see [NeonSAN-CSI](../storage-configuration/#neonsan-csi) |
+
+> Note: You are only allowed to set ONE default storage class in the cluster. To specify a default storage class, make sure no default storage class already exists in the cluster.
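+
+For example, you can check which storage class is currently the default with `kubectl`, and unset it before designating a new one. This is a generic Kubernetes sketch; `<old-default>` is a placeholder for your actual StorageClass name:
+
+```bash
+# List storage classes; the current default is marked "(default)".
+kubectl get sc
+
+# Remove the default annotation from the existing default storage class.
+kubectl patch storageclass <old-default> \
+  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
+```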
+
+## Storage Configuration
+
+After preparing the storage server, refer to the parameter descriptions in the following tables and modify the corresponding configuration in `conf/common.yaml` accordingly.
+
+The following describes the storage configuration in `common.yaml`.
+
+> Note: Local Volume is configured as the default storage class in `common.yaml`. If you are going to set another storage class as the default, disable Local Volume and modify the configuration for that storage class.
+
+### Local Volume (for development or testing only)
+
+A [Local Volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) represents a mounted local storage device such as a disk, partition or directory. Local volumes can only be used as statically created PersistentVolumes. We recommend using Local volumes for testing or development only, since they make it quick and easy to install KubeSphere without the effort of setting up a persistent storage server. Refer to the following table for the definitions in `conf/common.yaml`.
+
+| **Local volume** | **Description** |
+| --- | --- |
+| local\_volume\_provisioner\_enabled | Whether to use Local volumes as the persistent storage, defaults to true |
+| local\_volume\_provisioner\_storage\_class | Storage class name, default value: local |
+| local\_volume\_is\_default\_class | Whether to set Local as the default storage class, defaults to true |
+
+### NFS
+
+An NFS volume allows an existing NFS (Network File System) share to be mounted into your Pod. NFS can be configured in `conf/common.yaml`. Note that you need to prepare an NFS server in advance.
+
+| **NFS** | **Description** |
+| --- | --- |
+| nfs\_client\_enable | Whether to use NFS as the persistent storage, defaults to false |
+| nfs\_client\_is\_default\_class | Whether to set NFS as the default storage class, defaults to false |
+| nfs\_server | The NFS server address, either an IP or a hostname |
+| nfs\_path | The NFS shared directory, i.e. the file directory shared on the server, see [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/volumes/#nfs) |
+| nfs\_vers3\_enabled | Specifies which version of the NFS protocol to use. Defaults to false, which means NFSv4; set it to true to use NFSv3 |
+| nfs\_archiveOnDelete | Whether to archive the PVC data on deletion. When set to false, data is automatically removed from `oldPath` |
+
+### Ceph RBD
+
+The open source [Ceph RBD](https://ceph.com/) distributed storage system can be configured for use in `conf/common.yaml`. You need to prepare a Ceph storage server in advance. Please refer to the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) for more details.
+
+| **Ceph\_RBD** | **Description** |
+| --- | --- |
+| ceph\_rbd\_enabled | Whether to use Ceph RBD as the persistent storage, defaults to false |
+| ceph\_rbd\_storage\_class | Storage class name |
+| ceph\_rbd\_is\_default\_class | Whether to set Ceph RBD as the default storage class, defaults to false |
+| ceph\_rbd\_monitors | Ceph monitors, comma delimited. This parameter is required and depends on the Ceph RBD server parameters |
+| ceph\_rbd\_admin\_id | Ceph client ID that is capable of creating images in the pool. Defaults to "admin" |
+| ceph\_rbd\_admin\_secret | Secret name for "adminId". This parameter is required. The provided secret must have type "kubernetes.io/rbd" |
+| ceph\_rbd\_pool | Ceph RBD pool. Defaults to "rbd" |
+| ceph\_rbd\_user\_id | Ceph client ID that is used to map the RBD image.
Defaults to the same as adminId |
+| ceph\_rbd\_user\_secret | Secret for userId; this secret must be created in the namespace that uses the RBD image |
+| ceph\_rbd\_fsType | fsType that is supported by Kubernetes. Default: "ext4" |
+| ceph\_rbd\_imageFormat | Ceph RBD image format, "1" or "2". Default is "1" |
+| ceph\_rbd\_imageFeatures | This parameter is optional and should only be used if you set imageFormat to "2". The only currently supported feature is layering. Default is "", and no features are turned on |
+
+> Note:
+>
+> The Ceph secrets configured in the storage class, such as "ceph_rbd_admin_secret" and "ceph_rbd_user_secret", are retrieved with the following command on the Ceph storage server.
+
+```bash
+ceph auth get-key client.admin
+```
+
+### GlusterFS
+
+[GlusterFS](https://docs.gluster.org/en/latest/) is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. You need to prepare a GlusterFS storage server in advance. Please refer to the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs) for further information.
+
+| **GlusterFS (requires a GlusterFS cluster managed by Heketi)** | **Description** |
+| --- | --- |
+| glusterfs\_provisioner\_enabled | Whether to use GlusterFS as the persistent storage, defaults to false |
+| glusterfs\_provisioner\_storage\_class | Storage class name |
+| glusterfs\_is\_default\_class | Whether to set GlusterFS as the default storage class, defaults to false |
+| glusterfs\_provisioner\_restauthenabled | Gluster REST service authentication boolean that enables authentication to the REST server |
+| glusterfs\_provisioner\_resturl | Gluster REST service/Heketi service URL which provisions Gluster volumes on demand. The general format should be "IP address:port"; this is a mandatory parameter for the GlusterFS dynamic provisioner |
+| glusterfs\_provisioner\_clusterid | Optional. For example, 630372ccdc720a92c681fb928f27b53f is the ID of the cluster that will be used by Heketi when provisioning the volume. It can also be a list of cluster IDs |
+| glusterfs\_provisioner\_restuser | Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool |
+| glusterfs\_provisioner\_secretName | Optional. Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service; Installer will automatically create this secret in kube-system |
+| glusterfs\_provisioner\_gidMin | The minimum value of the GID range for the storage class |
+| glusterfs\_provisioner\_gidMax | The maximum value of the GID range for the storage class |
+| glusterfs\_provisioner\_volumetype | The volume type and its parameters can be configured with this optional value, for example: 'Replica volume': volumetype: replicate:3 |
+| jwt\_admin\_key | The "jwt.admin.key" field from "/etc/heketi/heketi.json" on the Heketi server |
+
+**Attention:**
+
+> Please note: `"glusterfs_provisioner_clusterid"` can be obtained from the GlusterFS server by running the following commands:
+
+```bash
+export HEKETI_CLI_SERVER=http://localhost:8080
+heketi-cli cluster list
+```
+
+### QingCloud Block Storage
+
+[QingCloud Block Storage](https://docs.qingcloud.com/product/Storage/volume/) is supported in KubeSphere as a persistent storage service. If you would like dynamic provisioning when creating volumes, we recommend using it as your persistent storage solution.
KubeSphere integrates [QingCloud-CSI](https://github.com/yunify/qingcloud-csi/blob/master/README_zh.md), which allows you to use the various block storage services of QingCloud. With simple configuration, you can quickly expand and clone PVCs, view the topology of PVCs, create and delete snapshots, and restore volumes from snapshots.
+
+The QingCloud-CSI plugin implements the standard CSI interface. You can easily create and manage the different types of volumes provided by QingCloud in KubeSphere. The corresponding PVCs will be created with the ReadWriteOnce access mode and mounted to running Pods.
+
+QingCloud-CSI supports creating the following five types of volumes in QingCloud:
+
+- High capacity
+- Standard
+- SSD Enterprise
+- Super high performance
+- High performance
+
+| **QingCloud-CSI** | **Description** |
+| --- | --- |
+| qingcloud\_csi\_enabled | Whether to use QingCloud-CSI as the persistent storage volume, defaults to false |
+| qingcloud\_csi\_is\_default\_class | Whether to set QingCloud-CSI as the default storage class, defaults to false |
+| qingcloud\_access\_key\_id,
qingcloud\_secret\_access\_key | Please obtain them from the [QingCloud Console](https://console.qingcloud.com/login) |
+| qingcloud\_zone | The zone should be the same as the zone where the Kubernetes cluster is installed; the CSI plugin will operate on the storage volumes in this zone. For example, zone can be set to values such as sh1a (Shanghai 1-A), sh1b (Shanghai 1-B), pek2 (Beijing 2), pek3a (Beijing 3-A), pek3b (Beijing 3-B), pek3c (Beijing 3-C), gd1 (Guangdong 1), gd2a (Guangdong 2-A), ap1 (Asia Pacific 1), ap2a (Asia Pacific 2-A) |
+| type | The type of volume on the QingCloud platform: 0 represents a high performance volume, 3 a super high performance volume, and 1 or 2 a high capacity volume depending on the cluster's zone, see [QingCloud Documentation](https://docs.qingcloud.com/product/api/action/volume/create_volumes.html) |
+| maxSize, minSize | Limit the range of volume sizes in GiB |
+| stepSize | Set the increment of volume sizes in GiB |
+| fsType | The file system of the storage volume; supports ext3, ext4 and xfs. The default is ext4 |
+
+### QingStor NeonSAN
+
+The NeonSAN-CSI plugin supports the enterprise-level distributed storage [QingStor NeonSAN](https://www.qingcloud.com/products/qingstor-neonsan/) as the persistent storage solution. You need to prepare the NeonSAN server, then configure the NeonSAN-CSI plugin to connect to its storage server in `conf/common.yaml`. Please refer to the [NeonSAN-CSI Reference](https://github.com/wnxn/qingstor-csi/blob/master/docs/reference_zh.md#storageclass-%E5%8F%82%E6%95%B0) for further information.
+
+| **NeonSAN** | **Description** |
+| --- | --- |
+| neonsan\_csi\_enabled | Whether to use NeonSAN as the persistent storage, defaults to false |
+| neonsan\_csi\_is\_default\_class | Whether to set NeonSAN-CSI as the default storage class, defaults to false |
+| neonsan\_csi\_protocol | Transport protocol; required. Valid values are TCP and RDMA |
+| neonsan\_server\_address | NeonSAN server address |
+| neonsan\_cluster\_name | NeonSAN server cluster name |
+| neonsan\_server\_pool | A comma-separated list of pools that the plugin will manage; required. The default value is kube |
+| neonsan\_server\_replicas | NeonSAN image replica count. Default: 1 |
+| neonsan\_server\_stepSize | Set the increment of volume sizes in GiB. Default: 1 |
+| neonsan\_server\_fsType | The file system to use for the volume. Default: ext4 | diff --git a/content/zh/docs/introduction/_index.md b/content/zh/docs/introduction/_index.md new file mode 100644 index 000000000..25a021201 --- /dev/null +++ b/content/zh/docs/introduction/_index.md @@ -0,0 +1,22 @@ +--- +title: "Introduction" +description: "Help you to better understand KubeSphere with detailed graphics and contents" +layout: "single" + +linkTitle: "Introduction" + +weight: 1000 + +icon: "/images/docs/docs.svg" + +---
+
+## Installing KubeSphere and Kubernetes on Linux
+
+In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey helps you quickly build a production-ready cluster architecture on a set of machines from zero to one, and also makes it easy to scale the cluster and install pluggable components on an existing architecture.
+
+## Most Popular Pages
+
+Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
+
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} diff --git a/content/zh/docs/introduction/advantages.md b/content/zh/docs/introduction/advantages.md new file mode 100644 index 000000000..64c1f2e89 --- /dev/null +++ b/content/zh/docs/introduction/advantages.md @@ -0,0 +1,97 @@ +--- +title: "Advantages" +keywords: "kubesphere, kubernetes, docker, helm, jenkins, istio, prometheus, service mesh, advantages" +description: "KubeSphere advantages" + 
+weight: 1400 +---
+
+## Vision
+
+KubeSphere is a distributed operating system that provides full-stack system services and a pluggable framework for third-party software integration, built for enterprise-critical containerized workloads running in data centers.
+
+Kubernetes has now become the de facto standard for deploying containerized applications at scale in private, public and hybrid cloud environments. However, many people easily get confused when they start to use Kubernetes, as it is complicated and has many additional components to manage, some of which, such as storage and network services, need to be installed and deployed by users themselves. At present, Kubernetes only provides open source solutions or projects, which can be difficult to install, maintain and operate. For users, the learning cost and barrier to entry are both high. In short, it is not easy to get started quickly.
+
+If you want to deploy your cloud-native applications on the cloud, it is good practice to adopt KubeSphere to help you run Kubernetes, since KubeSphere already provides the rich services required for running your applications successfully, so that you can focus on your core business. More specifically, KubeSphere provides application lifecycle management, infrastructure management, CI/CD pipelines, service mesh, and observability features such as monitoring, logging, alerting, events and notification. In other words, Kubernetes is a wonderful open-source platform, and KubeSphere goes a step further to make the container platform more user-friendly and powerful, not only easing the learning curve and driving the adoption of Kubernetes, but also helping users deliver cloud-native applications faster and more easily.
+
+## Why KubeSphere
+
+KubeSphere provides high-performance and scalable container service management for enterprise users. It aims to help enterprises accomplish the digital transformation driven by the new generation of Internet technology, and to accelerate the iteration and delivery of business to meet the ever-changing needs of enterprises.
+
+## Awesome User Experience and Wizard UI
+
+- KubeSphere provides a user-friendly web console for development, testing and operations. The wizard UI greatly reduces the learning and operating costs of Kubernetes.
+- Users can deploy an enterprise application from a template with one click, and use the application lifecycle management service to deliver their products in the console.
+
+## High Reliability and Availability
+
+- Automatic elastic scaling: Deployments can scale the number of Pods horizontally, and Pods can scale vertically, based on observed metrics such as CPU utilization as user requests change, which keeps applications running without crashing under resource pressure.
+- Health check service: Supports visually setting health check probes for containers to ensure business reliability.
+
+## Containerized DevOps Delivery
+
+- Easy-to-use pipeline: CI/CD pipeline management is visualized and requires no manual configuration; the system also ships with many built-in pipeline templates.
+- Source to Image (S2I): With S2I, users do not need to write a Dockerfile. The system can fetch source code from a code repository, build the image automatically, deploy the workload into the Kubernetes environment and push the image to the image registry automatically as well.
+- Binary to Image (B2I): Exactly the same as S2I except that the input is a binary artifact instead of source code, which is very useful for developers without Docker skills or for containerizing legacy applications.
+- End-to-end pipeline configuration: Supports end-to-end pipeline configuration, from pulling source code from a repository such as GitHub, SVN or Git, to compiling the code, packaging the image, scanning the image for security issues, pushing the image to a registry, and releasing the application.
+- Source code quality management: Supports static analysis scanning of code quality for applications in a DevOps project.
+- Logging: Logs all steps of the CI/CD pipeline.
+
+## Out-of-Box Microservice Governance
+
+- Flexible microservice framework: Provides visual microservice governance capabilities based on the Istio microservice framework, and divides Kubernetes services into finer-grained services to support non-intrusive microservice governance.
+- Comprehensive governance services: Offers excellent microservice governance features such as grayscale release, circuit breaking, traffic monitoring, traffic control, rate limiting, tracing and intelligent routing.
+
+## Multiple Persistent Storage Support
+
+- Supports open source storage solutions such as GlusterFS, Ceph RBD and NFS.
+- Provides the NeonSAN CSI plug-in to connect the commercial QingStor NeonSAN service to meet core business requirements such as low latency, strong resilience and high performance.
+- Provides the QingCloud CSI plug-in that accesses commercial QingCloud block storage services.
+
+## Flexible Network Solution Support
+
+- Supports open-source network solutions such as Calico and Flannel.
+- Offers [Porter](https://github.com/kubesphere/porter), a bare metal load balancer plug-in for Kubernetes clusters installed on physical machines.
+
+## Multi-tenant and Multi-dimensional Monitoring and Logging
+
+- The monitoring system is fully visualized and provides open standard APIs so that enterprises can integrate their existing operating platforms (alerting, monitoring, logging, etc.) into one unified system for their daily operations.
+- Multi-dimensional monitoring metrics with second-level precision.
+- Provides resource usage rankings by node, workspace and project.
+- Provides service component monitoring for users to quickly locate component failures.
+- Provides rich alerting rules based on multi-tenant, multi-dimensional monitoring metrics. Currently, the system supports two types of alerting: infrastructure alerting for cluster administrators and workload alerting for tenants.
+- Provides multi-tenant log management.
In the KubeSphere log search system, different tenants can only see their own log information. diff --git a/content/zh/docs/introduction/architecture.md b/content/zh/docs/introduction/architecture.md new file mode 100644 index 000000000..4714708d1 --- /dev/null +++ b/content/zh/docs/introduction/architecture.md @@ -0,0 +1,48 @@ +--- +title: "Architecture" +keywords: "kubesphere, kubernetes, docker, helm, jenkins, istio, prometheus, devops, service mesh" +description: "KubeSphere architecture" + +linkTitle: "Architecture" +weight: 1300 +---
+
+## Separation of Frontend and Backend
+
+KubeSphere separates the [frontend](https://github.com/kubesphere/console) from the [backend](https://github.com/kubesphere/kubesphere); it is itself a cloud-native application and provides open standard REST APIs for external systems to use. Please see the [API documentation](../../api-reference/api-docs) for details. The figure below shows the system architecture. KubeSphere can run anywhere, from on-premises data centers to any cloud to the edge, and it can be deployed on any Kubernetes distribution.
+
+![Architecture](https://pek3b.qingstor.com/kubesphere-docs/png/20190810073322.png)
+
+## Components List
+
+| Back-end component | Function description |
+|---|---|
+| ks-account | Account service provides APIs for account and role management |
+| ks-apiserver | The KubeSphere API server validates and configures data for the API objects, which include Kubernetes objects. The API server services REST operations and provides the frontend to the cluster's shared state through which all other components interact. |
+| ks-apigateway | The API gateway is responsible for handling external requests for KubeSphere services. |
+| ks-console | KubeSphere console offers the KubeSphere console service |
+| ks-controller-manager | The KubeSphere controller takes care of business logic; for example, when a workspace is created, the controller automatically creates the corresponding permissions and configurations for it. |
+| metrics-server | Kubernetes monitoring component that collects metrics from the Kubelet on each node. |
+| Prometheus | provides monitoring metrics and services for clusters, nodes, workloads and API objects. |
+| Elasticsearch | provides log indexing, querying and data management. Besides the built-in service, KubeSphere supports the integration of an external Elasticsearch service. |
+| Fluent Bit | collects logs and forwards them to Elasticsearch or Kafka. |
+| Jenkins | provides the CI/CD pipeline service. |
+| SonarQube | is an optional component that provides static code checking and quality analysis. |
+| Source-to-Image | automatically compiles and packages source code into a Docker image. |
+| Istio | provides microservice governance and traffic control, such as grayscale release, canary release, circuit breaking, traffic mirroring and so on. |
+| Jaeger | collects sidecar data and provides the distributed tracing service. |
+| OpenPitrix | provides application lifecycle management such as template management, deployment, app store management, etc. |
+| Alert | provides a configurable alert service for clusters, workloads, Pods, containers, etc. |
+| Notification | is an integrated notification service; it currently supports mail delivery. |
+| Redis | caches the data of ks-console and ks-account. |
+| MySQL | is the shared database for cluster back-end components including monitoring, alerting, DevOps, OpenPitrix, etc. |
+| PostgreSQL | The back-end database for SonarQube and Harbor |
+| OpenLDAP | is responsible for centralized storage and management of user accounts, and integrates with external LDAP servers. |
+| Storage | built-in CSI plug-ins connecting to cloud platform storage services; open source NFS/Ceph/Gluster clients are supported. |
+| Network | supports Calico, Flannel and other open source network plug-ins to integrate with cloud platform SDNs. |
+
+## Service Components
+
+Each component has many services, see [Service Components](../../infrastructure/components) for more details.
+
+![Service Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191017163549.png) diff --git a/content/zh/docs/introduction/features.md b/content/zh/docs/introduction/features.md new file mode 100644 index 000000000..7911df620 --- /dev/null +++ b/content/zh/docs/introduction/features.md @@ -0,0 +1,128 @@ +--- +title: "Features and Benefits" +keywords: "kubesphere, kubernetes, docker, helm, jenkins, istio, prometheus" +description: "The document describes the features and benefits of KubeSphere" + +linkTitle: "Features" +weight: 1200 +---
+
+## Overview
+
+As an open source container platform, KubeSphere provides enterprises with a robust, secure and feature-rich platform, covering most of the common functionalities needed for enterprises adopting Kubernetes, such as workload management, service mesh (Istio-based), DevOps projects (CI/CD), Source to Image and Binary to Image, multi-tenancy management, multi-dimensional monitoring, log query and collection, alerting and notification, service and network management, application management, infrastructure management and image registry management. It also supports various open source storage and network solutions, as well as cloud storage services. Meanwhile, KubeSphere provides an easy-to-use web console to ease the learning curve and drive the adoption of Kubernetes.
+
+![Overview](https://pek3b.qingstor.com/kubesphere-docs/png/20200202153355.png)
+
+The following modules elaborate the key features and benefits provided by the KubeSphere container platform.
+
+## Provisioning and Maintaining Kubernetes
+
+### Provisioning Kubernetes Cluster
+
+KubeSphere Installer allows you to deploy Kubernetes on your infrastructure out of the box, provisioning a Kubernetes cluster with high availability. For a production environment, it is recommended to configure at least three master nodes behind a load balancer.
+
+### Kubernetes Resource Management
+
+KubeSphere provides a graphical interface for creating and managing Kubernetes resources, including Pods and containers, workloads, Secrets and ConfigMaps, Services and Ingress, Jobs and CronJobs, HPA, etc., as well as powerful observability features including resource monitoring, events, logging, alerting and notification.
+
+### Cluster Upgrade and Scaling
+
+KubeSphere Installer provides ease of setup, installation, management and maintenance. Moreover, it supports rolling upgrades of Kubernetes clusters so that the cluster service remains available while being upgraded, and it provides the ability to roll back to the previous stable version in case of failure. You can also add new nodes to a Kubernetes cluster with KubeSphere Installer to support more workloads, as shown in the sketch below.
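+
+For example, scaling out an existing cluster typically only requires registering the new host and re-running the installer's scaling script. The sketch below assumes the `add-nodes.sh` script shipped with the 2.1.x installer; the host entry is illustrative:
+
+```bash
+# 1. Append the new host to conf/hosts.ini and add its name under [kube-node], e.g.:
+#    node3 ansible_host=192.168.0.6 ip=192.168.0.6 ansible_ssh_pass=PASSWORD
+
+# 2. Execute the scaling script from the scripts folder as root.
+cd scripts
+./add-nodes.sh
+```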
+
+## DevOps Support
+
+KubeSphere provides a pluggable DevOps component based on popular CI/CD tools such as Jenkins, and offers automated workflows and tools, including Binary-to-Image (B2I) and Source-to-Image (S2I), to turn source code or binary artifacts into ready-to-run container images. The following is a detailed description of the CI/CD pipeline, S2I and B2I.
+
+![DevOps](https://pek3b.qingstor.com/kubesphere-docs/png/20200202220455.png)
+
+### CI/CD Pipeline
+
+- CI/CD pipelines and build strategies are based on Jenkins, which streamlines the creation and automation of development, test and production processes, and supports dependency caching to accelerate builds and deployments.
+- Ships an out-of-box Jenkins build strategy and client plugin to create a Jenkins pipeline based on a Git repository or SVN. You can define any step and stage in your built-in Jenkinsfile.
+- Provides a visualized control panel for creating CI/CD pipelines, delivering complete visibility to simplify user interaction.
+- Integrates source code quality analysis, and supports outputting and collecting the logs of each step.
+
+### Source to Image
+
+Source-to-Image (S2I) is a toolkit and automated workflow for building reproducible container images from source code. S2I produces ready-to-run images by injecting source code into a container image and making the container ready to execute from source code.
+
+S2I allows you to publish your service to Kubernetes without writing a Dockerfile. You just need to provide the source code repository address and specify the target image registry. All configurations are stored as different resources in Kubernetes. Your service will be automatically published to Kubernetes, and the image will be pushed to the target registry as well.
+
+![S2I](https://pek3b.qingstor.com/kubesphere-docs/png/20200204131749.png)
+
+### Binary to Image
+
+Similar to S2I, Binary to Image (B2I) is a toolkit and automated workflow for building reproducible container images from binaries (e.g. Jar, War, binary packages).
+
+You just need to upload your application's binary package and specify the image registry to which you want to push. The rest is exactly the same as S2I.
+
+## Istio-based Service Mesh
+
+KubeSphere service mesh is composed of a set of ecosystem projects, including Istio, Envoy and Jaeger. We designed a unified user interface to use and manage these tools. Most features work out of the box and have been designed from the developer's perspective, which means KubeSphere can help you reduce the learning curve since you do not need to dive deep into those tools individually.
+
+KubeSphere service mesh provides fine-grained traffic management, observability, tracing, and service identity and security for distributed microservice applications, so developers can focus on their core business. With service mesh management on KubeSphere, users can better track, route and optimize communications within Kubernetes for cloud-native apps.
+
+### Traffic Management
+
+- **Canary release** provides canary rollouts and staged rollouts with percentage-based traffic splits.
+- **Blue-green deployment** allows the new version of the application to be deployed in the green environment and tested for functionality and performance. Once the testing results are successful, application traffic is routed from blue to green, and green becomes the new production.
+- **Traffic mirroring** enables teams to bring changes to production with as little risk as possible. Mirroring sends a copy of live traffic to a mirrored service.
+- **Circuit breakers** allows users to set limits for calls to individual hosts within a service, such as the number of concurrent connections or how many times calls to this host have failed. + +### Visualization + +KubeSphere service mesh has the ability to visualize the connections between microservices and the topology of how they interconnect. As we know, observability is extremely useful in understanding cloud-native microservice interconnections. + +### Distributed Tracing + +Based on Jaeger, KubeSphere service mesh enables users to track how each service interacts with other services. It brings a deeper understanding about request latency, bottlenecks, serialization and parallelism via visualization. + +## Multi-tenant Management + +- Multi-tenancy: provides unified authentication with fine-grained roles and three-tier authorization system. +- Unified authentication: supports docking to a central enterprise authentication system that is LDAP/AD based protocol. And supports single sign-on (SSO) to achieve unified authentication of tenant identity. +- Authorization system: It is organized into three levels, namely, cluster, workspace and project. We ensure the resource sharing as well as isolation among different roles at multiple levels to fully guarantee resource security. + +## Multi-dimensional Monitoring + +- Monitoring system is fully visualized, and provides open standard APIs for enterprises to integrate their existing operating platforms such as alerting, monitoring, logging etc. in order to have a unified system for their daily operating work. +- Comprehensive and second-level precision monitoring metrics. + - In the aspect of infrastructure monitoring, the system provides many metrics including CPU utilization, memory utilization, CPU load average, disk usage, inode utilization, disk throughput, IOPS, network interface outbound/inbound rate, Pod status, ETCD service status, API Server status, etc. + - In the aspect of application resources, the system provides five monitoring metrics, i.e., CPU utilization, memory consumption, the number of Pods of applications, network outbound/inbound rate of an application. Besides, it supports sorting according to resource consumption, user-defined time range query and quickly locating the place where exception happens. +- Provide resource usage ranking by node, workspace and project. +- Provide service component monitoring for user to quickly locate component failures. + +## Alerting and Notification System + +- Provide rich alerting rules based on multi-tenancy and multi-dimensional monitoring metrics. Currently, the system supports two types of alerting. One is infrastructure alerting for cluster administrator. The other one is workload alerting for tenants. +- Flexible alerting policy: You can customize an alerting policy that contains multiple alerting rules, and you can specify notification rules and repeat alerting rules. +- Rich monitoring metrics for alerting: Provide alerting for infrastructure and workloads. +- Flexible alerting rules: You can customize the detection period, duration and alerting level of monitoring metrics. +- Flexible notification rules: You can customize the notification delivery period and receiver list. Mail notification is currently supported. +- Custom repeat alerting rules: Support to set the repeat alerting cycle, maximum repeat times, and the alerting level. + +## Log Query and Collection + +- Provide multi-tenant log management. 
In the KubeSphere log search system, different tenants can only see their own log information.
+- Provide multi-level log queries (project/workload/container group/container and keywords) as well as flexible and convenient log collection configuration options.
+- Support multiple log collection platforms such as Elasticsearch, Kafka and Fluentd.
+
+## Application Management and Orchestration
+
+- Use the open source [OpenPitrix](https://github.com/openpitrix/openpitrix) to set up app store and app repository services, which provide full lifecycle application management.
+- Users can easily deploy an application from templates with one click.
+
+## Infrastructure Management
+
+Supports storage management, host management and monitoring, resource quota management, image registry management and authorization management.
+
+## Multiple Storage Solutions Support
+
+- Support open source storage solutions such as GlusterFS, Ceph RBD and NFS.
+- Provide the NeonSAN CSI plug-in to connect the QingStor NeonSAN service to meet core business requirements such as low latency, strong resilience and high performance.
+- Provide the QingCloud CSI plug-in that accesses QingCloud block storage services.
+
+## Multiple Network Solutions Support
+
+- Support open source network solutions such as Calico and Flannel.
+- Offer [Porter](https://github.com/kubesphere/porter), a bare metal load balancer plug-in for Kubernetes clusters installed on physical machines. diff --git a/content/zh/docs/introduction/glossary.md b/content/zh/docs/introduction/glossary.md new file mode 100644 index 000000000..b38fc4023 --- /dev/null +++ b/content/zh/docs/introduction/glossary.md @@ -0,0 +1,28 @@ +--- +title: "Glossary" +keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus' +description: '' + +weight: 1500 +---
+
+This document describes some frequently used terms in KubeSphere, as shown below:
+
+| Object | Concepts |
+|------------|--------------|
+| Project | It is a Kubernetes Namespace, which provides virtual isolation for the resources in KubeSphere, see [Namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). |
+| Pod | A Pod is the smallest deployable computing unit that can be created and managed in KubeSphere, see [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/). |
+| Deployment | A Deployment is used to describe a desired state in a deployment object, and the deployment controller changes the actual state to the desired state at a controlled rate, see [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/). |
+| StatefulSet | A StatefulSet is the workload object used to manage stateful applications, such as MySQL, see [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/). |
+| DaemonSet | A DaemonSet ensures that all (or some) Nodes run a copy of a Pod, such as fluentd or logstash, see [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/). |
+| Job | A Job creates one or more Pods and ensures that a specified number of them successfully terminate, see [Job](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/). |
+| CronJob | A CronJob creates Jobs on a time-based schedule. A CronJob object is like one line of a crontab (cron table) file. It runs a job periodically on a given schedule, see [CronJob](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/).
| +| Service | A Kubernetes service is an abstraction object which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. See [Service](https://kubernetes.io/docs/concepts/services-networking/service/). | +| Route | It is Kubernetes Ingress, an API object that manages external access to the services in a cluster, typically HTTP. Ingress can provide load balancing, SSL termination and name-based virtual hosting, see [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/). | +| Image Registry | Image registry is used to store and distribute Docker Images. It could be public or private, see [Image](https://kubernetes.io/docs/concepts/containers/images/). | +| Volume | It is Kubernetes Persistent Volume Claim (PVC). Volume is a request for storage by a user, allowing a user to consume abstract storage resources, see [PVC](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). | +| Storage Classes | A storage class provides a way for administrators to describe the “classes” of storage they offer, see [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/). | +| Pipeline | Jenkins Pipeline is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins, see [Pipeline](https://jenkins.io/doc/book/pipeline/). | +| WorkSpace | Workspace is a logical unit to organize your workload projects, DevOps projects, to manage resource access and share information within your team. It is an isolated working place for your team. | +| Node | A node is a worker machine that may be a virtual machine or physical machine, depending on the cluster setup. Each node contains the services necessary to run pods and is managed by the master components. see [Node](https://kubernetes.io/docs/concepts/architecture/nodes/). | \ No newline at end of file diff --git a/content/zh/docs/introduction/what-is-kubesphere.md b/content/zh/docs/introduction/what-is-kubesphere.md new file mode 100644 index 000000000..fac311a54 --- /dev/null +++ b/content/zh/docs/introduction/what-is-kubesphere.md @@ -0,0 +1,46 @@ +--- +title: "What is KubeSphere" +keywords: 'Kubernetes, docker, jenkins, devops, istio, service mesh, devops, microservice' +description: 'What is KubeSphere' + +linkTitle: "Introduction" +weight: 1100 +--- + +## Overview + +[KubeSphere](https://kubesphere.io) is a **distributed operating system providing cloud native stack** with [Kubernetes](https://kubernetes.io) as its kernel, and aims to be plug-and-play architecture for third-party applications seamless integration to boost its ecosystem. KubeSphere is also a multi-tenant enterprise-grade container platform with full-stack automated IT operation and streamlined DevOps workflows. It provides developer-friendly wizard web UI, helping enterprises to build out a more robust and feature-rich platform, which includes most common functionalities needed for enterprise Kubernetes strategy, such as the Kubernetes resource management, DevOps (CI/CD), application lifecycle management, monitoring, logging, service mesh, multi-tenancy, alerting and notification, storage and networking, autoscaling, access control, GPU support, etc., as well as multi-cluster management, network policy, registry management, more security enhancements in upcoming releases. 
KubeSphere delivers **consolidated views while integrating a wide breadth of ecosystem tools** around Kubernetes and offers a consistent user experience to reduce complexity. It also develops new features and capabilities that are not yet available in upstream Kubernetes in order to alleviate Kubernetes pain points, including storage, network, security and ease of use. Not only does KubeSphere allow developers and DevOps teams to use their favorite tools in a unified console, but, most importantly, these functionalities are loosely coupled with the platform, since they are pluggable and optional.
+
+Last but not least, KubeSphere does not change Kubernetes itself in any way. In other words, KubeSphere can be deployed **on any existing version-compatible Kubernetes cluster on any infrastructure**, including virtual machines, bare metal, on-premises, public cloud and hybrid cloud. KubeSphere shields users from the underlying infrastructure and helps your enterprise modernize, migrate, deploy and manage existing and containerized apps seamlessly across a variety of infrastructures, so that developers and Ops teams can focus on application development and accelerate DevOps automated workflows and delivery processes with enterprise-level observability and troubleshooting, unified monitoring and logging, centralized storage and networking management, and easy-to-use CI/CD pipelines.
+
+![KubeSphere Overview](https://pek3b.qingstor.com/kubesphere-docs/png/20200224091526.png)
+
+## Video on Youtube
+
+## What is New in 2.1
+
+We have decoupled some major feature components and made them pluggable and optional, so that users can install a default KubeSphere with resource requirements as low as 2 CPU cores and 4 GB of memory. Meanwhile, there are great enhancements in the application store, especially in application lifecycle management.
+
+It is worth mentioning that both the DevOps and observability components have been improved significantly. For example, we added lots of new features to the DevOps component, including Binary-to-Image, dependency caching support in pipelines, branch switching support and Git log output. We also bring upgrades, enhancements and bug fixes in storage, authentication and security, as well as user experience improvements. See [Release Notes For 2.1.0](../../release/release-v210) for details.
+
+## Open Source
+
+Following the open source model, development happens in the open and is driven by the KubeSphere community. KubeSphere is **100% open source** and available on [GitHub](https://github.com/kubesphere/), where you can find all source code, documents and discussions. It has been widely installed and used in development, testing and production environments, and a large number of services run smoothly on KubeSphere.
+
+## Roadmap
+
+### Express Edition -> KubeSphere 1.0.x -> KubeSphere 2.0.x -> KubeSphere 2.1.x -> KubeSphere 3.0.0
+
+![Roadmap](https://pek3b.qingstor.com/kubesphere-docs/png/20190926000413.png)
+
+## Landscapes
+
+KubeSphere is a member of CNCF and a [Kubernetes Conformance Certified platform](https://www.cncf.io/certification/software-conformance/#logos), which enriches the [CNCF CLOUD NATIVE Landscape
+](https://landscape.cncf.io/landscape=observability-and-analysis&license=apache-license-2-0) + +![CNCF Landscape](https://pek3b.qingstor.com/kubesphere-docs/png/20191011233719.png) diff --git a/content/zh/docs/multicluster-management/_index.md b/content/zh/docs/multicluster-management/_index.md new file mode 100644 index 000000000..da9e078dd --- /dev/null +++ b/content/zh/docs/multicluster-management/_index.md @@ -0,0 +1,22 @@ +--- +title: "Multi-cluster Management" +description: "Import a hosted or on-premise Kubernetes cluster into KubeSphere" +layout: "single" + +linkTitle: "Multi-cluster Management" + +weight: 3000 + +icon: "/images/docs/docs.svg" + +--- + +## Installing KubeSphere and Kubernetes on Linux + +In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. Kubekey can help you to quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you to easily scale the cluster and install pluggable components on existing architecture. + +## Most Popular Pages + +Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first. + +{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} diff --git a/content/zh/docs/multicluster-management/release-v210.md b/content/zh/docs/multicluster-management/release-v210.md new file mode 100644 index 000000000..1eb9cedb7 --- /dev/null +++ b/content/zh/docs/multicluster-management/release-v210.md @@ -0,0 +1,10 @@ +--- +title: "Enable Multicluster Management" +keywords: "kubernetes, StorageClass, kubesphere, PVC" +description: "Enable Multicluster Management in KubeSphere" + +linkTitle: "Enable Multicluster Management" +weight: 200 +--- + +TBD diff --git a/content/zh/docs/multicluster-management/release-v211.md b/content/zh/docs/multicluster-management/release-v211.md new file mode 100644 index 000000000..66048687f --- /dev/null +++ b/content/zh/docs/multicluster-management/release-v211.md @@ -0,0 +1,8 @@ +--- +title: "Kubernetes Federation in KubeSphere" +keywords: "kubernetes, multicluster, kubesphere, federation, hybridcloud" +description: "Kubernetes and KubeSphere node management" + +linkTitle: "Kubernetes Federation in KubeSphere" +weight: 100 +--- diff --git a/content/zh/docs/multicluster-management/release-v300.md b/content/zh/docs/multicluster-management/release-v300.md new file mode 100644 index 000000000..e52dee1e1 --- /dev/null +++ b/content/zh/docs/multicluster-management/release-v300.md @@ -0,0 +1,10 @@ +--- +title: "Introduction" +keywords: "kubernetes, multicluster, kubesphere, hybridcloud" +description: "Upgrade KubeSphere" + +linkTitle: "Introduction" +weight: 50 +--- + +TBD diff --git a/content/zh/docs/pluggable-components/_index.md b/content/zh/docs/pluggable-components/_index.md new file mode 100644 index 000000000..ce07e09e0 --- /dev/null +++ b/content/zh/docs/pluggable-components/_index.md @@ -0,0 +1,22 @@ +--- +title: "Enable Pluggable Components" +description: "Enable KubeSphere Pluggable Components" +layout: "single" + +linkTitle: "Enable Pluggable Components" + +weight: 3500 + +icon: "/images/docs/docs.svg" + +--- + +## Installing KubeSphere and Kubernetes on Linux + +In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. 
KubeKey helps you quickly build a production-ready cluster architecture on a set of machines from zero to one, and also makes it easy to scale the cluster and install pluggable components on an existing architecture.
+
+## Most Popular Pages
+
+Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
+
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} diff --git a/content/zh/docs/pluggable-components/release-v200.md b/content/zh/docs/pluggable-components/release-v200.md new file mode 100644 index 000000000..ba048fe22 --- /dev/null +++ b/content/zh/docs/pluggable-components/release-v200.md @@ -0,0 +1,92 @@ +--- +title: "Release Notes For 2.0.0" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "KubeSphere Release Notes For 2.0.0" + +linkTitle: "Release Notes - 2.0.0" +weight: 500 +---
+
+KubeSphere 2.0.0 was released on **May 18th, 2019**.
+
+## What's New in 2.0.0
+
+### Component Upgrades
+
+- Support [Kubernetes 1.13.5](https://github.com/kubernetes/kubernetes/releases/tag/v1.13.5).
+- Integrate the [QingCloud Cloud Controller](https://github.com/yunify/qingcloud-cloud-controller-manager). After installing the load balancer, a QingCloud load balancer can be created through the KubeSphere console, and the backend workload is bound automatically.
+- Integrate the [QingStor CSI v0.3.0](https://github.com/yunify/qingstor-csi/tree/v0.3.0) storage plugin and support the physical NeonSAN storage system, offering a SAN storage service with high availability and high performance.
+- Integrate the [QingCloud CSI v0.2.1](https://github.com/yunify/qingcloud-csi/tree/v0.2.1) storage plugin and support creating many types of QingCloud block storage volumes.
+- Harbor is upgraded to 1.7.5.
+- GitLab is upgraded to 11.8.1.
+- Prometheus is upgraded to 2.5.0.
+
+### Microservice Governance
+
+- Integrate Istio 1.1.1 and support visualized service mesh management.
+- Enable access to the project's external websites and application traffic governance.
+- Provide the built-in sample microservice [Bookinfo Application](https://istio.io/docs/examples/bookinfo/).
+- Support traffic governance.
+- Support traffic mirroring.
+- Provide load balancing of microservices based on Istio.
+- Support canary release.
+- Enable blue-green deployment.
+- Enable circuit breaking.
+- Enable microservice tracing.
+
+### DevOps (CI/CD Pipeline)
+
+- The CI/CD pipeline provides email notifications, including notifications during builds.
+- Enhance graphical editing of CI/CD pipelines, with more pipeline steps for common plugins and execution conditions.
+- Provide source code vulnerability scanning based on SonarQube 7.4.
+- Support the [Source to Image](https://github.com/kubesphere/s2ioperator) feature.
+
+### Monitoring
+
+- Provide independent monitoring pages for Kubernetes components including etcd, kube-apiserver and kube-scheduler.
+- Optimize several monitoring algorithms.
+- Optimize monitoring resource usage, reducing Prometheus storage and disk usage by up to 80%.
+
+### Logging
+
+- Provide a unified log console for each tenant.
+- Enable exact and fuzzy retrieval.
+- Support real-time and historical logs.
+- Support combined log queries based on namespace, workload, Pod, container, keywords and time range.
+- Support a detail page for viewing a single log stream directly, where Pods and containers can be switched.
+- [FluentBit Operator](https://github.com/kubesphere/fluentbit-operator) supports logging gathering settings: ElasticSearch, Kafka and Fluentd can be added, activated or turned off as log collectors. Before sending to log collectors, you can configure filtering conditions for needed logs. + +### Alerting and Notifications + +- Email notifications are available for cluster nodes and workload resources.  +- Notification rules: combined multiple monitoring resources are available. Different warning levels, detection cycle, push times and threshold can be configured. +- Time and notifiers can be set. +- Enable notification repeating rules for different levels. + +### Security Enhancement + +- Fix RunC Container Escape Vulnerability [Runc container breakout](https://log.qingcloud.com/archives/5127) +- Fix Alpine Docker's image Vulnerability [Alpine container shadow breakout](https://www.alpinelinux.org/posts/Docker-image-vulnerability-CVE-2019-5021.html) +- Support single and multi-login configuration items. +- Verification code is required after multiple invalid logins. +- Enhance passwords' policy and prevent weak passwords. +- Others security enhancements. + +### Interface Optimization + +- Optimize multiple user experience of console, such as the switch between DevOps project and other projects. +- Optimize many Chinese-English webpages. + +### Others + +- Support Etcd backup and recovery. +- Support regular cleanup of the docker's image. + +## Bugs Fixes + +- Fix delay updates of the resource and deleted pages. +- Fix the left dirty data after deleting the HPA workload. +- Fix incorrect Job status display. +- Correct resource quota, Pod usage and storage metrics algorithm. +- Adjust CPU usage percentages. +- many more bugfix diff --git a/content/zh/docs/pluggable-components/release-v201.md b/content/zh/docs/pluggable-components/release-v201.md new file mode 100644 index 000000000..2407dce8a --- /dev/null +++ b/content/zh/docs/pluggable-components/release-v201.md @@ -0,0 +1,19 @@ +--- +title: "Release Notes For 2.0.1" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "KubeSphere Release Notes For 2.0.1" + +linkTitle: "Release Notes - 2.0.1" +weight: 400 +--- + +KubeSphere 2.0.1 was released on **June 9th, 2019**. + +## Bug Fix + +- Fix the issue that CI/CD pipeline cannot recognize correct special characters in the code branch. +- Fix CI/CD pipeline's issue of being unable to check logs. +- Fix no-log data output problem caused by index document fragmentation abnormity during the log query. +- Fix prompt exceptions when searching for logs that do not exist. +- Fix the line-overlap problem on traffic governance topology and fixed invalid image strategy application. +- Many more bugfix diff --git a/content/zh/docs/pluggable-components/release-v202.md b/content/zh/docs/pluggable-components/release-v202.md new file mode 100644 index 000000000..3c8fec965 --- /dev/null +++ b/content/zh/docs/pluggable-components/release-v202.md @@ -0,0 +1,40 @@ +--- +title: "Release Notes For 2.0.2" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "KubeSphere Release Notes For 2.0.2" + +linkTitle: "Release Notes - 2.0.2" +weight: 300 +--- + +KubeSphere 2.0.2 was released on July 9, 2019, which fixes known bugs and enhances existing feature. If you have installed versions of 1.0.x, 2.0.0 or 2.0.1, please download KubeSphere installer v2.0.2 to upgrade. 
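+
+The upgrade itself is typically driven from scripts shipped inside the installer package. As a rough, hypothetical sketch (the download URL, directory layout and script name below are placeholders; use whatever the actual v2.0.2 package provides):
+
+```bash
+# Hypothetical example: fetch and unpack the v2.0.2 installer, then run its upgrade script.
+curl -L <installer-v2.0.2-download-url> | tar -zx    # URL is a placeholder
+cd kubesphere-all-v2.0.2/scripts                     # directory layout is an assumption
+./upgrade.sh                                         # script name is an assumption
+```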
+
+## What's New in 2.0.2
+
+### Enhanced Features
+
+- [API docs](/api-reference/api-docs/) are available on the official website.
+- Block brute-force attacks.
+- Standardize the maximum length of resource names.
+- Upgrade the project gateway (Ingress Controller) to 0.24.1 and support Ingress grayscale release.
+
+## List of Fixed Bugs
+
+- Fix the issue that the traffic topology displays resources outside of the project.
+- Fix the issue that extra service components appear in the traffic topology under specific circumstances.
+- Fix the execution issue when "Source to Image" rebuilds images under specific circumstances.
+- Fix the page display problem when a "Source to Image" job fails.
+- Fix the issue that logs cannot be viewed when a Pod is in an abnormal state.
+- Fix the issue that the disk monitor cannot detect some types of mounted volumes, such as LVM volumes.
+- Fix the detection problem for deployed applications.
+- Fix incorrect application component status.
+- Fix errors in calculating the number of host nodes.
+- Fix input data loss caused by switching the configuration-reference buttons when adding environment variables.
+- Fix the issue that the Operator role cannot rerun Jobs.
+- Fix the UUID initialization issue in IPv4 environments.
+- Fix the issue that the log detail page cannot be scrolled down to view past logs.
+- Fix wrong APIServer addresses in KubeConfig files.
+- Fix the issue that a query time cannot be specified for container logs.
+- Fix the issue that repository secrets fail to be saved under certain circumstances.
+- Fix the issue that the application service component creation page does not offer image registry secrets.
diff --git a/content/zh/docs/pluggable-components/release-v210.md b/content/zh/docs/pluggable-components/release-v210.md
new file mode 100644
index 000000000..ae876bee6
--- /dev/null
+++ b/content/zh/docs/pluggable-components/release-v210.md
@@ -0,0 +1,155 @@
+---
+title: "Release Notes For 2.1.0"
+keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
+description: "KubeSphere Release Notes For 2.1.0"
+
+linkTitle: "Release Notes - 2.1.0"
+weight: 200
+---
+
+KubeSphere 2.1.0 was released on Nov 11th, 2019. It fixes known bugs, adds new features and brings several enhancements. If you have installed a 2.0.x version, please upgrade and enjoy the better user experience of v2.1.0.
+
+## Installer Enhancement
+
+- Decouple components, making DevOps, service mesh, app store, logging, alerting and notification optional and pluggable
+- Add Grafana (v5.2.4) as an optional component
+- Upgrade Kubernetes to 1.15.5; it remains compatible with 1.14.x and 1.13.x
+- Upgrade [OpenPitrix](https://openpitrix.io/) to v0.4.5
+- Upgrade the log forwarder Fluent Bit to v1.3.2
+- Upgrade Jenkins to v2.176.2
+- Upgrade Istio to 1.3.3
+- Optimize high availability for core components
+
+## App Store
+
+### Features
+
+Support uploading, testing, reviewing, publishing, classifying, upgrading, deploying and deleting apps, and provide nine built-in applications
+
+### Upgrade & Enhancement
+
+- The application repository configuration is moved from global to each workspace
+- Support adding application repositories to share applications within a workspace
+
+## Storage
+
+### Features
+
+- Support Local Volumes with dynamic provisioning
+- Provide real-time monitoring for QingCloud block storage
+
+### Upgrade & Enhancement
+
+QingCloud CSI is adapted to CSI 1.1.0 and supports upgrades, topology, and creating or deleting snapshots. It also supports creating PVCs based on a snapshot
+
+### Bug Fixes
+
+Fix the StorageClass list display problem
+
+## Observability
+
+### Features
+
+- Support collecting file logs from disk, for Pods that preserve their logs as files on disk
+- Support integrating with external ElasticSearch 7.x
+- Ability to search logs containing Chinese words
+- Add initContainer log display
+- Ability to export logs
+- Support canceling alerting notifications
+
+### Upgrade & Enhancement
+
+- Improve the performance of log search
+- Refine the hints shown when the logging service is abnormal
+- Optimize the information shown when a monitoring metrics request is abnormal
+- Support Pod anti-affinity rules for Prometheus
+
+### Bug Fixes
+
+- Fix mistaken highlights in log search results
+- Fix log search not matching phrases correctly
+- Fix the issue that logs could not be retrieved for a deleted workload when searching by workload name
+- Fix the issue where results were truncated when logs are highlighted
+- Fix some metrics exceptions: node `inode` usage, maximum Pod tolerance
+- Fix the issue of an incorrect number of alerting targets
+- Fix filter failures in multi-metric monitoring
+- Fix missing logging and monitoring information on tainted nodes (adjust the tolerations of node-exporter and fluent-bit so that they are deployed on all nodes by default, ignoring taints)
+
+## DevOps
+
+### Features
+
+- Add support for branch switching and git log export in S2I
+- Add B2I: build Binary/WAR/JAR packages and release them to Kubernetes
+- Support dependency caching for pipelines, S2I and B2I
+- Support a delete-Kubernetes-resource action in the `kubernetesDeploy` step
+- Multi-branch pipelines can trigger other pipelines when a branch is created or deleted
+
+### Upgrade & Enhancement
+
+- Support BitBucket in pipelines
+- Support cron script validation in pipelines
+- Support Jenkinsfile syntax validation
+- Support customizing the SonarQube link
+- Support event-triggered builds in pipelines
+- Optimize agent node selection in pipelines
+- Accelerate pipeline startup
+- Use dynamically provisioned volumes as the agent work directory in pipelines; also contributed to Jenkins [#598](https://github.com/jenkinsci/kubernetes-plugin/pull/598)
+- Optimize the Jenkins kubernetesDeploy plugin by adding more resources and versions (v1, apps/v1, extensions/v1beta1, apps/v1beta2, apps/v1beta1, autoscaling/v1, autoscaling/v2beta1, autoscaling/v2beta2, networking.k8s.io/v1, batch/v1beta1, batch/v2alpha1); also contributed to Jenkins [#614](https://github.com/jenkinsci/kubernetes-plugin/pull/614)
+- Add support for PVs, PVCs and Network Policies in the pipeline deploy step; also contributed to Jenkins [#87](https://github.com/jenkinsci/kubernetes-cd-plugin/pull/87), [#88](https://github.com/jenkinsci/kubernetes-cd-plugin/pull/88)
+
+### Bug Fixes
+
+- Fix the 400 Bad Request issue in GitHub webhooks
+- Incompatible change: the DevOps webhook URL prefix is changed from `/webhook/xxx` to `/devops_webhook/xxx`
+
+## Authentication and Authorization
+
+### Features
+
+Support syncing and authenticating with AD accounts
+
+### Upgrade & Enhancement
+
+- Reduce the LDAP component's RAM consumption
+- Add protection against brute-force attacks
+
+### Bug Fixes
+
+- Fix an LDAP connection pool leak
+- Fix the issue where users could not be added in a workspace
+- Fix sensitive data transmission leaks
+
+## User Experience
+
+### Features
+
+Add a wizard for managing projects (namespaces) that are not assigned to any workspace
+
+### Upgrade & Enhancement
+
+- Support bash-completion in web kubectl
+- Optimize the host information display
+- Add a connection test for the email server
+- Add prompts on resource list pages
+- Optimize the project overview page and project basic information
+- Simplify the service creation process
+- Simplify the workload creation process
+- Support real-time status updates in resource lists
+- Optimize YAML editing
+- Support image search and image information display
+- Add the Pod list to the workload page
+- Update the web terminal theme
+- Support switching containers in the container terminal
+- Optimize Pod information display, and add Pod scheduling information
+- More detailed workload status display
+
+### Bug Fixes
+
+- Fix the issue where the project's default resource requests were displayed incorrectly
+- Optimize the web terminal design to make it much easier to find
+- Fix the Pod status update delay
+- Fix the issue where hosts could not be searched by role
+- Fix the DevOps project count error on the workspace detail page
+- Fix the pagination issue on the workspace list page
+- Fix inconsistent result ordering after queries on the workspace list page
diff --git a/content/zh/docs/pluggable-components/release-v211.md b/content/zh/docs/pluggable-components/release-v211.md
new file mode 100644
index 000000000..d8acba698
--- /dev/null
+++ b/content/zh/docs/pluggable-components/release-v211.md
@@ -0,0 +1,122 @@
+---
+title: "Release Notes For 2.1.1"
+keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
+description: "KubeSphere Release Notes For 2.1.1"
+
+linkTitle: "Release Notes - 2.1.1"
+weight: 100
+---
+
+KubeSphere 2.1.1 was released on Feb 23rd, 2020. It fixes known bugs and brings some enhancements. If you have installed version 2.0.x or 2.1.0, make sure to read the upgrade instructions in the user manual carefully before upgrading, and feel free to raise any questions on [GitHub](https://github.com/kubesphere/kubesphere/issues).
+
+## What's New in 2.1.1
+
+## Installer
+
+### Upgrade & Enhancement
+
+- Support Kubernetes v1.14.x, v1.15.x, v1.16.x and v1.17.x, and solve the Kubernetes API compatibility issue #[1829](https://github.com/kubesphere/kubesphere/issues/1829)
+- Simplify installation on existing Kubernetes clusters: remove the step of specifying the cluster CA certificate, and make specifying the etcd certificate optional if etcd monitoring metrics are not needed
+- Back up the CoreDNS configuration before upgrading
+
+### Bug Fixes
+
+- Fix the issue of importing apps into the App Store
+
+## App Store
+
+### Upgrade & Enhancement
+
+- Upgrade OpenPitrix to v0.4.8
+
+### Bug Fixes
+
+- Fix the latest-version display issue for published apps #[1130](https://github.com/kubesphere/kubesphere/issues/1130)
+- Fix the column name display issue on the app approval list page #[1498](https://github.com/kubesphere/kubesphere/issues/1498)
+- Fix searching by app name/workspace #[1497](https://github.com/kubesphere/kubesphere/issues/1497)
+- Fix the failure to create an app with the same name as a previously deleted app #[1821](https://github.com/kubesphere/kubesphere/pull/1821) #[1564](https://github.com/kubesphere/kubesphere/issues/1564)
+- Fix the failure to deploy apps in some cases #[1619](https://github.com/kubesphere/kubesphere/issues/1619) #[1730](https://github.com/kubesphere/kubesphere/issues/1730)
+
+## Storage
+
+### Upgrade & Enhancement
+
+- Support the CSI plugins of Alibaba Cloud and Tencent Cloud
+
+### Bug Fixes
+
+- Fix the paging issue on the storage class list page #[1583](https://github.com/kubesphere/kubesphere/issues/1583) #[1591](https://github.com/kubesphere/kubesphere/issues/1591)
+- Fix the issue that the `imageFeatures` parameter displays '2' when creating a Ceph storage class #[1593](https://github.com/kubesphere/kubesphere/issues/1593)
+- Fix the issue that the search filter fails to work on the persistent volume list page #[1582](https://github.com/kubesphere/kubesphere/issues/1582)
+- Fix the display issue for abnormal persistent volumes #[1581](https://github.com/kubesphere/kubesphere/issues/1581)
+- Fix the display issue for persistent volumes whose associated storage class has been deleted #[1580](https://github.com/kubesphere/kubesphere/issues/1580) #[1579](https://github.com/kubesphere/kubesphere/issues/1579)
+
+## Observability
+
+### Upgrade & Enhancement
+
+- Upgrade Fluent Bit to v1.3.5 #[1505](https://github.com/kubesphere/kubesphere/issues/1505)
+- Upgrade kube-state-metrics to v1.7.2
+- Upgrade Elastic Curator to v5.7.6 #[517](https://github.com/kubesphere/ks-installer/issues/517)
+- Fluent Bit Operator supports dynamically detecting the location of the soft-linked Docker log folder on host machines
+- Fluent Bit Operator supports managing Fluent Bit instances through declarative configuration by updating the Operator's ConfigMap
+- Fix the sort order issue on the alert list page #[1397](https://github.com/kubesphere/kubesphere/issues/1397)
+- Adjust the container memory usage metric to use `container_memory_working_set_bytes`
+
+### Bug Fixes
+
+- Fix the lag issue of container logs #[1650](https://github.com/kubesphere/kubesphere/issues/1650)
+- Fix the issue that some workload replicas show no logs on the container log detail page #[1505](https://github.com/kubesphere/kubesphere/issues/1505)
+- Fix the compatibility issue of Curator to support ElasticSearch 7.x #[517](https://github.com/kubesphere/ks-installer/issues/517)
+- Fix the display issue of the container log page during container initialization #[1518](https://github.com/kubesphere/kubesphere/issues/1518)
+- Fix the blank node issue when nodes are resized #[1464](https://github.com/kubesphere/kubesphere/issues/1464)
+- Fix the component status display in the monitoring center to keep it up to date #[1858](https://github.com/kubesphere/kubesphere/issues/1858)
+- Fix the wrong number of monitoring targets on the alert detail page #[61](https://github.com/kubesphere/console/issues/61)
+
+## DevOps
+
+### Bug Fixes
+
+- Fix the issue that the UNSTABLE state is not visible in pipelines #[1428](https://github.com/kubesphere/kubesphere/issues/1428)
+- Fix the KubeConfig format issue in DevOps pipelines #[1529](https://github.com/kubesphere/kubesphere/issues/1529)
+- Fix the image registry compatibility issue in B2I to support Alibaba Cloud image registries #[1500](https://github.com/kubesphere/kubesphere/issues/1500)
+- Fix the paging issue on the DevOps pipeline branch list page #[1517](https://github.com/kubesphere/kubesphere/issues/1517)
+- Fix the failure to display a pipeline's configuration after modifying it #[1522](https://github.com/kubesphere/kubesphere/issues/1522)
+- Fix the failure to download generated artifacts in S2I jobs #[1547](https://github.com/kubesphere/kubesphere/issues/1547)
+- Fix occasional [data loss after restarting Jenkins](https://kubesphere.com.cn/forum/d/283-jenkins)
+- Fix the issue that only 'PR-HEAD' is fetched when binding a pipeline to GitHub #[1780](https://github.com/kubesphere/kubesphere/issues/1780)
+- Fix the 414 error when updating DevOps credentials #[1824](https://github.com/kubesphere/kubesphere/issues/1824)
+- Fix wrong s2ib/s2ir naming in B2I/S2I #[1840](https://github.com/kubesphere/kubesphere/issues/1840)
+- Fix the failure to drag and drop tasks on the pipeline editing page #[62](https://github.com/kubesphere/console/issues/62)
+
+## Authentication and Authorization
+
+### Upgrade & Enhancement
+
+- Generate client certificates through CSRs #[1449](https://github.com/kubesphere/kubesphere/issues/1449)
+
+### Bug Fixes
+
+- Fix the content loss issue in the KubeConfig token file #[1529](https://github.com/kubesphere/kubesphere/issues/1529)
+- Fix the issue that users with different permissions fail to log in using the same browser #[1600](https://github.com/kubesphere/kubesphere/issues/1600)
+
+## User Experience
+
+### Upgrade & Enhancement
+
+- Support editing the SecurityContext on the workload editing page #[1530](https://github.com/kubesphere/kubesphere/issues/1530)
+- Support configuring init containers on the workload editing page #[1488](https://github.com/kubesphere/kubesphere/issues/1488)
+- Add support for startupProbe, and add the periodSeconds, successThreshold and failureThreshold parameters on the probe editing page #[1487](https://github.com/kubesphere/kubesphere/issues/1487)
+- Optimize the status update display of Pods #[1187](https://github.com/kubesphere/kubesphere/issues/1187)
+- Optimize error message reporting on the console #[43](https://github.com/kubesphere/console/issues/43)
+
+### Bug Fixes
+
+- Fix the status display for Pods that are not in Running status #[1187](https://github.com/kubesphere/kubesphere/issues/1187)
+- Fix the issue that added annotations cannot be deleted when creating a QingCloud LoadBalancer service #[1395](https://github.com/kubesphere/kubesphere/issues/1395)
+- Fix the display issue when selecting a workload on the service editing page #[1596](https://github.com/kubesphere/kubesphere/issues/1596)
+- Fix the failure to edit the configuration file when editing a 'Job' #[1521](https://github.com/kubesphere/kubesphere/issues/1521)
+- Fix the failure to update the service of a 'StatefulSet' #[1513](https://github.com/kubesphere/kubesphere/issues/1513)
+- Fix image searching for QingCloud and Alibaba Cloud image registries #[1627](https://github.com/kubesphere/kubesphere/issues/1627)
+- Fix the resource ordering issue when creation timestamps are identical #[1750](https://github.com/kubesphere/kubesphere/pull/1750)
+- Fix the failure to edit the configuration file when editing a service #[41](https://github.com/kubesphere/console/issues/41)
diff --git a/content/zh/docs/pluggable-components/release-v300.md b/content/zh/docs/pluggable-components/release-v300.md
new file mode 100644
index 000000000..98c787c91
--- /dev/null
+++ b/content/zh/docs/pluggable-components/release-v300.md
@@ -0,0 +1,10 @@
+---
+title: "Release Notes For 3.0.0"
+keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus"
+description: "KubeSphere Release Notes For 3.0.0"
+
+linkTitle: "Release Notes - 3.0.0"
+weight: 50
+---
+
+TBD
diff --git a/content/zh/docs/project-user-guide/_index.md b/content/zh/docs/project-user-guide/_index.md
new file mode 100644
index 000000000..490cd0364
--- /dev/null
+++ b/content/zh/docs/project-user-guide/_index.md
@@ -0,0 +1,23 @@
+---
+title: "Project User Guide"
+description: "Help you better manage resources in a KubeSphere project"
+layout: "single"
+
+linkTitle: "Project User Guide"
+weight: 4300
+
+icon: "/images/docs/docs.svg"
+
+---
+
+## Installing KubeSphere and Kubernetes on Linux
+
+In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster on different infrastructures. KubeKey helps you quickly build a production-ready cluster on a set of machines from scratch, and it also makes it easy to scale an existing cluster and install pluggable components on it.
+
+## Most Popular Pages
+
+Below you will find some of the most commonly visited and helpful pages in this chapter. We highly recommend that you review them first.
+
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
+
+{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}}
diff --git a/content/zh/docs/project-user-guide/application-workloads/_index.md b/content/zh/docs/project-user-guide/application-workloads/_index.md
new file mode 100644
index 000000000..d28bdca57
--- /dev/null
+++ b/content/zh/docs/project-user-guide/application-workloads/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Application Workloads"
+weight: 2200
+
+_build:
+  render: false
+---
diff --git a/content/zh/docs/project-user-guide/application-workloads/app-template.md b/content/zh/docs/project-user-guide/application-workloads/app-template.md
new file mode 100644
index 000000000..f0d13febd
--- /dev/null
+++ b/content/zh/docs/project-user-guide/application-workloads/app-template.md
@@ -0,0 +1,44 @@
+---
+title: "Application Template"
+keywords: 'kubernetes, chart, helm, KubeSphere, application'
+description: 'Application Template'
+
+linkTitle: "Application Template"
+weight: 2210
+---
+
+TBD
+
+{{< notice note >}}
+### This is a simple note.
+{{}} + +{{< notice tip >}} +This is a simple tip. +{{}} + +{{< notice info >}} +This is a simple info. +{{}} + +{{< notice warning >}} +This is a simple warning. +{{}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{}} + +{{< tab "third" >}} +this is third tab +{{}} + +{{}} diff --git a/content/zh/docs/project-user-guide/application-workloads/composing-app.md b/content/zh/docs/project-user-guide/application-workloads/composing-app.md new file mode 100644 index 000000000..57e705e5c --- /dev/null +++ b/content/zh/docs/project-user-guide/application-workloads/composing-app.md @@ -0,0 +1,44 @@ +--- +title: "Composing an App for Microservices" +keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix' +description: 'Composing an app for microservices' + + +weight: 2260 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{}} + +{{< notice tip >}} +This is a simple tip. +{{}} + +{{< notice info >}} +This is a simple info. +{{}} + +{{< notice warning >}} +This is a simple warning. +{{}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{}} + +{{< tab "third" >}} +this is third tab +{{}} + +{{}} diff --git a/content/zh/docs/project-user-guide/application-workloads/cronjob.md b/content/zh/docs/project-user-guide/application-workloads/cronjob.md new file mode 100644 index 000000000..3a1a1d401 --- /dev/null +++ b/content/zh/docs/project-user-guide/application-workloads/cronjob.md @@ -0,0 +1,44 @@ +--- +title: "CronJobs" +keywords: 'kubesphere, kubernetes, jobs, cronjobs' +description: 'Create a Kubernetes CronJob' + + +weight: 2260 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{}} + +{{< notice tip >}} +This is a simple tip. +{{}} + +{{< notice info >}} +This is a simple info. +{{}} + +{{< notice warning >}} +This is a simple warning. +{{}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{}} + +{{< tab "third" >}} +this is third tab +{{}} + +{{}} diff --git a/content/zh/docs/project-user-guide/application-workloads/daemonsets.md b/content/zh/docs/project-user-guide/application-workloads/daemonsets.md new file mode 100644 index 000000000..99938c55e --- /dev/null +++ b/content/zh/docs/project-user-guide/application-workloads/daemonsets.md @@ -0,0 +1,44 @@ +--- +title: "DaemonSets" +keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix' +description: 'Kubernetes DaemonSets' + + +weight: 2250 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{}} + +{{< notice tip >}} +This is a simple tip. +{{}} + +{{< notice info >}} +This is a simple info. +{{}} + +{{< notice warning >}} +This is a simple warning. 
+{{}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{}} + +{{< tab "third" >}} +this is third tab +{{}} + +{{}} diff --git a/content/zh/docs/project-user-guide/application-workloads/deployments.md b/content/zh/docs/project-user-guide/application-workloads/deployments.md new file mode 100644 index 000000000..ec4e7682d --- /dev/null +++ b/content/zh/docs/project-user-guide/application-workloads/deployments.md @@ -0,0 +1,44 @@ +--- +title: "Deployments" +keywords: 'kubesphere, kubernetes, docker, devops, service mesh, openpitrix' +description: 'Kubernetes Deployments' + + +weight: 2230 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{}} + +{{< notice tip >}} +This is a simple tip. +{{}} + +{{< notice info >}} +This is a simple info. +{{}} + +{{< notice warning >}} +This is a simple warning. +{{}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{}} + +{{< tab "third" >}} +this is third tab +{{}} + +{{}} diff --git a/content/zh/docs/project-user-guide/application-workloads/ingress.md b/content/zh/docs/project-user-guide/application-workloads/ingress.md new file mode 100644 index 000000000..92f56c935 --- /dev/null +++ b/content/zh/docs/project-user-guide/application-workloads/ingress.md @@ -0,0 +1,44 @@ +--- +title: "Jobs" +keywords: 'kubesphere, kubernetes, docker, jobs' +description: 'Create a Kubernetes Job' + + +weight: 2260 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{}} + +{{< notice tip >}} +This is a simple tip. +{{}} + +{{< notice info >}} +This is a simple info. +{{}} + +{{< notice warning >}} +This is a simple warning. +{{}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{}} + +{{< tab "third" >}} +this is third tab +{{}} + +{{}} diff --git a/content/zh/docs/project-user-guide/application-workloads/jobs.md b/content/zh/docs/project-user-guide/application-workloads/jobs.md new file mode 100644 index 000000000..92f56c935 --- /dev/null +++ b/content/zh/docs/project-user-guide/application-workloads/jobs.md @@ -0,0 +1,44 @@ +--- +title: "Jobs" +keywords: 'kubesphere, kubernetes, docker, jobs' +description: 'Create a Kubernetes Job' + + +weight: 2260 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{}} + +{{< notice tip >}} +This is a simple tip. +{{}} + +{{< notice info >}} +This is a simple info. +{{}} + +{{< notice warning >}} +This is a simple warning. +{{}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{}} + +{{< tab "third" >}} +this is third tab +{{}} + +{{}} diff --git a/content/zh/docs/project-user-guide/application-workloads/s2i-template.md b/content/zh/docs/project-user-guide/application-workloads/s2i-template.md new file mode 100644 index 000000000..92f56c935 --- /dev/null +++ b/content/zh/docs/project-user-guide/application-workloads/s2i-template.md @@ -0,0 +1,44 @@ +--- +title: "Jobs" +keywords: 'kubesphere, kubernetes, docker, jobs' +description: 'Create a Kubernetes Job' + + +weight: 2260 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{}} + +{{< notice tip >}} +This is a simple tip. +{{}} + +{{< notice info >}} +This is a simple info. +{{}} + +{{< notice warning >}} +This is a simple warning. 
+{{}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{}} + +{{< tab "third" >}} +this is third tab +{{}} + +{{}} diff --git a/content/zh/docs/project-user-guide/application-workloads/services.md b/content/zh/docs/project-user-guide/application-workloads/services.md new file mode 100644 index 000000000..92f56c935 --- /dev/null +++ b/content/zh/docs/project-user-guide/application-workloads/services.md @@ -0,0 +1,44 @@ +--- +title: "Jobs" +keywords: 'kubesphere, kubernetes, docker, jobs' +description: 'Create a Kubernetes Job' + + +weight: 2260 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{}} + +{{< notice tip >}} +This is a simple tip. +{{}} + +{{< notice info >}} +This is a simple info. +{{}} + +{{< notice warning >}} +This is a simple warning. +{{}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{}} + +{{< tab "third" >}} +this is third tab +{{}} + +{{}} diff --git a/content/zh/docs/project-user-guide/application-workloads/statefulsets.md b/content/zh/docs/project-user-guide/application-workloads/statefulsets.md new file mode 100644 index 000000000..034fa6a0b --- /dev/null +++ b/content/zh/docs/project-user-guide/application-workloads/statefulsets.md @@ -0,0 +1,44 @@ +--- +title: "StatefulSets" +keywords: 'kubesphere, kubernetes, StatefulSets, dashboard, service' +description: 'Kubernetes StatefulSets' + + +weight: 2240 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{}} + +{{< notice tip >}} +This is a simple tip. +{{}} + +{{< notice info >}} +This is a simple info. +{{}} + +{{< notice warning >}} +This is a simple warning. +{{}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{}} + +{{< tab "third" >}} +this is third tab +{{}} + +{{}} diff --git a/content/zh/docs/project-user-guide/configuration/_index.md b/content/zh/docs/project-user-guide/configuration/_index.md new file mode 100644 index 000000000..2cf101ca5 --- /dev/null +++ b/content/zh/docs/project-user-guide/configuration/_index.md @@ -0,0 +1,7 @@ +--- +linkTitle: "Installation" +weight: 2100 + +_build: + render: false +--- \ No newline at end of file diff --git a/content/zh/docs/project-user-guide/configuration/configmaps.md b/content/zh/docs/project-user-guide/configuration/configmaps.md new file mode 100644 index 000000000..ae6f08d5c --- /dev/null +++ b/content/zh/docs/project-user-guide/configuration/configmaps.md @@ -0,0 +1,44 @@ +--- +title: "ConfigMaps" +keywords: 'kubernetes, docker, helm, ConfigMaps' +description: 'Create a Kubernetes ConfigMap' + +linkTitle: "ConfigMaps" +weight: 2110 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{}} + +{{< notice tip >}} +This is a simple tip. +{{}} + +{{< notice info >}} +This is a simple info. +{{}} + +{{< notice warning >}} +This is a simple warning. 
+{{}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{}} + +{{< tab "third" >}} +this is third tab +{{}} + +{{}} diff --git a/content/zh/docs/project-user-guide/configuration/image-registry.md b/content/zh/docs/project-user-guide/configuration/image-registry.md new file mode 100644 index 000000000..1e41dbbc1 --- /dev/null +++ b/content/zh/docs/project-user-guide/configuration/image-registry.md @@ -0,0 +1,44 @@ +--- +title: "Secrets" +keywords: 'KubeSphere, kubernetes, docker, Secrets' +description: 'Create a Kubernetes Secret' + +linkTitle: "Secrets" +weight: 2130 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{}} + +{{< notice tip >}} +This is a simple tip. +{{}} + +{{< notice info >}} +This is a simple info. +{{}} + +{{< notice warning >}} +This is a simple warning. +{{}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{}} + +{{< tab "third" >}} +this is third tab +{{}} + +{{}} diff --git a/content/zh/docs/project-user-guide/configuration/secrets.md b/content/zh/docs/project-user-guide/configuration/secrets.md new file mode 100644 index 000000000..1e41dbbc1 --- /dev/null +++ b/content/zh/docs/project-user-guide/configuration/secrets.md @@ -0,0 +1,44 @@ +--- +title: "Secrets" +keywords: 'KubeSphere, kubernetes, docker, Secrets' +description: 'Create a Kubernetes Secret' + +linkTitle: "Secrets" +weight: 2130 +--- + +TBD + +{{< notice note >}} +### This is a simple note. +{{}} + +{{< notice tip >}} +This is a simple tip. +{{}} + +{{< notice info >}} +This is a simple info. +{{}} + +{{< notice warning >}} +This is a simple warning. +{{}} + +{{< tabs >}} + +{{< tab "first" >}} +### Why KubeSphere +{{}} + +{{< tab "second" >}} +``` +console.log('test') +``` +{{}} + +{{< tab "third" >}} +this is third tab +{{}} + +{{}} diff --git a/content/zh/docs/project-user-guide/grayscale-release/_index.md b/content/zh/docs/project-user-guide/grayscale-release/_index.md new file mode 100644 index 000000000..2cf101ca5 --- /dev/null +++ b/content/zh/docs/project-user-guide/grayscale-release/_index.md @@ -0,0 +1,7 @@ +--- +linkTitle: "Installation" +weight: 2100 + +_build: + render: false +--- \ No newline at end of file diff --git a/content/zh/docs/project-user-guide/grayscale-release/blue-green-deployment.md b/content/zh/docs/project-user-guide/grayscale-release/blue-green-deployment.md new file mode 100644 index 000000000..d701b1ced --- /dev/null +++ b/content/zh/docs/project-user-guide/grayscale-release/blue-green-deployment.md @@ -0,0 +1,107 @@ +--- +title: "Volume Snapshots" +keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'Volume Snapshots' + +linkTitle: "Volume Snapshots" +weight: 2130 +--- + +This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter. + +```yaml +######################### Kubernetes ######################### +# The default k8s version will be installed +kube_version: v1.16.7 + +# The default etcd version will be installed +etcd_version: v3.2.18 + +# Configure a cron job to backup etcd data, which is running on etcd machines. +# Period of running backup etcd job, the unit is minutes. +# The default value 30 means backup etcd every 30 minutes. +etcd_backup_period: 30 + +# How many backup replicas to keep. 
+# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order.
+keep_backup_number: 5
+
+# The location to store etcd backup files on etcd machines.
+etcd_backup_dir: "/var/backups/kube_etcd"
+
+# Add other registries. (For users who need to accelerate image downloads)
+docker_registry_mirrors:
+  - https://docker.mirrors.ustc.edu.cn
+  - https://registry.docker-cn.com
+  - https://mirror.aliyuncs.com
+
+# Kubernetes network plugin. Calico will be installed by default. Note that Calico and Flannel are recommended, as they are tested and verified by KubeSphere.
+kube_network_plugin: calico
+
+# A valid CIDR range for Kubernetes services,
+# 1. should not overlap with node subnet
+# 2. should not overlap with Kubernetes pod subnet
+kube_service_addresses: 10.233.0.0/18
+
+# A valid CIDR range for Kubernetes pod subnet,
+# 1. should not overlap with node subnet
+# 2. should not overlap with Kubernetes services subnet
+kube_pods_subnet: 10.233.64.0/18
+
+# Kube-proxy proxyMode configuration, either ipvs or iptables
+kube_proxy_mode: ipvs
+
+# Maximum pods allowed to run on every node.
+kubelet_max_pods: 110
+
+# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
+enable_nodelocaldns: true
+
+# Highly available load balancer example config
+# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
+# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare an HA install
+# address: 192.168.0.10 # Loadbalancer apiserver IP address
+# port: 6443 # apiserver port
+
+######################### KubeSphere #########################
+
+# Version of KubeSphere
+ks_version: v2.1.0
+
+# KubeSphere console port, range 30000-32767,
+# but 30180/30280/30380 are reserved for internal service
+console_port: 30880 # KubeSphere console nodeport
+
+# CommonComponent
+mysql_volume_size: 20Gi # MySQL PVC size
+minio_volume_size: 20Gi # Minio PVC size
+etcd_volume_size: 20Gi # etcd PVC size
+openldap_volume_size: 2Gi # openldap PVC size
+redis_volume_size: 2Gi # Redis PVC size
+
+
+# Monitoring
+prometheus_replica: 2 # Prometheus replicas, 2 by default. They are responsible for monitoring different segments of the data source and provide high availability as well.
+prometheus_memory_request: 400Mi # Prometheus memory request
+prometheus_volume_size: 20Gi # Prometheus PVC size
+grafana_enabled: true # enable Grafana or not
+
+
+## Container Engine Acceleration
+## Use NVIDIA GPU acceleration in containers
+# nvidia_accelerator_enabled: true # enable the NVIDIA GPU accelerator or not. It supports hybrid nodes with both GPU and CPU installed.
+# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
+# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
+```
+
+## How to Configure a GPU Node
+
+You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`; then specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented with two spaces.
+
+```yaml
+  nvidia_accelerator_enabled: true
+  nvidia_gpu_nodes:
+  - node2
+```
+
+> Note: The GPU node now only supports Ubuntu 16.04.
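+
+After the installation finishes, you may want to confirm that the GPU node is actually schedulable. A minimal sanity check, assuming the NVIDIA device plugin is running on the node and exposes the standard `nvidia.com/gpu` resource (`node2` matches the example above):
+
+```bash
+# Check that the node advertises the GPU resource.
+kubectl describe node node2 | grep -A 2 'nvidia.com/gpu'
+
+# Alternatively, inspect the node's allocatable resources directly.
+kubectl get node node2 -o jsonpath='{.status.allocatable}'
+```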
diff --git a/content/zh/docs/project-user-guide/grayscale-release/canary-release.md b/content/zh/docs/project-user-guide/grayscale-release/canary-release.md
new file mode 100644
index 000000000..d701b1ced
--- /dev/null
+++ b/content/zh/docs/project-user-guide/grayscale-release/canary-release.md
@@ -0,0 +1,107 @@
+---
+title: "Volume Snapshots"
+keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'Volume Snapshots'
+
+linkTitle: "Volume Snapshots"
+weight: 2130
+---
+
+This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
+
+```yaml
+######################### Kubernetes #########################
+# The default k8s version that will be installed
+kube_version: v1.16.7
+
+# The default etcd version that will be installed
+etcd_version: v3.2.18
+
+# Configure a cron job to back up etcd data, which runs on the etcd machines.
+# Period of running the etcd backup job, in minutes.
+# The default value 30 means backing up etcd every 30 minutes.
+etcd_backup_period: 30
+
+# How many backup replicas to keep.
+# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order.
+keep_backup_number: 5
+
+# The location to store etcd backup files on etcd machines.
+etcd_backup_dir: "/var/backups/kube_etcd"
+
+# Add other registries. (For users who need to accelerate image downloads)
+docker_registry_mirrors:
+  - https://docker.mirrors.ustc.edu.cn
+  - https://registry.docker-cn.com
+  - https://mirror.aliyuncs.com
+
+# Kubernetes network plugin. Calico will be installed by default. Note that Calico and Flannel are recommended, as they are tested and verified by KubeSphere.
+kube_network_plugin: calico
+
+# A valid CIDR range for Kubernetes services,
+# 1. should not overlap with node subnet
+# 2. should not overlap with Kubernetes pod subnet
+kube_service_addresses: 10.233.0.0/18
+
+# A valid CIDR range for Kubernetes pod subnet,
+# 1. should not overlap with node subnet
+# 2. should not overlap with Kubernetes services subnet
+kube_pods_subnet: 10.233.64.0/18
+
+# Kube-proxy proxyMode configuration, either ipvs or iptables
+kube_proxy_mode: ipvs
+
+# Maximum pods allowed to run on every node.
+kubelet_max_pods: 110
+
+# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
+enable_nodelocaldns: true
+
+# Highly available load balancer example config
+# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
+# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare an HA install
+# address: 192.168.0.10 # Loadbalancer apiserver IP address
+# port: 6443 # apiserver port
+
+######################### KubeSphere #########################
+
+# Version of KubeSphere
+ks_version: v2.1.0
+
+# KubeSphere console port, range 30000-32767,
+# but 30180/30280/30380 are reserved for internal service
+console_port: 30880 # KubeSphere console nodeport
+
+# CommonComponent
+mysql_volume_size: 20Gi # MySQL PVC size
+minio_volume_size: 20Gi # Minio PVC size
+etcd_volume_size: 20Gi # etcd PVC size
+openldap_volume_size: 2Gi # openldap PVC size
+redis_volume_size: 2Gi # Redis PVC size
+
+
+# Monitoring
+prometheus_replica: 2 # Prometheus replicas, 2 by default. They are responsible for monitoring different segments of the data source and provide high availability as well.
+prometheus_memory_request: 400Mi # Prometheus memory request
+prometheus_volume_size: 20Gi # Prometheus PVC size
+grafana_enabled: true # enable Grafana or not
+
+
+## Container Engine Acceleration
+## Use NVIDIA GPU acceleration in containers
+# nvidia_accelerator_enabled: true # enable the NVIDIA GPU accelerator or not. It supports hybrid nodes with both GPU and CPU installed.
+# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
+# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
+```
+
+## How to Configure a GPU Node
+
+You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`; then specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented with two spaces.
+
+```yaml
+  nvidia_accelerator_enabled: true
+  nvidia_gpu_nodes:
+  - node2
+```
+
+> Note: The GPU node now only supports Ubuntu 16.04.
diff --git a/content/zh/docs/project-user-guide/grayscale-release/overview.md b/content/zh/docs/project-user-guide/grayscale-release/overview.md
new file mode 100644
index 000000000..b9b129818
--- /dev/null
+++ b/content/zh/docs/project-user-guide/grayscale-release/overview.md
@@ -0,0 +1,10 @@
+---
+title: "Volumes"
+keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'Create Volumes (PVCs)'
+
+linkTitle: "Volumes"
+weight: 2110
+---
+
+TBD
diff --git a/content/zh/docs/project-user-guide/grayscale-release/traffic-mirroring.md b/content/zh/docs/project-user-guide/grayscale-release/traffic-mirroring.md
new file mode 100644
index 000000000..d701b1ced
--- /dev/null
+++ b/content/zh/docs/project-user-guide/grayscale-release/traffic-mirroring.md
@@ -0,0 +1,107 @@
+---
+title: "Volume Snapshots"
+keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'Volume Snapshots'
+
+linkTitle: "Volume Snapshots"
+weight: 2130
+---
+
+This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
+
+```yaml
+######################### Kubernetes #########################
+# The default k8s version that will be installed
+kube_version: v1.16.7
+
+# The default etcd version that will be installed
+etcd_version: v3.2.18
+
+# Configure a cron job to back up etcd data, which runs on the etcd machines.
+# Period of running the etcd backup job, in minutes.
+# The default value 30 means backing up etcd every 30 minutes.
+etcd_backup_period: 30
+
+# How many backup replicas to keep.
+# The default value 5 means to keep the latest 5 backups; older ones will be deleted in order.
+keep_backup_number: 5
+
+# The location to store etcd backup files on etcd machines.
+etcd_backup_dir: "/var/backups/kube_etcd"
+
+# Add other registries. (For users who need to accelerate image downloads)
+docker_registry_mirrors:
+  - https://docker.mirrors.ustc.edu.cn
+  - https://registry.docker-cn.com
+  - https://mirror.aliyuncs.com
+
+# Kubernetes network plugin. Calico will be installed by default. Note that Calico and Flannel are recommended, as they are tested and verified by KubeSphere.
+kube_network_plugin: calico
+
+# A valid CIDR range for Kubernetes services,
+# 1. should not overlap with node subnet
+# 2. should not overlap with Kubernetes pod subnet
+kube_service_addresses: 10.233.0.0/18
+
+# A valid CIDR range for Kubernetes pod subnet,
+# 1. should not overlap with node subnet
+# 2. should not overlap with Kubernetes services subnet
+kube_pods_subnet: 10.233.64.0/18
+
+# Kube-proxy proxyMode configuration, either ipvs or iptables
+kube_proxy_mode: ipvs
+
+# Maximum pods allowed to run on every node.
+kubelet_max_pods: 110
+
+# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information
+enable_nodelocaldns: true
+
+# Highly available load balancer example config
+# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name
+# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare an HA install
+# address: 192.168.0.10 # Loadbalancer apiserver IP address
+# port: 6443 # apiserver port
+
+######################### KubeSphere #########################
+
+# Version of KubeSphere
+ks_version: v2.1.0
+
+# KubeSphere console port, range 30000-32767,
+# but 30180/30280/30380 are reserved for internal service
+console_port: 30880 # KubeSphere console nodeport
+
+# CommonComponent
+mysql_volume_size: 20Gi # MySQL PVC size
+minio_volume_size: 20Gi # Minio PVC size
+etcd_volume_size: 20Gi # etcd PVC size
+openldap_volume_size: 2Gi # openldap PVC size
+redis_volume_size: 2Gi # Redis PVC size
+
+
+# Monitoring
+prometheus_replica: 2 # Prometheus replicas, 2 by default. They are responsible for monitoring different segments of the data source and provide high availability as well.
+prometheus_memory_request: 400Mi # Prometheus memory request
+prometheus_volume_size: 20Gi # Prometheus PVC size
+grafana_enabled: true # enable Grafana or not
+
+
+## Container Engine Acceleration
+## Use NVIDIA GPU acceleration in containers
+# nvidia_accelerator_enabled: true # enable the NVIDIA GPU accelerator or not. It supports hybrid nodes with both GPU and CPU installed.
+# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04
+# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini
+```
+
+## How to Configure a GPU Node
+
+You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`; then specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented with two spaces.
+
+```yaml
+  nvidia_accelerator_enabled: true
+  nvidia_gpu_nodes:
+  - node2
+```
+
+> Note: The GPU node now only supports Ubuntu 16.04.
diff --git a/content/zh/docs/project-user-guide/project-administration/_index.md b/content/zh/docs/project-user-guide/project-administration/_index.md
new file mode 100644
index 000000000..2cf101ca5
--- /dev/null
+++ b/content/zh/docs/project-user-guide/project-administration/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Installation"
+weight: 2100
+
+_build:
+  render: false
+---
\ No newline at end of file
diff --git a/content/zh/docs/project-user-guide/project-administration/project-gateway.md b/content/zh/docs/project-user-guide/project-administration/project-gateway.md
new file mode 100644
index 000000000..d701b1ced
--- /dev/null
+++ b/content/zh/docs/project-user-guide/project-administration/project-gateway.md
@@ -0,0 +1,107 @@
+---
+title: "Volume Snapshots"
+keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
+description: 'Volume Snapshots'
+
+linkTitle: "Volume Snapshots"
+weight: 2130
+---
+
+This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter.
+ +```yaml +######################### Kubernetes ######################### +# The default k8s version will be installed +kube_version: v1.16.7 + +# The default etcd version will be installed +etcd_version: v3.2.18 + +# Configure a cron job to backup etcd data, which is running on etcd machines. +# Period of running backup etcd job, the unit is minutes. +# The default value 30 means backup etcd every 30 minutes. +etcd_backup_period: 30 + +# How many backup replicas to keep. +# The default value5 means to keep latest 5 backups, older ones will be deleted by order. +keep_backup_number: 5 + +# The location to store etcd backups files on etcd machines. +etcd_backup_dir: "/var/backups/kube_etcd" + +# Add other registry. (For users who need to accelerate image download) +docker_registry_mirrors: + - https://docker.mirrors.ustc.edu.cn + - https://registry.docker-cn.com + - https://mirror.aliyuncs.com + +# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere. +kube_network_plugin: calico + +# A valid CIDR range for Kubernetes services, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes pod subnet +kube_service_addresses: 10.233.0.0/18 + +# A valid CIDR range for Kubernetes pod subnet, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes services subnet +kube_pods_subnet: 10.233.64.0/18 + +# Kube-proxy proxyMode configuration, either ipvs, or iptables +kube_proxy_mode: ipvs + +# Maximum pods allowed to run on every node. +kubelet_max_pods: 110 + +# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information +enable_nodelocaldns: true + +# Highly Available loadbalancer example config +# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name +# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install +# address: 192.168.0.10 # Loadbalancer apiserver IP address +# port: 6443 # apiserver port + +######################### KubeSphere ######################### + +# Version of KubeSphere +ks_version: v2.1.0 + +# KubeSphere console port, range 30000-32767, +# but 30180/30280/30380 are reserved for internal service +console_port: 30880 # KubeSphere console nodeport + +#CommonComponent +mysql_volume_size: 20Gi # MySQL PVC size +minio_volume_size: 20Gi # Minio PVC size +etcd_volume_size: 20Gi # etcd PVC size +openldap_volume_size: 2Gi # openldap PVC size +redis_volume_size: 2Gi # Redis PVC size + + +# Monitoring +prometheus_replica: 2 # Prometheus replicas with 2 by default which are responsible for monitoring different segments of data source and provide high availability as well. +prometheus_memory_request: 400Mi # Prometheus request memory +prometheus_volume_size: 20Gi # Prometheus PVC size +grafana_enabled: true # enable grafana or not + + +## Container Engine Acceleration +## Use nvidia gpu acceleration in containers +# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed. +# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. FOr now we only support Ubuntu 16.04 +# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini +``` + +## How to Configure a GPU Node + +You may want to use GPU nodes for special purpose such as machine learning. 
Let's say you have a GPU node called `node2` in `hosts.ini`, then in the file `common.yaml` specify the following configuration. Please be aware the `- node2` has two spaces indent. + +```yaml + nvidia_accelerator_enabled: true + nvidia_gpu_nodes: + - node2 +``` + +> Note: The GPU node now only supports Ubuntu 16.04. diff --git a/content/zh/docs/project-user-guide/project-administration/project-members.md b/content/zh/docs/project-user-guide/project-administration/project-members.md new file mode 100644 index 000000000..caa49c5b2 --- /dev/null +++ b/content/zh/docs/project-user-guide/project-administration/project-members.md @@ -0,0 +1,107 @@ +--- +title: "StorageClass" +keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'StorageClass' + +linkTitle: "Volume Snapshots" +weight: 2130 +--- + +This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter. + +```yaml +######################### Kubernetes ######################### +# The default k8s version will be installed +kube_version: v1.16.7 + +# The default etcd version will be installed +etcd_version: v3.2.18 + +# Configure a cron job to backup etcd data, which is running on etcd machines. +# Period of running backup etcd job, the unit is minutes. +# The default value 30 means backup etcd every 30 minutes. +etcd_backup_period: 30 + +# How many backup replicas to keep. +# The default value5 means to keep latest 5 backups, older ones will be deleted by order. +keep_backup_number: 5 + +# The location to store etcd backups files on etcd machines. +etcd_backup_dir: "/var/backups/kube_etcd" + +# Add other registry. (For users who need to accelerate image download) +docker_registry_mirrors: + - https://docker.mirrors.ustc.edu.cn + - https://registry.docker-cn.com + - https://mirror.aliyuncs.com + +# Kubernetes network plugin, Calico will be installed by default. Note that Calico and flannel are recommended, which are tested and verified by KubeSphere. +kube_network_plugin: calico + +# A valid CIDR range for Kubernetes services, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes pod subnet +kube_service_addresses: 10.233.0.0/18 + +# A valid CIDR range for Kubernetes pod subnet, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes services subnet +kube_pods_subnet: 10.233.64.0/18 + +# Kube-proxy proxyMode configuration, either ipvs, or iptables +kube_proxy_mode: ipvs + +# Maximum pods allowed to run on every node. 
+kubelet_max_pods: 110 + +# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information +enable_nodelocaldns: true + +# Highly Available loadbalancer example config +# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Loadbalancer domain name +# loadbalancer_apiserver: # Loadbalancer apiserver configuration, please uncomment this line when you prepare HA install +# address: 192.168.0.10 # Loadbalancer apiserver IP address +# port: 6443 # apiserver port + +######################### KubeSphere ######################### + +# Version of KubeSphere +ks_version: v2.1.0 + +# KubeSphere console port, range 30000-32767, +# but 30180/30280/30380 are reserved for internal service +console_port: 30880 # KubeSphere console nodeport + +#CommonComponent +mysql_volume_size: 20Gi # MySQL PVC size +minio_volume_size: 20Gi # Minio PVC size +etcd_volume_size: 20Gi # etcd PVC size +openldap_volume_size: 2Gi # openldap PVC size +redis_volume_size: 2Gi # Redis PVC size + + +# Monitoring +prometheus_replica: 2 # Prometheus replicas with 2 by default which are responsible for monitoring different segments of data source and provide high availability as well. +prometheus_memory_request: 400Mi # Prometheus request memory +prometheus_volume_size: 20Gi # Prometheus PVC size +grafana_enabled: true # enable grafana or not + + +## Container Engine Acceleration +## Use nvidia gpu acceleration in containers +# nvidia_accelerator_enabled: true # enable Nvidia GPU accelerator or not. It supports hybrid node with GPU and CPU installed. +# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. FOr now we only support Ubuntu 16.04 +# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini +``` + +## How to Configure a GPU Node + +You may want to use GPU nodes for special purpose such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`, then in the file `common.yaml` specify the following configuration. Please be aware the `- node2` has two spaces indent. + +```yaml + nvidia_accelerator_enabled: true + nvidia_gpu_nodes: + - node2 +``` + +> Note: The GPU node now only supports Ubuntu 16.04. diff --git a/content/zh/docs/project-user-guide/project-administration/project-quota.md b/content/zh/docs/project-user-guide/project-administration/project-quota.md new file mode 100644 index 000000000..b9b129818 --- /dev/null +++ b/content/zh/docs/project-user-guide/project-administration/project-quota.md @@ -0,0 +1,10 @@ +--- +title: "Volumes" +keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'Create Volumes (PVCs)' + +linkTitle: "Volumes" +weight: 2110 +--- + +TBD diff --git a/content/zh/docs/project-user-guide/project-administration/project-roles.md b/content/zh/docs/project-user-guide/project-administration/project-roles.md new file mode 100644 index 000000000..d701b1ced --- /dev/null +++ b/content/zh/docs/project-user-guide/project-administration/project-roles.md @@ -0,0 +1,107 @@ +--- +title: "Volume Snapshots" +keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'Volume Snapshots' + +linkTitle: "Volume Snapshots" +weight: 2130 +--- + +This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can reference the following section to understand each parameter. 
+ +```yaml +######################### Kubernetes ######################### +# The default Kubernetes version that will be installed +kube_version: v1.16.7 + +# The default etcd version that will be installed +etcd_version: v3.2.18 + +# Configure a cron job that runs on the etcd machines to back up etcd data. +# Period of the etcd backup job, in minutes. +# The default value 30 means etcd is backed up every 30 minutes. +etcd_backup_period: 30 + +# How many backup replicas to keep. +# The default value 5 means the latest 5 backups are kept; older ones are deleted in order. +keep_backup_number: 5 + +# The location to store etcd backup files on the etcd machines. +etcd_backup_dir: "/var/backups/kube_etcd" + +# Add other registries. (For users who need to accelerate image downloads) +docker_registry_mirrors: + - https://docker.mirrors.ustc.edu.cn + - https://registry.docker-cn.com + - https://mirror.aliyuncs.com + +# Kubernetes network plugin; Calico is installed by default. Calico and Flannel are recommended, as they are tested and verified by KubeSphere. +kube_network_plugin: calico + +# A valid CIDR range for Kubernetes services, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes pod subnet +kube_service_addresses: 10.233.0.0/18 + +# A valid CIDR range for Kubernetes pod subnet, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes services subnet +kube_pods_subnet: 10.233.64.0/18 + +# Kube-proxy proxyMode configuration, either ipvs or iptables +kube_proxy_mode: ipvs + +# Maximum pods allowed to run on every node. +kubelet_max_pods: 110 + +# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information +enable_nodelocaldns: true + +# Highly available load balancer example config +# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Load balancer domain name +# loadbalancer_apiserver: # Load balancer apiserver configuration; please uncomment this line when you prepare an HA install +# address: 192.168.0.10 # Load balancer apiserver IP address +# port: 6443 # apiserver port + +######################### KubeSphere ######################### + +# Version of KubeSphere +ks_version: v2.1.0 + +# KubeSphere console port, range 30000-32767, +# but 30180/30280/30380 are reserved for internal services +console_port: 30880 # KubeSphere console nodeport + +#CommonComponent +mysql_volume_size: 20Gi # MySQL PVC size +minio_volume_size: 20Gi # Minio PVC size +etcd_volume_size: 20Gi # etcd PVC size +openldap_volume_size: 2Gi # openldap PVC size +redis_volume_size: 2Gi # Redis PVC size + + +# Monitoring +prometheus_replica: 2 # 2 Prometheus replicas by default; they monitor different segments of the data source and provide high availability as well. +prometheus_memory_request: 400Mi # Prometheus memory request +prometheus_volume_size: 20Gi # Prometheus PVC size +grafana_enabled: true # enable Grafana or not + + +## Container Engine Acceleration +## Use NVIDIA GPU acceleration in containers +# nvidia_accelerator_enabled: true # enable the NVIDIA GPU accelerator or not. Hybrid nodes with both GPU and CPU are supported. +# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04 +# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini +``` + +## How to Configure a GPU Node + +You may want to use GPU nodes for special purposes such as machine learning.
Let's say you have a GPU node called `node2` in `hosts.ini`, then specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented with two spaces. + +```yaml + nvidia_accelerator_enabled: true + nvidia_gpu_nodes: + - node2 +``` + +> Note: GPU nodes currently only support Ubuntu 16.04. diff --git a/content/zh/docs/project-user-guide/storage/_index.md b/content/zh/docs/project-user-guide/storage/_index.md new file mode 100644 index 000000000..2cf101ca5 --- /dev/null +++ b/content/zh/docs/project-user-guide/storage/_index.md @@ -0,0 +1,7 @@ +--- +linkTitle: "Installation" +weight: 2100 + +_build: + render: false +--- \ No newline at end of file diff --git a/content/zh/docs/project-user-guide/storage/volume-snapshots.md b/content/zh/docs/project-user-guide/storage/volume-snapshots.md new file mode 100644 index 000000000..d701b1ced --- /dev/null +++ b/content/zh/docs/project-user-guide/storage/volume-snapshots.md @@ -0,0 +1,107 @@ +--- +title: "Volume Snapshots" +keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'Volume Snapshots' + +linkTitle: "Volume Snapshots" +weight: 2130 +--- + +This tutorial explains how to customize KubeSphere configurations in `conf/common.yaml`. You can refer to the following section to understand each parameter. + +```yaml +######################### Kubernetes ######################### +# The default Kubernetes version that will be installed +kube_version: v1.16.7 + +# The default etcd version that will be installed +etcd_version: v3.2.18 + +# Configure a cron job that runs on the etcd machines to back up etcd data. +# Period of the etcd backup job, in minutes. +# The default value 30 means etcd is backed up every 30 minutes. +etcd_backup_period: 30 + +# How many backup replicas to keep. +# The default value 5 means the latest 5 backups are kept; older ones are deleted in order. +keep_backup_number: 5 + +# The location to store etcd backup files on the etcd machines. +etcd_backup_dir: "/var/backups/kube_etcd" + +# Add other registries. (For users who need to accelerate image downloads) +docker_registry_mirrors: + - https://docker.mirrors.ustc.edu.cn + - https://registry.docker-cn.com + - https://mirror.aliyuncs.com + +# Kubernetes network plugin; Calico is installed by default. Calico and Flannel are recommended, as they are tested and verified by KubeSphere. +kube_network_plugin: calico + +# A valid CIDR range for Kubernetes services, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes pod subnet +kube_service_addresses: 10.233.0.0/18 + +# A valid CIDR range for Kubernetes pod subnet, +# 1. should not overlap with node subnet +# 2. should not overlap with Kubernetes services subnet +kube_pods_subnet: 10.233.64.0/18 + +# Kube-proxy proxyMode configuration, either ipvs or iptables +kube_proxy_mode: ipvs + +# Maximum pods allowed to run on every node.
+kubelet_max_pods: 110 + +# Enable nodelocal dns cache, see https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md#nodelocal-dns-cache for further information +enable_nodelocaldns: true + +# Highly available load balancer example config +# apiserver_loadbalancer_domain_name: "lb.kubesphere.local" # Load balancer domain name +# loadbalancer_apiserver: # Load balancer apiserver configuration; please uncomment this line when you prepare an HA install +# address: 192.168.0.10 # Load balancer apiserver IP address +# port: 6443 # apiserver port + +######################### KubeSphere ######################### + +# Version of KubeSphere +ks_version: v2.1.0 + +# KubeSphere console port, range 30000-32767, +# but 30180/30280/30380 are reserved for internal services +console_port: 30880 # KubeSphere console nodeport + +#CommonComponent +mysql_volume_size: 20Gi # MySQL PVC size +minio_volume_size: 20Gi # Minio PVC size +etcd_volume_size: 20Gi # etcd PVC size +openldap_volume_size: 2Gi # openldap PVC size +redis_volume_size: 2Gi # Redis PVC size + + +# Monitoring +prometheus_replica: 2 # 2 Prometheus replicas by default; they monitor different segments of the data source and provide high availability as well. +prometheus_memory_request: 400Mi # Prometheus memory request +prometheus_volume_size: 20Gi # Prometheus PVC size +grafana_enabled: true # enable Grafana or not + + +## Container Engine Acceleration +## Use NVIDIA GPU acceleration in containers +# nvidia_accelerator_enabled: true # enable the NVIDIA GPU accelerator or not. Hybrid nodes with both GPU and CPU are supported. +# nvidia_gpu_nodes: # The GPU nodes specified in hosts.ini. For now we only support Ubuntu 16.04 +# - kube-gpu-001 # The host name of the GPU node specified in hosts.ini +``` + +## How to Configure a GPU Node + +You may want to use GPU nodes for special purposes such as machine learning. Let's say you have a GPU node called `node2` in `hosts.ini`, then specify the following configuration in the file `common.yaml`. Please be aware that `- node2` is indented with two spaces. + +```yaml + nvidia_accelerator_enabled: true + nvidia_gpu_nodes: + - node2 +``` + +> Note: GPU nodes currently only support Ubuntu 16.04. diff --git a/content/zh/docs/project-user-guide/storage/volumes.md b/content/zh/docs/project-user-guide/storage/volumes.md new file mode 100644 index 000000000..b9b129818 --- /dev/null +++ b/content/zh/docs/project-user-guide/storage/volumes.md @@ -0,0 +1,10 @@ +--- +title: "Volumes" +keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus' +description: 'Create Volumes (PVCs)' + +linkTitle: "Volumes" +weight: 2110 +--- + +TBD diff --git a/content/zh/docs/quick-start/_index.md b/content/zh/docs/quick-start/_index.md new file mode 100644 index 000000000..7d0a17eee --- /dev/null +++ b/content/zh/docs/quick-start/_index.md @@ -0,0 +1,22 @@ +--- +title: "Quick Start" +description: "Help you better understand KubeSphere with detailed graphics and content" +layout: "single" + +linkTitle: "Quick Start" + +weight: 1500 + +icon: "/images/docs/docs.svg" + +--- + +## Installing KubeSphere and Kubernetes on Linux + +In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. KubeKey can help you quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you easily scale the cluster and install pluggable components on an existing architecture.
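+ +As a quick illustration of that flow, the sketch below provisions an all-in-one cluster with KubeKey; the download URL and the version flags are illustrative assumptions based on typical KubeKey usage, not values prescribed by this chapter. + +```bash +# Fetch the KubeKey binary (URL and version are illustrative assumptions) +curl -sfL https://get-kk.kubesphere.io | sh - +# Provision Kubernetes plus KubeSphere in one step; the versions are examples only +./kk create cluster --with-kubernetes v1.17.9 --with-kubesphere v3.0.0 +```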
+ +## Most Popular Pages + +Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first. + +{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} diff --git a/content/zh/docs/quick-start/all-in-one-on-linux.md b/content/zh/docs/quick-start/all-in-one-on-linux.md new file mode 100644 index 000000000..4237501c5 --- /dev/null +++ b/content/zh/docs/quick-start/all-in-one-on-linux.md @@ -0,0 +1,8 @@ +--- +title: "All-in-one on Linux" +keywords: 'kubesphere, kubernetes, docker, multi-tenant' +description: 'All-in-one on Linux' + +linkTitle: "All-in-one on Linux" +weight: 3010 +--- diff --git a/content/zh/docs/quick-start/composing-an-app.md b/content/zh/docs/quick-start/composing-an-app.md new file mode 100644 index 000000000..d7705622f --- /dev/null +++ b/content/zh/docs/quick-start/composing-an-app.md @@ -0,0 +1,8 @@ +--- +title: "Compose and deploy a WordPress App" +keywords: 'kubesphere, kubernetes, docker, multi-tenant' +description: 'Compose and deploy a WordPress App' + +linkTitle: "Compose and deploy a WordPress App" +weight: 3050 +--- diff --git a/content/zh/docs/quick-start/create-workspace-and-project.md b/content/zh/docs/quick-start/create-workspace-and-project.md new file mode 100644 index 000000000..954f8648d --- /dev/null +++ b/content/zh/docs/quick-start/create-workspace-and-project.md @@ -0,0 +1,8 @@ +--- +title: "Create Workspace, Project, Account, Role" +keywords: 'kubesphere, kubernetes, docker, multi-tenant' +description: 'Create Workspace, Project, Account, and Role' + +linkTitle: "Create Workspace, Project, Account, Role" +weight: 3030 +--- diff --git a/content/zh/docs/quick-start/deploy-bookinfo-to-k8s.md b/content/zh/docs/quick-start/deploy-bookinfo-to-k8s.md new file mode 100644 index 000000000..032dac164 --- /dev/null +++ b/content/zh/docs/quick-start/deploy-bookinfo-to-k8s.md @@ -0,0 +1,8 @@ +--- +title: "Deploy a Bookinfo App" +keywords: 'kubesphere, kubernetes, docker, multi-tenant' +description: 'Deploy a Bookinfo App' + +linkTitle: "Deploy a Bookinfo App" +weight: 3040 +--- diff --git a/content/zh/docs/quick-start/enable-pluggable-compoents.md b/content/zh/docs/quick-start/enable-pluggable-compoents.md new file mode 100644 index 000000000..390d6dd9e --- /dev/null +++ b/content/zh/docs/quick-start/enable-pluggable-compoents.md @@ -0,0 +1,8 @@ +--- +title: "Enable Pluggable Components" +keywords: 'kubesphere, kubernetes, docker, multi-tenant' +description: 'Enable Pluggable Components' + +linkTitle: "Enable Pluggable Components" +weight: 3060 +--- diff --git a/content/zh/docs/quick-start/minimal-kubesphere-on-k8s.md b/content/zh/docs/quick-start/minimal-kubesphere-on-k8s.md new file mode 100644 index 000000000..36fd2ce80 --- /dev/null +++ b/content/zh/docs/quick-start/minimal-kubesphere-on-k8s.md @@ -0,0 +1,8 @@ +--- +title: "Minimal KubeSphere on Kubernetes" +keywords: 'kubesphere, kubernetes, docker, multi-tenant' +description: 'Install a Minimal KubeSphere on Kubernetes' + +linkTitle: "Minimal KubeSphere on Kubernetes" +weight: 3020 +--- diff --git a/content/zh/docs/release/_index.md b/content/zh/docs/release/_index.md new file mode 100644 index 000000000..ee376ec42 --- /dev/null +++ b/content/zh/docs/release/_index.md @@ -0,0 +1,22 @@ +--- +title: "Release Notes" +description: "Help you better understand KubeSphere with detailed graphics and content" +layout: "single" +
+linkTitle: "Release Notes" + +weight: 1 + +icon: "/images/docs/docs.svg" + +--- + +## Installing KubeSphere and Kubernetes on Linux + +In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. Kubekey can help you to quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you to easily scale the cluster and install pluggable components on existing architecture. + +## Most Popular Pages + +Below you will find some of the most common and helpful pages from this chapter. We highly recommend you to review them at first. + +{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} diff --git a/content/zh/docs/release/release-v200.md b/content/zh/docs/release/release-v200.md new file mode 100644 index 000000000..ba048fe22 --- /dev/null +++ b/content/zh/docs/release/release-v200.md @@ -0,0 +1,92 @@ +--- +title: "Release Notes For 2.0.0" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "KubeSphere Release Notes For 2.0.0" + +linkTitle: "Release Notes - 2.0.0" +weight: 500 +--- + +KubeSphere 2.0.0 was released on **May 18th, 2019**. + +## What's New in 2.0.0 + +### Component Upgrades + +- Support Kubernetes [Kubernetes 1.13.5](https://github.com/kubernetes/kubernetes/releases/tag/v1.13.5) +- Integrate [QingCloud Cloud Controller](https://github.com/yunify/qingcloud-cloud-controller-manager). After installing load balancer, QingCloud load balancer can be created through KubeSphere console and the backend workload is bound automatically.  +- Integrate [QingStor CSI v0.3.0](https://github.com/yunify/qingstor-csi/tree/v0.3.0) storage plugin and support physical NeonSAN storage system. Support SAN storage service with high availability and high performance. +- Integrate [QingCloud CSI v0.2.1](https://github.com/yunify/qingcloud-csi/tree/v0.2.1) storage plugin and support many types of volume to create QingCloud block services. +- Harbor is upgraded to 1.7.5. +- GitLab is upgraded to 11.8.1. +- Prometheus is upgraded to 2.5.0. + +### Microservice Governance + +- Integrate Istio 1.1.1 and support visualization of service mesh management. +- Enable the access to the project's external websites and the application traffic governance. +- Provide built-in sample microservice [Bookinfo Application](https://istio.io/docs/examples/bookinfo/). +- Support traffic governance. +- Support traffic images. +- Provide load balancing of microservice based on Istio. +- Support canary release. +- Enable blue-green deployment. +- Enable circuit breaking. +- Enable microservice tracing. + +### DevOps (CI/CD Pipeline) + +- CI/CD pipeline provides email notification and supports the email notification during construction. +- Enhance CI/CD graphical editing pipelines, and more pipelines for common plugins and execution conditions. +- Provide source code vulnerability scanning based on SonarQube 7.4. +- Support [Source to Image](https://github.com/kubesphere/s2ioperator) feature. + +### Monitoring + +- Provide Kubernetes component independent monitoring page including etcd, kube-apiserver and kube-scheduler. +- Optimize several monitoring algorithm. +- Optimize monitoring resources. Reduce Prometheus storage and the disk usage up to 80%. + +### Logging + +- Provide unified log console in terms of tenant. 
+- Support exact and fuzzy log search. +- Support real-time and historical logs. +- Support combined log queries based on namespace, workload, Pod, container, keywords and time range. +- Provide a detail page for individual log streams, where Pods and containers can be switched. +- [FluentBit Operator](https://github.com/kubesphere/fluentbit-operator) supports log collection settings: ElasticSearch, Kafka and Fluentd can be added, activated or turned off as log collectors. Before logs are sent to the collectors, you can configure filters to keep only the logs you need. + +### Alerting and Notifications + +- Email notifications are available for cluster nodes and workload resources. +- Notification rules can combine multiple monitoring resources; different warning levels, detection cycles, push times and thresholds can be configured. +- Notification times and notifiers can be set. +- Enable notification repetition rules for different levels. + +### Security Enhancement + +- Fix the runc container escape vulnerability ([runc container breakout](https://log.qingcloud.com/archives/5127)) +- Fix the Alpine Docker image vulnerability ([Alpine container shadow breakout](https://www.alpinelinux.org/posts/Docker-image-vulnerability-CVE-2019-5021.html)) +- Support single and multi-login configuration items. +- Require a verification code after multiple invalid logins. +- Enhance the password policy and prevent weak passwords. +- Other security enhancements. + +### Interface Optimization + +- Optimize the console user experience in many places, such as switching between DevOps projects and other projects. +- Polish many Chinese and English pages. + +### Others + +- Support Etcd backup and recovery. +- Support regular cleanup of Docker images. + +## Bug Fixes + +- Fix delayed updates on resource and deletion pages. +- Fix dirty data left behind after deleting HPA workloads. +- Fix incorrect Job status display. +- Correct resource quota, Pod usage and storage metrics algorithms. +- Adjust CPU usage percentages. +- Many more bug fixes diff --git a/content/zh/docs/release/release-v201.md b/content/zh/docs/release/release-v201.md new file mode 100644 index 000000000..2407dce8a --- /dev/null +++ b/content/zh/docs/release/release-v201.md @@ -0,0 +1,19 @@ +--- +title: "Release Notes For 2.0.1" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "KubeSphere Release Notes For 2.0.1" + +linkTitle: "Release Notes - 2.0.1" +weight: 400 +--- + +KubeSphere 2.0.1 was released on **June 9th, 2019**. + +## Bug Fixes + +- Fix the issue that the CI/CD pipeline cannot correctly recognize special characters in code branches. +- Fix the issue that CI/CD pipeline logs cannot be viewed. +- Fix missing log output caused by abnormal index document fragmentation during log queries. +- Fix abnormal prompts when searching for logs that do not exist. +- Fix the line-overlap problem on the traffic governance topology and the invalid image policy application. +- Many more bug fixes diff --git a/content/zh/docs/release/release-v202.md b/content/zh/docs/release/release-v202.md new file mode 100644 index 000000000..3c8fec965 --- /dev/null +++ b/content/zh/docs/release/release-v202.md @@ -0,0 +1,40 @@ +--- +title: "Release Notes For 2.0.2" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "KubeSphere Release Notes For 2.0.2" + +linkTitle: "Release Notes - 2.0.2" +weight: 300 +--- + +KubeSphere 2.0.2 was released on July 9, 2019, which fixes known bugs and enhances existing features.
If you have installed version 1.0.x, 2.0.0 or 2.0.1, please download the KubeSphere Installer v2.0.2 to upgrade. + +## What's New in 2.0.2 + +### Enhanced Features + +- [API docs](/api-reference/api-docs/) are available on the official website. +- Block brute-force attacks. +- Standardize the maximum length of resource names. +- Upgrade the project gateway (Ingress Controller) to 0.24.1, supporting Ingress grayscale release. + +## List of Fixed Bugs + +- Fix the issue that the traffic topology displays resources outside the project. +- Fix the issue that the traffic topology shows extra service components under specific circumstances. +- Fix the execution issue when "Source to Image" reconstructs images under specific circumstances. +- Fix the page display problem when a "Source to Image" job fails. +- Fix the log viewing problem when Pod status is abnormal. +- Fix the issue that the disk monitor cannot detect some types of volume mounts, such as LVM volumes. +- Fix the problem of detecting deployed applications. +- Fix incorrect application component status. +- Fix host node count calculation errors. +- Fix input data loss caused by switching reference configuration buttons when adding environment variables. +- Fix the issue that the Operator role cannot rerun jobs. +- Fix the UUID initialization issue in IPv4 environments. +- Fix the issue that the log detail page cannot be scrolled down to check past logs. +- Fix wrong APIServer addresses in KubeConfig files. +- Fix the issue that a DevOps project's name cannot be changed. +- Fix the issue that a query time cannot be specified for container logs. +- Fix the problem of saving repository secrets under certain circumstances. +- Fix the issue that the application's service component creation page does not show image registry secrets. diff --git a/content/zh/docs/release/release-v210.md b/content/zh/docs/release/release-v210.md new file mode 100644 index 000000000..ae876bee6 --- /dev/null +++ b/content/zh/docs/release/release-v210.md @@ -0,0 +1,155 @@ +--- +title: "Release Notes For 2.1.0" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "KubeSphere Release Notes For 2.1.0" + +linkTitle: "Release Notes - 2.1.0" +weight: 200 +--- + +KubeSphere 2.1.0 was released on Nov 11th, 2019, which fixes known bugs, adds some new features and brings some enhancements. If you have installed a 2.0.x version, please upgrade and enjoy the better user experience of v2.1.0. + +## Installer Enhancement + +- Decouple some components, making DevOps, service mesh, app store, logging, alerting and notification optional and pluggable +- Add Grafana (v5.2.4) as an optional component +- Upgrade Kubernetes to 1.15.5.
It is also compatible with 1.14.x and 1.13.x +- Upgrade [OpenPitrix](https://openpitrix.io/) to v0.4.5 +- Upgrade the log forwarder Fluent Bit to v1.3.2 +- Upgrade Jenkins to v2.176.2 +- Upgrade Istio to 1.3.3 +- Optimize the high availability of core components + +## App Store + +### Features + +Support uploading, testing, reviewing, deploying, publishing, classifying, upgrading and deleting apps, and provide nine built-in applications + +### Upgrade & Enhancement + +- The application repository configuration is moved from global to each workspace +- Support adding application repositories to share applications within a workspace + +## Storage + +### Features + +- Support Local Volume with dynamic provisioning +- Provide the real-time monitoring feature for QingCloud block storage + +### Upgrade & Enhancement + +QingCloud CSI is adapted to CSI 1.1.0 and supports upgrade, topology, and creating or deleting a snapshot. It also supports creating a PVC based on a snapshot + +### BUG Fixes + +Fix the StorageClass list display problem + +## Observability + +### Features + +- Support collecting file logs from disk, for Pods that preserve their logs as files on disk +- Support integrating with external ElasticSearch 7.x +- Ability to search logs containing Chinese words +- Add initContainer log display +- Ability to export logs +- Support for canceling the notification from alerting + +### UPGRADE & ENHANCEMENT + +- Improve the performance of log search +- Refine the hints when the logging service is abnormal +- Optimize the information when the monitoring metrics request is abnormal +- Support Pod anti-affinity rules for Prometheus + +### BUG FIXES + +- Fix mistaken highlights in log search results +- Fix log search not matching phrases correctly +- Fix the issue that logs could not be retrieved for a deleted workload when searching by workload name +- Fix the issue where the results were truncated when the log is highlighted +- Fix some metrics exceptions: node `inode`, maximum pod tolerance +- Fix the issue with an incorrect number of alerting targets +- Fix the filter failure problem of multi-metric monitoring +- Fix the problem of no logging and monitoring information on tainted nodes (adjust the toleration attributes of node-exporter and fluent-bit to deploy on all nodes by default, ignoring taints) + +## DevOps + +### Features + +- Add support for branch switching and git log export in S2I +- Add B2I: the ability to build binary/WAR/JAR packages and release them to Kubernetes +- Support dependency caches for the pipeline, S2I, and B2I +- Support a delete action for Kubernetes resources in the `kubernetesDeploy` step +- Multi-branch pipelines can trigger other pipelines when a branch is created or deleted + +### Upgrades & Enhancement + +- Support BitBucket in the pipeline +- Support Cron script validation in the pipeline +- Support Jenkinsfile syntax validation +- Support customizing the SonarQube link +- Support event-triggered builds in the pipeline +- Optimize the agent node selection in the pipeline +- Accelerate pipeline startup +- Use a dynamic volume as the agent's working directory in the pipeline; also contributed to Jenkins [#598](https://github.com/jenkinsci/kubernetes-plugin/pull/598) +- Optimize the Jenkins kubernetesDeploy plugin, adding more resources and versions (v1, apps/v1, extensions/v1beta1, apps/v1beta2, apps/v1beta1, autoscaling/v1, autoscaling/v2beta1, autoscaling/v2beta2, networking.k8s.io/v1, batch/v1beta1, batch/v2alpha1); also contributed to Jenkins
[#614](https://github.com/jenkinsci/kubernetes-plugin/pull/614) +- Add support for PV, PVC and Network Policy in the deploy step of the pipeline; also contributed to Jenkins [#87](https://github.com/jenkinsci/kubernetes-cd-plugin/pull/87), [#88](https://github.com/jenkinsci/kubernetes-cd-plugin/pull/88) + +### Bug Fixes + +- Fix the 400 Bad Request issue in the GitHub webhook +- Incompatible change: the DevOps webhook's URL prefix is changed from `/webhook/xxx` to `/devops_webhook/xxx` + +## Authentication and Authorization + +### Features + +Support syncing and authenticating with AD accounts + +### Upgrades & Enhancement + +- Reduce the LDAP component's RAM consumption +- Add protection against brute-force attacks + +### Bug Fixes + +- Fix the LDAP connection pool leak +- Fix the issue where users could not be added to a workspace +- Fix sensitive data transmission leaks + +## User Experience + +### Features + +Provide a wizard for managing projects (namespaces) that are not assigned to any workspace + +### Upgrades & Enhancement + +- Support bash-completion in web kubectl +- Optimize the host information display +- Add a connection test for the email server +- Add prompts on resource list pages +- Optimize the project overview page and project basic information +- Simplify the service creation process +- Simplify the workload creation process +- Support real-time status updates in the resource list +- Optimize YAML editing +- Support image search and image information display +- Add the pod list to the workload page +- Update the web terminal theme +- Support container switching in the container terminal +- Optimize Pod information display, and add Pod scheduling information +- More detailed workload status display + +### Bug Fixes + +- Fix the issue where the project's default resource request is displayed incorrectly +- Optimize the web terminal design, making it much easier to find +- Fix the Pod status update delay +- Fix the issue where a host could not be searched based on roles +- Fix the DevOps project count error on the workspace detail page +- Fix the issue that workspace list pages did not paginate properly +- Fix inconsistent result ordering after queries on the workspace list page diff --git a/content/zh/docs/release/release-v211.md b/content/zh/docs/release/release-v211.md new file mode 100644 index 000000000..d8acba698 --- /dev/null +++ b/content/zh/docs/release/release-v211.md @@ -0,0 +1,122 @@ +--- +title: "Release Notes For 2.1.1" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "KubeSphere Release Notes For 2.1.1" + +linkTitle: "Release Notes - 2.1.1" +weight: 100 +--- + +KubeSphere 2.1.1 was released on Feb 23rd, 2020, fixing known bugs and bringing some enhancements. If you have installed a 2.0.x or 2.1.0 version, make sure to read the user manual carefully about how to upgrade before doing so, and feel free to raise any questions on [GitHub](https://github.com/kubesphere/kubesphere/issues).
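+ +Before upgrading, it helps to confirm which KubeSphere version is currently running. A minimal check, assuming the default `kubesphere-system` namespace and the standard `ks-console` deployment: + +```bash +# Print the ks-console image tag, which reflects the installed KubeSphere version +kubectl -n kubesphere-system get deploy ks-console -o jsonpath='{.spec.template.spec.containers[0].image}' +```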
+ +## What's New in 2.1.1 + +## Installer + +### UPGRADE & ENHANCEMENT + +- Support Kubernetes v1.14.x, v1.15.x, v1.16.x and v1.17.x; also solve the Kubernetes API compatibility issue #[1829](https://github.com/kubesphere/kubesphere/issues/1829) +- Simplify the steps of installation on existing Kubernetes clusters: remove the step of specifying the cluster's CA certificate, and specifying the etcd certificate is no longer mandatory if users don't need etcd monitoring metrics +- Back up the configuration of CoreDNS before upgrading + +### BUG FIXES + +- Fix the issue of importing apps to the App Store + +## App Store + +### UPGRADE & ENHANCEMENT + +- Upgrade OpenPitrix to v0.4.8 + +### BUG FIXES + +- Fix the latest version display issue for published apps #[1130](https://github.com/kubesphere/kubesphere/issues/1130) +- Fix the column name display issue on the app approval list page #[1498](https://github.com/kubesphere/kubesphere/issues/1498) +- Fix the search issue by app name/workspace #[1497](https://github.com/kubesphere/kubesphere/issues/1497) +- Fix the failure to create an app with the same name as a previously deleted app #[1821](https://github.com/kubesphere/kubesphere/pull/1821) #[1564](https://github.com/kubesphere/kubesphere/issues/1564) +- Fix the failure to deploy apps in some cases #[1619](https://github.com/kubesphere/kubesphere/issues/1619) #[1730](https://github.com/kubesphere/kubesphere/issues/1730) + +## Storage + +### UPGRADE & ENHANCEMENT + +- Support CSI plugins of Alibaba Cloud and Tencent Cloud + +### BUG FIXES + +- Fix the paging issue on the storage class list page #[1583](https://github.com/kubesphere/kubesphere/issues/1583) #[1591](https://github.com/kubesphere/kubesphere/issues/1591) +- Fix the issue that the value of the imageFeatures parameter displays '2' when creating a Ceph storage class #[1593](https://github.com/kubesphere/kubesphere/issues/1593) +- Fix the issue that the search filter fails to work on the persistent volume list page #[1582](https://github.com/kubesphere/kubesphere/issues/1582) +- Fix the display issue for abnormal persistent volumes #[1581](https://github.com/kubesphere/kubesphere/issues/1581) +- Fix the display issue for persistent volumes whose associated storage class has been deleted #[1580](https://github.com/kubesphere/kubesphere/issues/1580) #[1579](https://github.com/kubesphere/kubesphere/issues/1579) + +## Observability + +### UPGRADE & ENHANCEMENT + +- Upgrade Fluent Bit to v1.3.5 #[1505](https://github.com/kubesphere/kubesphere/issues/1505) +- Upgrade Kube-state-metrics to v1.7.2 +- Upgrade Elastic Curator to v5.7.6 #[517](https://github.com/kubesphere/ks-installer/issues/517) +- Fluent Bit Operator supports dynamically detecting the location of the soft-linked Docker log folder on host machines +- Fluent Bit Operator supports managing the Fluent Bit instance through declarative configuration by updating the Operator's ConfigMap +- Fix the sort order issue on the alert list page #[1397](https://github.com/kubesphere/kubesphere/issues/1397) +- Adjust the container memory usage metric to 'container_memory_working_set_bytes' + +### BUG FIXES + +- Fix the lag issue of container logs #[1650](https://github.com/kubesphere/kubesphere/issues/1650) +- Fix the display issue that some workload replicas have no logs on the container log detail page #[1505](https://github.com/kubesphere/kubesphere/issues/1505) +- Fix the compatibility issue of Curator to support ElasticSearch 7.x #[517](https://github.com/kubesphere/ks-installer/issues/517) +- Fix the
display issue of the container log page during container initialization #[1518](https://github.com/kubesphere/kubesphere/issues/1518) +- Fix the blank node issue when nodes are resized #[1464](https://github.com/kubesphere/kubesphere/issues/1464) +- Fix the display issue of component status in the monitoring center, to keep it up to date #[1858](https://github.com/kubesphere/kubesphere/issues/1858) +- Fix the wrong number of monitoring targets on the alert detail page #[61](https://github.com/kubesphere/console/issues/61) + +## DevOps + +### BUG FIXES + +- Fix the issue that the UNSTABLE state is not visible in the pipeline #[1428](https://github.com/kubesphere/kubesphere/issues/1428) +- Fix the format issue of KubeConfig in the DevOps pipeline #[1529](https://github.com/kubesphere/kubesphere/issues/1529) +- Fix the image repo compatibility issue in B2I, to support Alibaba Cloud image repos #[1500](https://github.com/kubesphere/kubesphere/issues/1500) +- Fix the paging issue on the DevOps pipeline branch list page #[1517](https://github.com/kubesphere/kubesphere/issues/1517) +- Fix the failure to display the pipeline configuration after modifying it #[1522](https://github.com/kubesphere/kubesphere/issues/1522) +- Fix the failure to download generated artifacts in S2I jobs #[1547](https://github.com/kubesphere/kubesphere/issues/1547) +- Fix the issue of [data loss occasionally after restarting Jenkins](https://kubesphere.com.cn/forum/d/283-jenkins) +- Fix the issue that only 'PR-HEAD' is fetched when binding a pipeline with GitHub #[1780](https://github.com/kubesphere/kubesphere/issues/1780) +- Fix the 414 error when updating DevOps credentials #[1824](https://github.com/kubesphere/kubesphere/issues/1824) +- Fix the wrong s2ib/s2ir naming issue from B2I/S2I #[1840](https://github.com/kubesphere/kubesphere/issues/1840) +- Fix the failure to drag and drop tasks on the pipeline editing page #[62](https://github.com/kubesphere/console/issues/62) + +## Authentication and Authorization + +### UPGRADE & ENHANCEMENT + +- Generate client certificates through CSR #[1449](https://github.com/kubesphere/kubesphere/issues/1449) + +### BUG FIXES + +- Fix the content loss issue in the KubeConfig token file #[1529](https://github.com/kubesphere/kubesphere/issues/1529) +- Fix the issue that users with different permissions fail to log in on the same browser #[1600](https://github.com/kubesphere/kubesphere/issues/1600) + +## User Experience + +### UPGRADE & ENHANCEMENT + +- Support editing SecurityContext on the workload editing page #[1530](https://github.com/kubesphere/kubesphere/issues/1530) +- Support configuring init containers on the workload editing page #[1488](https://github.com/kubesphere/kubesphere/issues/1488) +- Add support for startupProbe, and add the periodSeconds, successThreshold and failureThreshold parameters to the probe editing page #[1487](https://github.com/kubesphere/kubesphere/issues/1487) +- Optimize the status update display of Pods #[1187](https://github.com/kubesphere/kubesphere/issues/1187) +- Optimize error message reporting on the console #[43](https://github.com/kubesphere/console/issues/43) + +### BUG FIXES + +- Fix the status display issue for Pods that are not in the running state #[1187](https://github.com/kubesphere/kubesphere/issues/1187) +- Fix the issue that the added annotation can't be deleted when creating a QingCloud LoadBalancer service #[1395](https://github.com/kubesphere/kubesphere/issues/1395) +- Fix the display issue when selecting a workload on the service editing page
#[1596](https://github.com/kubesphere/kubesphere/issues/1596) +- Fix the failure to edit the configuration file when editing a 'Job' #[1521](https://github.com/kubesphere/kubesphere/issues/1521) +- Fix the failure to update the service of a 'StatefulSet' #[1513](https://github.com/kubesphere/kubesphere/issues/1513) +- Fix the image search issue for QingCloud and Alibaba Cloud image repos #[1627](https://github.com/kubesphere/kubesphere/issues/1627) +- Fix the resource ordering issue for resources with the same creation timestamp #[1750](https://github.com/kubesphere/kubesphere/pull/1750) +- Fix the failure to edit the configuration file when editing a service #[41](https://github.com/kubesphere/console/issues/41) diff --git a/content/zh/docs/release/release-v300.md b/content/zh/docs/release/release-v300.md new file mode 100644 index 000000000..98c787c91 --- /dev/null +++ b/content/zh/docs/release/release-v300.md @@ -0,0 +1,10 @@ +--- +title: "Release Notes For 3.0.0" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "KubeSphere Release Notes For 3.0.0" + +linkTitle: "Release Notes - 3.0.0" +weight: 50 +--- + +TBD diff --git a/content/zh/docs/upgrade/_index.md b/content/zh/docs/upgrade/_index.md new file mode 100644 index 000000000..6ffe04694 --- /dev/null +++ b/content/zh/docs/upgrade/_index.md @@ -0,0 +1,22 @@ +--- +title: "Upgrade" +description: "Upgrade KubeSphere and Kubernetes" +layout: "single" + +linkTitle: "Upgrade" + +weight: 4000 + +icon: "/images/docs/docs.svg" + +--- + +## Installing KubeSphere and Kubernetes on Linux + +In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. KubeKey can help you quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you easily scale the cluster and install pluggable components on an existing architecture. + +## Most Popular Pages + +Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first. + +{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} diff --git a/content/zh/docs/upgrade/release-v210.md b/content/zh/docs/upgrade/release-v210.md new file mode 100644 index 000000000..5df5e5d44 --- /dev/null +++ b/content/zh/docs/upgrade/release-v210.md @@ -0,0 +1,155 @@ +--- +title: "Upgrade KubeSphere Only" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "Upgrade KubeSphere without Kubernetes" + +linkTitle: "Upgrade KubeSphere Only" +weight: 200 +--- + +KubeSphere 2.1.0 was released on Nov 11th, 2019, which fixes known bugs, adds some new features and brings some enhancements. If you have installed a 2.0.x version, please upgrade and enjoy the better user experience of v2.1.0. + +## Installer Enhancement + +- Decouple some components, making DevOps, service mesh, app store, logging, alerting and notification optional and pluggable +- Add Grafana (v5.2.4) as an optional component +- Upgrade Kubernetes to 1.15.5.
It is also compatible with 1.14.x and 1.13.x +- Upgrade [OpenPitrix](https://openpitrix.io/) to v0.4.5 +- Upgrade the log forwarder Fluent Bit to v1.3.2 +- Upgrade Jenkins to v2.176.2 +- Upgrade Istio to 1.3.3 +- Optimize the high availability of core components + +## App Store + +### Features + +Support uploading, testing, reviewing, deploying, publishing, classifying, upgrading and deleting apps, and provide nine built-in applications + +### Upgrade & Enhancement + +- The application repository configuration is moved from global to each workspace +- Support adding application repositories to share applications within a workspace + +## Storage + +### Features + +- Support Local Volume with dynamic provisioning +- Provide the real-time monitoring feature for QingCloud block storage + +### Upgrade & Enhancement + +QingCloud CSI is adapted to CSI 1.1.0 and supports upgrade, topology, and creating or deleting a snapshot. It also supports creating a PVC based on a snapshot + +### BUG Fixes + +Fix the StorageClass list display problem + +## Observability + +### Features + +- Support collecting file logs from disk, for Pods that preserve their logs as files on disk +- Support integrating with external ElasticSearch 7.x +- Ability to search logs containing Chinese words +- Add initContainer log display +- Ability to export logs +- Support for canceling the notification from alerting + +### UPGRADE & ENHANCEMENT + +- Improve the performance of log search +- Refine the hints when the logging service is abnormal +- Optimize the information when the monitoring metrics request is abnormal +- Support Pod anti-affinity rules for Prometheus + +### BUG FIXES + +- Fix mistaken highlights in log search results +- Fix log search not matching phrases correctly +- Fix the issue that logs could not be retrieved for a deleted workload when searching by workload name +- Fix the issue where the results were truncated when the log is highlighted +- Fix some metrics exceptions: node `inode`, maximum pod tolerance +- Fix the issue with an incorrect number of alerting targets +- Fix the filter failure problem of multi-metric monitoring +- Fix the problem of no logging and monitoring information on tainted nodes (adjust the toleration attributes of node-exporter and fluent-bit to deploy on all nodes by default, ignoring taints) + +## DevOps + +### Features + +- Add support for branch switching and git log export in S2I +- Add B2I: the ability to build binary/WAR/JAR packages and release them to Kubernetes +- Support dependency caches for the pipeline, S2I, and B2I +- Support a delete action for Kubernetes resources in the `kubernetesDeploy` step +- Multi-branch pipelines can trigger other pipelines when a branch is created or deleted + +### Upgrades & Enhancement + +- Support BitBucket in the pipeline +- Support Cron script validation in the pipeline +- Support Jenkinsfile syntax validation +- Support customizing the SonarQube link +- Support event-triggered builds in the pipeline +- Optimize the agent node selection in the pipeline +- Accelerate pipeline startup +- Use a dynamic volume as the agent's working directory in the pipeline; also contributed to Jenkins [#598](https://github.com/jenkinsci/kubernetes-plugin/pull/598) +- Optimize the Jenkins kubernetesDeploy plugin, adding more resources and versions (v1, apps/v1, extensions/v1beta1, apps/v1beta2, apps/v1beta1, autoscaling/v1, autoscaling/v2beta1, autoscaling/v2beta2, networking.k8s.io/v1, batch/v1beta1, batch/v2alpha1); also contributed to Jenkins
[#614](https://github.com/jenkinsci/kubernetes-plugin/pull/614) +- Add support for PV, PVC and Network Policy in the deploy step of the pipeline; also contributed to Jenkins [#87](https://github.com/jenkinsci/kubernetes-cd-plugin/pull/87), [#88](https://github.com/jenkinsci/kubernetes-cd-plugin/pull/88) + +### Bug Fixes + +- Fix the 400 Bad Request issue in the GitHub webhook +- Incompatible change: the DevOps webhook's URL prefix is changed from `/webhook/xxx` to `/devops_webhook/xxx` + +## Authentication and Authorization + +### Features + +Support syncing and authenticating with AD accounts + +### Upgrades & Enhancement + +- Reduce the LDAP component's RAM consumption +- Add protection against brute-force attacks + +### Bug Fixes + +- Fix the LDAP connection pool leak +- Fix the issue where users could not be added to a workspace +- Fix sensitive data transmission leaks + +## User Experience + +### Features + +Provide a wizard for managing projects (namespaces) that are not assigned to any workspace + +### Upgrades & Enhancement + +- Support bash-completion in web kubectl +- Optimize the host information display +- Add a connection test for the email server +- Add prompts on resource list pages +- Optimize the project overview page and project basic information +- Simplify the service creation process +- Simplify the workload creation process +- Support real-time status updates in the resource list +- Optimize YAML editing +- Support image search and image information display +- Add the pod list to the workload page +- Update the web terminal theme +- Support container switching in the container terminal +- Optimize Pod information display, and add Pod scheduling information +- More detailed workload status display + +### Bug Fixes + +- Fix the issue where the project's default resource request is displayed incorrectly +- Optimize the web terminal design, making it much easier to find +- Fix the Pod status update delay +- Fix the issue where a host could not be searched based on roles +- Fix the DevOps project count error on the workspace detail page +- Fix the issue that workspace list pages did not paginate properly +- Fix inconsistent result ordering after queries on the workspace list page diff --git a/content/zh/docs/upgrade/release-v211.md b/content/zh/docs/upgrade/release-v211.md new file mode 100644 index 000000000..34f244b9b --- /dev/null +++ b/content/zh/docs/upgrade/release-v211.md @@ -0,0 +1,122 @@ +--- +title: "Upgrade KubeSphere and Kubernetes" +keywords: "kubernetes, docker, kubesphere, jenkins, istio, prometheus" +description: "Upgrade KubeSphere and Kubernetes in Linux machines" + +linkTitle: "Upgrade KubeSphere and Kubernetes" +weight: 100 +--- + +KubeSphere 2.1.1 was released on Feb 23rd, 2020, fixing known bugs and bringing some enhancements. If you have installed a 2.0.x or 2.1.0 version, make sure to read the user manual carefully about how to upgrade before doing so, and feel free to raise any questions on [GitHub](https://github.com/kubesphere/kubesphere/issues).
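+ +Since this upgrade covers Kubernetes as well as KubeSphere, it is worth verifying the state of the running cluster first; a minimal sketch using standard kubectl commands: + +```bash +# Confirm the current Kubernetes version before upgrading +kubectl version --short +# Make sure every node is Ready before starting the upgrade +kubectl get nodes +```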
+ +## What's New in 2.1.1 + +## Installer + +### UPGRADE & ENHANCEMENT + +- Support Kubernetes v1.14.x, v1.15.x, v1.16.x and v1.17.x; also solve the Kubernetes API compatibility issue #[1829](https://github.com/kubesphere/kubesphere/issues/1829) +- Simplify the steps of installation on existing Kubernetes clusters: remove the step of specifying the cluster's CA certificate, and specifying the etcd certificate is no longer mandatory if users don't need etcd monitoring metrics +- Back up the configuration of CoreDNS before upgrading + +### BUG FIXES + +- Fix the issue of importing apps to the App Store + +## App Store + +### UPGRADE & ENHANCEMENT + +- Upgrade OpenPitrix to v0.4.8 + +### BUG FIXES + +- Fix the latest version display issue for published apps #[1130](https://github.com/kubesphere/kubesphere/issues/1130) +- Fix the column name display issue on the app approval list page #[1498](https://github.com/kubesphere/kubesphere/issues/1498) +- Fix the search issue by app name/workspace #[1497](https://github.com/kubesphere/kubesphere/issues/1497) +- Fix the failure to create an app with the same name as a previously deleted app #[1821](https://github.com/kubesphere/kubesphere/pull/1821) #[1564](https://github.com/kubesphere/kubesphere/issues/1564) +- Fix the failure to deploy apps in some cases #[1619](https://github.com/kubesphere/kubesphere/issues/1619) #[1730](https://github.com/kubesphere/kubesphere/issues/1730) + +## Storage + +### UPGRADE & ENHANCEMENT + +- Support CSI plugins of Alibaba Cloud and Tencent Cloud + +### BUG FIXES + +- Fix the paging issue on the storage class list page #[1583](https://github.com/kubesphere/kubesphere/issues/1583) #[1591](https://github.com/kubesphere/kubesphere/issues/1591) +- Fix the issue that the value of the imageFeatures parameter displays '2' when creating a Ceph storage class #[1593](https://github.com/kubesphere/kubesphere/issues/1593) +- Fix the issue that the search filter fails to work on the persistent volume list page #[1582](https://github.com/kubesphere/kubesphere/issues/1582) +- Fix the display issue for abnormal persistent volumes #[1581](https://github.com/kubesphere/kubesphere/issues/1581) +- Fix the display issue for persistent volumes whose associated storage class has been deleted #[1580](https://github.com/kubesphere/kubesphere/issues/1580) #[1579](https://github.com/kubesphere/kubesphere/issues/1579) + +## Observability + +### UPGRADE & ENHANCEMENT + +- Upgrade Fluent Bit to v1.3.5 #[1505](https://github.com/kubesphere/kubesphere/issues/1505) +- Upgrade Kube-state-metrics to v1.7.2 +- Upgrade Elastic Curator to v5.7.6 #[517](https://github.com/kubesphere/ks-installer/issues/517) +- Fluent Bit Operator supports dynamically detecting the location of the soft-linked Docker log folder on host machines +- Fluent Bit Operator supports managing the Fluent Bit instance through declarative configuration by updating the Operator's ConfigMap +- Fix the sort order issue on the alert list page #[1397](https://github.com/kubesphere/kubesphere/issues/1397) +- Adjust the container memory usage metric to 'container_memory_working_set_bytes' + +### BUG FIXES + +- Fix the lag issue of container logs #[1650](https://github.com/kubesphere/kubesphere/issues/1650) +- Fix the display issue that some workload replicas have no logs on the container log detail page #[1505](https://github.com/kubesphere/kubesphere/issues/1505) +- Fix the compatibility issue of Curator to support ElasticSearch 7.x #[517](https://github.com/kubesphere/ks-installer/issues/517) +- Fix the
display issue of the container log page during container initialization #[1518](https://github.com/kubesphere/kubesphere/issues/1518) +- Fix the blank node issue when nodes are resized #[1464](https://github.com/kubesphere/kubesphere/issues/1464) +- Fix the display issue of component status in the monitoring center, to keep it up to date #[1858](https://github.com/kubesphere/kubesphere/issues/1858) +- Fix the wrong number of monitoring targets on the alert detail page #[61](https://github.com/kubesphere/console/issues/61) + +## DevOps + +### BUG FIXES + +- Fix the issue that the UNSTABLE state is not visible in the pipeline #[1428](https://github.com/kubesphere/kubesphere/issues/1428) +- Fix the format issue of KubeConfig in the DevOps pipeline #[1529](https://github.com/kubesphere/kubesphere/issues/1529) +- Fix the image repo compatibility issue in B2I, to support Alibaba Cloud image repos #[1500](https://github.com/kubesphere/kubesphere/issues/1500) +- Fix the paging issue on the DevOps pipeline branch list page #[1517](https://github.com/kubesphere/kubesphere/issues/1517) +- Fix the failure to display the pipeline configuration after modifying it #[1522](https://github.com/kubesphere/kubesphere/issues/1522) +- Fix the failure to download generated artifacts in S2I jobs #[1547](https://github.com/kubesphere/kubesphere/issues/1547) +- Fix the issue of [data loss occasionally after restarting Jenkins](https://kubesphere.com.cn/forum/d/283-jenkins) +- Fix the issue that only 'PR-HEAD' is fetched when binding a pipeline with GitHub #[1780](https://github.com/kubesphere/kubesphere/issues/1780) +- Fix the 414 error when updating DevOps credentials #[1824](https://github.com/kubesphere/kubesphere/issues/1824) +- Fix the wrong s2ib/s2ir naming issue from B2I/S2I #[1840](https://github.com/kubesphere/kubesphere/issues/1840) +- Fix the failure to drag and drop tasks on the pipeline editing page #[62](https://github.com/kubesphere/console/issues/62) + +## Authentication and Authorization + +### UPGRADE & ENHANCEMENT + +- Generate client certificates through CSR #[1449](https://github.com/kubesphere/kubesphere/issues/1449) + +### BUG FIXES + +- Fix the content loss issue in the KubeConfig token file #[1529](https://github.com/kubesphere/kubesphere/issues/1529) +- Fix the issue that users with different permissions fail to log in on the same browser #[1600](https://github.com/kubesphere/kubesphere/issues/1600) + +## User Experience + +### UPGRADE & ENHANCEMENT + +- Support editing SecurityContext on the workload editing page #[1530](https://github.com/kubesphere/kubesphere/issues/1530) +- Support configuring init containers on the workload editing page #[1488](https://github.com/kubesphere/kubesphere/issues/1488) +- Add support for startupProbe, and add the periodSeconds, successThreshold and failureThreshold parameters to the probe editing page #[1487](https://github.com/kubesphere/kubesphere/issues/1487) +- Optimize the status update display of Pods #[1187](https://github.com/kubesphere/kubesphere/issues/1187) +- Optimize error message reporting on the console #[43](https://github.com/kubesphere/console/issues/43) + +### BUG FIXES + +- Fix the status display issue for Pods that are not in the running state #[1187](https://github.com/kubesphere/kubesphere/issues/1187) +- Fix the issue that the added annotation can't be deleted when creating a QingCloud LoadBalancer service #[1395](https://github.com/kubesphere/kubesphere/issues/1395) +- Fix the display issue when selecting a workload on the service editing page
#[1596](https://github.com/kubesphere/kubesphere/issues/1596) +- Fix the failure to edit the configuration file when editing a 'Job' #[1521](https://github.com/kubesphere/kubesphere/issues/1521) +- Fix the failure to update the service of a 'StatefulSet' #[1513](https://github.com/kubesphere/kubesphere/issues/1513) +- Fix the image search issue for QingCloud and Alibaba Cloud image repos #[1627](https://github.com/kubesphere/kubesphere/issues/1627) +- Fix the resource ordering issue for resources with the same creation timestamp #[1750](https://github.com/kubesphere/kubesphere/pull/1750) +- Fix the failure to edit the configuration file when editing a service #[41](https://github.com/kubesphere/console/issues/41) diff --git a/content/zh/docs/upgrade/release-v300.md b/content/zh/docs/upgrade/release-v300.md new file mode 100644 index 000000000..7a1cb4647 --- /dev/null +++ b/content/zh/docs/upgrade/release-v300.md @@ -0,0 +1,10 @@ +--- +title: "Overview" +keywords: "kubernetes, upgrade, kubesphere, v3.0.0" +description: "Upgrade KubeSphere" + +linkTitle: "Overview" +weight: 50 +--- + +TBD diff --git a/content/zh/docs/workspaces-administration/_index.md b/content/zh/docs/workspaces-administration/_index.md new file mode 100644 index 000000000..45396647b --- /dev/null +++ b/content/zh/docs/workspaces-administration/_index.md @@ -0,0 +1,22 @@ +--- +title: "Workspace Administration" +description: "Help you better manage KubeSphere workspaces" +layout: "single" + +linkTitle: "Workspace Administration" + +weight: 4200 + +icon: "/images/docs/docs.svg" + +--- + +## Installing KubeSphere and Kubernetes on Linux + +In this chapter, we will demonstrate how to use KubeKey to provision a new Kubernetes and KubeSphere cluster based on different infrastructures. KubeKey can help you quickly build a production-ready cluster architecture on a set of machines from zero to one. It also helps you easily scale the cluster and install pluggable components on an existing architecture. + +## Most Popular Pages + +Below you will find some of the most common and helpful pages from this chapter. We highly recommend you review them first.
+ +{{< popularPage icon="/images/docs/bitmap.jpg" title="Install KubeSphere on AWS EC2" description="Provisioning a new Kubernetes and KubeSphere cluster based on AWS" link="" >}} diff --git a/content/zh/docs/workspaces-administration/release-v210.md b/content/zh/docs/workspaces-administration/release-v210.md new file mode 100644 index 000000000..9442d12ca --- /dev/null +++ b/content/zh/docs/workspaces-administration/release-v210.md @@ -0,0 +1,10 @@ +--- +title: "Role and Member Management" +keywords: "kubernetes, workspace, kubesphere, multitenancy" +description: "Role and Member Management in a Workspace" + +linkTitle: "Role and Member Management" +weight: 200 +--- + +TBD diff --git a/content/zh/docs/workspaces-administration/release-v211.md b/content/zh/docs/workspaces-administration/release-v211.md new file mode 100644 index 000000000..d74285d36 --- /dev/null +++ b/content/zh/docs/workspaces-administration/release-v211.md @@ -0,0 +1,10 @@ +--- +title: "Import Helm Repository" +keywords: "kubernetes, helm, kubesphere, application" +description: "Import Helm Repository into KubeSphere" + +linkTitle: "Import Helm Repository" +weight: 100 +--- + +TBD diff --git a/content/zh/docs/workspaces-administration/release-v300.md b/content/zh/docs/workspaces-administration/release-v300.md new file mode 100644 index 000000000..dae816590 --- /dev/null +++ b/content/zh/docs/workspaces-administration/release-v300.md @@ -0,0 +1,10 @@ +--- +title: "Upload Helm-based Application" +keywords: "kubernetes, helm, kubesphere, openpitrix, application" +description: "Upload Helm-based Application" + +linkTitle: "Upload Helm-based Application" +weight: 50 +--- + +TBD