Merge branch 'master' into qy

Sherlock113 2020-12-29 13:25:36 +08:00 committed by GitHub
commit c19f755b67
55 changed files with 495 additions and 472 deletions

View File

@ -221,7 +221,7 @@ List all your machines under `hosts` and add their detailed information as above
`internalAddress`: The private IP address of the instance.
- In this tutorial, port 22 is the default port of SSH so you do not need to add it in the yaml file. Otherwise, you need to add the port number after the IP address. For example:
- In this tutorial, port 22 is the default port of SSH so you do not need to add it in the YAML file. Otherwise, you need to add the port number after the IP address. For example:
```yaml
hosts:

View File

@ -2,25 +2,25 @@
title: "Deploy KubeSphere on Bare Metal"
keywords: 'Kubernetes, KubeSphere, bare-metal'
description: 'How to install KubeSphere on bare metal.'
linkTitle: "Deploy KubeSphere on Bare Metal"
weight: 3320
---
## Introduction
In addition to the deployment on cloud, KubeSphere can also be installed on bare metal. As the virtualization layer is removed, the infrastructure overhead is drastically reduced, which brings more compute and storage resources to app deployments. As a result, hardware efficiency is improved. Refer to the example below of how to deploy KubeSphere on bare metal.
In addition to the deployment on cloud, KubeSphere can also be installed on bare metal. As the virtualization layer is removed, the infrastructure overhead is drastically reduced, which brings more compute and storage resources to app deployments. As a result, hardware efficiency is improved. Refer to the example below to deploy KubeSphere on bare metal.
## Prerequisites
- Please make sure that you already know how to install KubeSphere with a multi-node cluster based on the tutorial [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/).
- Make sure you already know how to install KubeSphere on a multi-node cluster based on the tutorial [Multi-Node Installation](../../../installing-on-linux/introduction/multioverview/).
- Server and network redundancy in your environment.
- Considering data persistence, for a production environment, it is recommended you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
- For a production environment, it is recommended that you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
## Prepare Linux Hosts
This tutorial uses three physical machines (**DELL 620, Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz, 32 GB memory**), on which **CentOS Linux release 7.6.1810 (Core)** will be installed for a minimal deployment of KubeSphere.
### CentOS Installation
### Install CentOS
Download and install the [image](http://mirror1.es.uci.edu/centos/7.6.1810/isos/x86_64/CentOS-7-x86_64-DVD-1810.iso) first. Make sure you allocate at least 200 GB to the root directory, where Docker images are stored (you can skip this requirement if you are installing KubeSphere for testing).
@ -35,108 +35,108 @@ Here is a list of the three hosts for your reference.
|192.168.60.153|worker1|worker|
|192.168.60.154|worker2|worker|
### NIC Setting
### NIC settings
1. Clear NIC configurations.
```bash
ifdown em1
```
```bash
ifdown em2
```
```bash
rm -rf /etc/sysconfig/network-scripts/ifcfg-em1
```
```bash
rm -rf /etc/sysconfig/network-scripts/ifcfg-em2
```
```bash
ifdown em1
```
```bash
ifdown em2
```
```bash
rm -rf /etc/sysconfig/network-scripts/ifcfg-em1
```
```bash
rm -rf /etc/sysconfig/network-scripts/ifcfg-em2
```
2. Create the NIC bonding.
```bash
nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad ip4 192.168.60.152/24 gw4 192.168.60.254
```
```bash
nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad ip4 192.168.60.152/24 gw4 192.168.60.254
```
3. Set the bonding mode.
```bash
nmcli con mod id bond0 bond.options mode=802.3ad,miimon=100,lacp_rate=fast,xmit_hash_policy=layer2+3
```
```bash
nmcli con mod id bond0 bond.options mode=802.3ad,miimon=100,lacp_rate=fast,xmit_hash_policy=layer2+3
```
4. Bind the physical NIC.
```bash
nmcli con add type bond-slave ifname em1 con-name em1 master bond0
```
```bash
nmcli con add type bond-slave ifname em1 con-name em1 master bond0
```
```bash
nmcli con add type bond-slave ifname em2 con-name em2 master bond0
```
```bash
nmcli con add type bond-slave ifname em2 con-name em2 master bond0
```
5. Change the NIC mode.
```bash
vi /etc/sysconfig/network-scripts/ifcfg-bond0
BOOTPROTO=static
```
```bash
vi /etc/sysconfig/network-scripts/ifcfg-bond0
BOOTPROTO=static
```
6. Restart Network Manager.
```bash
systemctl restart NetworkManager
```
```bash
systemctl restart NetworkManager
```
```bash
nmcli con # Display NIC information
```
```bash
nmcli con # Display NIC information
```
7. Change the host name and DNS.
```bash
hostnamectl set-hostname worker-1
```
```bash
hostnamectl set-hostname worker-1
```
```bash
vim /etc/resolv.conf
```
```bash
vim /etc/resolv.conf
```
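Before moving on, you can optionally confirm that the bond came up as expected with a few read-only checks (a sketch, assuming the bond is named `bond0` as above):
```bash
# Show the active connections managed by NetworkManager
nmcli -f NAME,TYPE,DEVICE connection show --active

# Inspect the bonding driver state (mode, LACP status, slave interfaces)
cat /proc/net/bonding/bond0

# Verify the IP address assigned to the bond
ip addr show bond0
```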
### Time Setting
### Time settings
1. Synchronize time.
```bash
yum install -y chrony
```
```bash
systemctl enable chronyd
```
```bash
systemctl start chronyd
```
```bash
timedatectl set-ntp true
```
```bash
yum install -y chrony
```
```bash
systemctl enable chronyd
```
```bash
systemctl start chronyd
```
```bash
timedatectl set-ntp true
```
2. Set the time zone.
```bash
timedatectl set-timezone Asia/Shanghai
```
```bash
timedatectl set-timezone Asia/Shanghai
```
3. Check if the ntp-server is available.
```bash
chronyc activity -v
```
```bash
chronyc activity -v
```
### Firewall Setting
### Firewall settings
Execute the following commands to stop and disable the FirewallD service:
@ -156,7 +156,7 @@ systemctl stop firewalld
systemctl disable firewalld
```
### Package Update and Dependencies
### Package updates and dependencies
Execute the following commands to update system packages and install dependencies.
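The exact commands fall outside this diff hunk; as a rough sketch for CentOS 7, they typically look like the following (the package list is illustrative, not taken from this document):
```bash
# Update system packages (illustrative)
yum update -y

# Install common dependencies used by Kubernetes and KubeKey (illustrative list)
yum install -y socat conntrack ebtables ipset curl openssl
```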
@ -244,7 +244,7 @@ Make `kk` executable:
chmod +x kk
```
## Create a Multi-node Cluster
## Create a Multi-Node Cluster
With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file.
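For reference, the overall flow with KubeKey looks like this (the versions shown are the ones used elsewhere in this guide; adjust them as needed):
```bash
# Generate a configuration file for Kubernetes plus KubeSphere
./kk create config --with-kubernetes v1.17.9 --with-kubesphere v3.0.0

# Edit config-sample.yaml (hosts, roleGroups, etc.), then create the cluster
./kk create cluster -f config-sample.yaml
```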
@ -298,7 +298,7 @@ Create a cluster using the configuration file you customized above:
./kk create cluster -f config-sample.yaml
```
#### Verify the Multi-node Installation
#### Verify the installation
After the installation finishes, you can inspect the installation logs by executing the command below:
@ -306,7 +306,7 @@ After the installation finishes, you can inspect the logs of installation by exe
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```
If you can see the welcome log return, it means the installation is successful. Your cluster is up and running.
If you see the welcome log returned, the installation is successful.
```bash
**************************************************
@ -328,74 +328,74 @@ https://kubesphere.io 20xx-xx-xx xx:xx:xx
#####################################################
```
#### Log in the Console
#### Log in the console
You can use the default account and password `admin/P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after login.
#### Enable Pluggable Components (Optional)
#### Enable pluggable components (Optional)
The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see [Enable Pluggable Components](../../../pluggable-components/) for more details.
## System Improvements
- Update your system.
```bash
yum update
```
```bash
yum update
```
- Add the required options to the kernel boot arguments:
```bash
sudo /sbin/grubby --update-kernel=ALL --args='cgroup_enable=memory cgroup.memory=nokmem swapaccount=1'
```
```bash
sudo /sbin/grubby --update-kernel=ALL --args='cgroup_enable=memory cgroup.memory=nokmem swapaccount=1'
```
- Enable the `overlay2` kernel module.
```bash
echo "overlay2" | sudo tee -a /etc/modules-load.d/overlay.conf
```
```bash
echo "overlay2" | sudo tee -a /etc/modules-load.d/overlay.conf
```
- Refresh the dynamically generated grub2 configuration.
```bash
sudo grub2-set-default 0
```
```bash
sudo grub2-set-default 0
```
- Adjust kernel parameters and make the change effective.
```bash
cat <<EOF | sudo tee -a /etc/sysctl.conf
vm.max_map_count = 262144
fs.may_detach_mounts = 1
net.ipv4.ip_forward = 1
vm.swappiness=1
kernel.pid_max =1000000
fs.inotify.max_user_instances=524288
EOF
sudo sysctl -p
```
```bash
cat <<EOF | sudo tee -a /etc/sysctl.conf
vm.max_map_count = 262144
fs.may_detach_mounts = 1
net.ipv4.ip_forward = 1
vm.swappiness=1
kernel.pid_max =1000000
fs.inotify.max_user_instances=524288
EOF
sudo sysctl -p
```
- Adjust system limits.
```bash
vim /etc/security/limits.conf
* soft nofile 1024000
* hard nofile 1024000
* soft memlock unlimited
* hard memlock unlimited
root soft nofile 1024000
root hard nofile 1024000
root soft memlock unlimited
```
```bash
vim /etc/security/limits.conf
* soft nofile 1024000
* hard nofile 1024000
* soft memlock unlimited
* hard memlock unlimited
root soft nofile 1024000
root hard nofile 1024000
root soft memlock unlimited
```
- Remove the previous limit configuration.
```bash
sudo rm /etc/security/limits.d/20-nproc.conf
```
```bash
sudo rm /etc/security/limits.d/20-nproc.conf
```
- Reboot the system.
```bash
reboot
```
```bash
reboot
```
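After the reboot, a few quick checks can confirm that the tweaks above took effect (a sketch; the values should match what you configured):
```bash
# Kernel boot arguments added with grubby
cat /proc/cmdline

# Kernel parameters appended to /etc/sysctl.conf
sysctl vm.max_map_count fs.inotify.max_user_instances

# Open-file limit from /etc/security/limits.conf (run in a fresh login shell)
ulimit -n
```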

View File

@ -2,12 +2,22 @@
title: "KubeSphere Federation"
keywords: 'Kubernetes, KubeSphere, federation, multicluster, hybrid-cloud'
description: 'Overview'
linkTitle: "KubeSphere Federation"
weight: 5120
---
The multi-cluster feature relates to the network connection among multiple clusters. Therefore, it is important to understand the topological relations of clusters as the workload can be reduced.
The multi-cluster feature relates to the network connection among multiple clusters. Therefore, it is important to understand the topological relations of clusters.
Before you use the multi-cluster feature, you need to create a Host Cluster (hereafter referred to as **H** Cluster), which is actually a KubeSphere cluster with the multi-cluster feature enabled. All the clusters managed by the H Cluster are called Member Clusters (hereafter referred to as **M** Clusters). They are common KubeSphere clusters that do not have the multi-cluster feature enabled. There can only be one H Cluster, while multiple M Clusters can exist at the same time. In a multi-cluster architecture, the network between the H Cluster and the M Clusters can be connected directly or through an agent. The network between M Clusters can be set up in a completely isolated environment.
## How the Multi-Cluster Architecture Works
![Kubernetes Federation in KubeSphere](https://ap3.qingstor.com/kubesphere-website/docs/20200907232319.png)
Before you use the central control plane of KubeSphere to manage multiple clusters, you need to create a Host Cluster, also known as the **H** Cluster. The H Cluster is essentially a KubeSphere cluster with the multi-cluster feature enabled. It provides the control plane for unified management of Member Clusters, also known as **M** Clusters. M Clusters are common KubeSphere clusters without the central control plane. In other words, tenants with the necessary permissions (usually cluster administrators) can access the control plane from the H Cluster to manage all M Clusters, such as viewing and editing resources on M Clusters. Conversely, if you access the web console of any M Cluster separately, you cannot see any resources on other clusters.
![centrol-control-plane](/images/docs/multicluster-management/introduction/kubesphere-federation/centrol-control-plane.png)
There can only be one H Cluster while multiple M Clusters can exist at the same time. In a multi-cluster architecture, the network between the H Cluster and M Clusters can be connected directly or through an agent. The network between M Clusters can be set in a completely isolated environment.
![kubesphere-federation](/images/docs/multicluster-management/introduction/kubesphere-federation/kubesphere-federation.png)
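As a hedged sketch of how a cluster is designated as the H Cluster or an M Cluster (the field names assume the ks-installer `ClusterConfiguration` used by KubeSphere v3.0; verify against your release):
```bash
# Open the ClusterConfiguration of ks-installer on the cluster you want to configure
kubectl -n kubesphere-system edit clusterconfiguration ks-installer

# Then set (illustrative):
#   spec:
#     multicluster:
#       clusterRole: host     # on the H Cluster
#       # clusterRole: member # on each M Cluster
```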
## Vendor Agnostic
KubeSphere features a powerful, inclusive central control plane so that you can manage any KubeSphere clusters in a unified way regardless of deployment environments or cloud providers.

View File

@ -2,7 +2,7 @@
title: "Overview"
keywords: 'Kubernetes, KubeSphere, multicluster, hybrid-cloud'
description: 'Overview'
linkTitle: "Overview"
weight: 5110
---
@ -10,6 +10,6 @@ Today, it's very common for organizations to run and manage multiple Kubernetes
The most common use cases of multi-cluster management include service traffic load balancing, development and production isolation, decoupling of data processing and data storage, cross-cloud backup and disaster recovery, flexible allocation of computing resources, low latency access with cross-region services, and vendor lock-in avoidance.
KubeSphere is developed to address multi-cluster and multi-cloud management challenges and implement the preceding user scenarios, providing users with a unified control plane to distribute applications and their replicas to multiple clusters from public cloud to on-premises environments. KubeSphere also provides rich observability across multiple clusters including centralized monitoring, logging, events, and auditing logs.
KubeSphere is developed to address multi-cluster and multi-cloud management challenges, including the scenarios mentioned above. It provides users with a unified control plane to distribute applications and their replicas to multiple clusters from public cloud to on-premises environments. KubeSphere also boasts rich observability across multiple clusters, including centralized monitoring, logging, events, and auditing logs.
![KubeSphere Multi-cluster Management](/images/docs/multi-cluster-overview.jpg)
![multi-cluster-overview](/images/docs/multicluster-management/introduction/overview/multi-cluster-overview.jpg)

View File

@ -1,83 +1,84 @@
---
title: "Deploy RabbitMQ on KubeSphere"
keywords: 'KubeSphere, RabbitMQ, Kubernetes, Installation'
description: 'How to deploy RabbitMQ on KubeSphere through App Store'
title: "在 KubeSphere 中部署 RabbitMQ"
keywords: 'KubeSphere, RabbitMQ, Kubernetes, 安装'
description: '如何通过应用商店在 KubeSphere 中部署 RabbitMQ'
link title: "Deploy RabbitMQ"
link title: "在 KubeSphere 中部署 RabbitMQ"
weight: 14290
---
[RabbitMQ](https://www.rabbitmq.com/) is the most widely deployed open-source message broker. It is lightweight and easy to deploy on premises and in the cloud. It supports multiple messaging protocols. RabbitMQ can be deployed in distributed and federated configurations to meet high-scale, high-availability requirements.
[RabbitMQ](https://www.rabbitmq.com/) 是部署最广泛的开源消息代理。它轻量且易于在本地和云上部署支持多种消息协议。RabbitMQ 可在分布和联邦的配置中部署,以满足大规模和高可用性需求。
This tutorial walks you through an example of how to deploy RabbitMQ from the App Store of KubeSphere.
本教程演示如何从 KubeSphere 的应用商店部署 RabbitMQ。
## Prerequisites
## 准备工作
- Please make sure you [enable the OpenPitrix system](https://kubesphere.io/docs/pluggable-components/app-store/).
- You need to create a workspace, a project, and a user account for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/).
- 您需要[启用 OpenPitrix 系统](../../../pluggable-components/app-store/)。
- 您需要创建一个企业空间、一个项目和一个用户帐户。该帐户必须是已邀请至项目的平台普通用户,并且在项目中的角色为 `operator`。在本教程中,您需要以 `project-regular` 用户登录,并在 `demo-workspace` 企业空间的 `demo-project` 项目中进行操作。有关更多信息,请参见[创建企业空间、项目、帐户和角色](../../../quick-start/create-workspace-and-project/)。
## Hands-on Lab
## 动手实验
### Step 1: Deploy RabbitMQ from App Store
### 步骤 1从应用商店部署 RabbitMQ
1. On the **Overview** page of the project `demo-project`, click **App Store** in the top left corner.
1. `demo-project` 的**概览**页面,点击左上角的**应用商店**。
![rabbitmq01](/images/docs/appstore/built-in-apps/rabbitmq-app/rabbitmq01.jpg)
![rabbitmq01](/images/docs/zh-cn/appstore/built-in-apps/rabbitmq-app/rabbitmq01.jpg)
2. Find RabbitMQ and click **Deploy** on the **App Info** page.
2. 找到 RabbitMQ在**应用信息**页面点击**部署**。
![find-rabbitmq](/images/docs/appstore/built-in-apps/rabbitmq-app/rabbitmq02.jpg)
![find-rabbitmq](/images/docs/zh-cn/appstore/built-in-apps/rabbitmq-app/rabbitmq02.jpg)
![click-deploy](/images/docs/appstore/built-in-apps/rabbitmq-app/rabbitmq021.jpg)
![click-deploy](/images/docs/zh-cn/appstore/built-in-apps/rabbitmq-app/rabbitmq021.jpg)
3. Set a name and select an app version. Make sure RabbitMQ is deployed in `demo-project` and click **Next**.
3. 设置应用名称和版本,确保 RabbitMQ 部署在 `demo-project` 项目中,然后点击**下一步**。
![rabbitmq03](/images/docs/appstore/built-in-apps/rabbitmq-app/rabbitmq03.jpg)
![rabbitmq03](/images/docs/zh-cn/appstore/built-in-apps/rabbitmq-app/rabbitmq03.jpg)
4. In **App Config**, you can use the default configuration directly or customize the configuration either by specifying fields in a form or editing the YAML file. Record the value of **Root Username** and the value of **Root Password**, which will be used later for login. Click **Deploy** to continue.
4. 在**应用配置**页面,您可以直接使用默认配置,也可以通过修改表单参数或编辑 YAML 文件自定义配置。您需要记录 **Root Username****Root Password** 的值,用于在后续步骤中登录系统。设置完成后点击**部署**。
![rabbitMQ11](/images/docs/appstore/built-in-apps/rabbitmq-app/rabbitMQ11.jpg)
![rabbitMQ11](/images/docs/zh-cn/appstore/built-in-apps/rabbitmq-app/rabbitMQ11.jpg)
![rabbitMQ04](/images/docs/appstore/built-in-apps/rabbitmq-app/rabbitMQ04.jpg)
![rabbitMQ04](/images/docs/zh-cn/appstore/built-in-apps/rabbitmq-app/rabbitMQ04.jpg)
{{< notice tip >}}
To see the manifest file, toggle the **YAML** switch.
如需查看清单文件,请点击 **YAML** 开关。
{{</ notice >}}
5. Wait until RabbitMQ is up and running.
5. 等待 RabbitMQ 创建完成并开始运行。
![check-if-rabbitmq-is-running](/images/docs/appstore/built-in-apps/rabbitmq-app/rabbitmq05.jpg)
![check-if-rabbitmq-is-running](/images/docs/zh-cn/appstore/built-in-apps/rabbitmq-app/rabbitmq05.jpg)
### Step 2: Access RabbitMQ Dashboard
### 步骤 2访问 RabbitMQ 主页
To access RabbitMQ outside the cluster, you need to expose the app through NodePort first.
要从集群外访问 RabbitMQ您需要先用 NodePort 暴露该应用。
1. Go to **Services** and click the service name of RabbitMQ.
1. 打开**服务**页面并点击 RabbitMQ 的服务名称。
![go-to-services](/images/docs/appstore/built-in-apps/rabbitmq-app/rabbitmq06.jpg)
![go-to-services](/images/docs/zh-cn/appstore/built-in-apps/rabbitmq-app/rabbitmq06.jpg)
2. Click **More** and select **Edit Internet Access** from the drop-down menu.
2. 点击**更多操作**,在下拉菜单中选择**编辑外网访问**。
![rabbitmq07](/images/docs/appstore/built-in-apps/rabbitmq-app/rabbitmq07.jpg)
![rabbitmq07](/images/docs/zh-cn/appstore/built-in-apps/rabbitmq-app/rabbitmq07.jpg)
3. Select **NodePort** for **Access Method** and click **OK**. For more information, see [Project Gateway](../../../project-administration/project-gateway/).
3. 将**访问方式**设置为 **NodePort** 并点击**确定**。有关更多信息,请参见[项目网关](../../../project-administration/project-gateway/)。
![rabbitmq08](/images/docs/appstore/built-in-apps/rabbitmq-app/rabbitmq08.jpg)
![rabbitmq08](/images/docs/zh-cn/appstore/built-in-apps/rabbitmq-app/rabbitmq08.jpg)
4. Under **Service Ports**, you can see ports are exposed.
4. 您可以在**服务端口**区域查看暴露的端口。
![rabbitmq09](/images/docs/appstore/built-in-apps/rabbitmq-app/rabbitmq09.jpg)
![rabbitmq09](/images/docs/zh-cn/appstore/built-in-apps/rabbitmq-app/rabbitmq09.jpg)
5. Access RabbitMQ **management** through `{$NodeIP}:{$Nodeport}`. Note that the username and password are those you set in **Step 1**.
![rabbitmq-dashboard](/images/docs/appstore/built-in-apps/rabbitmq-app/rabbitmq-dashboard.jpg)
5. `{$NodeIP}:{$Nodeport}` 地址以及步骤 1 中记录的用户名和密码访问 RabbitMQ 的 **management** 端口。
![rabbitmq-dashboard](/images/docs/zh-cn/appstore/built-in-apps/rabbitmq-app/rabbitmq-dashboard.jpg)
![rabbitma-dashboard-detail](/images/docs/appstore/built-in-apps/rabbitmq-app/rabbitma-dashboard-detail.jpg)
![rabbitma-dashboard-detail](/images/docs/zh-cn/appstore/built-in-apps/rabbitmq-app/rabbitma-dashboard-detail.jpg)
{{< notice note >}}
You may need to open the port in your security groups and configure related port forwarding rules depending on where your Kubernetes cluster is deployed.
取决于您的 Kubernetes 集群的部署位置,您可能需要在安全组中放行端口并配置相关的端口转发规则。
{{</ notice >}}
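If you prefer the command line to the console, you can also look up the exposed NodePort directly (a sketch; it assumes `kubectl` access to the cluster and that RabbitMQ was deployed in `demo-project`):
```bash
# List Services in the project to find the NodePort mapped to the management port
kubectl get svc -n demo-project

# Then open http://{$NodeIP}:{$NodePort} in a browser and log in with the
# credentials recorded in Step 1
```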
6. For more information about RabbitMQ, refer to [the official documentation of RabbitMQ](https://www.rabbitmq.com/documentation.html).
6. 有关 RabbitMQ 的更多信息,请参考[ RabbitMQ 官方文档](https://www.rabbitmq.com/documentation.html)。

View File

@ -263,7 +263,9 @@ chmod +x kk
{{< notice note >}}
请确保 Kubernetes 版本和您下载的版本一致。
- 请确保 Kubernetes 版本和您下载的版本一致。
- 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。
{{</ notice >}}

View File

@ -97,7 +97,11 @@ chmod +x kk
{{< notice note >}}
在 KubeSphere 上充分测试过的 Kubernetes 版本v1.15.12、v1.16.13、v1.17.9(默认)以及 v1.18.6。
- 在 KubeSphere 上充分测试过的 Kubernetes 版本v1.15.12、v1.16.13、v1.17.9(默认)以及 v1.18.6。
- 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。
- 如果您添加标志 `--with-kubesphere` 时不指定 KubeSphere 版本,则会安装最新版本的 KubeSphere。
{{</ notice >}}
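For example, the note above about `--with-kubesphere` corresponds to a command along these lines (the version numbers are the ones listed in these docs):
```bash
# Deploy Kubernetes together with a specific KubeSphere version
./kk create cluster --with-kubernetes v1.17.9 --with-kubesphere v3.0.0

# Omitting the version after --with-kubesphere installs the latest KubeSphere release
```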

View File

@ -1,15 +1,14 @@
---
title: "多节点安装"
keywords: 'Multi-node, Installation, KubeSphere'
description: 'Multi-node Installation Overview'
keywords: '多节点, 安装, KubeSphere'
description: '说明如何多节点安装 KubeSphere'
linkTitle: "多节点安装"
weight: 3120
---
[All-in-one](../../../quick-start/all-in-one-on-linux/) 是为新用户体验 KubeSphere 而提供的快速且简单的安装方式,在正式环境中,单节点集群因受限于资源和计算能力的不足而无法满足大多数需求,因此不建议将单节点集群用于大规模数据处理。多节点安装环境通常包括至少一个主节点和多个工作节点,如果是生产环境则需要安装主节点高可用的方式
在生产环境中,单节点集群由于集群资源有限并且计算能力不足,无法满足大部分需求。因此,不建议在处理大规模数据时使用单节点集群。此外,这类集群只有一个节点,因此也不具有高可用性。相比之下,在应用程序部署和分发方面,多节点架构是最常见的首选架构
本节概述了多节点安装,包括概念,[KubeKey](https://github.com/kubesphere/kubekey/) 及安装步骤。有关主节点高可用安装的信息,请参阅[高可用安装配置](../ha-configuration/)或参阅在公有云上安装或在本地环境中安装,如[在阿里云 ECS 安装高可用 KubeSphere](../../public-cloud/install-kubesphere-on-ali-ecs/) 或 [在 VMware vSphere 部署高可用 KubeSphere](../../on-premises/install-kubesphere-on-vmware-vsphere/)。
本节概述了单主节点式多节点安装,包括概念、[KubeKey](https://github.com/kubesphere/kubekey/) 和操作步骤。有关高可用安装的信息,请参考[高可用配置](../../../installing-on-linux/introduction/ha-configuration/)、[在公有云上安装](../../../installing-on-linux/public-cloud/install-kubesphere-on-azure-vms/)和[在本地环境中安装](../../../installing-on-linux/on-premises/install-kubesphere-on-bare-metal/)。
## 视频演示
@ -19,93 +18,97 @@ weight: 3120
## 概念
多节点集群由至少一个主节点和一个工作节点组成,可以使用任何节点作为**任务箱**来执行安装任务。您可以在安装之前或之后根据需要添加其他节点(例如,为了实现高可用性)。
多节点集群由至少一个主节点和一个工作节点组成,可以使用任何节点作为**任务机**来执行安装任务。您可以在安装之前或之后根据需要新增节点(例如,为了实现高可用性)。
- **Master**:主节点,通常托管控制面,控制和管理整个系统。
- **Worker**:工作节点,运行在其上部署实际应用程序。
- **Master**:主节点,通常托管控制面,控制和管理整个系统。
- **Worker**:工作节点,运行部署在其之上的实际应用程序。
## 为什么选择 KubeKey
如果您不熟悉 Kubernetes 组件,可能会发现部署多节点 Kubernetes 集群并不容易。从版本 3.0.0 开始KubeSphere 使用了一个全新的安装工具 KubeKey替换以前基于 ansible 的安装程序,更加方便用户快速部署多节点集群。具体来说,下载 KubeKey 之后用户只需配置很少的信息如节点信息IP 地址和节点角色),然后一条命令即可安装。
如果您不熟悉 Kubernetes 组件,可能会发现部署一个功能强大的多节点 Kubernetes 集群并不容易。从 3.0.0 版本开始KubeSphere 使用全新安装程序 KubeKey替换以前基于 ansible 的安装程序。KubeKey 使用 Go 语言开发,让用户能够快速部署多节点架构。
对于没有现有 Kubernetes 集群的用户,下载 KubeKey 之后只需要用一些命令创建配置文件并在文件中添加节点信息例如IP 地址和节点角色),然后使用一行命令便可以开始安装,无需额外操作。
### 优势
- 之前基于 ansible 的安装程序具有许多软件依赖性,例如 Python。KubeKey 是使用 Go 语言开发的,可以消除各种环境中的问题,并确保安装成功。
- KubeKey 使用 Kubeadm 在节点上尽可能多地并行安装 Kubernetes 集群,以降低安装复杂性并提高效率。与较早的安装程序相比,它将大大节省安装时间。
- 借助 KubeKey 用户可以自由伸缩集群,包括将集群从单节点集群扩展到多节点集群,甚至是主节点高可用集群。
- KubeKey 未来计划将集群管理封装成一个对象,即 Cluster as an Object (CaaO)。
- 之前基于 ansible 的安装程序有许多软件依赖项,例如 Python。KubeKey 使用 Go 语言开发,可以消除各种环境中的问题,确保安装成功。
- KubeKey 使用 Kubeadm 在节点上尽可能多地并行安装 Kubernetes 集群,以降低安装复杂性并提高效率。与之前的安装程序相比,它将大大节省安装时间。
- 用户可以使用 KubeKey 将 All-in-One 集群(即单节点集群)扩展到多节点集群,甚至高可用集群。
- KubeKey 旨在将集群作为一个对象来安装,即 Cluster as an Object (CaaO)。
## 步骤1准备 Linux 主机
## 步骤 1准备 Linux 主机
安装之前请参阅下面对硬件和操作系统的要求准备至少三台主机,如果您只有两台主机的话请保证机器配置足够安装
请参见下表列出的硬件和操作系统要求。要开始本节演示中的多节点安装,您需要按照下列要求准备至少三台主机。如果计划的资源足够,也可以将 KubeSphere 安装在两个节点上
### 系统要求
| 系统 | 最低要求(每个节点) |
| --------------------------------------------------------------- | ------------------------------------------- |
| **Ubuntu** *16.04, 18.04* | CPU2 核内存4 G硬盘40 G |
| **Debian** *Buster, Stretch* | CPU2 核内存4 G硬盘40 G |
| **CentOS** *7.x* | CPU2 核内存4 G硬盘40 G |
| **Red Hat Enterprise Linux** *7* | CPU2 核内存4 G硬盘40 G |
| 系统 | 最低要求(每个节点) |
| ------------------------------------------------------------ | -------------------------------- |
| **Ubuntu** *16.04, 18.04* | CPU2 核内存4 G硬盘40 G |
| **Debian** *Buster, Stretch* | CPU2 核内存4 G硬盘40 G |
| **CentOS** *7*.x | CPU2 核内存4 G硬盘40 G |
| **Red Hat Enterprise Linux** *7* | CPU2 核内存4 G硬盘40 G |
| **SUSE Linux Enterprise Server** *15* **/openSUSE Leap** *15.2* | CPU2 核内存4 G硬盘40 G |
{{< notice note >}}
`/var/lib/docker`路径主要用于存储容器数据,通常在使用过程中数据量会逐渐增加,因此在生产环境中,建议将`/var/lib/docker`挂载在单独的数据盘上
`/var/lib/docker` 路径主要用于存储容器数据,在使用和操作过程中数据量会逐渐增加。因此,在生产环境中,建议为 `/var/lib/docker` 单独挂载一个硬盘
{{</ notice >}}
### 节点要求
- 所有节点必须可以通过 SSH 访问。
- 所有节点配置时钟同步。
- 所有节点必须可以使用`sudo`/`curl`/`openssl`。
- Docker 可以自己预先安装或由 `KubeKey` 统一安装。
- 所有节点必须都能通过 `SSH` 访问。
- 所有节点时间同步。
- 所有节点都应使用 `sudo`/`curl`/`openssl`。
- `docker` 可以由您自己安装或由 KubeKey 安装。
{{< notice note >}}
如果您的环境不能访问外网,则必须预先安装`docker`,然后用离线方式安装
如果您想在离线环境中部署 KubeSphere请务必提前安装 `docker`
{{</ notice >}}
### 软件依赖要求
### 依赖要求
不同版本的 Kubernetes 对系统软件要求有所不同,您需要根据自己的环境按照下面的要求预先安装依赖软件
KubeKey 可以一同安装 Kubernetes 和 KubeSphere。根据要安装的 Kubernetes 版本,需要安装的依赖项可能会不同。您可以参考下表,查看是否需要提前在节点上安装相关依赖项
| 依赖 | Kubernetes 版本 ≥ 1.18 | Kubernetes 版本 < 1.18 |
| ----------- | ---------------------- | --------------------- |
| `socat` | 必须 | 可选但建议 |
| `conntrack` | 必须 | 可选但建议 |
| `ebtables` | 可选但建议 | 可选但建议 |
| `ipset` | 可选但建议 | 可选但建议 |
| 依赖 | Kubernetes 版本 ≥ 1.18 | Kubernetes 版本 < 1.18 |
| ----------- | ---------------------- | ---------------------- |
| `socat` | 必须 | 可选但建议 |
| `conntrack` | 必须 | 可选但建议 |
| `ebtables` | 可选但建议 | 可选但建议 |
| `ipset` | 可选但建议 | 可选但建议 |
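To satisfy the table above ahead of time on a CentOS node, the dependencies can be installed in one go (a sketch for yum-based systems):
```bash
# socat and conntrack are required for Kubernetes >= 1.18;
# ebtables and ipset are optional but recommended
yum install -y socat conntrack ebtables ipset
```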
### 网络和 DNS 要求
- 确保`/etc/resolv.conf`中的 DNS 地址可用,否则,可能会导致集群中出现某些 DNS 问题。
- 如果您的网络配置使用防火墙或安全组,则必须确保基础结构组件可以通过特定端口相互通信。建议您关闭防火墙或遵循指南[端口要求](../port-firewall/)。
- 确保 `/etc/resolv.conf` 中的 DNS 地址可用,否则,可能会导致集群中的 DNS 出现问题。
- 如果您的网络配置使用防火墙或安全组,请务必确保基础设施组件可以通过特定端口相互通信。建议您关闭防火墙或遵循指南[端口要求](../../../installing-on-linux/introduction/port-firewall/)。
{{< notice tip >}}
- 建议您的操作系统是干净的(不安装任何其他软件),否则可能会发生冲突。
- 如果您在从 dockerhub.io 下载镜像时遇到问题,建议准备一个容器镜像(加速器)。请参阅[配置镜像 mirror 加速安装](../../faq/configure-booster/) 或 [Configure registry mirrors for the Docker daemon](https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon)。
- 建议您使用干净的操作系统(不安装任何其他软件)。否则,可能会有冲突。
- 如果您在从 `dockerhub.io` 下载镜像时遇到问题,建议准备一个容器镜像(加速器)。请参见[为安装配置加速器](../../../faq/installation/configure-booster/)或[为 Docker Daemon 配置仓库镜像](https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon)。
{{</ notice >}}
本示例包括以下三个主机,其中主节点用作**任务箱**
本示例包括以下三台主机,其中主节点充当任务机
| Host IP | Host Name | Role |
| ----------- | --------- | ------------ |
| 192.168.0.2 | master | master, etcd |
| 192.168.0.3 | node1 | worker |
| 192.168.0.4 | node2 | worker |
| 主机 IP | 主机名称 | 角色 |
| ----------- | -------- | ------------ |
| 192.168.0.2 | master | master, etcd |
| 192.168.0.3 | node1 | worker |
| 192.168.0.4 | node2 | worker |
## 步骤2下载 KubeKey
## 步骤 2下载 KubeKey
按照以下步骤下载 KubeKey。
{{< tabs >}}
{{< tab "如果您能正常访问 GitHub/Googleapis" >}}
从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或直接使用以下命令。
从 [GitHub 发布页面](https://github.com/kubesphere/kubekey/releases)下载 KubeKey 或直接使用以下命令。
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -
@ -121,7 +124,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -
export KKZONE=cn
```
执行以下命令下载 KubeKey
执行以下命令下载 KubeKey
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -
@ -129,7 +132,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -
{{< notice note >}}
在您下载 KubeKey 后,如果您将其传至新的机器,且访问 Googleapis 同样受限,在您执行以下步骤之前请务必再次执行 `export KKZONE=cn` 命令。
下载 KubeKey 后,如果您将其传至新的机器,且访问 Googleapis 同样受限,请您在执行以下步骤之前务必再次执行 `export KKZONE=cn` 命令。
{{</ notice >}}
@ -141,7 +144,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -
执行以上命令会下载最新版 KubeKey (v1.0.1),您可以修改命令中的版本号下载指定版本。
{{</ notice >}}
{{</ notice >}}
`kk` 添加可执行权限:
@ -149,45 +152,36 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -
chmod +x kk
```
## 步骤3创建一个集群
## 步骤 3创建集群
对于多节点安装,需要通过指定配置文件来创建集群。
对于多节点安装,需要通过指定配置文件来创建集群。
### 1. 创建一个示例配置文件
### 1. 创建示例配置文件
命令:
命令
```bash
./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]
```
{{< notice info >}}
{{< notice note >}}
支持的 Kubernetes 版本:*v1.15.12*, *v1.16.13*, *v1.17.9* (默认), *v1.18.6*.
- 支持的 Kubernetes 版本:*v1.15.12*、*v1.16.13*、*v1.17.9*(默认)、*v1.18.6*。
- 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。
- 如果您添加标志 `--with-kubesphere` 时不指定 KubeSphere 版本,则会安装最新版本的 KubeSphere。
{{</ notice >}}
以下是一些示例供您参考:
- 可以使用默认配置创建示例配置文件,还可以使用其他文件名或其他文件夹指定待创建文件
- 可以使用默认配置创建示例配置文件,也可以为该文件指定其他文件名或其他文件夹
```bash
./kk create config [-f ~/myfolder/abc.yaml]
```
- 可以在`config-sample.yaml`中自定义持久性存储插件(例如 NFS ClientCeph RBD 和 GlusterFS
```bash
./kk create config --with-storage localVolume
```
{{< notice note >}}
默认情况下KubeKey 将安装 [OpenEBS](https://openebs.io/) 并配置 [LocalPV](https://kubernetes.io/docs/concepts/storage/volumes/#local) 为默认存储类,方便用户部署开发或测试环境。本示例也采取这种默认安装方式。对于生产环境,请使用 NFS/Ceph/GlusterFS/CSI 或商业产品作为持久性存储解决方案,则需要在 `config-sample.yaml``addons` 下配置存储信息。有关更多详细信息,请参见 [持久化存储配置](../storage-configuration/)。
{{</ notice >}}
- 可以指定要安装的 KubeSphere 版本(例如`--with-kubesphere v3.0.0`)。
- 您可以指定要安装的 KubeSphere 版本(例如 `--with-kubesphere v3.0.0`)。
```bash
./kk create config --with-kubesphere [version]
@ -195,11 +189,11 @@ chmod +x kk
### 2. 编辑配置文件
如果不更改名称,将创建默认文件 `config-sample.yaml`。编辑文件,这是具有一个主节点的多节点集群的配置文件示例
如果不更改名称,将创建默认文件 `config-sample.yaml`。编辑文件,以下是多节点集群配置文件的示例,它具有一个主节点
{{< notice note >}}
Kubernetes 相关的参数配置参见 [Kubernetes 集群配置](../vars/)。
要自定义 Kubernetes 相关参数,请参考 [Kubernetes 集群配置](../../../installing-on-linux/introduction/vars/)。
{{</ notice >}}
@ -223,24 +217,24 @@ spec:
port: "6443"
```
#### hosts
#### 主机
请在 `hosts` 下列出您所有安装机器的详细信息。
参照上方示例在 `hosts` 下列出您的所有机器并添加详细信息。
`name`:实例的主机名
`name`:实例的主机名。
`address`此 IP 地址用于您通过 SSH 从任务执行机连接至其他实例,可以是公共 IP 或私有 IP 地址,具体取决于安装环境。例如,部分云平台会为每个实例提供一个公共 IP 地址,用于通过 SSH 进行访问。在这种情况下,请在此字段填入该公共 IP 地址。
`address`任务机和其他实例通过 SSH 相互连接所使用的 IP 地址。根据您的环境,可以是公共 IP 地址或私有 IP 地址。例如,一些云平台为每个实例提供一个公共 IP 地址,用于通过 SSH 访问。在这种情况下,您可以在该字段填入这个公共 IP 地址。
`internalAddress`:实例的私有 IP 地址。
- 本教程中端口 22 是 SSH 的默认端口。如果您使用其他端口,请在 IP 地址后添加对应端口号,例如:
- 在本教程中,端口 22 是 SSH 的默认端口,因此您无需将它添加至 YAML 文件中。否则,您需要在 IP 地址后添加对应端口号。例如:
```yaml
hosts:
- {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, port: 8022, user: ubuntu, password: Testing123}
```
- 默认是 root 用户示例:
- 默认 root 用户示例:
```yaml
hosts:
@ -256,34 +250,34 @@ spec:
{{< notice tip >}}
在安装 KubeSphere 之前,建议您先使用 `hosts` 下所提供的信息(例如 IP 地址和密码)通过 SSH 的方式测试任务执行机和其他实例之间的网络连接。
在安装 KubeSphere 之前,您可以使用 `hosts`提供的信息(例如 IP 地址和密码)通过 SSH 的方式测试任务机和其他实例之间的网络连接。
{{</ notice >}}
#### roleGroups
- `etcd`etcd 节点名称
- `master`Master 节点名称
- `worker`Worker 节点名称
- `master`节点名称
- `worker`工作节点名称
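As an illustrative sketch, a `roleGroups` section matching the three hosts in the table above might look like this (the host names must match the `name` fields under `hosts`):
```bash
# Print an example roleGroups snippet for config-sample.yaml
cat <<'EOF'
  roleGroups:
    etcd:
    - master
    master:
    - master
    worker:
    - node1
    - node2
EOF
```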
#### controlPlaneEndpoint (仅用于 HA 安装)
#### controlPlaneEndpoint(仅适用于高可用安装)
`controlPlaneEndpoint`允许您为 HA 集群定义外部负载均衡器。当且仅当安装多个主节点时,才需要配置外部负载均衡器。请注意,地址和端口应在`config-sample.yaml`中以两个空格缩进,`address`应为 VIP。有关详细信息请参见 [HA 配置](../ha-configuration/)。
`controlPlaneEndpoint` 使您可以为高可用集群定义外部负载均衡器。当且仅当安装多个主节点时,才需要准备和配置外部负载均衡器。请注意,`config-sample.yaml` 中的地址和端口应缩进两个空格,`address` 应为 VIP。有关详细信息请参见[高可用配置](../../../installing-on-linux/introduction/ha-configuration/)。
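For a high-availability setup, the corresponding section might look like the sketch below (the VIP and domain are placeholders; single-master installs can leave `address` empty):
```bash
# Print an example controlPlaneEndpoint snippet for config-sample.yaml
cat <<'EOF'
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "192.168.0.100"   # VIP of the external load balancer (placeholder)
    port: "6443"
EOF
```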
#### addons
您可以在此配置文件`config-sample.yaml`里设置持久化存储,比如 NFS 客户端、Ceph RBD、GlusterFS 等,信息添加在`addons`下,详细说明请参阅[持久化存储配置](../storage-configuration)。
您可以在 `config-sample.yaml``addons` 字段下指定存储,从而自定义持久化存储插件,例如 NFS 客户端、Ceph RBD、GlusterFS 等。有关更多信息,请参见[持久化存储配置](../../../installing-on-linux/introduction/storage-configuration/)。
{{< notice note >}}
KubeSphere 默认情况下安装 [openEBS](https://openebs.io/) 的[本地卷](https://kubernetes.io/docs/concepts/storage/volumes/#local)作为存储类型,方便在开发或测试环境下快速安装。如果需要在生产环境安装,请使用 NFS/Ceph/GlusterFS/CSI 或者商业化存储
KubeSphere 会默认安装 [OpenEBS](https://openebs.io/),为开发和测试环境配置 [LocalPV](https://kubernetes.io/docs/concepts/storage/volumes/#local),方便新用户。在本多节点安装示例中,使用了默认存储类型(本地存储卷)。对于生产环境,请使用 NFS/Ceph/GlusterFS/CSI 或者商业存储产品作为持久化存储解决方案
{{</ notice >}}
{{< notice tip >}}
- 可以通过编辑配置文件的方式启用多集群功能。有关更多信息,请参阅[多集群管理](../../../multicluster-management/)。
- 也可以选择要安装的组件。有关更多信息,请参见[启用可插拔组件](../../../pluggable-components/)。有关完整的 config-sample.yaml 文件的示例,请参见[此文件](https://github.com/kubesphere/kubekey/blob/release-1.0/docs/config-example.md)。
- 您可以编辑配置文件,启用多集群功能。有关更多信息,请参见[多集群管理](../../../multicluster-management/)。
- 也可以选择要安装的组件。有关更多信息,请参见[启用可插拔组件](../../../pluggable-components/)。有关完整的 `config-sample.yaml` 文件的示例,请参见[此文件](https://github.com/kubesphere/kubekey/blob/release-1.0/docs/config-example.md)。
{{</ notice >}}
@ -297,7 +291,7 @@ KubeSphere 默认情况下安装 [openEBS](https://openebs.io/) 的[本地卷](h
{{< notice note >}}
如果使用其他名称,则需要将上面的`config-sample.yaml`更改为您自己的文件。
如果使用其他名称,则需要将上面的 `config-sample.yaml` 更改为您自己的文件。
{{</ notice >}}
@ -305,7 +299,7 @@ KubeSphere 默认情况下安装 [openEBS](https://openebs.io/) 的[本地卷](h
### 4. 验证安装
安装完成后,可以看到类似于如下内容:
安装完成后,您会看到如下内容:
```bash
#####################################################
@ -329,21 +323,21 @@ https://kubesphere.io 20xx-xx-xx xx:xx:xx
#####################################################
```
现在可以使用帐户和密码`admin/P@88w0rd`访问`http://{IP}:30880`KubeSphere Web 控制台。
现在,您可以通过 `http://{IP}:30880`(例如,您可以使用 EIP使用帐户和密码 `admin/P@88w0rd` 访问 KubeSphere Web 控制台。
{{< notice note >}}
如果公网要访问控制台,您可能需要将源端口转发到 Intranet IP 和端口,具体取决于您的云提供商的平台,还请确保在安全组中打开了端口 30880。
要访问控制台,您可能需要根据您的环境配置端口转发规则。还请确保在您的安全组中打开了端口 30880。
{{</ notice >}}
![kubesphere-login](https://ap3.qingstor.com/kubesphere-website/docs/login.png)
![登录](/images/docs/zh-cn/installing-on-linux/introduction/multi-node-installation/login.PNG)
## 启用 kubectl 自动补全
KubeKey 不会启用 kubectl 自动补全功能,请参阅下面的内容并将其打开:
KubeKey 不会启用 kubectl 自动补全功能,请参见以下内容并将其打开:
**先决条件**确保已安装 bash-autocompletion 并可以正常工作。
**准备工作**:请确保已安装 bash-autocompletion 并可以正常工作。
```bash
# Install bash-completion
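# An illustrative sequence for CentOS (see the Kubernetes docs linked below for
# other distributions):
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion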
@ -358,6 +352,5 @@ kubectl completion bash >/etc/bash_completion.d/kubectl
详细信息[见此](https://kubernetes.io/docs/tasks/tools/install-kubectl/#enabling-shell-autocompletion)。
## 示例
<script src="https://asciinema.org/a/364501.js" id="asciicast-364501" async></script>
## 演示
<script src="https://asciinema.org/a/364501.js" id="asciicast-364501" async></script>

View File

@ -2,25 +2,25 @@
title: "Deploy KubeSphere on Bare Metal"
keywords: 'Kubernetes, KubeSphere, bare-metal'
description: 'How to install KubeSphere on bare metal.'
linkTitle: "Deploy KubeSphere on Bare Metal"
weight: 3320
---
## Introduction
In addition to the deployment on cloud, KubeSphere can also be installed on bare metal. As the virtualization layer is removed, the infrastructure overhead is drastically reduced, which brings more compute and storage resources to app deployments. As a result, hardware efficiency is improved. Refer to the example below of how to deploy KubeSphere on bare metal.
In addition to the deployment on cloud, KubeSphere can also be installed on bare metal. As the virtualization layer is removed, the infrastructure overhead is drastically reduced, which brings more compute and storage resources to app deployments. As a result, hardware efficiency is improved. Refer to the example below to deploy KubeSphere on bare metal.
## Prerequisites
- Please make sure that you already know how to install KubeSphere with a multi-node cluster based on the tutorial [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/).
- Make sure you already know how to install KubeSphere on a multi-node cluster based on the tutorial [Multi-Node Installation](../../../installing-on-linux/introduction/multioverview/).
- Server and network redundancy in your environment.
- Considering data persistence, for a production environment, it is recommended you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
- For a production environment, it is recommended that you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
## Prepare Linux Hosts
This tutorial uses three physical machines (**DELL 620, Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz, 32 GB memory**), on which **CentOS Linux release 7.6.1810 (Core)** will be installed for a minimal deployment of KubeSphere.
### CentOS Installation
### Install CentOS
Download and install the [image](http://mirror1.es.uci.edu/centos/7.6.1810/isos/x86_64/CentOS-7-x86_64-DVD-1810.iso) first. Make sure you allocate at least 200 GB to the root directory, where Docker images are stored (you can skip this requirement if you are installing KubeSphere for testing).
@ -35,108 +35,108 @@ Here is a list of the three hosts for your reference.
|192.168.60.153|worker1|worker|
|192.168.60.154|worker2|worker|
### NIC Setting
### NIC settings
1. Clear NIC configurations.
```bash
ifdown em1
```
```bash
ifdown em2
```
```bash
rm -rf /etc/sysconfig/network-scripts/ifcfg-em1
```
```bash
rm -rf /etc/sysconfig/network-scripts/ifcfg-em2
```
```bash
ifdown em1
```
```bash
ifdown em2
```
```bash
rm -rf /etc/sysconfig/network-scripts/ifcfg-em1
```
```bash
rm -rf /etc/sysconfig/network-scripts/ifcfg-em2
```
2. Create the NIC bonding.
```bash
nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad ip4 192.168.60.152/24 gw4 192.168.60.254
```
```bash
nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad ip4 192.168.60.152/24 gw4 192.168.60.254
```
3. Set the bonding mode.
```bash
nmcli con mod id bond0 bond.options mode=802.3ad,miimon=100,lacp_rate=fast,xmit_hash_policy=layer2+3
```
```bash
nmcli con mod id bond0 bond.options mode=802.3ad,miimon=100,lacp_rate=fast,xmit_hash_policy=layer2+3
```
4. Bind the physical NIC.
```bash
nmcli con add type bond-slave ifname em1 con-name em1 master bond0
```
```bash
nmcli con add type bond-slave ifname em1 con-name em1 master bond0
```
```bash
nmcli con add type bond-slave ifname em2 con-name em2 master bond0
```
```bash
nmcli con add type bond-slave ifname em2 con-name em2 master bond0
```
5. Change the NIC mode.
```bash
vi /etc/sysconfig/network-scripts/ifcfg-bond0
BOOTPROTO=static
```
```bash
vi /etc/sysconfig/network-scripts/ifcfg-bond0
BOOTPROTO=static
```
6. Restart Network Manager.
```bash
systemctl restart NetworkManager
```
```bash
systemctl restart NetworkManager
```
```bash
nmcli con # Display NIC information
```
```bash
nmcli con # Display NIC information
```
7. Change the host name and DNS.
```bash
hostnamectl set-hostname worker-1
```
```bash
hostnamectl set-hostname worker-1
```
```bash
vim /etc/resolv.conf
```
```bash
vim /etc/resolv.conf
```
### Time Setting
### Time settings
1. Synchronize time.
```bash
yum install -y chrony
```
```bash
systemctl enable chronyd
```
```bash
systemctl start chronyd
```
```bash
timedatectl set-ntp true
```
```bash
yum install -y chrony
```
```bash
systemctl enable chronyd
```
```bash
systemctl start chronyd
```
```bash
timedatectl set-ntp true
```
2. Set the time zone.
```bash
timedatectl set-timezone Asia/Shanghai
```
```bash
timedatectl set-timezone Asia/Shanghai
```
3. Check if the ntp-server is available.
```bash
chronyc activity -v
```
```bash
chronyc activity -v
```
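Beyond `chronyc activity`, you can also verify that the clock is actually synchronized (read-only checks):
```bash
# Show the current time, time zone, and NTP synchronization status
timedatectl status

# Show the NTP sources chrony is using and the currently selected server
chronyc sources -v
chronyc tracking
```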
### Firewall Setting
### Firewall settings
Execute the following commands to stop and disable the FirewallD service:
@ -156,7 +156,7 @@ systemctl stop firewalld
systemctl disable firewalld
```
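You can confirm that the service is stopped and will not come back after a reboot with:
```bash
# Both commands should report that firewalld is inactive/disabled
systemctl is-active firewalld
systemctl is-enabled firewalld
```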
### Package Update and Dependencies
### Package updates and dependencies
Execute the following commands to update system packages and install dependencies.
@ -244,7 +244,7 @@ Make `kk` executable:
chmod +x kk
```
## Create a Multi-node Cluster
## Create a Multi-Node Cluster
With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file.
@ -256,11 +256,14 @@ Create a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v
{{< notice note >}}
The following Kubernetes versions have been fully tested with KubeSphere: v1.15.12, v1.16.13, v1.17.9 (default) and v1.18.6.
- The following Kubernetes versions have been fully tested with KubeSphere: v1.15.12, v1.16.13, v1.17.9 (default) and v1.18.6.
- If you do not add the flag `--with-kubesphere` in the command above, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
{{</ notice >}}
A default file **config-sample.yaml** will be created. Modify it according to your environment.
A default file `config-sample.yaml` will be created. Modify it according to your environment.
```bash
vi config-sample.yaml
@ -295,7 +298,7 @@ Create a cluster using the configuration file you customized above:
./kk create cluster -f config-sample.yaml
```
#### Verify the Multi-node Installation
#### Verify the installation
After the installation finishes, you can inspect the installation logs by executing the command below:
@ -303,7 +306,7 @@ After the installation finishes, you can inspect the logs of installation by exe
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```
If you can see the welcome log return, it means the installation is successful. Your cluster is up and running.
If you see the welcome log returned, the installation is successful.
```bash
**************************************************
@ -325,74 +328,74 @@ https://kubesphere.io 20xx-xx-xx xx:xx:xx
#####################################################
```
#### Log in the Console
#### Log in the console
You can use the default account and password `admin/P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after login.
#### Enable Pluggable Components (Optional)
#### Enable pluggable components (Optional)
The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see [Enable Pluggable Components](../../../pluggable-components/) for more details.
## System Improvements
- Update your system.
```bash
yum update
```
```bash
yum update
```
- Add the required options to the kernel boot arguments:
```bash
sudo /sbin/grubby --update-kernel=ALL --args='cgroup_enable=memory cgroup.memory=nokmem swapaccount=1'
```
```bash
sudo /sbin/grubby --update-kernel=ALL --args='cgroup_enable=memory cgroup.memory=nokmem swapaccount=1'
```
- Enable the `overlay2` kernel module.
```bash
echo "overlay2" | sudo tee -a /etc/modules-load.d/overlay.conf
```
```bash
echo "overlay2" | sudo tee -a /etc/modules-load.d/overlay.conf
```
- Refresh the dynamically generated grub2 configuration.
```bash
sudo grub2-set-default 0
```
```bash
sudo grub2-set-default 0
```
- Adjust kernel parameters and make the change effective.
```bash
cat <<EOF | sudo tee -a /etc/sysctl.conf
vm.max_map_count = 262144
fs.may_detach_mounts = 1
net.ipv4.ip_forward = 1
vm.swappiness=1
kernel.pid_max =1000000
fs.inotify.max_user_instances=524288
EOF
sudo sysctl -p
```
```bash
cat <<EOF | sudo tee -a /etc/sysctl.conf
vm.max_map_count = 262144
fs.may_detach_mounts = 1
net.ipv4.ip_forward = 1
vm.swappiness=1
kernel.pid_max =1000000
fs.inotify.max_user_instances=524288
EOF
sudo sysctl -p
```
- Adjust system limits.
```bash
vim /etc/security/limits.conf
* soft nofile 1024000
* hard nofile 1024000
* soft memlock unlimited
* hard memlock unlimited
root soft nofile 1024000
root hard nofile 1024000
root soft memlock unlimited
```
```bash
vim /etc/security/limits.conf
* soft nofile 1024000
* hard nofile 1024000
* soft memlock unlimited
* hard memlock unlimited
root soft nofile 1024000
root hard nofile 1024000
root soft memlock unlimited
```
- Remove the previous limit configuration.
```bash
sudo rm /etc/security/limits.d/20-nproc.conf
```
```bash
sudo rm /etc/security/limits.d/20-nproc.conf
```
- Reboot the system.
```bash
reboot
```
```bash
reboot
```

View File

@ -343,11 +343,15 @@ chmod +x kk
{{< notice note >}}
经过充分测试的 Kubernetes 版本有v1.15.12v1.16.13v1.17.9 (默认)v1.18.6,您可以根据需要指定版本。
- 经过充分测试的 Kubernetes 版本有v1.15.12v1.16.13v1.17.9默认v1.18.6,您可以根据需要指定版本。
- 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。
- 如果您添加标志 `--with-kubesphere` 时不指定 KubeSphere 版本,则会安装最新版本的 KubeSphere。
{{</ notice >}}
#### 集群节点配置
默认文件 `config-sample.yaml` 创建后,根据您的环境修改该文件。
```bash
vi ~/config-sample.yaml

View File

@ -151,7 +151,11 @@ chmod +x kk
```
{{< notice note >}}
These Kubernetes versions have been fully tested with KubeSphere: v1.15.12, v1.16.13, v1.17.9 (default), and v1.18.6.
- These Kubernetes versions have been fully tested with KubeSphere: v1.15.12, v1.16.13, v1.17.9 (default), and v1.18.6.
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
{{</ notice >}}

View File

@ -178,6 +178,7 @@ Create an example configuration file with default configurations. Here Kubernete
- These Kubernetes versions have been fully tested with KubeSphere: v1.15.12, v1.16.13, v1.17.9 (default), and v1.18.6.
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
{{</ notice >}}

View File

@ -1,72 +1,72 @@
---
title: "Project Gateway"
keywords: 'KubeSphere, Kubernetes, project, gateway, NodePort, LoadBalancer'
description: 'How to set a project gateway in KubeSphere.'
linkTitle: "Project Gateway"
title: "项目网关"
keywords: 'KubeSphere, Kubernetes, 项目, 网关, NodePort, LoadBalancer'
description: '如何在 KubeSphere 中设置项目网关。'
linkTitle: "项目网关"
weight: 13500
---
A gateway in a KubeSphere project is an [NGINX Ingress controller](https://www.nginx.com/products/nginx/kubernetes-ingress-controller). KubeSphere has a built-in configuration for HTTP load balancing, called [Routes](../../project-user-guide/application-workloads/ingress/). A Route defines rules for external connections to Services within a cluster. Users who need to provide external access to their Services create a Route resource that defines rules, including the URI path, backing service name, and other information.
KubeSphere 项目中的网关是一个[ NGINX Ingress 控制器](https://www.nginx.com/products/nginx/kubernetes-ingres-controller)。KubeSphere 内置的用于 HTTP 负载均衡的机制称为[路由](../../project-user-guide/application-workloads/ingress/),它定义了从外部到集群服务的连接规则。如需允许从外部访问服务,用户可创建路由资源来定义 URI 路径、后端服务名称等信息。
In KubeSphere 3.0, each project gateway works independently. In other words, every project has its own Ingress controller. In the next release, KubeSphere will provide a cluster-scope gateway in addition to the project-scope gateway, allowing all projects to share the same gateway.
在 KubeSphere 3.0,项目网关单独运行,即每个项目都有自己的 Ingress 控制器。在下一个发布版本中KubeSphere 除了提供项目范围的网关外,还将提供集群范围的网关,使得所有项目都能共享相同的网关。
This tutorial demonstrates how to set a gateway in KubeSphere for the external access to Services and Routes.
本教程演示如何在 KubeSphere 中设置网关以从外部访问服务和路由。
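For context, a Route is backed by a standard Kubernetes Ingress object. A minimal sketch (the host, service name, and port are hypothetical) looks like this:
```bash
# Create an illustrative Route (Ingress) pointing demo.kubesphere.io at a Service
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-route
  namespace: demo-project
spec:
  rules:
  - host: demo.kubesphere.io
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-service
          servicePort: 80
EOF
```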
## Prerequisites
## 准备工作
You need to create a workspace, a project and an account (`project-admin`). The account must be invited to the project with the role of `admin` at the project level. For more information, see [Create Workspace, Project, Account and Role](../../../docs/quick-start/create-workspace-and-project).
您需要创建一个企业空间、一个项目和一个帐户 (`project-admin`)。该帐户必须被邀请至项目,并且在项目中的角色为 `admin`。有关更多信息,请参见[创建企业空间、项目、帐户和角色](../../../docs/quick-start/create-workspace-and-project)。
## Set a Gateway
## 设置网关
1. Log in the KubeSphere web console as `project-admin` and go to your project. In **Project Settings** from the navigation bar, select **Advanced Settings** and click **Set Gateway**.
1. `project-admin` 用户登录 KubeSphere Web 控制台,进入您的项目,从左侧导航栏进入**项目设置**下的**高级设置**页面,然后点击**设置网关**。
![set-project-gateway](/images/docs/project-administration/project-gateway/set-project-gateway.jpg)
![set-project-gateway](/images/docs/zh-cn/project-administration/project-gateway/set-project-gateway.jpg)
2. In the pop-up window, you can select two access modes for the gateway.
2. 在弹出的对话框中选择网关的访问方式。
![access-method](/images/docs/project-administration/project-gateway/access-method.png)
![access-method](/images/docs/zh-cn/project-administration/project-gateway/access-method.png)
**NodePort**: You can access Services with corresponding node ports through the gateway.
**NodePort**:通过网关访问服务对应的节点端口。
**LoadBalancer**: You can access Services with a single IP address through the gateway.
**LoadBalancer**:通过网关访问服务的单独 IP 地址。
3. You can also enable **Application Governance** on the **Set Gateway** page. You need to enable **Application Governance** to use the Tracing feature and [different grayscale release strategies](../../project-user-guide/grayscale-release/overview/). After it is enabled, if your route (Ingress) is inaccessible, check whether an annotation (e.g. `nginx.ingress.kubernetes.io/service-upstream: true`) has been added to it.
3. 在**设置网关**对话框,您可以启用**应用治理**以使用 Tracing 功能和[不同的灰度发布策略](../../project-user-guide/grayscale-release/overview/)。如果启用**应用治理**后无法访问路由,请在路由 (Ingress) 中添加注解(例如 `nginx.ingress.kubernetes.io/service-upstream: true`)。
4. After you select an access method, click **Save**.
4. 选择访问方式后点击**保存**。
## NodePort
If you select **NodePort**, KubeSphere will set a port for HTTP and HTTPS requests respectively. You can access your Service at `EIP:NodePort` or `Hostname:NodePort`.
如果您选择 **NodePort**KubeSphere 将为 HTTP 请求和 HTTPS 请求分别设置一个端口。您可以用 `EIP:NodePort``Hostname:NodePort` 地址访问服务。
![nodeport](/images/docs/project-administration/project-gateway/nodeport.jpg)
![nodeport](/images/docs/zh-cn/project-administration/project-gateway/nodeport.jpg)
For example, to access your Service with an elastic IP address (EIP), visit:
例如,如果您的服务配置了的弹性 IP 地址 (EIP),请访问:
- `http://EIP:32734`
- `https://EIP:32471`
When you create a [Route](../../project-user-guide/application-workloads/ingress/) (Ingress), you can customize a host name to access your Service. For example, to access your Service with the host name set in your Route, visit:
当创建[路由](../../project-user-guide/application-workloads/ingress/) (Ingress) 时,您可以自定义主机名用于访问服务。例如,如果您的路由中配置了服务的主机名,请访问:
- `http://demo.kubesphere.io:32734`
- `https://demo.kubesphere.io:32471`
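As a quick way to test the host-name based access above (a sketch; the ports and host name are the examples from this page, and the IP address is a placeholder):
```bash
# Resolve the custom host name to a node/EIP address locally, then test the route
echo "203.0.113.10 demo.kubesphere.io" | sudo tee -a /etc/hosts
curl http://demo.kubesphere.io:32734
curl -k https://demo.kubesphere.io:32471
```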
{{< notice note >}}
- You may need to open ports in your security groups and configure relevant port forwarding rules depending on your environment.
- 取决于您的环境,您可能需要在安全组中放行端口并配置相关的端口转发规则 。
- If you access your Service using the host name, make sure the domain name you set can be resolved to the IP address.
- **NodePort** is not recommended for a production environment. You can use **LoadBalancer** instead.
- 如果使用主机名访问服务,请确保您设置的域名可以解析为对应的 IP 地址。
- 在生产环境中不建议使用 **NodePort**,请使用 **LoadBalancer**
{{</ notice >}}
## LoadBalancer
You must configure a load balancer in advance before you select **LoadBalancer**. The IP address of the load balancer will be bound to the gateway to provide access to internal Services and Routes.
在选择 **LoadBalancer** 前,您必须先配置负载均衡器。负载均衡器的 IP 地址将与网关绑定以便内部的服务和路由可以访问。
![lb](/images/docs/project-administration/project-gateway/lb.png)
![lb](/images/docs/zh-cn/project-administration/project-gateway/lb.png)
{{< notice note >}}
Cloud providers often support load balancer plugins. If you install KubeSphere on major Kubernetes engines on their platforms, you may notice a load balancer is already available in the environment for you to use. If you install KubeSphere in a bare metal environment, you can use [Porter](https://github.com/kubesphere/porter) for load balancing.
云厂商通常支持负载均衡器插件。如果在主流的 Kubernetes Engine 上安装 KubeSphere您可能会发现环境中已有可用的负载均衡器。如果在裸金属环境中安装 KubeSphere您可以使用 [Porter](https://github.com/kubesphere/porter) 作为负载均衡器。
{{</ notice >}}

View File

@ -1,131 +1,131 @@
---
title: "Deploy Apps from App Templates"
keywords: 'Kubernetes, chart, helm, KubeSphere, application, app templates'
description: 'How to deploy apps from app templates in a private repository.'
linkTitle: "Deploy Apps from App Templates"
title: "从应用模板部署应用"
keywords: 'Kubernetes, chart, helm, KubeSphere, 应用程序, 应用模板'
description: '如何从私有应用仓库的应用模板部署应用。'
linkTitle: "从应用模板部署应用"
weight: 10120
---
When you deploy an app, you can select the app from the App Store which contains built-in apps of KubeSphere and [apps uploaded as Helm charts](../../../workspace-administration/upload-helm-based-application/). Alternatively, you can use apps from private app repositories added to KubeSphere to provide app templates.
部署应用时,您可选择使用应用商店。应用商店包含了 KubeSphere 的内置应用和[以 Helm Chart 形式上传的应用](../../../workspace-administration/upload-helm-based-application/)。此外,您还可以使用应用模板。应用模板可由添加至 KubeSphere 的私有应用仓库提供。
This tutorial demonstrates how to quickly deploy [Grafana](https://grafana.com/) using the app template from a private repository, which is based on QingStor object storage.
本教程演示如何使用私有应用仓库中的应用模板快速部署 [Grafana](https://grafana.com/)。该私有应用仓库基于 QingStor 对象存储。
## Prerequisites
## 准备工作
- You have enabled [OpenPitrix (App Store)](../../../pluggable-components/app-store).
- You have completed the tutorial of [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/). Namely, you must have a workspace, a project and two user accounts (`ws-admin` and `project-regular`). `ws-admin` must be granted the role of `workspace-admin` in the workspace and `project-regular` must be granted the role of `operator` in the project.
- 您需要启用 [OpenPitirx (App Store)](../../../pluggable-components/app-store)。
- 您需要先完成[创建企业空间、项目、帐户和角色](../../../quick-start/create-workspace-and-project/)教程。您必须创建一个企业空间、一个项目和两个用户帐户(`ws-admin ` 和 `project-regular`)。`ws-admin` 必须被授予企业空间中的 `workspace-admin` 角色, `project-regular` 必须被授予项目中的 `operator` 角色。
## Hands-on Lab
## 动手实验
### Step 1: Add an App Repository
### 步骤 1添加应用仓库
1. Log in the web console of KubeSphere as `ws-admin`. In your workspace, go to **App Repos** under **Apps Management**, and then click **Add Repo**.
1. `ws-admin` 用户登录 KubeSphere 的 Web 控制台。在您的企业空间中,进入**应用管理**下的**应用仓库**页面,并点击**添加仓库**。
![add-app-repo](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/add-app-repo.jpg)
![add-app-repo](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/add-app-repo.jpg)
2. In the dialogue that appears, enter `test-repo` for the app repository name and `https://helm-chart-repo.pek3a.qingstor.com/kubernetes-charts/` for the repository URL. Click **Validate** to verify the URL and click **OK** to continue.
2. 在弹出的对话框中,将应用仓库名称设置为 `test-repo`,将应用仓库的 URL 设置为 `https://helm-chart-repo.pek3a.qingstor.com/kubernetes-charts/`,点击**验证**对 URL 进行验证,再点击**确定**进入下一步。
![input-repo-info](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/input-repo-info.jpg)
![input-repo-info](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/input-repo-info.jpg)
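Outside the console, the same repository can be cross-checked with the Helm CLI (assuming Helm 3 is installed; this is only a sanity check, not part of the KubeSphere workflow):
```bash
# Add the repository and confirm that charts such as Grafana are listed
helm repo add test-repo https://helm-chart-repo.pek3a.qingstor.com/kubernetes-charts/
helm repo update
helm search repo test-repo/grafana
```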
3. Your repository appears in the list after it is successfully imported to KubeSphere.
3. 应用仓库导入成功后会显示在如下图所示的列表中。
![repository-list](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/repository-list.jpg)
![repository-list](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/repository-list.jpg)
{{< notice note >}}
For more information about dashboard properties as you add a private repository, see [Import Helm Repository](../../../workspace-administration/app-repository/import-helm-repository/).
For more information about the parameters for adding a private repository, see [Import Helm Repository](../../../workspace-administration/app-repository/import-helm-repository/).
{{</ notice >}}
### Step 2: Deploy Grafana from App Templates
### Step 2: Deploy Apps from App Templates
1. Log out of KubeSphere and log back in as `project-regular`. In your project, choose **Applications** under **Application Workloads** and click **Deploy New Application**.
1. Log out of KubeSphere and log back in as `project-regular`. In your project, go to **Applications** under **Application Workloads** and click **Deploy New Application**.
![create-new-app](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/create-new-app.jpg)
![create-new-app](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/create-new-app.jpg)
2. Select **From App Templates** from the pop-up dialogue.
2. In the dialog that appears, select **From App Templates**.
![select-app-templates](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/select-app-templates.jpg)
![select-app-templates](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/select-app-templates.jpg)
**From App Store**: Choose built-in apps and apps uploaded individually as Helm charts.
**From App Store**: Select built-in apps and apps uploaded individually as Helm charts.
**From App Templates**: Choose apps from private app repositories and the workspace app pool.
**From App Templates**: Select apps from private app repositories and the workspace app pool.
3. Select `test-repo` from the drop-down list, which is the private app repository just added.
3. From the drop-down list, select `test-repo`, the private app repository added earlier.
![private-app-template](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/private-app-template.jpg)
![private-app-template](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/private-app-template.jpg)
{{< notice note >}}
The option **From workspace** in the list represents the workspace app pool, which contains apps uploaded as Helm charts. They are also part of app templates.
The **From workspace** option in the drop-down list represents the workspace app pool, which contains apps uploaded as Helm charts. These apps are also part of app templates.
{{</ notice >}}
4. Enter `Grafana` in the search bar to find the app, and then click the result to deploy it.
4. Enter `grafana` in the search bar to find the app, and then click the search result to deploy it.
![search-grafana](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/search-grafana.jpg)
![search-grafana](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/search-grafana.jpg)
{{< notice note >}}
The app repository used in this tutorial is synchronized from the Google Helm repository. Some apps in it may not be deployed successfully as their Helm charts are maintained by different organizations.
The app repository used in this tutorial is synchronized with the Google Helm repository. Because the Helm charts in it are maintained by different organizations, some apps may fail to deploy.
{{</ notice >}}
5. You can view its app information and configuration files. Under **Versions**, select a version number from the list and click **Deploy**.
5. You can view the app information and configuration files. Select a version from the **Versions** drop-down list, and then click **Deploy**.
![deploy-grafana](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/deploy-grafana.jpg)
![deploy-grafana](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/deploy-grafana.jpg)
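If you want to see where the **Versions** list comes from, you can query the repository index from the CLI as well. A sketch, assuming the repository was added locally with `helm repo add` as in the earlier example:

```bash
# List every Grafana chart version published in the repository;
# the Versions drop-down in the console is populated from the same index
helm search repo test-repo/grafana --versions
```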
6. Set an app name and confirm the version and deployment location. Click **Next** to continue.
6. Set the app name, confirm the app version and deployment location, and then click **Next**.
![confirm-info](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/confirm-info.jpg)
7. In **App Config**, you can manually edit the manifest file or click **Deploy** directly.
![confirm-info](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/confirm-info.jpg)
![app-config](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/app-config.jpg)
7. On the **App Config** page, you can manually edit the manifest file or click **Deploy** directly.
8. Wait for Grafana to be up and running.
![app-config](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/app-config.jpg)
### Step 3: Expose Grafana Service
8. Wait until Grafana is created and up and running.
To access Grafana outside the cluster, you need to expose the app through NodePort first.
### Step 3: Expose the Grafana Service
1. Go to **Services** and click the service name of Grafana.
To access Grafana from outside the cluster, you need to expose the app through a NodePort first.
![grafana-services](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/grafana-services.jpg)
1. Open the **Services** page and click the service name of Grafana.
2. Click **More** and select **Edit Internet Access** from the drop-down menu.
![grafana-services](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/grafana-services.jpg)
![edit-access](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/edit-access.jpg)
2. Click **More** and select **Edit Internet Access** from the drop-down menu.
3. Select **NodePort** for **Access Method** and click **OK**. For more information, see [Project Gateway](../../../project-administration/project-gateway/).
![edit-access](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/edit-access.jpg)
![nodeport](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/nodeport.jpg)
3. Set **Access Method** to **NodePort** and click **OK**. For more information, see [Project Gateway](../../../project-administration/project-gateway/).
4. Under **Service Ports**, you can see the port is exposed.
![nodeport](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/nodeport.jpg)
![exposed-port](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/exposed-port.jpg)
4. You can view the exposed port under **Service Ports**.
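You can also confirm the assigned node port from the command line. A sketch, assuming `kubectl` access to the cluster; the namespace `demo-project` and the Service name `grafana` are placeholders for your own project and app name:

```bash
# The PORT(S) column shows the mapping, for example 80:30xxx/TCP
kubectl get svc -n demo-project

# Or read the node port of the Grafana Service directly
kubectl get svc grafana -n demo-project -o jsonpath='{.spec.ports[0].nodePort}'
```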
### Step 4: Access Grafana
![exposed-port](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/exposed-port.jpg)
1. To access the Grafana dashboard, you need the username and password. Navigate to **Secrets** and click the item that has the same name as the app name.
### Step 4: Access Grafana
![grafana-secret](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/grafana-secret.jpg)
1. You need the username and password to log in to the Grafana dashboard. Navigate to **Secrets** and click the item with the same name as the app.
2. On the detail page, click the eye icon to view the username and password.
![grafana-secret](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/grafana-secret.jpg)
![secret-page](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/secret-page.jpg)
2. On the detail page, click the eye icon to view the username and password.
![click-eye-icon](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/click-eye-icon.jpg)
![secret-page](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/secret-page.jpg)
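The same credentials can also be read from the Secret with `kubectl`. This sketch assumes the chart stores them under the `admin-user` and `admin-password` keys (key names may vary between chart versions); `demo-project` and the Secret name `grafana` are placeholders:

```bash
# Secret values are base64-encoded, so decode them before use
kubectl get secret grafana -n demo-project -o jsonpath='{.data.admin-user}' | base64 -d
kubectl get secret grafana -n demo-project -o jsonpath='{.data.admin-password}' | base64 -d
```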
3. Access Grafana through `${Node IP}:${NODEPORT}`.
![click-eye-icon](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/click-eye-icon.jpg)
![grafana-UI](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/grafana-UI.jpg)
3. Access Grafana through `${Node IP}:${NODEPORT}`.
![home-page](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/home-page.jpg)
![grafana-UI](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/grafana-UI.jpg)
![home-page](/images/docs/zh-cn/project-user-guide/applications/deploy-apps-from-app-templates/home-page.jpg)
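Before opening the dashboard in a browser, you can quickly confirm that Grafana is reachable. The node IP, node port, and the `/login` path below are placeholders or assumptions rather than values taken from this guide:

```bash
# Expect an HTTP 200 (or a redirect) from the Grafana web server
curl -I http://<node-ip>:<node-port>/login
```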
{{< notice note >}}
You may need to open the port in your security groups and configure related port forwarding rules depending on where your Kubernetes cluster is deployed.
Depending on where your Kubernetes cluster is deployed, you may need to open the port in your security groups and configure related port forwarding rules.
{{</ notice >}}

View File

@ -150,6 +150,7 @@ chmod +x kk
- Supported Kubernetes versions: *v1.15.12*, *v1.16.13*, *v1.17.9* (default), *v1.18.6*.
- Generally speaking, for an all-in-one installation, you do not need to change any configuration.
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed; KubeKey will only install Kubernetes. If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
- By default, KubeKey installs [OpenEBS](https://openebs.io/) to provision LocalPV for development and testing environments, which is convenient for new users. For other storage classes, see [Persistent Storage Configuration](../../installing-on-linux/introduction/storage-configuration/).
{{</ notice >}}
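As an illustration of the flags described in the note above, the two commands below sketch the difference; they assume the KubeKey binary `kk` from the previous step is in the current directory:

```bash
# Install Kubernetes and deploy the latest KubeSphere in one step
./kk create cluster --with-kubesphere

# Omitting --with-kubesphere installs Kubernetes only
./kk create cluster
```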

(Binary image files added in this commit; contents not shown.)