| OS | Minimum Requirements |
| --- | --- |
| Ubuntu 16.04, 18.04 | 2 CPU cores, 2 GB memory, and 40 GB disk space |
| Debian Buster, Stretch | 2 CPU cores, 2 GB memory, and 40 GB disk space |
| CentOS 7.x | 2 CPU cores, 2 GB memory, and 40 GB disk space |
| Red Hat Enterprise Linux 7 | 2 CPU cores, 2 GB memory, and 40 GB disk space |
| SUSE Linux Enterprise Server 15/openSUSE Leap 15.2 | 2 CPU cores, 2 GB memory, and 40 GB disk space |
| Dependency | Kubernetes Version ≥ 1.18 | Kubernetes Version < 1.18 |
| --- | --- | --- |
| socat | Required | Optional but recommended |
| conntrack | Required | Optional but recommended |
| ebtables | Optional but recommended | Optional but recommended |
| ipset | Optional but recommended | Optional but recommended |
-| version | The Kubernetes version to be installed. If you do not specify a Kubernetes version, {{< contentLink "docs/installing-on-linux/introduction/kubekey" "KubeKey" >}} v1.1.0 will install Kubernetes v1.19.8 by default. For more information, see {{< contentLink "docs/installing-on-linux/introduction/kubekey/#support-matrix" "Support Matrix" >}}. |
+| version | The Kubernetes version to be installed. If you do not specify a Kubernetes version, {{< contentLink "docs/installing-on-linux/introduction/kubekey" "KubeKey" >}} v1.2.1 will install Kubernetes v1.21.5 by default. For more information, see {{< contentLink "docs/installing-on-linux/introduction/kubekey/#support-matrix" "Support Matrix" >}}. |
imageRepo |
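The parameters above are set in the KubeKey configuration file (commonly generated as `config-sample.yaml`). A minimal sketch, assuming the KubeKey v1.2.x schema; field values are illustrative:

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  kubernetes:
    version: v1.21.5       # "version" parameter described above
    imageRepo: kubesphere  # "imageRepo" parameter: registry prefix for component images
```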
@@ -116,10 +116,11 @@ The below table describes the above parameters in detail.
on the right and then select **Edit YAML** to edit `ks-installer`.
- 
-
5. In the YAML file of `ks-installer`, change the value of `jwtSecret` to the corresponding value shown above and set the value of `clusterRole` to `member`. Click **Update** to save your changes.
```yaml
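# Sketch of the relevant fields; the ClusterConfiguration layout below is an
# assumption based on the ks-installer CRD, and the jwtSecret value is a placeholder.
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
spec:
  authentication:
    jwtSecret: <jwtSecret-obtained-from-the-host-cluster>
  multicluster:
    clusterRole: member   # mark this cluster as a member cluster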
@@ -63,20 +59,12 @@ Log in to the web console of Alibaba Cloud. Go to **Clusters** under **Container

-### Step 3: Import the ACK Member Cluster
+### Step 3: Import the ACK member cluster
-1. Log in to the KubeSphere console on your Host Cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**. On the **Cluster Management** page, click **Add Cluster**.
-
- 
+1. Log in to the KubeSphere console on your host cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**. On the **Cluster Management** page, click **Add Cluster**.
2. Enter the basic information based on your needs and click **Next**.
- 
+3. In **Connection Method**, select **Direct connection**. Fill in the kubeconfig file of the ACK member cluster and then click **Create**.
-3. In **Connection Method**, select **Direct Connection**. Fill in the kubeconfig file of the ACK Member Cluster and then click **Create**.
-
- 
-
-4. Wait for cluster initialization to finish.
-
- 
\ No newline at end of file
+4. Wait for cluster initialization to finish.
\ No newline at end of file
diff --git a/content/en/docs/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md b/content/en/docs/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
index 8cac5d377..1882a0ac9 100644
--- a/content/en/docs/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
+++ b/content/en/docs/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
@@ -10,18 +10,18 @@ This tutorial demonstrates how to import an AWS EKS cluster through the [direct
## Prerequisites
-- You have a Kubernetes cluster with KubeSphere installed, and prepared this cluster as the Host Cluster. For more information about how to prepare a Host Cluster, refer to [Prepare a Host Cluster](../../../multicluster-management/enable-multicluster/direct-connection/#prepare-a-host-cluster).
-- You have an EKS cluster to be used as the Member Cluster.
+- You have a Kubernetes cluster with KubeSphere installed, and prepared this cluster as the host cluster. For more information about how to prepare a host cluster, refer to [Prepare a host cluster](../../../multicluster-management/enable-multicluster/direct-connection/#prepare-a-host-cluster).
+- You have an EKS cluster to be used as the member cluster.
## Import an EKS Cluster
-### Step 1: Deploy KubeSphere on your EKS Cluster
+### Step 1: Deploy KubeSphere on your EKS cluster
You need to deploy KubeSphere on your EKS cluster first. For more information about how to deploy KubeSphere on EKS, refer to [Deploy KubeSphere on AWS EKS](../../../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks/#install-kubesphere-on-eks).
-### Step 2: Prepare the EKS Member Cluster
+### Step 2: Prepare the EKS member cluster
-1. In order to manage the Member Cluster from the Host Cluster, you need to make `jwtSecret` the same between them. Therefore, get it first by executing the following command on your Host Cluster.
+1. In order to manage the member cluster from the host cluster, you need to make `jwtSecret` the same between them. Therefore, get it first by executing the following command on your host cluster.
```bash
kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
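# The command should print a single line like the following
# (the value is cluster-specific; this one is only an example):
#   jwtSecret: "gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU"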
@@ -37,12 +37,8 @@ You need to deploy KubeSphere on your EKS cluster first. For more information ab
3. Go to **CRDs**, enter `ClusterConfiguration` in the search bar, and then press **Enter** on your keyboard. Click **ClusterConfiguration** to go to its detail page.
- 
-
4. Click
on the right and then select **Edit YAML** to edit `ks-installer`.
- 
-
5. In the YAML file of `ks-installer`, change the value of `jwtSecret` to the corresponding value shown above and set the value of `clusterRole` to `member`. Click **Update** to save your changes.
```yaml
@@ -164,20 +160,12 @@ You need to deploy KubeSphere on your EKS cluster first. For more information ab
ip-10-0-8-148.cn-north-1.compute.internal Ready
on the right and then select **Edit YAML** to edit `ks-installer`.
- 
-
5. In the YAML file of `ks-installer`, change the value of `jwtSecret` to the corresponding value shown above and set the value of `clusterRole` to `member`.
```yaml
@@ -109,20 +105,12 @@ You need to deploy KubeSphere on your GKE cluster first. For more information ab
token: eyJhbGciOiJSUzI1NiIsImtpZCI6InNjOFpIb3RrY3U3bGNRSV9NWV8tSlJzUHJ4Y2xnMDZpY3hhc1BoVy0xTGsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlc3BoZXJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlc3BoZXJlLXRva2VuLXpocmJ3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imt1YmVzcGhlcmUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMGFmZGI1Ny01MTBkLTRjZDgtYTAwYS1hNDQzYTViNGM0M2MiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXNwaGVyZS1zeXN0ZW06a3ViZXNwaGVyZSJ9.ic6LaS5rEQ4tXt_lwp7U_C8rioweP-ZdDjlIZq91GOw9d6s5htqSMQfTeVlwTl2Bv04w3M3_pCkvRzMD0lHg3mkhhhP_4VU0LIo4XeYWKvWRoPR2kymLyskAB2Khg29qIPh5ipsOmGL9VOzD52O2eLtt_c6tn-vUDmI_Zw985zH3DHwUYhppGM8uNovHawr8nwZoem27XtxqyBkqXGDD38WANizyvnPBI845YqfYPY5PINPYc9bQBFfgCovqMZajwwhcvPqS6IpG1Qv8TX2lpuJIK0LLjiKaHoATGvHLHdAZxe_zgAC2cT_9Ars3HIN4vzaSX0f-xP--AcRgKVSY9g
```
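The token above can be written into a kubeconfig entry for the member cluster. A hedged sketch using standard `kubectl config` subcommands; the credential and context names are illustrative, not from the original text:

```shell
# Create a user entry backed by the service account token (name is illustrative)
kubectl config set-credentials kubesphere-sa --token=<token-from-above>

# Attach that user to the member cluster's context (context name is illustrative)
kubectl config set-context gke-member --user=kubesphere-sa
```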
-### Step 4: Import the GKE Member Cluster
+### Step 4: Import the GKE member cluster
-1. Log in to the KubeSphere console on your Host Cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**. On the **Cluster Management** page, click **Add Cluster**.
-
- 
+1. Log in to the KubeSphere console on your host cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**. On the **Cluster Management** page, click **Add Cluster**.
2. Enter the basic information based on your needs and click **Next**.
- 
+3. In **Connection Method**, select **Direct connection**. Fill in the new kubeconfig file of the GKE member cluster and then click **Create**.
-3. In **Connection Method**, select **Direct Connection**. Fill in the new kubeconfig file of the GKE Member Cluster and then click **Create**.
-
- 
-
-4. Wait for cluster initialization to finish.
-
- 
\ No newline at end of file
+4. Wait for cluster initialization to finish.
\ No newline at end of file
diff --git a/content/en/docs/multicluster-management/import-on-prem-k8s/_index.md b/content/en/docs/multicluster-management/import-on-prem-k8s/_index.md
deleted file mode 100644
index 8d0aeb228..000000000
--- a/content/en/docs/multicluster-management/import-on-prem-k8s/_index.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-linkTitle: "Import On-premises Kubernetes Clusters"
-weight: 5400
-
-_build:
- render: false
----
diff --git a/content/en/docs/multicluster-management/import-on-prem-k8s/import-kubeadm-k8s.md b/content/en/docs/multicluster-management/import-on-prem-k8s/import-kubeadm-k8s.md
deleted file mode 100644
index 9370f4355..000000000
--- a/content/en/docs/multicluster-management/import-on-prem-k8s/import-kubeadm-k8s.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "Import Kubeadm Kubernetes Cluster"
-keywords: 'kubernetes, kubesphere, multicluster, kubeadm'
-description: 'Learn how to import a Kubernetes cluster created with kubeadm.'
-
-
-weight: 5410
----
-
-TBD
diff --git a/content/en/docs/multicluster-management/introduction/kubefed-in-kubesphere.md b/content/en/docs/multicluster-management/introduction/kubefed-in-kubesphere.md
index ffad56b3c..f4dcc4d82 100644
--- a/content/en/docs/multicluster-management/introduction/kubefed-in-kubesphere.md
+++ b/content/en/docs/multicluster-management/introduction/kubefed-in-kubesphere.md
@@ -1,7 +1,7 @@
---
title: "KubeSphere Federation"
keywords: 'Kubernetes, KubeSphere, federation, multicluster, hybrid-cloud'
-description: 'Understand the fundamental concept of Kubernetes federation in KubeSphere, including M clusters and H clusters.'
+description: 'Understand the fundamental concept of Kubernetes federation in KubeSphere, including member clusters and host clusters.'
linkTitle: "KubeSphere Federation"
weight: 5120
---
@@ -10,11 +10,11 @@ The multi-cluster feature relates to the network connection among multiple clust
## How the Multi-cluster Architecture Works
-Before you use the central control plane of KubeSphere to management multiple clusters, you need to create a Host Cluster, also known as **H** Cluster. The H Cluster, essentially, is a KubeSphere cluster with the multi-cluster feature enabled. It provides you with the control plane for unified management of Member Clusters, also known as **M** Cluster. M Clusters are common KubeSphere clusters without the central control plane. Namely, tenants with necessary permissions (usually cluster administrators) can access the control plane from the H Cluster to manage all M Clusters, such as viewing and editing resources on M Clusters. Conversely, if you access the web console of any M Cluster separately, you cannot see any resources on other clusters.
+Before you use the central control plane of KubeSphere to manage multiple clusters, you need to create a host cluster. The host cluster is essentially a KubeSphere cluster with the multi-cluster feature enabled. It provides you with the control plane for unified management of member clusters, which are common KubeSphere clusters without the central control plane. Namely, tenants with necessary permissions (usually cluster administrators) can access the control plane from the host cluster to manage all member clusters, such as viewing and editing resources on member clusters. Conversely, if you access the web console of any member cluster separately, you cannot see any resources on other clusters.
-
+There can only be one host cluster while multiple member clusters can exist at the same time. In a multi-cluster architecture, the network between the host cluster and member clusters can be [connected directly](../../enable-multicluster/direct-connection/) or [through an agent](../../enable-multicluster/agent-connection/). The network between member clusters can be set in a completely isolated environment.
-There can only be one H Cluster while multiple M Clusters can exist at the same time. In a multi-cluster architecture, the network between the H Cluster and M Clusters can be connected directly or through an agent. The network between M Clusters can be set in a completely isolated environment.
+If you are using on-premises Kubernetes clusters built through kubeadm, install KubeSphere on your Kubernetes clusters by referring to [Air-gapped Installation on Kubernetes](../../../installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped/), and then enable KubeSphere multi-cluster management through direct connection or agent connection.

@@ -38,12 +38,12 @@ Before you enable multi-cluster management, make sure you have enough resources
{{< notice note >}}
- The request and limit of CPU and memory resources all refer to single replica.
-- After the multi-cluster feature is enabled, tower and controller-manager will be installed on the H Cluster. If you use [agent connection](../../../multicluster-management/enable-multicluster/agent-connection/), only tower is needed for M Clusters. If you use [direct connection](../../../multicluster-management/enable-multicluster/direct-connection/), no additional component is needed for M Clusters.
+- After the multi-cluster feature is enabled, tower and controller-manager will be installed on the host cluster. If you use [agent connection](../../../multicluster-management/enable-multicluster/agent-connection/), only tower is needed for member clusters. If you use [direct connection](../../../multicluster-management/enable-multicluster/direct-connection/), no additional component is needed for member clusters.
{{</ notice >}}
## Use the App Store in a Multi-cluster Architecture
-Different from other components in KubeSphere, the [KubeSphere App Store](../../../pluggable-components/app-store/) serves as a global application pool for all clusters, including H Cluster and M Clusters. You only need to enable the App Store on the H Cluster and you can use functions related to the App Store on M Clusters directly (no matter whether the App Store is enabled on M Clusters or not), such as [app templates](../../../project-user-guide/application/app-template/) and [app repositories](../../../workspace-administration/app-repository/import-helm-repository/).
+Different from other components in KubeSphere, the [KubeSphere App Store](../../../pluggable-components/app-store/) serves as a global application pool for all clusters, including host cluster and member clusters. You only need to enable the App Store on the host cluster and you can use functions related to the App Store on member clusters directly (no matter whether the App Store is enabled on member clusters or not), such as [app templates](../../../project-user-guide/application/app-template/) and [app repositories](../../../workspace-administration/app-repository/import-helm-repository/).
-However, if you only enable the App Store on M Clusters without enabling it on the H Cluster, you will not be able to use the App Store on any cluster in the multi-cluster architecture.
\ No newline at end of file
+However, if you only enable the App Store on member clusters without enabling it on the host cluster, you will not be able to use the App Store on any cluster in the multi-cluster architecture.
\ No newline at end of file
diff --git a/content/en/docs/multicluster-management/unbind-cluster.md b/content/en/docs/multicluster-management/unbind-cluster.md
index b3326402a..9f5dae030 100644
--- a/content/en/docs/multicluster-management/unbind-cluster.md
+++ b/content/en/docs/multicluster-management/unbind-cluster.md
@@ -11,20 +11,16 @@ This tutorial demonstrates how to unbind a cluster from the central control plan
## Prerequisites
- You have enabled multi-cluster management.
-- You need an account granted a role including the authorization of **Cluster Management**. For example, you can log in to the console as `admin` directly or create a new role with the authorization and assign it to an account.
+- You need a user granted a role including the authorization of **Cluster Management**. For example, you can log in to the console as `admin` directly or create a new role with the authorization and assign it to a user.
## Unbind a Cluster
-1. Click **Platform** in the top-left corner and select **Cluster Management**.
+1. Click **Platform** in the upper-left corner and select **Cluster Management**.
-2. On the **Cluster Management** page, click the cluster that you want to remove from the central control plane.
-
- 
+2. On the **Cluster Management** page, click the cluster that you want to remove from the control plane.
3. Go to **Basic Information** under **Cluster Settings**, check **I confirm I want to unbind the cluster** and click **Unbind**.
- 
-
{{< notice note >}}
After you unbind the cluster, you can no longer manage it from the control plane, but Kubernetes resources on the cluster will not be deleted.
diff --git a/content/en/docs/pluggable-components/alerting.md b/content/en/docs/pluggable-components/alerting.md
index 0cce528fd..851e365b9 100644
--- a/content/en/docs/pluggable-components/alerting.md
+++ b/content/en/docs/pluggable-components/alerting.md
@@ -6,9 +6,9 @@ linkTitle: "KubeSphere Alerting"
weight: 6600
---
-Alerting is an important building block of observability, closely related to monitoring and logging. The alerting system in KubeSphere, coupled with the proactive failure notification system, allows users to know activities of interest based on alerting policies. When a predefined threshold of a certain metric is reached, an alert will be sent to preconfigured recipients. Therefore, you need to configure the notification method beforehand, including Email, Slack, DingTalk, WeCom and Webhook. With a highly functional alerting and notification system in place, you can quickly identify and resolve potential issues in advance before they affect your business.
+Alerting is an important building block of observability, closely related to monitoring and logging. The alerting system in KubeSphere, coupled with the proactive failure notification system, allows users to know activities of interest based on alerting policies. When a predefined threshold of a certain metric is reached, an alert will be sent to preconfigured recipients. Therefore, you need to configure the notification method beforehand, including Email, Slack, DingTalk, WeCom, and Webhook. With a highly functional alerting and notification system in place, you can quickly identify and resolve potential issues in advance before they affect your business.
-## Enable Alerting before Installation
+## Enable Alerting Before Installation
### Installing on Linux
@@ -39,9 +39,9 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/),
### Installing on Kubernetes
-As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Alerting first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml) file.
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Alerting first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml) file.
-1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml) and edit it.
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@@ -57,14 +57,14 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
3. Execute the following commands to start installation:
```bash
- kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```
-## Enable Alerting after Installation
+## Enable Alerting After Installation
-1. Log in to the console as `admin`. Click **Platform** in the top-left corner and select **Cluster Management**.
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
@@ -72,9 +72,9 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
A Custom Resource Definition (CRD) allows users to create a new type of resources without adding another API server. They can use these resources like any other native Kubernetes objects.
{{</ notice >}}
-3. In **Resource List**, click
on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click
on the right of `ks-installer` and select **Edit YAML**.
-4. In this YAML file, navigate to `alerting` and change `false` to `true` for `enabled`. After you finish, click **Update** in the bottom-right corner to save the configuration.
+4. In this YAML file, navigate to `alerting` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
```yaml
alerting:
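  enabled: true   # Change "false" to "true" to enable Alerting (field name taken from the surrounding text).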
@@ -89,14 +89,12 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking
in the bottom-right corner of the console.
+You can find the web kubectl tool by clicking
in the lower-right corner of the console.
{{</ notice >}}
## Verify the Installation of the Component
If you can see **Alerting Messages** and **Alerting Policies** on the **Cluster Management** page, it means the installation is successful as the two parts won't display until the component is installed.
-
-
diff --git a/content/en/docs/pluggable-components/app-store.md b/content/en/docs/pluggable-components/app-store.md
index 279506b9d..1f8eb9601 100644
--- a/content/en/docs/pluggable-components/app-store.md
+++ b/content/en/docs/pluggable-components/app-store.md
@@ -6,15 +6,13 @@ linkTitle: "KubeSphere App Store"
weight: 6200
---
-As an open-source and app-centric container platform, KubeSphere provides users with a Helm-based App Store for application lifecycle management on the back of [OpenPitrix](https://github.com/openpitrix/openpitrix), an open-source web-based system to package, deploy and manage different types of apps. The KubeSphere App Store allows ISVs, developers and users to upload, test, deploy and release apps with just several clicks in a one-stop shop.
+As an open-source and app-centric container platform, KubeSphere provides users with a Helm-based App Store for application lifecycle management on the back of [OpenPitrix](https://github.com/openpitrix/openpitrix), an open-source web-based system to package, deploy and manage different types of apps. The KubeSphere App Store allows ISVs, developers, and users to upload, test, install, and release apps with just several clicks in a one-stop shop.
-Internally, the KubeSphere App Store can serve as a place for different teams to share data, middleware, and office applications. Externally, it is conducive to setting industry standards of building and delivery. By default, there are 17 built-in apps in the App Store. After you enable this feature, you can add more apps with app templates.
-
-
+Internally, the KubeSphere App Store can serve as a place for different teams to share data, middleware, and office applications. Externally, it is conducive to setting industry standards of building and delivery. After you enable this feature, you can add more apps with app templates.
For more information, see [App Store](../../application-store/).
-## Enable the App Store before Installation
+## Enable the App Store Before Installation
### Installing on Linux
@@ -46,9 +44,9 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/),
### Installing on Kubernetes
-As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the KubeSphere App Store first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml) file.
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the KubeSphere App Store first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml) file.
-1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml) and edit it.
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@@ -65,14 +63,14 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
3. Execute the following commands to start installation:
```bash
- kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```
-## Enable the App Store after Installation
+## Enable the App Store After Installation
-1. Log in to the console as `admin`. Click **Platform** in the top-left corner and select **Cluster Management**.
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
@@ -82,9 +80,9 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{</ notice >}}
-3. In **Resource List**, click
on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click
on the right of `ks-installer` and select **Edit YAML**.
-4. In this YAML file, navigate to `openpitrix` and change `false` to `true` for `enabled`. After you finish, click **Update** in the bottom-right corner to save the configuration.
+4. In this YAML file, navigate to `openpitrix` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
```yaml
openpitrix:
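  store:
    enabled: true   # Change "false" to "true"; the "store" subfield is assumed from the v3.2 schema.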
@@ -100,25 +98,23 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking
in the bottom-right corner of the console.
+You can find the web kubectl tool by clicking
in the lower-right corner of the console.
{{</ notice >}}
## Verify the Installation of the Component
-After you log in to the console, if you can see **App Store** in the top-left corner and 17 built-in apps in it, it means the installation is successful.
-
-
+After you log in to the console, if you can see **App Store** in the upper-left corner and apps in it, it means the installation is successful.
{{< notice note >}}
-- You can even access the App Store without logging in to the console by visiting `
on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click
on the right of `ks-installer` and select **Edit YAML**.
-4. In this YAML file, navigate to `auditing` and change `false` to `true` for `enabled`. After you finish, click **Update** in the bottom-right corner to save the configuration.
+4. In this YAML file, navigate to `auditing` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
```yaml
auditing:
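  enabled: true   # Change "false" to "true" to enable Auditing (field name taken from the surrounding text).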
@@ -116,7 +116,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
```
{{< notice note >}}
-By default, Elasticsearch will be installed internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Auditing, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
+By default, Elasticsearch will be installed internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Auditing, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
{{</ notice >}}
```yaml
@@ -127,7 +127,7 @@ By default, Elasticsearch will be installed internally if Auditing is enabled. F
elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-
in the bottom-right corner of the console.
+You can find the web kubectl tool by clicking
in the lower-right corner of the console.
{{</ notice >}}
## Verify the Installation of the Component
@@ -148,9 +148,7 @@ You can find the web kubectl tool by clicking
on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click
on the right of `ks-installer` and select **Edit YAML**.
-4. In this YAML file, navigate to `devops` and change `false` to `true` for `enabled`. After you finish, click **Update** in the bottom-right corner to save the configuration.
+4. In this YAML file, navigate to `devops` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
```yaml
devops:
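  enabled: true   # Change "false" to "true" to enable DevOps (field name taken from the surrounding text).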
@@ -95,7 +95,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking
in the bottom-right corner of the console.
+You can find the web kubectl tool by clicking
in the lower-right corner of the console.
{{</ notice >}}
@@ -105,9 +105,7 @@ You can find the web kubectl tool by clicking
on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click
on the right of `ks-installer` and select **Edit YAML**.
-4. In this YAML file, navigate to `events` and change `false` to `true` for `enabled`. After you finish, click **Update** in the bottom-right corner to save the configuration.
+4. In this YAML file, navigate to `events` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
```yaml
events:
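  enabled: true   # Change "false" to "true" to enable Events (field name taken from the surrounding text).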
@@ -121,7 +121,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-By default, Elasticsearch will be installed internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Events, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
+By default, Elasticsearch will be installed internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Events, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
{{</ notice >}}
```yaml
@@ -132,7 +132,7 @@ By default, Elasticsearch will be installed internally if Events is enabled. For
elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-
in the bottom-right corner of the console.
+You can find the web kubectl tool by clicking
in the lower-right corner of the console.
{{</ notice >}}
@@ -154,9 +154,7 @@ You can find the web kubectl tool by clicking
on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click
on the right of `ks-installer` and select **Edit YAML**.
4. In this YAML file, navigate to `kubeedge.enabled` and enable it by setting it to `true`.
@@ -91,13 +91,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
enabled: true # Change "false" to "true".
```
-5. Set the value of `kubeedge.cloudCore.cloudHub.advertiseAddress` to the public IP address of your cluster or an IP address that can be accessed by edge nodes. After you finish, click **Update** in the bottom-right corner to save the configuration.
-
- {{< notice note >}}
-
-The `kubeedge` section is not included in `cluster-configuration.yaml` if your cluster is upgraded from KubeSphere v3.0.0. For more information, see [how to enable KubeEdge after upgrade](#enable-kubeedge-after-upgrade).
-
- {{</ notice >}}
+5. Set the value of `kubeedge.cloudCore.cloudHub.advertiseAddress` to the public IP address of your cluster or an IP address that can be accessed by edge nodes. After you finish, click **OK** in the lower-right corner to save the configuration.
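   For reference, the relevant fragment of `cluster-configuration.yaml` would look like the following sketch (the IP address `203.0.113.10` is a placeholder for your own public IP or an address reachable by edge nodes; the field structure follows the `kubeedge` section shown above):

   ```yaml
   kubeedge:
     enabled: true
     cloudCore:
       cloudHub:
         advertiseAddress:
           - "203.0.113.10"  # Placeholder: replace with an IP address that edge nodes can access.
   ```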
6. You can use the web kubectl to check the installation process by executing the following command:
@@ -107,57 +101,16 @@ The `kubeedge` section is not included in `cluster-configuration.yaml` if your c
{{< notice note >}}
-You can find the web kubectl tool by clicking
in the bottom-right corner of the console.
+You can find the web kubectl tool by clicking
in the lower-right corner of the console.
{{</ notice >}}
-## Enable KubeEdge after Upgrade
-
-If your KubeSphere v3.1.0 cluster is upgraded from KubeSphere v3.0.0, add the following content in `cluster-configuration.yaml` (i.e. the `clusterconfiguration` CRD) and enable `kubeedge` as shown [in the steps above](#enable-kubeedge-after-installation).
-
-```yaml
- kubeedge:
- enabled: false
- cloudCore:
- nodeSelector: {"node-role.kubernetes.io/worker": ""}
- tolerations: []
- cloudhubPort: "10000"
- cloudhubQuicPort: "10001"
- cloudhubHttpsPort: "10002"
- cloudstreamPort: "10003"
- tunnelPort: "10004"
- cloudHub:
- advertiseAddress:
- - ""
- nodeLimit: "100"
- service:
- cloudhubNodePort: "30000"
- cloudhubQuicNodePort: "30001"
- cloudhubHttpsNodePort: "30002"
- cloudstreamNodePort: "30003"
- tunnelNodePort: "30004"
- edgeWatcher:
- nodeSelector: {"node-role.kubernetes.io/worker": ""}
- tolerations: []
- edgeWatcherAgent:
- nodeSelector: {"node-role.kubernetes.io/worker": ""}
- tolerations: []
-```
-
-{{< notice warning >}}
-
-Do not add the `kubeedge` section in `cluster-configuration.yaml` before the upgrade.
-
-{{</ notice >}}
-
## Verify the Installation of the Component
{{< tabs >}}
{{< tab "Verify the component on the dashboard" >}}
-On the **Cluster Management** page, verify that the section **Edge Nodes** has appeared under **Node Management**.
-
-
+On the **Cluster Management** page, verify that the **Edge Nodes** module has appeared under **Nodes**.
{{</ tab >}}
diff --git a/content/en/docs/pluggable-components/logging.md b/content/en/docs/pluggable-components/logging.md
index 4de1186de..cc90148e7 100644
--- a/content/en/docs/pluggable-components/logging.md
+++ b/content/en/docs/pluggable-components/logging.md
@@ -6,11 +6,11 @@ linkTitle: "KubeSphere Logging System"
weight: 6400
---
-KubeSphere provides a powerful, holistic and easy-to-use logging system for log collection, query and management. It covers logs at varied levels, including tenants, infrastructure resources, and applications. Users can search logs from different dimensions, such as project, workload, Pod and keyword. Compared with Kibana, the tenant-based logging system of KubeSphere features better isolation and security among tenants as tenants can only view their own logs. Apart from KubeSphere's own logging system, the container platform also allows users to add third-party log collectors, such as Elasticsearch, Kafka and Fluentd.
+KubeSphere provides a powerful, holistic, and easy-to-use logging system for log collection, query, and management. It covers logs at varied levels, including tenants, infrastructure resources, and applications. Users can search logs from different dimensions, such as project, workload, Pod and keyword. Compared with Kibana, the tenant-based logging system of KubeSphere features better isolation and security among tenants as tenants can only view their own logs. Apart from KubeSphere's own logging system, the container platform also allows users to add third-party log collectors, such as Elasticsearch, Kafka, and Fluentd.
For more information, see [Log Query](../../toolbox/log-query/).
-## Enable Logging before Installation
+## Enable Logging Before Installation
### Installing on Linux
@@ -35,10 +35,14 @@ When you install KubeSphere on Linux, you need to create a configuration file, w
```yaml
logging:
enabled: true # Change "false" to "true".
+ containerruntime: docker
```
- {{< notice note >}}
-By default, KubeKey will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Logging, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one.
+ {{< notice info >}}To use containerd as the container runtime, change the value of the field `containerruntime` to `containerd`. If you upgraded to KubeSphere 3.2.1 from an earlier version, you must manually add the `containerruntime` field under `logging` when enabling the KubeSphere Logging System.
+
+ {{</ notice >}}
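   As a sketch, the same `logging` block with containerd as the runtime would read:

   ```yaml
   logging:
     enabled: true
     containerruntime: containerd  # Set to "containerd" when the cluster uses containerd instead of Docker.
   ```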
+
+ {{< notice note >}}By default, KubeKey will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Logging, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one.
{{</ notice >}}
```yaml
@@ -49,7 +53,7 @@ By default, KubeKey will install Elasticsearch internally if Logging is enabled.
elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-
on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click
on the right of `ks-installer` and select **Edit YAML**.
-4. In this YAML file, navigate to `logging` and change `false` to `true` for `enabled`. After you finish, click **Update** in the bottom-right corner to save the configuration.
+4. In this YAML file, navigate to `logging` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
```yaml
logging:
enabled: true # Change "false" to "true".
+ containerruntime: docker
```
- {{< notice note >}}By default, Elasticsearch will be installed internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Logging, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
-
+ {{< notice info >}}To use containerd as the container runtime, change the value of the field `.logging.containerruntime` to `containerd`. If you upgraded to KubeSphere 3.2.1 from an earlier version, you must manually add the `containerruntime` field under `logging` when enabling the KubeSphere Logging System.
+
{{</ notice >}}
-
+
+ {{< notice note >}}By default, Elasticsearch will be installed internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in this YAML file if you want to enable Logging, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
+ {{</ notice >}}
+
```yaml
es: # Storage backend for logging, tracing, events and auditing.
elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
@@ -133,7 +145,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-
in the bottom-right corner of the console.
+You can find the web kubectl tool by clicking
in the lower-right corner of the console.
{{</ notice >}}
@@ -155,9 +167,7 @@ You can find the web kubectl tool by clicking
on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click
on the right of `ks-installer` and select **Edit YAML**.
-4. In this YAML file, navigate to `metrics_server` and change `false` to `true` for `enabled`. After you finish, click **Update** in the bottom-right corner to save the configuration.
+4. In this YAML file, navigate to `metrics_server` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
```yaml
metrics_server:
@@ -94,7 +94,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking
in the bottom-right corner of the console.
+You can find the web kubectl tool by clicking
in the lower-right corner of the console.
{{</ notice >}}
## Verify the Installation of the Component
diff --git a/content/en/docs/pluggable-components/network-policy.md b/content/en/docs/pluggable-components/network-policy.md
index 21ca33e7c..4843e1efe 100644
--- a/content/en/docs/pluggable-components/network-policy.md
+++ b/content/en/docs/pluggable-components/network-policy.md
@@ -10,14 +10,14 @@ Starting from v3.0.0, users can configure network policies of native Kubernetes
{{< notice note >}}
-- Please make sure that the CNI network plugin used by the cluster supports Network Policies before you enable the feature. There are a number of CNI network plugins that support Network Policies, including Calico, Cilium, Kube-router, Romana and Weave Net.
+- Please make sure that the CNI network plugin used by the cluster supports Network Policies before you enable the feature. There are a number of CNI network plugins that support Network Policies, including Calico, Cilium, Kube-router, Romana, and Weave Net.
- It is recommended that you use [Calico](https://www.projectcalico.org/) as the CNI plugin before you enable Network Policies.
{{</ notice >}}
For more information, see [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/).
-## Enable the Network Policy before Installation
+## Enable the Network Policy Before Installation
### Installing on Linux
@@ -49,9 +49,9 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/),
### Installing on Kubernetes
-As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the Network Policy first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml) file.
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the Network Policy first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml) file.
-1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml) and edit it.
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@@ -68,14 +68,14 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
3. Execute the following commands to start installation:
```bash
- kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```
-## Enable the Network Policy after Installation
+## Enable the Network Policy After Installation
-1. Log in to the console as `admin`. Click **Platform** in the top-left corner and select **Cluster Management**.
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
@@ -83,9 +83,9 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes object.
{{</ notice >}}
-3. In **Resource List**, click
on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click
on the right of `ks-installer` and select **Edit YAML**.
-4. In this YAML file, navigate to `network.networkpolicy` and change `false` to `true` for `enabled`. After you finish, click **Update** in the bottom-right corner to save the configuration.
+4. In this YAML file, navigate to `network.networkpolicy` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
```yaml
network:
@@ -101,11 +101,9 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking
in the bottom-right corner of the console.
+You can find the web kubectl tool by clicking
in the lower-right corner of the console.
{{</ notice >}}
## Verify the Installation of the Component
-If you can see **Network Policies** in **Network** as the image below, it means the installation succeeds as this part won't display until you install the component.
-
-
\ No newline at end of file
+If you can see the **Network Policies** module in **Network**, it means the installation is successful as this part won't display until you install the component.
\ No newline at end of file
diff --git a/content/en/docs/pluggable-components/pod-ip-pools.md b/content/en/docs/pluggable-components/pod-ip-pools.md
index 995630db7..195278fd5 100644
--- a/content/en/docs/pluggable-components/pod-ip-pools.md
+++ b/content/en/docs/pluggable-components/pod-ip-pools.md
@@ -1,14 +1,14 @@
---
title: "Pod IP Pools"
-keywords: "Kubernetes, KubeSphere, Pod, IP Pools"
-description: "Learn how to enable Pod IP Pools to assign a specific Pod IP Pool to your Pods."
+keywords: "Kubernetes, KubeSphere, Pod, IP pools"
+description: "Learn how to enable Pod IP Pools to assign a specific Pod IP pool to your Pods."
linkTitle: "Pod IP Pools"
weight: 6920
---
-A Pod IP Pool is used to manage the Pod network address space, and the address space between each Pod IP Pool cannot overlap. When you create a workload, you can select a specific Pod IP Pool, so that created Pods will be assigned IP addresses from this Pod IP Pool.
+A Pod IP pool is used to manage the Pod network address space, and the address space between each Pod IP pool cannot overlap. When you create a workload, you can select a specific Pod IP pool, so that created Pods will be assigned IP addresses from this Pod IP pool.
-## Enable Pod IP Pools before Installation
+## Enable Pod IP Pools Before Installation
### Installing on Linux
@@ -21,7 +21,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
```
{{< notice note >}}
- If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Pod IP Pools in this mode (for example, for testing purposes), refer to [the following section](#enable-pod-ip-pools-after-installation) to see how Pod IP Pools can be installed after installation.
+ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Pod IP Pools in this mode (for example, for testing purposes), refer to [the following section](#enable-pod-ip-pools-after-installation) to see how Pod IP pools can be installed after installation.
{{</ notice >}}
2. In this file, navigate to `network.ippool.type` and change `none` to `calico`. Save the file after you finish.
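   The edited fragment of `config-sample.yaml` would look like this sketch (the nesting follows the dotted path `network.ippool.type`):

   ```yaml
   network:
     ippool:
       type: calico  # Change "none" to "calico" to enable Pod IP pools.
   ```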
@@ -40,9 +40,9 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
### Installing on Kubernetes
-As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Pod IP Pools first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml) file.
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Pod IP Pools first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml) file.
-1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml) and edit it.
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@@ -59,15 +59,15 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
3. Execute the following commands to start installation:
```bash
- kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```
-## Enable Pod IP Pools after Installation
+## Enable Pod IP Pools After Installation
-1. Log in to the console as `admin`. Click **Platform** in the top-left corner and select **Cluster Management**.
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
@@ -75,9 +75,9 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes object.
{{</ notice >}}
-3. In **Resource List**, click
on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click
on the right of `ks-installer` and select **Edit YAML**.
-4. In this YAML file, navigate to `network` and change `network.ippool.type` to `calico`. After you finish, click **Update** in the bottom-right corner to save the configuration.
+4. In this YAML file, navigate to `network` and change `network.ippool.type` to `calico`. After you finish, click **OK** in the lower-right corner to save the configuration.
```yaml
network:
@@ -93,14 +93,12 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking
in the bottom-right corner of the console.
+You can find the web kubectl tool by clicking
in the lower-right corner of the console.
{{</ notice >}}
## Verify the Installation of the Component
-On the **Cluster Management** page, verify that you can see the section **Pod IP Pools** under **Network**.
-
-
+On the **Cluster Management** page, verify that you can see the **Pod IP Pools** module under **Network**.
diff --git a/content/en/docs/pluggable-components/service-mesh.md b/content/en/docs/pluggable-components/service-mesh.md
index 29f2df4d8..364909eaa 100644
--- a/content/en/docs/pluggable-components/service-mesh.md
+++ b/content/en/docs/pluggable-components/service-mesh.md
@@ -6,11 +6,11 @@ linkTitle: "KubeSphere Service Mesh"
weight: 6800
---
-On the basis of [Istio](https://istio.io/), KubeSphere Service Mesh visualizes microservices governance and traffic management. It features a powerful toolkit including **circuit breaking, blue-green deployment, canary release, traffic mirroring, distributed tracing, observability and traffic control**. Developers can easily get started with KubeSphere Service Mesh without any code hacking, with the learning curve of Istio greatly reduced. All features of KubeSphere Service Mesh are designed to meet users' demand for their business.
+On the basis of [Istio](https://istio.io/), KubeSphere Service Mesh visualizes microservices governance and traffic management. It features a powerful toolkit including **circuit breaking, blue-green deployment, canary release, traffic mirroring, tracing, observability, and traffic control**. Developers can easily get started with KubeSphere Service Mesh without any code hacking, with the learning curve of Istio greatly reduced. All features of KubeSphere Service Mesh are designed to meet users' demand for their business.
For more information, see [Grayscale Release](../../project-user-guide/grayscale-release/overview/).
-## Enable KubeSphere Service Mesh before Installation
+## Enable KubeSphere Service Mesh Before Installation
### Installing on Linux
@@ -41,9 +41,9 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/),
### Installing on Kubernetes
-As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Service Mesh first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml) file.
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Service Mesh first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml) file.
-1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml) and edit it.
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@@ -59,14 +59,14 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
3. Execute the following commands to start installation:
```bash
- kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```
-## Enable KubeSphere Service Mesh after Installation
+## Enable KubeSphere Service Mesh After Installation
-1. Log in to the console as `admin`. Click **Platform** in the top-left corner and select **Cluster Management**.
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
@@ -74,9 +74,9 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes object.
{{</ notice >}}
-3. In **Resource List**, click
on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click
on the right of `ks-installer` and select **Edit YAML**.
-4. In this YAML file, navigate to `servicemesh` and change `false` to `true` for `enabled`. After you finish, click **Update** in the bottom-right corner to save the configuration.
+4. In this YAML file, navigate to `servicemesh` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
```yaml
servicemesh:
@@ -91,7 +91,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking
in the bottom-right corner of the console.
+You can find the web kubectl tool by clicking
in the lower-right corner of the console.
{{</ notice >}}
## Verify the Installation of the Component
@@ -100,9 +100,7 @@ You can find the web kubectl tool by clicking
on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click
on the right of `ks-installer` and select **Edit YAML**.
-4. In this YAML file, navigate to `network` and change `network.topology.type` to `weave-scope`. After you finish, click **Update** in the bottom-right corner to save the configuration.
+4. In this YAML file, navigate to `network` and change `network.topology.type` to `weave-scope`. After you finish, click **OK** in the lower-right corner to save the configuration.
```yaml
network:
@@ -93,7 +93,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking
in the bottom-right corner of the console.
+You can find the web kubectl tool by clicking
in the lower-right corner of the console.
{{</ notice >}}
## Verify the Installation of the Component
@@ -102,9 +102,7 @@ You can find the web kubectl tool by clicking
to enable the feature.
-
+2. From the left navigation bar, click **Log Collection** in **Project Settings**, and then click
to enable the feature.
## Create a Deployment
@@ -28,15 +27,13 @@ This tutorial demonstrates how to collect disk logs for an example app.
2. In the dialog that appears, set a name for the Deployment (for example, `demo-deployment`) and click **Next**.
-3. Under **Container Image**, click **Add Container Image**.
+3. Under **Containers**, click **Add Container**.
4. Enter `alpine` in the search bar to use the image (tag: `latest`) as an example.
- 
+5. Scroll down to **Start Command** and select the checkbox. Enter the following values for **Command** and **Parameters** respectively, click **√**, and then click **Next**.
-5. Scroll down to **Start Command** and select the checkbox. Enter the following values for **Run Command** and **Parameters** respectively, click **√**, and then click **Next**.
-
- **Run Command**
+ **Command**
```bash
/bin/sh
@@ -54,15 +51,11 @@ This tutorial demonstrates how to collect disk logs for an example app.
{{</ notice >}}
- 
+6. On the **Volume Settings** tab, click
on the right.
- 
-
## Invite a New Member
-1. Navigate to **Project Members** under **Project Settings**, and click **Invite Member**.
+1. Navigate to **Project Members** under **Project Settings**, and click **Invite**.
2. Invite a user to the project by clicking
on the right of the user and assigning a role.
3. After you add the user to the project, click **OK**. In **Project Members**, you can see the user in the list.
4. To edit the role of an existing user or remove the user from the project, click
on the right and select the corresponding operation.
-
- 
diff --git a/content/en/docs/project-user-guide/alerting/alerting-message.md b/content/en/docs/project-user-guide/alerting/alerting-message.md
index a67c9af87..507563542 100644
--- a/content/en/docs/project-user-guide/alerting/alerting-message.md
+++ b/content/en/docs/project-user-guide/alerting/alerting-message.md
@@ -11,18 +11,16 @@ Alerting messages record detailed information of alerts triggered based on the a
## Prerequisites
- You have enabled [KubeSphere Alerting](../../../pluggable-components/alerting/).
-- You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
- You have created a workload-level alerting policy and an alert has been triggered. For more information, refer to [Alerting Policies (Workload Level)](../alerting-policy/).
## View Alerting Messages
-1. Log in to the console as `project-regular`, go to your project, and navigate to **Alerting Messages** under **Monitoring & Alerting**.
+1. Log in to the console as `project-regular`, go to your project, and go to **Alerting Messages** under **Monitoring & Alerting**.
-2. On the **Alerting Messages** page, you can see all alerting messages in the list. The first column displays the summary and message you have defined in the notification of the alert. To view details of an alerting message, click the name of the alerting policy and click the **Alerting Messages** tab on the page that appears.
+2. On the **Alerting Messages** page, you can see all alerting messages in the list. The first column displays the summary and message you have defined in the notification of the alert. To view details of an alerting message, click the name of the alerting policy and click the **Alerting History** tab on the displayed page.
- 
-
-3. On the **Alerting Messages** tab, you can see alert severity, target resources, and alert time.
+3. On the **Alerting History** tab, you can see alert severity, monitoring targets, and activation time.
## View Notifications
diff --git a/content/en/docs/project-user-guide/alerting/alerting-policy.md b/content/en/docs/project-user-guide/alerting/alerting-policy.md
index eb3fa3e8f..163a5d3d5 100644
--- a/content/en/docs/project-user-guide/alerting/alerting-policy.md
+++ b/content/en/docs/project-user-guide/alerting/alerting-policy.md
@@ -12,28 +12,26 @@ KubeSphere provides alerting policies for nodes and workloads. This tutorial dem
- You have enabled [KubeSphere Alerting](../../../pluggable-components/alerting/).
- To receive alert notifications, you must configure a [notification channel](../../../cluster-administration/platform-settings/notification-management/configure-email/) beforehand.
-- You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
- You have workloads in this project. If they are not ready, see [Deploy and Access Bookinfo](../../../quick-start/deploy-bookinfo-to-k8s/) to create a sample app.
## Create an Alerting Policy
-1. Log in to the console as `project-regular` and go to your project. Navigate to **Alerting Policies** under **Monitoring & Alerting**, then click **Create**.
+1. Log in to the console as `project-regular` and go to your project. Go to **Alerting Policies** under **Monitoring & Alerting**, then click **Create**.
-2. In the dialog that appears, provide the basic information as follows. Click **Next** to continue.
+2. In the displayed dialog box, provide the basic information as follows. Click **Next** to continue.
- **Name**. A concise and clear name as its unique identifier, such as `alert-demo`.
- **Alias**. Help you distinguish alerting policies better.
- **Description**. A brief introduction to the alerting policy.
- - **Duration (Minutes)**. An alert will be firing when the conditions defined for an alerting policy are met at any given point in the time range.
+   - **Threshold Duration (min)**. The alerting policy enters the `Firing` state when the conditions configured in the alerting rule have persisted for this duration.
- **Severity**. Allowed values include **Warning**, **Error** and **Critical**, providing an indication of how serious an alert is.
-3. On the **Alerting Rule** tab, you can use the rule template or create a custom rule. To use the template, fill in the following fields.
+3. On the **Rule Settings** tab, you can use the rule template or create a custom rule. To use the template, fill in the following fields.
- - **Resource Type**. Select the resource type you want to monitor, such as **Deployment**, **StatefulSet** and **DaemonSet**.
- - **Monitoring Target**. Depending on the resource type you select, the target can be different. You cannot see any target if you do not have any workload in the project.
- - **Alerting Rules**. Define a rule for the alerting policy. These rules are based on Prometheus expressions and an alert will be triggered when conditions are met. You can monitor objects such as CPU and memory.
-
- 
+ - **Resource Type**. Select the resource type you want to monitor, such as **Deployment**, **StatefulSet**, and **DaemonSet**.
+ - **Monitoring Targets**. Depending on the resource type you select, the target can be different. You cannot see any target if you do not have any workload in the project.
+ - **Alerting Rule**. Define a rule for the alerting policy. These rules are based on Prometheus expressions and an alert will be triggered when conditions are met. You can monitor objects such as CPU and memory.
{{< notice note >}}
@@ -43,24 +41,20 @@ KubeSphere provides alerting policies for nodes and workloads. This tutorial dem
Click **Next** to continue.
-4. On the **Notification Settings** tab, enter the alert summary and message to be included in your notification, then click **Create**.
+4. On the **Message Settings** tab, enter the alert summary and message to be included in your notification, then click **Create**.
-5. An alerting policy will be **Inactive** when just created. If conditions in the rule expression are met, it will reach **Pending** first, then turn to **Firing** if conditions keep to be met in the given time range.
+5. An alerting policy is **Inactive** when just created. If the conditions in the rule expression are met, it reaches **Pending** first, and then turns to **Firing** if the conditions continue to be met for the specified duration.
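
A rule template like the one configured above ultimately compiles to a Prometheus-style alerting rule. The following is a minimal sketch of such a rule; the metric name, labels, and threshold are illustrative, not the exact expression KubeSphere generates:

```yaml
# Illustrative Prometheus alerting rule; metric and label names are assumptions.
groups:
  - name: alert-demo
    rules:
      - alert: DeploymentHighCPU
        # Fires when the Deployment's CPU usage exceeds 0.5 cores
        expr: workload_cpu_usage{namespace="demo-project", workload="demo-deployment"} > 0.5
        for: 5m                # corresponds to Threshold Duration (min)
        labels:
          severity: warning    # corresponds to Severity
        annotations:
          summary: "CPU usage of demo-deployment is above 0.5 cores"
```

The `for` clause is what produces the **Pending** state: the condition must hold for the whole duration before the alert turns to **Firing**.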
## Edit an Alerting Policy
To edit an alerting policy after it is created, on the **Alerting Policies** page, click
on the right.
-1. Click **Edit** from the drop-down menu and edit the alerting policy following the same steps as you create it. Click **Update** on the **Notification Settings** page to save it.
-
- 
+1. Click **Edit** from the drop-down menu and edit the alerting policy following the same steps as you create it. Click **OK** on the **Message Settings** page to save it.
2. Click **Delete** from the drop-down menu to delete an alerting policy.
## View an Alerting Policy
-Click an alerting policy on the **Alerting Policies** page to see its detail information, including alerting rules and alerting messages. You can also see the rule expression which is based on the template you use when creating the alerting policy.
+Click an alerting policy on the **Alerting Policies** page to see its details, including alerting rules and alerting history. You can also see the rule expression, which is based on the template you used when creating the alerting policy.
-Under **Monitoring**, the **Alert Monitoring** chart shows the actual usage or amount of resources over time. **Notification Settings** displays the customized message you set in notifications.
-
-
\ No newline at end of file
+On the **Alert Monitoring** tab, the chart shows the actual usage or amount of resources over time, and **Alerting Message** displays the customized message you set in notifications.
diff --git a/content/en/docs/project-user-guide/application-workloads/container-image-settings.md b/content/en/docs/project-user-guide/application-workloads/container-image-settings.md
index 279f5d061..d844dcc98 100644
--- a/content/en/docs/project-user-guide/application-workloads/container-image-settings.md
+++ b/content/en/docs/project-user-guide/application-workloads/container-image-settings.md
@@ -1,40 +1,43 @@
---
-title: "Container Image Settings"
+title: "Pod Settings"
keywords: 'KubeSphere, Kubernetes, image, workload, setting, container'
-description: 'Learn different properties on the dashboard in detail as you set container images for your workload.'
-linkTitle: "Container Image Settings"
+description: 'Learn different properties on the dashboard in detail as you set Pods for your workload.'
+linkTitle: "Pod Settings"
weight: 10280
---
-When you create Deployments, StatefulSets or DaemonSets, you need to specify a container image. At the same time, KubeSphere provides users with various options to customize workload configurations, such as health check probes, environment variables and start commands. This page illustrates detailed explanations of different properties in **Container Image**.
+When you create Deployments, StatefulSets or DaemonSets, you need to configure Pod settings. At the same time, KubeSphere provides various options to customize workload configurations, such as health check probes, environment variables, and start commands. This page explains the different properties in **Pod Settings** in detail.
{{< notice tip >}}
-You can enable **Edit Mode** in the upper-right corner to see corresponding values in the manifest file (YAML format) of properties on the dashboard.
+You can enable **Edit YAML** in the upper-right corner to see corresponding values in the manifest file (YAML format) of properties on the dashboard.
{{</ notice >}}
-## Container Image
+## Pod Settings
### Pod Replicas
Set the number of replicated Pods by clicking
on the right and check the container log as shown below, which displays the expected output.
-
- 
-
- 
+4. In **Resource Status**, you can inspect the Pod status. Click
on the right and click
on the right and select the options from the menu to modify a DaemonSet.
+1. After a DaemonSet is created, it will be displayed in the list. You can click
on the right and select the options from the menu to modify a DaemonSet.
- 
-
- - **Edit**: View and edit the basic information.
+ - **Edit Information**: View and edit the basic information.
- **Edit YAML**: View, upload, download, or update the YAML file.
- - **Redeploy**: Redeploy the DaemonSet.
+ - **Re-create**: Re-create the DaemonSet.
- **Delete**: Delete the DaemonSet.
-2. Click the name of the DaemonSet and you can go to its detail page.
-
- 
+2. Click the name of the DaemonSet and you can go to its details page.
3. Click **More** to display what operations about this DaemonSet you can do.
- 
-
- - **Revision Rollback**: Select the revision to roll back.
- - **Edit Config Template**: Configure update strategies, containers and volumes.
+ - **Roll Back**: Select the revision to roll back.
+ - **Edit Settings**: Configure update strategies, containers and volumes.
- **Edit YAML**: View, upload, download, or update the YAML file.
- - **Redeploy**: Redeploy this DaemonSet.
+ - **Re-create**: Re-create this DaemonSet.
- **Delete**: Delete the DaemonSet, and return to the DaemonSet list page.
4. Click the **Resource Status** tab to view the port and Pod information of a DaemonSet.
- 
-
- **Replica Status**: You cannot change the number of Pod replicas for a DaemonSet.
- - **Pod detail**
-
- 
+ - **Pods**
- The Pod list provides detailed information of the Pod (status, node, Pod IP and resource usage).
- You can view the container information by clicking a Pod item.
- Click the container log icon to view output logs of the container.
- - You can view the Pod detail page by clicking the Pod name.
+ - You can view the Pod details page by clicking the Pod name.
### Revision records
@@ -140,17 +116,11 @@ After the resource template of workload is changed, a new log will be generated
Click the **Metadata** tab to view the labels and annotations of the DaemonSet.
-
-
### Monitoring
1. Click the **Monitoring** tab to view the CPU usage, memory usage, outbound traffic, and inbound traffic of the DaemonSet.
- 
-
-2. Click the drop-down menu in the upper-right corner to customize the time range and time interval.
-
- 
+2. Click the drop-down menu in the upper-right corner to customize the time range and sampling interval.
3. Click
/
in the upper-right corner to start/stop automatic data refreshing.
@@ -160,11 +130,8 @@ Click the **Metadata** tab to view the labels and annotations of the DaemonSet.
Click the **Environment Variables** tab to view the environment variables of the DaemonSet.
-
-
### Events
Click the **Events** tab to view the events of the DaemonSet.
-
diff --git a/content/en/docs/project-user-guide/application-workloads/deployments.md b/content/en/docs/project-user-guide/application-workloads/deployments.md
index cb6d87893..cba033e6a 100644
--- a/content/en/docs/project-user-guide/application-workloads/deployments.md
+++ b/content/en/docs/project-user-guide/application-workloads/deployments.md
@@ -13,7 +13,7 @@ For more information, see the [official documentation of Kubernetes](https://kub
## Prerequisites
-You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
## Create a Deployment
@@ -21,61 +21,47 @@ You need to create a workspace, a project and an account (`project-regular`). Th
Log in to the console as `project-regular`. Go to **Application Workloads** of a project, select **Workloads**, and click **Create** under the tab **Deployments**.
-
-
### Step 2: Enter basic information
Specify a name for the Deployment (for example, `demo-deployment`) and click **Next** to continue.
-
-
-### Step 3: Set an image
+### Step 3: Set a Pod
1. Before you set an image, define the number of replicated Pods in **Pod Replicas** by clicking
on the right and select options from the menu to modify your Deployment.
+1. After a Deployment is created, it will be displayed in the list. You can click
on the right and select options from the menu to modify your Deployment.
- 
-
- - **Edit**: View and edit the basic information.
+ - **Edit Information**: View and edit the basic information.
- **Edit YAML**: View, upload, download, or update the YAML file.
- - **Redeploy**: Redeploy the Deployment.
+ - **Re-create**: Re-create the Deployment.
- **Delete**: Delete the Deployment.
-2. Click the name of the Deployment and you can go to its detail page.
-
- 
+2. Click the name of the Deployment and you can go to its details page.
3. Click **More** to display the operations about this Deployment you can do.
- 
-
- - **Revision Rollback**: Select the revision to roll back.
- - **Horizontal Pod Autoscaling**: Autoscale the replicas according to CPU and memory usage. If both CPU and memory are specified, replicas are added or deleted if any of the conditions is met.
- - **Edit Config Template**: Configure update strategies, containers and volumes.
+ - **Roll Back**: Select the revision to roll back.
+ - **Edit Autoscaling**: Autoscale the replicas according to CPU and memory usage. If both CPU and memory are specified, replicas are added or deleted if any of the conditions is met.
+ - **Edit Settings**: Configure update strategies, containers and volumes.
- **Edit YAML**: View, upload, download, or update the YAML file.
- - **Redeploy**: Redeploy this Deployment.
+ - **Re-create**: Re-create this Deployment.
- **Delete**: Delete the Deployment, and return to the Deployment list page.
4. Click the **Resource Status** tab to view the port and Pod information of the Deployment.
- 
-
- - **Replica Status**: Click
or
to increase or decrease the number of Pod replicas.
- - **Pod detail**
-
- 
+ - **Replica Status**: Click
/
in the upper-right corner to start/stop automatic data refreshing.
@@ -166,10 +134,6 @@ Click the **Metadata** tab to view the labels and annotations of the Deployment.
Click the **Environment Variables** tab to view the environment variables of the Deployment.
-
-
### Events
-Click the **Events** tab to view the events of the Deployment.
-
-
\ No newline at end of file
+Click the **Events** tab to view the events of the Deployment.
\ No newline at end of file
diff --git a/content/en/docs/project-user-guide/application-workloads/horizontal-pod-autoscaling.md b/content/en/docs/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
index bb9fe21d8..262e2587a 100755
--- a/content/en/docs/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
+++ b/content/en/docs/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
@@ -15,7 +15,7 @@ This document uses HPA based on CPU usage as an example. Operations for HPA base
## Prerequisites
- You need to [enable the Metrics Server](https://kubesphere.io/docs/pluggable-components/metrics-server/).
-- You need to create a workspace, a project and an account (for example, `project-regular`). `project-regular` must be invited to the project and assigned the `operator` role. For more information, see [Create Workspaces, Projects, Accounts and Roles](/docs/quick-start/create-workspace-and-project/).
+- You need to create a workspace, a project and a user (for example, `project-regular`). `project-regular` must be invited to the project and assigned the `operator` role. For more information, see [Create Workspaces, Projects, Users and Roles](/docs/quick-start/create-workspace-and-project/).
## Create a Service
@@ -23,19 +23,11 @@ This document uses HPA based on CPU usage as an example. Operations for HPA base
2. Choose **Services** in **Application Workloads** on the left navigation bar and click **Create** on the right.
- 
-
3. In the **Create Service** dialog box, click **Stateless Service**.
- 
-
4. Set the Service name (for example, `hpa`) and click **Next**.
- 
-
-5. Click **Add Container Image**, set **Image** to `mirrorgooglecontainers/hpa-example` and click **Use Default Ports**.
-
- 
+5. Click **Add Container**, set **Image** to `mirrorgooglecontainers/hpa-example` and click **Use Default Ports**.
6. Set the CPU request (for example, 0.15 cores) for each container, click **√**, and click **Next**.
@@ -46,28 +38,22 @@ This document uses HPA based on CPU usage as an example. Operations for HPA base
{{</ notice >}}
- 
-
-7. Click **Next** on the **Mount Volumes** tab and click **Create** on the **Advanced Settings** tab.
+7. Click **Next** on the **Volume Settings** tab and click **Create** on the **Advanced Settings** tab.
## Configure Kubernetes HPA
-1. Choose **Deployments** in **Workloads** on the left navigation bar and click the HPA Deployment (for example, hpa-v1) on the right.
+1. Select **Deployments** in **Workloads** on the left navigation bar and click the HPA Deployment (for example, hpa-v1) on the right.
- 
-
-2. Click **More** and choose **Horizontal Pod Autoscaling** from the drop-down list.
-
- 
+2. Click **More** and select **Edit Autoscaling** from the drop-down menu.
3. In the **Horizontal Pod Autoscaling** dialog box, configure the HPA parameters and click **OK**.
- * **CPU Target Utilization**: Target percentage of the average Pod CPU request.
- * **Memory Target Usage**: Target average Pod memory usage in MiB.
- * **Min Replicas Number**: Minimum number of Pods.
- * **Max Replicas Number**: Maximum number of Pods.
+ * **Target CPU Usage (%)**: Target percentage of the average Pod CPU request.
+ * **Target Memory Usage (MiB)**: Target average Pod memory usage in MiB.
+ * **Minimum Replicas**: Minimum number of Pods.
+ * **Maximum Replicas**: Maximum number of Pods.
- In this example, **CPU Target Utilization** is set to `60`, **Min Replicas Number** is set to `1`, and **Max Replicas Number** is set to `10`.
+ In this example, **Target CPU Usage (%)** is set to `60`, **Minimum Replicas** is set to `1`, and **Maximum Replicas** is set to `10`.
{{< notice note >}}
@@ -75,49 +61,29 @@ This document uses HPA based on CPU usage as an example. Operations for HPA base
{{</ notice >}}
- 
-
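
The settings above are equivalent to a standard HorizontalPodAutoscaler manifest. A sketch for the `hpa-v1` Deployment used in this example (the API version may differ depending on your Kubernetes release):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-v1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-v1
  minReplicas: 1          # Minimum Replicas
  maxReplicas: 10         # Maximum Replicas
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # Target CPU Usage (%)
```

Note that the CPU target is a percentage of the Pod's CPU request, which is why step 6 above sets a CPU request for each container.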
## Verify HPA
This section uses a Deployment that sends requests to the HPA Service to verify that HPA automatically adjusts the number of Pods to meet the resource usage target.
### Create a load generator Deployment
-1. Choose **Workloads** in **Application Workloads** on the left navigation bar and click **Create** on the right.
-
- 
+1. Select **Workloads** in **Application Workloads** on the left navigation bar and click **Create** on the right.
2. In the **Create Deployment** dialog box, set the Deployment name (for example, `load-generator`) and click **Next**.
- 
+3. Click **Add Container** and set **Image** to `busybox`.
-3. Click **Add Container Image** and set **Image** to `busybox`.
-
- 
-
-4. Scroll down in the dialog box, select **Start Command**, and set **Run Command** to `sh,-c` and **Parameters** to `while true; do wget -q -O- http://
on the right of the load generator Deployment (for example, load-generator-v1), and choose **Delete** from the drop-down list. After the load-generator Deployment is deleted, check the status of the HPA Deployment again.
-
- The number of Pods decreases to the minimum.
-
- 
+2. Choose **Workloads** in **Application Workloads** on the left navigation bar, click
on the right of the load generator Deployment (for example, load-generator-v1), and choose **Delete** from the drop-down list. After the load-generator Deployment is deleted, check the status of the HPA Deployment again. The number of Pods decreases to the minimum.
{{< notice note >}}
@@ -133,7 +99,6 @@ You can repeat steps in [Configure HPA](#configure-hpa) to edit the HPA configur
1. Choose **Workloads** in **Application Workloads** on the left navigation bar and click the HPA Deployment (for example, hpa-v1) on the right.
-2. Click
on the right of **Horizontal Pod Autoscaling** and choose **Cancel** from the drop-down list.
+2. Click
on the right of **Autoscaling** and choose **Cancel** from the drop-down list.
- 
diff --git a/content/en/docs/project-user-guide/application-workloads/jobs.md b/content/en/docs/project-user-guide/application-workloads/jobs.md
index 195ce4d2c..b60d34657 100644
--- a/content/en/docs/project-user-guide/application-workloads/jobs.md
+++ b/content/en/docs/project-user-guide/application-workloads/jobs.md
@@ -1,7 +1,7 @@
---
title: "Jobs"
-keywords: "KubeSphere, Kubernetes, docker, jobs"
-description: "Learn basic concepts of Jobs and how to create Jobs in KubeSphere."
+keywords: "KubeSphere, Kubernetes, Docker, Jobs"
+description: "Learn basic concepts of Jobs and how to create Jobs on KubeSphere."
linkTitle: "Jobs"
weight: 10250
@@ -15,7 +15,7 @@ The following example demonstrates specific steps of creating a Job (computing
## Prerequisites
-You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
## Create a Job
@@ -23,8 +23,6 @@ You need to create a workspace, a project and an account (`project-regular`). Th
Log in to the console as `project-regular`. Go to **Jobs** under **Application Workloads** and click **Create**.
-
-
### Step 2: Enter basic information
Enter the basic information. Refer to the image below as an example.
@@ -33,34 +31,26 @@ Enter the basic information. Refer to the image below as an example.
- **Alias**: The alias name of the Job, making resources easier to identify.
- **Description**: The description of the Job, which gives a brief introduction of the Job.
-
+### Step 3: Strategy settings (optional)
-### Step 3: Job settings (optional)
-
-You can set the values in this step as below or click **Next** to use the default values. Refer to the table below for detailed explanations of each field.
-
-
+You can set the values in this step or click **Next** to use the default values. Refer to the table below for detailed explanations of each field.
| Name | Definition | Description |
| ----------------------- | ---------------------------- | ------------------------------------------------------------ |
-| Back off Limit | `spec.backoffLimit` | It specifies the number of retries before this Job is marked failed. It defaults to 6. |
-| Completions | `spec.completions` | It specifies the desired number of successfully finished Pods the Job should be run with. Setting it to nil means that the success of any Pod signals the success of all Pods, and allows parallelism to have any positive value. Setting it to 1 means that parallelism is limited to 1 and the success of that Pod signals the success of the Job. For more information, see [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/). |
-| Parallelism | `spec.parallelism` | It specifies the maximum desired number of Pods the Job should run at any given time. The actual number of Pods running in a steady state will be less than this number when the work left to do is less than max parallelism ((`.spec.completions - .status.successful`) < `.spec.parallelism`). For more information, see [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/). |
-| Active Deadline Seconds | `spec.activeDeadlineSeconds` | It specifies the duration in seconds relative to the startTime that the Job may be active before the system tries to terminate it; the value must be a positive integer. |
+| Maximum Retries | `spec.backoffLimit` | It specifies the maximum number of retries before this Job is marked as failed. It defaults to 6. |
+| Complete Pods | `spec.completions` | It specifies the desired number of successfully finished Pods the Job should be run with. Setting it to nil means that the success of any Pod signals the success of all Pods, and allows parallelism to have any positive value. Setting it to 1 means that parallelism is limited to 1 and the success of that Pod signals the success of the Job. For more information, see [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/). |
+| Parallel Pods | `spec.parallelism` | It specifies the maximum desired number of Pods the Job should run at any given time. The actual number of Pods running in a steady state will be less than this number when the work left to do is less than max parallelism ((`.spec.completions - .status.successful`) < `.spec.parallelism`). For more information, see [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/). |
+| Maximum Duration (s) | `spec.activeDeadlineSeconds` | It specifies the duration in seconds, relative to the start time, that the Job may be active before the system tries to terminate it. The value must be a positive integer. |
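
Taken together, the fields in this table map onto the Job spec as follows. This sketch uses the values from this tutorial (four completions, two parallel Pods, a 300-second deadline):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-demo
spec:
  backoffLimit: 5             # Maximum Retries
  completions: 4              # Complete Pods
  parallelism: 2              # Parallel Pods
  activeDeadlineSeconds: 300  # Maximum Duration (s)
  template:
    spec:
      restartPolicy: Never    # Re-create Pod in the KubeSphere console
      containers:
        - name: pi
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```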
-### Step 4: Set an image
+### Step 4: Set a Pod
-1. Select **Never** for **Restart Policy**. You can only specify **Never** or **OnFailure** for **Restart Policy** when the Job is not completed:
+1. Select **Re-create Pod** for **Restart Policy**. For a Job, **Restart Policy** can only be **Re-create Pod** or **Restart container**:
- - If **Restart Policy** is set to **Never**, the Job creates a new Pod when the Pod fails, and the failed Pod does not disappear.
+ - If **Restart Policy** is set to **Re-create Pod**, the Job creates a new Pod when the Pod fails, and the failed Pod does not disappear.
- - If **Restart Policy** is set to **OnFailure**, the Job will internally restart the container when the Pod fails, instead of creating a new Pod.
+ - If **Restart Policy** is set to **Restart container**, the Job will internally restart the container when the Pod fails, instead of creating a new Pod.
- 
-
-2. Click **Add Container Image** which directs you to the **Add Container** page. Enter `perl` in the image search bar and press **Enter**.
-
- 
+2. Click **Add Container** which directs you to the **Add Container** page. Enter `perl` in the image search box and press **Enter**.
3. On the same page, scroll down to **Start Command**. Enter the following command, which computes pi to 2000 places and then prints it, in the box. Click **√** in the lower-right corner and select **Next** to continue.
@@ -68,13 +58,11 @@ You can set the values in this step as below or click **Next** to use the defaul
perl,-Mbignum=bpi,-wle,print bpi(2000)
```
- 
-
- {{< notice note >}}For more information about setting images, see [Container Image Settings](../container-image-settings/).{{ notice >}}
+   {{< notice note >}}For more information about setting images, see [Pod Settings](../container-image-settings/).{{</ notice >}}
### Step 5: Inspect the Job manifest (optional)
-1. Enable **Edit Mode** in the upper-right corner which displays the manifest file of the Job. You can see all the values are set based on what you have specified in the previous steps.
+1. Enable **Edit YAML** in the upper-right corner which displays the manifest file of the Job. You can see all the values are set based on what you have specified in the previous steps.
```yaml
apiVersion: batch/v1
@@ -113,36 +101,28 @@ You can set the values in this step as below or click **Next** to use the defaul
activeDeadlineSeconds: 300
```
-2. You can make adjustments in the manifest directly and click **Create** or disable the **Edit Mode** and get back to the **Create Job** page.
+2. You can make adjustments in the manifest directly and click **Create**, or disable **Edit YAML** to return to the **Create** page.
- {{< notice note >}}You can skip **Mount Volumes** and **Advanced Settings** for this tutorial. For more information, see [Mount volumes](../deployments/#step-4-mount-volumes) and [Configure advanced settings](../deployments/#step-5-configure-advanced-settings).{{ notice >}}
+   {{< notice note >}}You can skip **Volume Settings** and **Advanced Settings** for this tutorial. For more information, see [Mount volumes](../deployments/#step-4-mount-volumes) and [Configure advanced settings](../deployments/#step-5-configure-advanced-settings).{{</ notice >}}
### Step 6: Check the result
1. In the final step of **Advanced Settings**, click **Create** to finish. A new item will be added to the Job list if the creation is successful.
- 
-
-2. Click this Job and go to **Execution Records** where you can see the information of each execution record. There are four completed Pods since **Completions** was set to `4` in Step 3.
-
- 
+2. Click this Job and go to **Job Records**, where you can see information about each execution record. There are four completed Pods since **Complete Pods** was set to `4` in Step 3.
{{< notice tip >}}
-You can rerun the Job if it fails, the reason of which displays under **Messages**.
+You can rerun the Job if it fails and the reason for failure is displayed under **Message**.
{{</ notice >}}
-3. In **Resource Status**, you can inspect the Pod status. Two Pods were created each time as **Parallelism** was set to 2. Click
on the right and check the container log as shown below, which displays the expected calculation result.
-
- 
-
- 
+3. In **Resource Status**, you can inspect the Pod status. Two Pods were created each time as **Parallel Pods** was set to 2. Click
on the right and click
to refresh the execution records.
@@ -171,24 +147,16 @@ On the Job detail page, you can manage the Job after it is created.
1. Click the **Resource Status** tab to view the Pods of the Job.
- 
-
2. Click
to refresh the Pod information, and click
/
to display/hide the containers in each Pod.
### Metadata
Click the **Metadata** tab to view the labels and annotations of the Job.
-
-
### Environment variables
Click the **Environment Variables** tab to view the environment variables of the Job.
-
-
### Events
Click the **Events** tab to view the events of the Job.
-
-
\ No newline at end of file
diff --git a/content/en/docs/project-user-guide/application-workloads/routes.md b/content/en/docs/project-user-guide/application-workloads/routes.md
index a0be0cf08..7addb2bb5 100644
--- a/content/en/docs/project-user-guide/application-workloads/routes.md
+++ b/content/en/docs/project-user-guide/application-workloads/routes.md
@@ -11,7 +11,7 @@ A Route on KubeSphere is the same as an [Ingress](https://kubernetes.io/docs/con
## Prerequisites
-- You need to create a workspace, a project and two accounts (for example, `project-admin` and `project-regular`). In the project, the role of `project-admin` must be `admin` and that of `project-regular` must be `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](/docs/quick-start/create-workspace-and-project/).
+- You need to create a workspace, a project and two users (for example, `project-admin` and `project-regular`). In the project, the role of `project-admin` must be `admin` and that of `project-regular` must be `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](/docs/quick-start/create-workspace-and-project/).
- If the Route is to be accessed in HTTPS mode, you need to [create a Secret](/docs/project-user-guide/configuration/secrets/) that contains the `tls.crt` (TLS certificate) and `tls.key` (TLS private key) keys used for encryption.
- You need to [create at least one Service](/docs/project-user-guide/application-workloads/services/). This document uses a demo Service as an example, which returns the Pod name to external requests.
@@ -19,28 +19,16 @@ A Route on KubeSphere is the same as an [Ingress](https://kubernetes.io/docs/con
1. Log in to the KubeSphere web console as `project-admin` and go to your project.
-2. Choose **Advanced Settings** in **Project Settings** on the left navigation bar and click **Set Gateway** on the right.
+2. Select **Gateway Settings** in **Project Settings** on the left navigation bar and click **Enable Gateway** on the right.
+
+3. In the displayed dialog box, set **Access Mode** to **NodePort** or **LoadBalancer**, and click **OK**.
{{< notice note >}}
- If the access method has been set, you can click **Edit** and choose **Edit Gateway** to change the access method.
+ If **Access Mode** is set to **LoadBalancer**, you may need to enable the load balancer plugin in your environment according to the plugin user guide.
{{</ notice >}}
- 
-
-3. In the displayed **Set Gateway** dialog box, set **Access Method** to **NodePort** or **LoadBalancer**, and click **Save**.
-
- {{< notice note >}}
-
- If **Access Method** is set to **LoadBalancer**, you may need to enable the load balancer plugin in your environment according to the plugin user guide.
-
- {{</ notice >}}
-
- 
-
- 
-
## Create a Route
### Step 1: Configure basic information
@@ -49,34 +37,26 @@ A Route on KubeSphere is the same as an [Ingress](https://kubernetes.io/docs/con
2. Choose **Routes** in **Application Workloads** on the left navigation bar and click **Create** on the right.
- 
-
-3. On the **Basic Info** tab, configure the basic information about the Route and click **Next**.
+3. On the **Basic Information** tab, configure the basic information about the Route and click **Next**.
* **Name**: Name of the Route, which is used as a unique identifier.
* **Alias**: Alias of the Route.
* **Description**: Description of the Route.
- 
+### Step 2: Configure routing rules
-### Step 2: Configure Route rules
+1. On the **Routing Rules** tab, click **Add Routing Rule**.
-1. On the **Route Rules** tab, click **Add Route Rule**.
-
-2. Select a mode, configure Route rules, click **√**, and click **Next**.
+2. Select a mode, configure routing rules, click **√**, and click **Next**.
* **Auto Generate**: KubeSphere automatically generates a domain name in the `
on the right to further edit it, such as its metadata (excluding **Name**), YAML, port, and Internet access.
- 
-
- - **Edit**: View and edit the basic information.
+ - **Edit Information**: View and edit the basic information.
- **Edit YAML**: View, upload, download, or update the YAML file.
- **Edit Service**: View the access type and set selectors and ports.
- - **Edit Internet Access**: Edit the service Internet access method.
+ - **Edit External Access**: Edit the external access method for the Service.
- **Delete**: When you delete a Service, associated resources will be displayed. If you check them, they will be deleted together with the Service.
-2. Click the name of the Service and you can go to its detail page.
-
- 
+2. Click the name of the Service to go to its details page.
- Click **More** to expand the drop-down menu which is the same as the one in the Service list.
- The Pod list provides detailed information of the Pod (status, node, Pod IP and resource usage).
- You can view the container information by clicking a Pod item.
- Click the container log icon to view output logs of the container.
- - You can view the Pod detail page by clicking the Pod name.
+ - You can view the Pod details page by clicking the Pod name.
### Resource status
-1. Click the **Resource Status** tab to view information about the Service ports, Workloads, and Pods.
-
- 
+1. Click the **Resource Status** tab to view information about the Service ports, workloads, and Pods.
2. In the **Pods** area, click
to refresh the Pod information, and click
/
to display/hide the containers in each Pod.
- 
-
### Metadata
Click the **Metadata** tab to view the labels and annotations of the Service.
-
-
### Events
-Click the **Events** tab to view the events of the Service.
-
-
\ No newline at end of file
+Click the **Events** tab to view the events of the Service.
\ No newline at end of file
diff --git a/content/en/docs/project-user-guide/application-workloads/statefulsets.md b/content/en/docs/project-user-guide/application-workloads/statefulsets.md
index f270ca0a4..833c7f925 100644
--- a/content/en/docs/project-user-guide/application-workloads/statefulsets.md
+++ b/content/en/docs/project-user-guide/application-workloads/statefulsets.md
@@ -1,12 +1,12 @@
---
-title: "StatefulSets"
-keywords: 'KubeSphere, Kubernetes, StatefulSets, dashboard, service'
-description: 'Learn basic concepts of StatefulSets and how to create StatefulSets in KubeSphere.'
+title: "Kubernetes StatefulSet in KubeSphere"
+keywords: 'KubeSphere, Kubernetes, StatefulSets, Dashboard, Service'
+description: 'Learn basic concepts of StatefulSets and how to create StatefulSets on KubeSphere.'
linkTitle: "StatefulSets"
weight: 10220
---
-As a workload API object, a StatefulSet is used to manage stateful applications. It is responsible for the deploying, scaling of a set of Pods, and guarantees the ordering and uniqueness of these Pods.
+As a workload API object, a Kubernetes StatefulSet is used to manage stateful applications. It is responsible for the deployment and scaling of a set of Pods, and guarantees the ordering and uniqueness of these Pods.
Like a Deployment, a StatefulSet manages Pods that are based on an identical container specification. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These Pods are created from the same specification, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
@@ -23,64 +23,52 @@ For more information, see the [official documentation of Kubernetes](https://kub
## Prerequisites
-You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
-## Create a StatefulSet
+## Create a Kubernetes StatefulSet
In KubeSphere, a **Headless** service is also created when you create a StatefulSet. You can find the headless service in [Services](../services/) under **Application Workloads** in a project.
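The pairing described above can be sketched in YAML. `demo-stateful` matches the name used later in this tutorial; the label, container name, and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-stateful
spec:
  clusterIP: None                # "Headless": DNS resolves directly to Pod IPs
  selector:
    app: demo-stateful
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-stateful
spec:
  serviceName: demo-stateful     # binds the StatefulSet to its headless Service
  replicas: 1
  selector:
    matchLabels:
      app: demo-stateful
  template:
    metadata:
      labels:
        app: demo-stateful
    spec:
      containers:
      - name: container-demo     # hypothetical container name
        image: nginx             # hypothetical image
```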
### Step 1: Open the dashboard
-Log in to the console as `project-regular`. Go to **Application Workloads** of a project, select **Workloads**, and click **Create** under the tab **StatefulSets**.
-
-
+Log in to the console as `project-regular`. Go to **Application Workloads** of a project, select **Workloads**, and click **Create** under the **StatefulSets** tab.
### Step 2: Enter basic information
Specify a name for the StatefulSet (for example, `demo-stateful`) and click **Next** to continue.
-
-
-### Step 3: Set an image
+### Step 3: Set a Pod
1. Before you set an image, define the number of replicated Pods in **Pod Replicas** by clicking
on the right to select options from the menu to modify your StatefulSet.
+1. After a StatefulSet is created, it will be displayed in the list. You can click
on the right to select options from the menu to modify your StatefulSet.
- 
-
- - **Edit**: View and edit the basic information.
- - **Edit YAMl**: View, upload, download, or update the YAML file.
- - **Redeploy**: Redeploy the StatefulSet.
+ - **Edit Information**: View and edit the basic information.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Re-create**: Re-create the StatefulSet.
- **Delete**: Delete the StatefulSet.
-2. Click the name of the StatefulSet and you can go to its detail page.
-
- 
+2. Click the name of the StatefulSet to go to its details page.
3. Click **More** to display the operations you can perform on this StatefulSet.
- 
-
- - **Revision Rollback**: Select the revision to roll back.
+ - **Roll Back**: Select the revision to roll back.
- **Edit Service**: Set the port to expose the container image and the service port.
- - **Edit Config Template**: Configure update strategies, containers and volumes.
+ - **Edit Settings**: Configure update strategies, containers and volumes.
- **Edit YAML**: View, upload, download, or update the YAML file.
- - **Redeploy**: Redeploy this StatefulSet.
+ - **Re-create**: Re-create this StatefulSet.
- **Delete**: Delete the StatefulSet, and return to the StatefulSet list page.
4. Click the **Resource Status** tab to view the port and Pod information of a StatefulSet.
- 
-
- - **Replica Status**: Click
or
to increase or decrease the number of Pod replicas.
- - **Pod detail**
-
- 
+ - **Replica Status**: Click
/
in the upper-right corner to start/stop automatic data refreshing.
@@ -174,11 +142,7 @@ Click the **Metadata** tab to view the labels and annotations of the StatefulSet
Click the **Environment Variables** tab to view the environment variables of the StatefulSet.
-
-
### Events
Click the **Events** tab to view the events of the StatefulSet.
-
-
diff --git a/content/en/docs/project-user-guide/application/app-template.md b/content/en/docs/project-user-guide/application/app-template.md
index a7e879a7d..30958f0bc 100644
--- a/content/en/docs/project-user-guide/application/app-template.md
+++ b/content/en/docs/project-user-guide/application/app-template.md
@@ -1,28 +1,24 @@
---
title: "App Templates"
-keywords: 'Kubernetes, chart, Helm, KubeSphere, application, repository, template'
+keywords: 'Kubernetes, Chart, Helm, KubeSphere, Application Template, Repository'
description: 'Understand the concept of app templates and how they can help to deploy applications within enterprises.'
linkTitle: "App Templates"
weight: 10110
---
-An app template serves as a way for users to upload, deliver and manage apps. Generally, an app is composed of one or more Kubernetes workloads (for example, [Deployments](../../../project-user-guide/application-workloads/deployments/), [StatefulSets](../../../project-user-guide/application-workloads/statefulsets/) and [DaemonSets](../../../project-user-guide/application-workloads/daemonsets/)) and [Services](../../../project-user-guide/application-workloads/services/) based on how it functions and communicates with the external environment. Apps that are uploaded as app templates are built based on a [Helm](https://helm.sh/) package.
+An app template serves as a way for users to upload, deliver, and manage apps. Generally, an app is composed of one or more Kubernetes workloads (for example, [Deployments](../../../project-user-guide/application-workloads/deployments/), [StatefulSets](../../../project-user-guide/application-workloads/statefulsets/) and [DaemonSets](../../../project-user-guide/application-workloads/daemonsets/)) and [Services](../../../project-user-guide/application-workloads/services/) based on how it functions and communicates with the external environment. Apps that are uploaded as app templates are built based on a [Helm](https://helm.sh/) package.
## How App Templates Work
You can deliver Helm charts to the public repository of KubeSphere or import a private app repository to offer app templates.
-The public repository is also known as the App Store in KubeSphere, accessible to every tenant in a workspace. After [uploading the Helm chart of an app](../../../workspace-administration/upload-helm-based-application/), you can deploy your app to test its functions and submit it for review. Ultimately, you have the option to release it the App Store after it is approved. For more information, see [Application Lifecycle Management](../../../application-store/app-lifecycle-management/).
-
-
+The public repository, also known as the App Store on KubeSphere, is accessible to every tenant in a workspace. After [uploading the Helm chart of an app](../../../workspace-administration/upload-helm-based-application/), you can deploy your app to test its functions and submit it for review. Ultimately, you have the option to release it to the App Store after it is approved. For more information, see [Application Lifecycle Management](../../../application-store/app-lifecycle-management/).
For a private repository, only users with the required permissions are allowed to [add private repositories](../../../workspace-administration/app-repository/import-helm-repository/) in a workspace. Generally, a private repository is built on object storage services, such as MinIO. After being imported to KubeSphere, these private repositories serve as application pools that provide app templates.
-
-
{{< notice note >}}
-[For individual apps that are uploaded as Helm charts](../../../workspace-administration/upload-helm-based-application/) to KubeSphere, they display in the App Store together with built-in apps after approved and released. Besides, when you select app templates from private app repositories, you can also see **From workspace** in the list, which stores these individual apps uploaded as Helm charts.
+Individual apps that are [uploaded as Helm charts](../../../workspace-administration/upload-helm-based-application/) to KubeSphere are displayed in the App Store together with built-in apps after they are approved and released. In addition, when you select app templates from private app repositories, you can also see **Current workspace** in the list, which stores these individual apps uploaded as Helm charts.
{{</ notice >}}
diff --git a/content/en/docs/project-user-guide/application/compose-app.md b/content/en/docs/project-user-guide/application/compose-app.md
index 2037dd63c..b2564950e 100644
--- a/content/en/docs/project-user-guide/application/compose-app.md
+++ b/content/en/docs/project-user-guide/application/compose-app.md
@@ -6,32 +6,32 @@ linkTitle: "Create a Microservices-based App"
weight: 10140
---
-With each microservice handling a single part of the app's functionality, an app can be divided into different components. These components have their own responsibilities and limitations, independent from each other. In KubeSphere, this kind of app is called **Composing App**, which can be built through newly created Services or existing Services.
+With each microservice handling a single part of the app's functionality, an app can be divided into different components. These components have their own responsibilities and limitations, independent from each other. In KubeSphere, this kind of app is called **Composed App**, which can be built through newly created Services or existing Services.
This tutorial demonstrates how to create a microservices-based app Bookinfo, which is composed of four Services, and set a customized domain name to access the app.
## Prerequisites
-- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be invited to the project with the `operator` role. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to create a workspace, a project, and a user (`project-regular`) for this tutorial. The user needs to be invited to the project with the `operator` role. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
- `project-admin` needs to [set the project gateway](../../../project-administration/project-gateway/) so that `project-regular` can define a domain name when creating the app.
## Create Microservices that Compose an App
-1. Log in to the web console of KubeSphere and navigate to **Apps** in **Application Workloads** of your project. On the **Composing Apps** tab, click **Create Composing App**.
+1. Log in to the web console of KubeSphere and navigate to **Apps** in **Application Workloads** of your project. On the **Composed Apps** tab, click **Create**.
2. Set a name for the app (for example, `bookinfo`) and click **Next**.
-3. On the **Components** page, you need to create microservices that compose the app. Click **Add Service** and select **Stateless Service**.
+3. On the **Services** page, you need to create microservices that compose the app. Click **Create Service** and select **Stateless Service**.
4. Set a name for the Service (for example, `productpage`) and click **Next**.
{{< notice note >}}
- You can create a Service on the dashboard directly or enable **Edit Mode** in the top-right corner to edit the YAML file.
+ You can create a Service on the dashboard directly or enable **Edit YAML** in the upper-right corner to edit the YAML file.
{{</ notice >}}
-5. Click **Add Container Image** under **Container Image** and enter `kubesphere/examples-bookinfo-productpage-v1:1.13.0` in the search bar to use the Docker Hub image.
+5. Click **Add Container** under **Containers** and enter `kubesphere/examples-bookinfo-productpage-v1:1.13.0` in the search box to use the Docker Hub image.
{{< notice note >}}
@@ -39,11 +39,11 @@ This tutorial demonstrates how to create a microservices-based app Bookinfo, whi
{{</ notice >}}
-6. Click **Use Default Ports**. For more information about image settings, see [Container Image Settings](../../../project-user-guide/application-workloads/container-image-settings/). Click **√** in the bottom-right corner and **Next** to continue.
+6. Click **Use Default Ports**. For more information about image settings, see [Pod Settings](../../../project-user-guide/application-workloads/container-image-settings/). Click **√** in the lower-right corner and **Next** to continue.
-7. On the **Mount Volumes** page, [add a volume](../../../project-user-guide/storage/volumes/) or click **Next** to continue.
+7. On the **Volume Settings** page, [add a volume](../../../project-user-guide/storage/volumes/) or click **Next** to continue.
-8. Click **Add** on the **Advanced Settings** page directly.
+8. Click **Create** on the **Advanced Settings** page.
9. Similarly, add the other three microservices for the app. Here is the image information:
@@ -55,13 +55,11 @@ This tutorial demonstrates how to create a microservices-based app Bookinfo, whi
10. When you finish adding microservices, click **Next**.
-11. On the **Internet Access** page, click **Add Route Rule**. On the **Specify Domain** tab, set a domain name for your app (for example, `demo.bookinfo`) and select `http` in the **Protocol** field. For `Paths`, select the Service `productpage` and port `9080`. Click **OK** to continue.
-
- 
+11. On the **Route Settings** page, click **Add Routing Rule**. On the **Specify Domain** tab, set a domain name for your app (for example, `demo.bookinfo`) and select `HTTP` in the **Protocol** field. For `Paths`, select the Service `productpage` and port `9080`. Click **OK** to continue.
{{< notice note >}}
-The button **Add Route Rule** is not visible if the project gateway is not set.
+The button **Add Routing Rule** is not visible if the project gateway is not set.
{{</ notice >}}
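The routing rule configured above corresponds roughly to the following Ingress manifest. The resource name is a hypothetical placeholder; the host, Service name, and port are the values set in this step (field layout assumes the `networking.k8s.io/v1` API):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bookinfo-ingress         # hypothetical name
spec:
  rules:
  - host: demo.bookinfo          # the domain set on the Specify Domain tab
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: productpage    # the Service selected for Paths
            port:
              number: 9080       # the selected Service port
```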
@@ -84,13 +82,9 @@ The button **Add Route Rule** is not visible if the project gateway is not set.
{{</ notice >}}
-2. In **Composing Apps**, click the app you just created.
+2. In **Composed Apps**, click the app you just created.
-3. In **Application Components**, click **Click to visit** to access the app.
-
- 
-
- 
+3. In **Resource Status**, click **Access Service** under **Routes** to access the app.
{{< notice note >}}
@@ -100,7 +94,3 @@ The button **Add Route Rule** is not visible if the project gateway is not set.
4. Click **Normal user** and **Test user** respectively to see other **Services**.
- 
-
-
-
diff --git a/content/en/docs/project-user-guide/application/deploy-app-from-appstore.md b/content/en/docs/project-user-guide/application/deploy-app-from-appstore.md
index b6d9d0493..f131f373c 100644
--- a/content/en/docs/project-user-guide/application/deploy-app-from-appstore.md
+++ b/content/en/docs/project-user-guide/application/deploy-app-from-appstore.md
@@ -13,70 +13,48 @@ This tutorial demonstrates how to quickly deploy [NGINX](https://www.nginx.com/)
## Prerequisites
- You have enabled [OpenPitrix (App Store)](../../../pluggable-components/app-store/).
-- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user invited to the project with the `operator` role. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to create a workspace, a project, and a user (`project-regular`) for this tutorial. The user must be invited to the project and granted the `operator` role. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
## Hands-on Lab
### Step 1: Deploy NGINX from the App Store
-1. Log in to the web console of KubeSphere as `project-regular` and click **App Store** in the top-left corner.
+1. Log in to the web console of KubeSphere as `project-regular` and click **App Store** in the upper-left corner.
{{< notice note >}}
- You can also go to **Apps** under **Application Workloads** in your project, click **Deploy New App**, and select **From App Store** to go to the App Store.
+ You can also go to **Apps** under **Application Workloads** in your project, click **Create**, and select **From App Store** to go to the App Store.
{{</ notice >}}
-2. Find NGINX and click **Deploy** on the **App Information** page.
-
- 
-
- 
+2. Search for NGINX, click it, and click **Install** on the **App Information** page. Make sure you click **Agree** in the displayed **App Deploy Agreement** dialog box.
3. Set a name and select an app version. Make sure NGINX is deployed in `demo-project` and click **Next**.
- 
-
-4. In **App Configurations**, specify the number of replicas to deploy for the app and enable Ingress based on your needs. When you finish, click **Deploy**.
-
- 
-
- 
+4. In **App Settings**, specify the number of replicas to deploy for the app and enable Ingress based on your needs. When you finish, click **Install**.
{{< notice note >}}
- To specify more values for NGINX, use the toggle switch to see the app’s manifest in YAML format and edit its configurations.
+ To specify more values for NGINX, use the toggle to see the app’s manifest in YAML format and edit its configurations.
{{</ notice >}}
5. Wait until NGINX is up and running.
- 
-
### Step 2: Access NGINX
To access NGINX outside the cluster, you need to expose the app through a NodePort first.
-1. Go to **Services** and click the service name of NGINX.
+1. Go to **Services** in the project `demo-project` and click the service name of NGINX.
- 
-
-2. On the Service detail page, click **More** and select **Edit Internet Access** from the drop-down menu.
-
- 
+2. On the Service details page, click **More** and select **Edit External Access** from the drop-down menu.
3. Select **NodePort** for **Access Method** and click **OK**. For more information, see [Project Gateway](../../../project-administration/project-gateway/).
- 
-
-4. Under **Service Ports**, you can see the port is exposed.
-
- 
+4. Under **Ports**, view the exposed port.
5. Access NGINX through `
on the right and select the operation below from the drop-down list.
+1. After a ConfigMap is created, it is displayed on the **ConfigMaps** page. You can click
on the right and select the operation below from the drop-down list.
- - **Edit**: View and edit the basic information.
+ - **Edit Information**: View and edit the basic information.
- **Edit YAML**: View, upload, download, or update the YAML file.
- - **Modify Config**: Modify the key-value pair of the ConfigMap.
+ - **Edit Settings**: Modify the key-value pair of the ConfigMap.
- **Delete**: Delete the ConfigMap.
-2. Click the name of the ConfigMap to go to its detail page. Under the tab **Detail**, you can see all the key-value pairs you have added for the ConfigMap.
-
- 
+2. Click the name of the ConfigMap to go to its details page. Under the tab **Data**, you can see all the key-value pairs you have added for the ConfigMap.
3. Click **More** to display the operations you can perform on this ConfigMap.
- **Edit YAML**: View, upload, download, or update the YAML file.
- - **Modify Config**: Modify the key-value pair of the ConfigMap.
+ - **Edit Settings**: Modify the key-value pair of the ConfigMap.
- **Delete**: Delete the ConfigMap, and return to the list page.
4. Click **Edit Information** to view and edit the basic information.
@@ -72,6 +68,4 @@ You can see the ConfigMap manifest file in YAML format by enabling **Edit Mode**
## Use a ConfigMap
-When you create workloads, [Services](../../../project-user-guide/application-workloads/services/), [Jobs](../../../project-user-guide/application-workloads/jobs/) or [CronJobs](../../../project-user-guide/application-workloads/cronjobs/), you may need to add environment variables for containers. On the **Container Image** page, check **Environment Variables** and click **Use ConfigMap or Secret** to use a ConfigMap from the list.
-
-
\ No newline at end of file
+When you create workloads, [Services](../../../project-user-guide/application-workloads/services/), [Jobs](../../../project-user-guide/application-workloads/jobs/) or [CronJobs](../../../project-user-guide/application-workloads/cronjobs/), you may need to add environment variables for containers. On the **Add Container** page, check **Environment Variables** and click **Use ConfigMap or Secret** to use a ConfigMap from the list.
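Under the hood, the **Use ConfigMap or Secret** option maps to a `configMapKeyRef` entry in the container spec. The following fragment is a sketch with hypothetical names (`demo-container`, `demo-configmap`, `demo-key`):

```yaml
# Fragment of a Pod template
containers:
- name: demo-container           # hypothetical container
  image: nginx                   # hypothetical image
  env:
  - name: DEMO_KEY               # environment variable visible in the container
    valueFrom:
      configMapKeyRef:
        name: demo-configmap     # the ConfigMap to read from
        key: demo-key            # the key whose value is injected
```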
diff --git a/content/en/docs/project-user-guide/configuration/image-registry.md b/content/en/docs/project-user-guide/configuration/image-registry.md
index 6bb20dae8..1469cbf9c 100644
--- a/content/en/docs/project-user-guide/configuration/image-registry.md
+++ b/content/en/docs/project-user-guide/configuration/image-registry.md
@@ -1,18 +1,18 @@
---
title: "Image Registries"
keywords: 'KubeSphere, Kubernetes, docker, Secrets'
-description: 'Learn how to create an image registry in KubeSphere.'
+description: 'Learn how to create an image registry on KubeSphere.'
linkTitle: "Image Registries"
weight: 10430
---
-A Docker image is a read-only template that can be used to deploy container services. Each image has a unique identifier (i.e. image name:tag). For example, an image can contain a complete package of an Ubuntu operating system environment with only Apache and a few applications installed. An image registry is used to store and distribute Docker images.
+A Docker image is a read-only template that can be used to deploy container services. Each image has a unique identifier (for example, image name:tag). For example, an image can contain a complete package of an Ubuntu operating system environment with only Apache and a few applications installed. An image registry is used to store and distribute Docker images.
This tutorial demonstrates how to create Secrets for different image registries.
## Prerequisites
-You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
## Create a Secret
@@ -20,9 +20,7 @@ When you create workloads, [Services](../../../project-user-guide/application-wo
### Step 1: Open the dashboard
-Log in to the web console of KubeSphere as `project-regular`. Go to **Configurations** of a project, choose **Secrets** and click **Create**.
-
-
+Log in to the web console of KubeSphere as `project-regular`. Go to **Configuration** of a project, select **Secrets** and click **Create**.
### Step 2: Enter basic information
@@ -30,30 +28,24 @@ Specify a name for the Secret (for example, `demo-registry-secret`) and click **
{{< notice tip >}}
-You can see the Secret's manifest file in YAML format by enabling **Edit Mode** in the upper-right corner. KubeSphere allows you to edit the manifest file directly to create a Secret. Alternatively, you can follow the steps below to create a Secret via the dashboard.
+You can see the Secret's manifest file in YAML format by enabling **Edit YAML** in the upper-right corner. KubeSphere allows you to edit the manifest file directly to create a Secret. Alternatively, you can follow the steps below to create a Secret via the dashboard.
{{</ notice >}}
-
-
### Step 3: Specify image registry information
-Select **kubernetes.io/dockerconfigjson (Image Registry Secret)** for **Type**. To use images from your private registry as you create application workloads, you need to specify the following fields.
+Select **Image registry information** for **Type**. To use images from your private registry as you create application workloads, you need to specify the following fields.
- **Registry Address**. The address of the image registry that stores images for you to use when creating application workloads.
- **Username**. The account name you use to log in to the registry.
- **Password**. The password you use to log in to the registry.
- **Email** (optional). Your email address.
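The resulting object is a `kubernetes.io/dockerconfigjson` Secret. A minimal sketch is shown below; `demo-registry-secret` matches the name used later in this tutorial, and the base64 payload encodes the fields listed above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-registry-secret
type: kubernetes.io/dockerconfigjson
data:
  # base64 of: {"auths":{"<registry address>":{"username":"...","password":"...","email":"..."}}}
  .dockerconfigjson: <base64-encoded Docker config>
```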
-
-
#### Add the Docker Hub registry
1. Before you add your image registry in [Docker Hub](https://hub.docker.com/), make sure you have an available Docker Hub account. On the **Secret Settings** page, enter `docker.io` for **Registry Address** and enter your Docker ID and password for **Username** and **Password**. Click **Validate** to check whether the address is available.
- 
-
-2. Click **Create**. Later, the Secret will be displayed on the **Secrets** page. For more information about how to edit the Secret after you create it, see [Check Secret Details](../../../project-user-guide/configuration/secrets/#check-secret-details).
+2. Click **Create**. Later, the Secret is displayed on the **Secrets** page. For more information about how to edit the Secret after you create it, see [Check Secret Details](../../../project-user-guide/configuration/secrets/#check-secret-details).
#### Add the Harbor image registry
@@ -75,7 +67,7 @@ Select **kubernetes.io/dockerconfigjson (Image Registry Secret)** for **Type**.
- `Environment` represents [dockerd options](https://docs.docker.com/engine/reference/commandline/dockerd/).
- - `--insecure-registry` is required by the Docker daemon for the communication with an insecure registry. Refer to [docker docs](https://docs.docker.com/engine/reference/commandline/dockerd/#insecure-registries) for its syntax.
+ - `--insecure-registry` is required by the Docker daemon for the communication with an insecure registry. Refer to [Docker documentation](https://docs.docker.com/engine/reference/commandline/dockerd/#insecure-registries) for its syntax.
{{</ notice >}}
@@ -89,9 +81,7 @@ Select **kubernetes.io/dockerconfigjson (Image Registry Secret)** for **Type**.
sudo systemctl restart docker
```
-3. Go back to the **Secret Settings** page and select **kubernetes.io/dockerconfigjson (Image Registry Secret)** for **Type**. Enter your Harbor IP address for **Registry Address** and enter the username and password.
-
- 
+3. Go back to the **Data Settings** page and select **Image registry information** for **Type**. Enter your Harbor IP address for **Registry Address** and enter the username and password.
{{< notice note >}}
@@ -99,7 +89,7 @@ Select **kubernetes.io/dockerconfigjson (Image Registry Secret)** for **Type**.
{{</ notice >}}
-4. Click **Create**. Later, the Secret will be displayed on the **Secrets** page. For more information about how to edit the Secret after you create it, see [Check Secret Details](../../../project-user-guide/configuration/secrets/#check-secret-details).
+4. Click **Create**. Later, the Secret is displayed on the **Secrets** page. For more information about how to edit the Secret after you create it, see [Check Secret Details](../../../project-user-guide/configuration/secrets/#check-secret-details).
**HTTPS**
@@ -107,6 +97,4 @@ For the integration of the HTTPS-based Harbor registry, refer to [Harbor Documen
## Use an Image Registry
-When you set images, you can select the private image registry if the Secret of it is created in advance. For example, click the arrow on the **Container Image** page to expand the registry list when you create a [Deployment](../../../project-user-guide/application-workloads/deployments/). After you choose the image registry, enter the image name and tag to use the image.
-
-
\ No newline at end of file
+When you set images, you can select the private image registry if the Secret of it is created in advance. For example, click the arrow on the **Add Container** page to expand the registry list when you create a [Deployment](../../../project-user-guide/application-workloads/deployments/). After you choose the image registry, enter the image name and tag to use the image.
diff --git a/content/en/docs/project-user-guide/configuration/secrets.md b/content/en/docs/project-user-guide/configuration/secrets.md
index 23f5f2968..c2043b065 100644
--- a/content/en/docs/project-user-guide/configuration/secrets.md
+++ b/content/en/docs/project-user-guide/configuration/secrets.md
@@ -1,7 +1,7 @@
---
-title: "Secrets"
+title: "Kubernetes Secrets in KubeSphere"
keywords: 'KubeSphere, Kubernetes, Secrets'
-description: 'Learn how to create a Secret in KubeSphere.'
+description: 'Learn how to create a Secret on KubeSphere.'
linkTitle: "Secrets"
weight: 10410
---
@@ -16,15 +16,13 @@ This tutorial demonstrates how to create a Secret in KubeSphere.
## Prerequisites
-You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
-## Create a Secret
+## Create a Kubernetes Secret
### Step 1: Open the dashboard
-Log in to the console as `project-regular`. Go to **Configurations** of a project, choose **Secrets** and click **Create**.
-
-
+Log in to the console as `project-regular`. Go to **Configuration** of a project, select **Secrets** and click **Create**.
### Step 2: Enter basic information
@@ -32,17 +30,13 @@ Specify a name for the Secret (for example, `demo-secret`) and click **Next** to
{{< notice tip >}}
-You can see the Secret's manifest file in YAML format by enabling **Edit Mode** in the upper-right corner. KubeSphere allows you to edit the manifest file directly to create a Secret. Alternatively, you can follow the steps below to create a Secret via the dashboard.
+You can see the Secret's manifest file in YAML format by enabling **Edit YAML** in the upper-right corner. KubeSphere allows you to edit the manifest file directly to create a Secret. Alternatively, you can follow the steps below to create a Secret via the dashboard.
{{</ notice >}}
-
-
### Step 3: Set a Secret
-1. Under the tab **Secret Settings**, you must choose a Secret type. In KubeSphere, you can create the following types of Secrets, indicated by the `type` field.
-
- 
+1. Under the tab **Data Settings**, you must select a Secret type. In KubeSphere, you can create the following Kubernetes Secret types, indicated by the `type` field.
{{< notice note >}}
@@ -50,42 +44,28 @@ You can see the Secret's manifest file in YAML format by enabling **Edit Mode**
{{</ notice >}}
- - **Opaque (Default)**. The type of [Opaque](https://kubernetes.io/docs/concepts/configuration/secret/#opaque-secrets) in Kubernetes, which is also the default Secret type in Kubernetes. You can create arbitrary user-defined data for this type of Secret. Click **Add Data** to add key-value pairs for it.
+ - **Default**. The type of [Opaque](https://kubernetes.io/docs/concepts/configuration/secret/#opaque-secrets) in Kubernetes, which is also the default Secret type in Kubernetes. You can create arbitrary user-defined data for this type of Secret. Click **Add Data** to add key-value pairs for it.
- 
+ - **TLS information**. The type of [kubernetes.io/tls](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets) in Kubernetes, which is used to store a certificate and its associated key that are typically used for TLS, such as TLS termination of Ingress resources. You must specify **Credential** and **Private Key** for it, indicated by `tls.crt` and `tls.key` in the YAML file respectively.
- - **kubernetes.io/tis (TLS)**. The type of [kubernetes.io/tls](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets) in Kubernetes, which is used to store a certificate and its associated key that are typically used for TLS, such as TLS termination of Ingress resources. You must specify **Credential** and **Private Key** for it, indicated by `tls.crt` and `tls.key` in the YAML file respectively.
+ - **Image registry information**. The type of [kubernetes.io/dockerconfigjson](https://kubernetes.io/docs/concepts/configuration/secret/#docker-config-secrets) in Kubernetes, which is used to store the credentials for accessing a Docker registry for images. For more information, see [Image Registries](../image-registry/).
- 
+ - **Username and password**. The type of [kubernetes.io/basic-auth](https://kubernetes.io/docs/concepts/configuration/secret/#basic-authentication-secret) in Kubernetes, which is used to store credentials needed for basic authentication. You must specify **Username** and **Password** for it, indicated by `username` and `password` in the YAML file respectively.
- - **kubernetes.io/dockerconfigjson (Image Registry Secret)**. The type of [kubernetes.io/dockerconfigjson](https://kubernetes.io/docs/concepts/configuration/secret/#docker-config-secrets) in Kubernetes, which is used to store the credentials for accessing a Docker registry for images. For more information, see [Image Registries](../image-registry/).
+2. For this tutorial, select the default type of Secret. Click **Add Data** and enter the **Key** (`MYSQL_ROOT_PASSWORD`) and **Value** (`123456`) to specify a Secret for MySQL.
- 
-
- - **kubernetes.io/basic-auth (Account Password Secret)**. The type of [kubernetes.io/basic-auth](https://kubernetes.io/docs/concepts/configuration/secret/#basic-authentication-secret) in Kubernetes, which is used to store credentials needed for basic authentication. You must specify **Username** and **Password** for it, indicated by `username` and `password` in the YAML file respectively.
-
- 
-
-2. For this tutorial, select the default type of Secret. Click **Add Data** and enter the **Key** (`MYSQL_ROOT_PASSWORD`) and **Value** (`123456`) as below to specify a Secret for MySQL.
-
- 
-
-3. Click **√** in the bottom-right corner to confirm. You can continue to add key-value pairs to the Secret or click **Create** to finish the creation. For more information about how to use the Secret, see [Compose and Deploy WordPress](../../../quick-start/wordpress-deployment/#task-3-create-an-application).
+3. Click **√** in the lower-right corner to confirm. You can continue to add key-value pairs to the Secret or click **Create** to finish the creation. For more information about how to use the Secret, see [Compose and Deploy WordPress](../../../quick-start/wordpress-deployment/#task-3-create-an-application).
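For reference, the Secret created above corresponds to a manifest like the following sketch. Note that Kubernetes stores the value Base64-encoded, so `123456` appears as `MTIzNDU2`:

   ```yaml
   apiVersion: v1
   kind: Secret
   metadata:
     name: demo-secret
   type: Opaque
   data:
     MYSQL_ROOT_PASSWORD: MTIzNDU2  # Base64-encoded "123456"
   ```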
## Check Secret Details
-1. After a Secret is created, it will be displayed in the list as below. You can click on the right and select the operation from the menu to modify it.
+1. After a Secret is created, it will be displayed in the list. You can click on the right and select the operation from the menu to modify it.
- 
-
- - **Edit**: View and edit the basic information.
+ - **Edit Information**: View and edit the basic information.
- **Edit YAML**: View, upload, download, or update the YAML file.
- - **Edit Seret**: Modify the key-value pair of the Secret.
+ - **Edit Settings**: Modify the key-value pair of the Secret.
- **Delete**: Delete the Secret.
-2. Click the name of the Secret and you can go to its detail page. Under the tab **Detail**, you can see all the key-value pairs you have added for the Secret.
-
- 
+2. Click the name of the Secret and you can go to its details page. Under the tab **Data**, you can see all the key-value pairs you have added for the Secret.
{{< notice note >}}
@@ -95,23 +75,17 @@ As mentioned above, KubeSphere automatically converts the value of a key into it
3. Click **More** to display the operations you can perform on this Secret.
- 
-
- **Edit YAML**: View, upload, download, or update the YAML file.
- **Edit Secret**: Modify the key-value pair of the Secret.
- **Delete**: Delete the Secret, and return to the list page.
-## Use a Secret
+## How to Use a Kubernetes Secret
Generally, you need to use a Secret when you create workloads, [Services](../../../project-user-guide/application-workloads/services/), [Jobs](../../../project-user-guide/application-workloads/jobs/) or [CronJobs](../../../project-user-guide/application-workloads/cronjobs/). For example, you can select a Secret for a code repository. For more information, see [Image Registries](../image-registry/).
-
-
Alternatively, you may need to add environment variables for containers. On the **Container Image** page, select **Environment Variables** and click **Use ConfigMap or Secret** to use a Secret from the list.
-
-
## Create the Most Common Secrets
This section shows how to create Secrets from your Docker Hub account and GitHub account.
@@ -120,9 +94,9 @@ This section shows how to create Secrets from your Docker Hub account and GitHub
1. Log in to KubeSphere as `project-regular` and go to your project. Select **Secrets** from the navigation bar and click **Create** on the right.
-2. Set a name, such as `dockerhub-id`, and click **Next**. On the **Secret Settings** page, fill in the following fields and click **Validate** to verify whether the information provided is valid.
+2. Set a name, such as `dockerhub-id`, and click **Next**. On the **Data Settings** page, fill in the following fields and click **Validate** to verify whether the information provided is valid.
- **Type**: Select **kubernetes.io/dockerconfigjson (Image Registry Secret)**.
+ **Type**: Select **Image registry information**.
**Registry Address**: Enter the Docker Hub registry address, such as `docker.io`.
@@ -130,22 +104,18 @@ This section shows how to create Secrets from your Docker Hub account and GitHub
**Password**: Enter your Docker Hub password.
- 
-
3. Click **Create** to finish.
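For comparison, an equivalent Image registry Secret can be created with kubectl; a sketch, assuming the project namespace is `demo-project` and using placeholder credentials:

   ```bash
   kubectl create secret docker-registry dockerhub-id \
     --docker-server=docker.io \
     --docker-username=<your-username> \
     --docker-password=<your-password> \
     -n demo-project
   ```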
### Create the GitHub Secret
1. Log in to KubeSphere as `project-regular` and go to your project. Select **Secrets** from the navigation bar and click **Create** on the right.
-2. Set a name, such as `github-id`, and click **Next**. On the **Secret Settings** page, fill in the following fields.
+2. Set a name, such as `github-id`, and click **Next**. On the **Data Settings** page, fill in the following fields.
- **Type**: Select **kubernetes.io/basic-auth (Account Password Secret)**.
+ **Type**: Select **Username and password**.
**Username**: Enter your GitHub account.
**Password**: Enter your GitHub password.
- 
-
-3. Click **Create** to finish.
\ No newline at end of file
+3. Click **Create** to finish.
diff --git a/content/en/docs/project-user-guide/configuration/serviceaccounts.md b/content/en/docs/project-user-guide/configuration/serviceaccounts.md
index 2e9384367..d05ffb2a9 100644
--- a/content/en/docs/project-user-guide/configuration/serviceaccounts.md
+++ b/content/en/docs/project-user-guide/configuration/serviceaccounts.md
@@ -1,9 +1,48 @@
---
title: "Service Accounts"
-keywords: 'KubeSphere, Kubernetes, ServiceAccounts'
-description: 'Learn how to create Service Accounts in KubeSphere.'
+keywords: 'KubeSphere, Kubernetes, Service Accounts'
+description: 'Learn how to create service accounts on KubeSphere.'
linkTitle: "Service Accounts"
weight: 10440
---
-TBD
\ No newline at end of file
+A [service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) provides an identity for processes that run in a Pod. When accessing a cluster, a user is authenticated by the API server as a particular user account. Processes in containers inside Pods are authenticated as a particular service account when these processes contact the API server.
+
+This document describes how to create service accounts on KubeSphere.
+
+## Prerequisites
+
+You need to create a workspace, a project, and a user (`project-regular`), and invite the user to the project and assign it the `operator` role. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Create a Service Account
+
+### Step 1: Log in to KubeSphere
+
+1. Log in to the KubeSphere console as `project-regular`. Go to **Configuration** of a project and click **Service Accounts**. A service account named `default` is displayed on the **Service Accounts** page as it is automatically created when the project is created.
+
+ {{< notice note >}}
+
+ If no service account is specified when creating workloads in a project, the service account `default` in the same project is automatically assigned.
+
+ {{</ notice >}}
+
+2. Click **Create**.
+
+### Step 2: Set a service account
+
+1. In the displayed dialog box, set the following parameters:
+ - **Name**: A unique identifier for the service account.
+ - **Alias**: An alias for the service account to help you better identify the service account.
+ - **Description**: A brief introduction of the service account.
+ - **Project Role**: Select a project role from the drop-down list for the service account. Different project roles have [different permissions](../../../project-administration/role-and-member-management/#built-in-roles) in a project.
+2. Click **Create** after you finish setting the parameters. The service account created is displayed on the **Service Accounts** page.
+
+## Service Account Details Page
+
+1. Click the service account created to go to its details page.
+2. Click **Edit Information** to edit its basic information, or click **More** to select an operation from the drop-down menu.
+ - **Edit YAML**: View, update, or download the YAML file.
+ - **Change Role**: Change the project role of the service account.
+ - **Delete**: Delete the service account and return to the previous page.
+3. On the **Resource Status** tab, details about the corresponding Secret and the kubeconfig of the service account are displayed.
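+
+A service account is assigned to a workload through the `serviceAccountName` field of the Pod spec; a minimal sketch, assuming a hypothetical service account named `demo-sa`:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: demo-pod
+spec:
+  serviceAccountName: demo-sa  # hypothetical service account in the same project
+  containers:
+  - name: app
+    image: nginx:1.21
+```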
+
diff --git a/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-mysql.md b/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-mysql.md
index 771ba3df4..5522caf7b 100644
--- a/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-mysql.md
+++ b/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-mysql.md
@@ -11,16 +11,16 @@ This tutorial demonstrates how to monitor and visualize MySQL metrics.
## Prerequisites
-- You need to [enable the App Store](../../../../pluggable-components/app-store/). MySQL and MySQL Exporter will be deployed from the App Store.
-- You need to create a workspace, a project, and an account (`project-regular`) for this tutorial. The account needs to be invited to the project with the `operator` role. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../../quick-start/create-workspace-and-project/).
+- You need to [enable the App Store](../../../../pluggable-components/app-store/). MySQL and MySQL Exporter are available in the App Store.
+- You need to create a workspace, a project, and a user (`project-regular`) for this tutorial. The user needs to be invited to the project with the `operator` role. For more information, see [Create Workspaces, Projects, Users and Roles](../../../../quick-start/create-workspace-and-project/).
## Step 1: Deploy MySQL
To begin with, you need to [deploy MySQL from the App Store](../../../../application-store/built-in-apps/mysql-app/).
-1. Go to your project and click **App Store** in the top-left corner.
+1. Go to your project and click **App Store** in the upper-left corner.
-2. Click **MySQL** to go to its product detail page and click **Deploy** on the **App Information** tab.
+2. Click **MySQL** to go to its details page and click **Install** on the **App Information** tab.
{{< notice note >}}
@@ -28,25 +28,21 @@ MySQL is a built-in app in the KubeSphere App Store, which means it can be deplo
{{</ notice >}}
-3. Under **Basic Information**, set an **App Name** and select an **App Version**. Select the project where the app will be deployed under **Deployment Location** and click **Next**.
+3. Under **Basic Information**, set a **Name** and select a **Version**. Select the project where the app is deployed under **Location** and click **Next**.
-4. Under **App Configurations**, set a root password by uncommenting the `mysqlRootPassword` field and click **Deploy**.
-
- 
+4. Under **App Settings**, set a root password by uncommenting the `mysqlRootPassword` field and click **Install**.
5. Wait until MySQL is up and running.
- 
-
## Step 2: Deploy MySQL Exporter
You need to deploy MySQL Exporter in the same project on the same cluster. MySQL Exporter queries the status of MySQL and reports the data in Prometheus format.
1. Go to **App Store** and click **MySQL Exporter**.
-2. On the product detail page, click **Deploy**.
+2. On the details page, click **Install**.
-3. Under **Basic Information**, set an **App Name** and select an **App Version**. Select the same project where MySQL is deployed under **Deployment Location** and click **Next**.
+3. Under **Basic Information**, set a **Name** and select a **Version**. Select the same project where MySQL is deployed under **Location** and click **Next**.
4. Make sure `serviceMonitor.enabled` is set to `true`. The built-in MySQL Exporter sets it to `true` by default, so you don't need to manually change the value of `serviceMonitor.enabled`.
@@ -54,27 +50,19 @@ You need to deploy MySQL Exporter in the same project on the same cluster. MySQL
You must enable the ServiceMonitor CRD if you are using external exporter Helm charts. Those charts usually disable ServiceMonitors by default and require manual modification.
{{</ notice >}}
-5. Modify MySQL connection parameters. MySQL Exporter needs to connect to the target MySQL. In this tutorial, MySQL is installed with the service name `mysql-dh3ily`. Navigate to `mysql` in the configuration file, and set `host` to `mysql-dh3ily`, `pass` to `testing`, and `user` to `root` as below. Note that your MySQL service may be created with **a different name**.
-
- 
-
- Click **Deploy**.
+5. Modify MySQL connection parameters. MySQL Exporter needs to connect to the target MySQL. In this tutorial, MySQL is installed with the service name `mysql-dh3ily`. Navigate to `mysql` in the configuration file, and set `host` to `mysql-dh3ily`, `pass` to `testing`, and `user` to `root`. Note that your MySQL service may be created with **a different name**. After you finish editing the file, click **Install**.
6. Wait until MySQL Exporter is up and running.
- 
-
## Step 3: Create a Monitoring Dashboard
You can create a monitoring dashboard for MySQL and visualize real-time metrics.
1. In the same project, go to **Custom Monitoring** under **Monitoring & Alerting** in the sidebar and click **Create**.
-2. In the dialog that appears, set a name for the dashboard (for example, `mysql-overview`) and select the MySQL template. Click **Next** to continue.
+2. In the displayed dialog box, set a name for the dashboard (for example, `mysql-overview`) and select the MySQL template. Click **Next** to continue.
-3. Save the template by clicking **Save Template** in the top-right corner. A newly-created dashboard will appear on the **Custom Monitoring Dashboards** page.
-
- 
+3. Save the template by clicking **Save Template** in the upper-right corner. A newly-created dashboard is displayed on the **Custom Monitoring Dashboards** page.
{{< notice note >}}
diff --git a/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-sample-web.md b/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-sample-web.md
index aac17352b..42b44242a 100644
--- a/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-sample-web.md
+++ b/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-sample-web.md
@@ -11,7 +11,7 @@ This section walks you through monitoring a sample web application. The applicat
## Prerequisites
- Please make sure you [enable the OpenPitrix system](../../../../pluggable-components/app-store/).
-- You need to create a workspace, a project, and a user account for this tutorial. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../../quick-start/create-workspace-and-project/). The account needs to be a platform regular user and to be invited to the workspace with the `self-provisioner` role. Namely, create an account `workspace-self-provisioner` of the `self-provisioner` role, and use this account to create a project (for example, `test`). In this tutorial, you log in as `workspace-self-provisioner` and work in the project `test` in the workspace `demo-workspace`.
+- You need to create a workspace, a project, and a user account for this tutorial. For more information, see [Create Workspaces, Projects, Users and Roles](../../../../quick-start/create-workspace-and-project/). The account needs to be a platform regular user and must be invited to the workspace with the `self-provisioner` role. Namely, create a user `workspace-self-provisioner` with the `self-provisioner` role, and use this account to create a project (for example, `test`). In this tutorial, you log in as `workspace-self-provisioner` and work in the project `test` in the workspace `demo-workspace`.
- Knowledge of Helm charts and [PromQL](https://prometheus.io/docs/prometheus/latest/querying/examples/).
@@ -33,23 +33,9 @@ Find the source code in the folder `helm` in [kubesphere/prometheus-example-app]
### Step 3: Upload the Helm chart
-1. Go to the workspace **Overview** page of `demo-workspace` and navigate to **App Templates**.
+1. Go to the workspace **Overview** page of `demo-workspace` and navigate to **App Templates** under **App Management**.
- 
-
-2. Click **Create** and upload `prometheus-example-app-0.1.0.tgz` as images below.
-
- 
-
- 
-
- 
-
- 
-
- 
-
- 
+2. Click **Create** and upload `prometheus-example-app-0.1.0.tgz`.
### Step 4: Deploy the sample web application
@@ -57,62 +43,30 @@ You need to deploy the sample web application into `test`. For demonstration pur
1. Click `prometheus-example-app`.
- 
-
-2. Expand the menu and click **Test Deployment**.
-
- 
-
- 
+2. Expand the menu and click **Install**.
3. Make sure you deploy the sample web application in `test` and click **Next**.
- 
-
-4. Make sure `serviceMonitor.enabled` is set to `true` and click **Deploy**.
-
- 
-
- 
+4. Make sure `serviceMonitor.enabled` is set to `true` and click **Install**.
5. In **Workloads** of the project `test`, wait until the sample web application is up and running.
- 
-
### Step 5: Create a monitoring dashboard
This section guides you on how to create a dashboard from scratch. You will create a text chart showing the total number of processed operations and a line chart for displaying the operation rate.
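The two charts map to two PromQL queries; a sketch, assuming the sample app exposes a counter named `myapp_processed_ops_total` (the metric name in your deployment may differ):

```
# total number of processed operations (text chart)
sum(myapp_processed_ops_total)

# per-second operation rate over the last 5 minutes (line chart)
sum(rate(myapp_processed_ops_total[5m]))
```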
-1. Navigate to **Custom Monitoring** and click **Create**.
+1. Navigate to **Custom Monitoring Dashboards** and click **Create**.
- 
+2. Set a name (for example, `sample-web`) and click **Next**.
-2. Set a name (for example, `sample-web`) and click **Create**.
+3. Enter a title in the upper-left corner (for example, `Sample Web Overview`).
- 
+4. Click in the left column. To add charts in the middle column, click **Add Monitoring Item** in the lower-right corner.
-
-
### Add a monitoring group
To group monitoring items, you can click to drag and drop an item into the target group. To add a new group, click **Add Monitoring Group**. If you want to change the place of a group, hover over a group and click the up or down arrow on the right.
diff --git a/content/en/docs/project-user-guide/custom-application-monitoring/visualization/panel.md b/content/en/docs/project-user-guide/custom-application-monitoring/visualization/panel.md
index 282221549..1dd9703d8 100644
--- a/content/en/docs/project-user-guide/custom-application-monitoring/visualization/panel.md
+++ b/content/en/docs/project-user-guide/custom-application-monitoring/visualization/panel.md
@@ -15,23 +15,20 @@ A text chart is preferable for displaying a single metric value. The editing win
- **Chart Name**: The name of the text chart.
- **Unit**: The metric data unit.
- **Decimal Places**: Accept an integer.
-- **Monitoring Metrics**: A list of available Prometheus metrics.
+- **Monitoring Metric**: Specify a monitoring metric from the drop-down list of available Prometheus metrics.
-
+## Graph Chart
-## Graph
+A graph chart is preferable for displaying multiple metric values. The editing window for the graph is composed of three parts. The upper part displays real-time metric values. The left part is for setting the graph theme. The right part is for editing metrics and chart descriptions.
-A graph is preferable for displaying multiple metric values. The editing window for the graph is composed of three parts. The upper part displays real-time metric values. The left part is for setting the graph theme. The right part is for editing metrics and chart descriptions.
-
-- **Graph Types**: Support line charts and stacked charts.
+- **Chart Types**: Support basic charts and bar charts.
+- **Graph Types**: Support basic charts and stacked charts.
- **Chart Colors**: Change line colors.
- **Chart Name**: The name of the chart.
- **Description**: The chart description.
- **Add**: Add a new query editor.
- **Metric Name**: Legend for the line. It supports variables. For example, `{{pod}}` means using the value of the Prometheus metric label `pod` to name this line.
- **Interval**: The step value between two data points.
-- **Monitoring Metrics**: A list of available Prometheus metrics.
+- **Monitoring Metric**: A list of available Prometheus metrics.
- **Unit**: The metric data unit.
- **Decimal Places**: Accept an integer.
-
-
\ No newline at end of file
diff --git a/content/en/docs/project-user-guide/custom-application-monitoring/visualization/querying.md b/content/en/docs/project-user-guide/custom-application-monitoring/visualization/querying.md
index ac11c4048..11e5a2ccc 100644
--- a/content/en/docs/project-user-guide/custom-application-monitoring/visualization/querying.md
+++ b/content/en/docs/project-user-guide/custom-application-monitoring/visualization/querying.md
@@ -6,7 +6,7 @@ linkTitle: "Querying"
weight: 10817
---
-In the query editor, you can enter PromQL expressions to process and fetch metrics. To learn how to write PromQL, read [Query Examples](https://prometheus.io/docs/prometheus/latest/querying/examples/).
+In the query editor, enter PromQL expressions in **Monitoring Metrics** to process and fetch metrics. To learn how to write PromQL, read [Query Examples](https://prometheus.io/docs/prometheus/latest/querying/examples/).
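For example, a query that charts per-Pod CPU usage in a namespace might look like the following sketch (the metric and label names depend on your Prometheus setup):

```
sum(rate(container_cpu_usage_seconds_total{namespace="test"}[5m])) by (pod)
```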

diff --git a/content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md b/content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md
index c053d42af..1eb06f097 100644
--- a/content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md
+++ b/content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md
@@ -1,7 +1,7 @@
---
-title: "Kubernetes Blue-green Deployment in Kubesphere"
-keywords: 'KubeSphere, Kubernetes, service mesh, istio, release, blue-green deployment'
-description: 'Learn how to release a blue-green deployment in KubeSphere.'
+title: "Kubernetes Blue-Green Deployment on KubeSphere"
+keywords: 'KubeSphere, Kubernetes, Service Mesh, Istio, Grayscale Release, Blue-Green deployment'
+description: 'Learn how to release a blue-green deployment on KubeSphere.'
linkTitle: "Blue-Green Deployment with Kubernetes"
weight: 10520
---
@@ -15,41 +15,27 @@ The blue-green release provides a zero downtime deployment, which means the new
## Prerequisites
- You need to enable [KubeSphere Service Mesh](../../../pluggable-components/service-mesh/).
-- You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
- You need to enable **Application Governance** and have an available app so that you can implement the blue-green deployment for it. The sample app used in this tutorial is Bookinfo. For more information, see [Deploy Bookinfo and Manage Traffic](../../../quick-start/deploy-bookinfo-to-k8s/).
## Create a Blue-green Deployment Job
-1. Log in to KubeSphere as `project-regular` and go to **Grayscale Release**. Under **Categories**, click **Create Job** on the right of **Blue-green Deployment**.
+1. Log in to KubeSphere as `project-regular` and go to **Grayscale Release**. Under **Release Modes**, click **Create** on the right of **Blue-Green Deployment**.
2. Set a name for it and click **Next**.
-3. On the **Grayscale Release Components** tab, select your app from the drop-down list and the Service for which you want to implement the blue-green deployment. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
+3. On the **Service Settings** tab, select your app from the drop-down list and the Service for which you want to implement the blue-green deployment. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
-4. On the **Grayscale Release Version** tab, add another version (e.g `v2`) as shown in the following figure and click **Next**:
+4. On the **New Version Settings** tab, add another version (for example, `kubesphere/examples-bookinfo-reviews-v2:1.16.2`) and click **Next**.
- 
+5. On the **Strategy Settings** tab, to allow the app version `v2` to take over all the traffic, select **Take Over** and click **Create**.
- {{< notice note >}}
+6. The blue-green deployment job created is displayed under the **Release Jobs** tab. Click it to view details.
- The image version is `v2` in the screenshot.
-
- {{ notice >}}
-
-5. On the **Policy Config** tab, to allow the app version `v2` to take over all the traffic, select **Take over all traffic** and click **Create**.
-
-6. The blue-green deployment job created is displayed under the tab **Job Status**. Click it to view details.
-
- 
-
-7. Wait for a while and you can see all the traffic go to the version `v2`:
-
- 
+7. Wait for a while and you can see all the traffic go to the version `v2`.
8. The new **Deployment** is created as well.
- 
-
9. You can get the virtual service to check the traffic weight by running the following command:
```bash
@@ -59,7 +45,7 @@ The blue-green release provides a zero downtime deployment, which means the new
{{< notice note >}}
- When you run the command above, replace `demo-project` with your own project (namely, namespace) name.
- - If you want to run the command from the web kubectl on the KubeSphere console, you need to use the account `admin`.
+ - If you want to run the command from the web kubectl on the KubeSphere console, you need to use the user `admin`.
{{</ notice >}}
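For reference, the `weight` returned by that command lives in the Istio VirtualService that KubeSphere generates for the release job. Below is a minimal sketch of what the routing section looks like after the takeover — the resource and subset names assume the Bookinfo `reviews` Service in `demo-project`:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews            # generated by KubeSphere; name assumed here
  namespace: demo-project
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v2     # after "Take Over", all traffic is routed to v2
          weight: 100
```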
@@ -83,7 +69,6 @@ The blue-green release provides a zero downtime deployment, which means the new
## Take a Job Offline
-After you implement the blue-green deployment, and the result meets your expectation, you can take the task offline with the version `v1` removed by clicking **Job offline**.
+After you implement the blue-green deployment and the result meets your expectation, you can take the job offline and remove the version `v1` by clicking **Delete**.
-
diff --git a/content/en/docs/project-user-guide/grayscale-release/canary-release.md b/content/en/docs/project-user-guide/grayscale-release/canary-release.md
index 53d2669b6..d87aa11d9 100644
--- a/content/en/docs/project-user-guide/grayscale-release/canary-release.md
+++ b/content/en/docs/project-user-guide/grayscale-release/canary-release.md
@@ -1,7 +1,7 @@
---
title: "Canary Release"
-keywords: 'KubeSphere, Kubernetes, canary release, istio, service mesh'
-description: 'Learn how to deploy a canary service in KubeSphere.'
+keywords: 'KubeSphere, Kubernetes, Canary Release, Istio, Service Mesh'
+description: 'Learn how to deploy a canary service on KubeSphere.'
linkTitle: "Canary Release"
weight: 10530
---
@@ -16,30 +16,20 @@ This method serves as an efficient way to test performance and reliability of a
- You need to enable [KubeSphere Service Mesh](../../../pluggable-components/service-mesh/).
- You need to enable [KubeSphere Logging](../../../pluggable-components/logging/) so that you can use the Tracing feature.
-- You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
- You need to enable **Application Governance** and have an available app so that you can implement the canary release for it. The sample app used in this tutorial is Bookinfo. For more information, see [Deploy and Access Bookinfo](../../../quick-start/deploy-bookinfo-to-k8s/).
## Step 1: Create a Canary Release Job
-1. Log in to KubeSphere as `project-regular` and navigate to **Grayscale Release**. Under **Categories**, click **Create Job** on the right of **Canary Release**.
+1. Log in to KubeSphere as `project-regular` and navigate to **Grayscale Release**. Under **Release Modes**, click **Create** on the right of **Canary Release**.
2. Set a name for it and click **Next**.
-3. On the **Grayscale Release Components** tab, select your app from the drop-down list and the Service for which you want to implement the canary release. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
+3. On the **Service Settings** tab, select your app from the drop-down list and the Service for which you want to implement the canary release. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
-4. On the **Grayscale Release Version** tab, add another version of it (e.g `kubesphere/examples-bookinfo-reviews-v2:1.13.0`; change `v1` to `v2`) as shown in the image below and click **Next**:
+4. On the **New Version Settings** tab, add another version of it (e.g. `kubesphere/examples-bookinfo-reviews-v2:1.16.2`; change `v1` to `v2`) and click **Next**.
- 
-
- {{< notice note >}}
-
- The image version is `v2` in the screenshot.
-
- {{</ notice >}}
-
-5. You send traffic to these two versions (`v1` and `v2`) either by a specific percentage or by the request content such as `Http Header`, `Cookie` and `URI`. Select **Forward by traffic ratio** and drag the icon in the middle to change the percentage of traffic sent to these two versions respectively (for example, set 50% for either one). When you finish, click **Create**.
-
- 
+5. You can send traffic to these two versions (`v1` and `v2`) either by a specific percentage or by request content such as `Http Header`, `Cookie`, and `URI`. Select **Specify Traffic Distribution** and move the slider to adjust the percentage of traffic sent to each version (for example, set 50% for each). When you finish, click **Create**.
## Step 2: Verify the Canary Release
@@ -47,20 +37,12 @@ Now that you have two available app versions, access the app to verify the canar
1. Visit the Bookinfo website and refresh your browser repeatedly. You can see the **Book Reviews** section switching between v1 and v2 at a rate of 50%.
- 
+2. The created canary release job is displayed under the tab **Release Jobs**. Click it to view details.
-2. The created canary release job is displayed under the tab **Job Status**. Click it to view details.
-
- 
-
-3. You can see half of the traffic goes to each of them:
-
- 
+3. You can see half of the traffic goes to each of them.
4. The new Deployment is created as well.
- 
-
5. You can directly get the virtual Service to identify the weight by executing the following command:
```bash
@@ -70,7 +52,7 @@ Now that you have two available app versions, access the app to verify the canar
{{< notice note >}}
- When you execute the command above, replace `demo-project` with your own project (namely, namespace) name.
- - If you want to execute the command from the web kubectl on the KubeSphere console, you need to use the account `admin`.
+ - If you want to execute the command from the web kubectl on the KubeSphere console, you need to use the user `admin`.
{{</ notice >}}
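The 50/50 split you configured shows up as two weighted routes in the generated VirtualService. A minimal sketch of the routing section — the host and subset names assume the Bookinfo `reviews` Service in `demo-project`:

```yaml
# Sketch of the generated VirtualService routing section (names assumed).
http:
  - route:
      - destination:
          host: reviews
          subset: v1
        weight: 50       # half of the traffic stays on v1
      - destination:
          host: reviews
          subset: v2
        weight: 50       # half goes to the canary version v2
```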
@@ -110,40 +92,29 @@ Now that you have two available app versions, access the app to verify the canar
Make sure you replace the hostname and port number in the above command with your own.
{{</ notice >}}
-2. In **Traffic Management**, you can see communications, dependency, health and performance among different microservices.
+2. In **Traffic Monitoring**, you can see communications, dependency, health and performance among different microservices.
- 
-
-3. Click a component (for example, **reviews**) and you can see the information of traffic monitoring on the right, displaying real-time data of **Traffic**, **Success rate** and **Duration**.
-
- 
+3. Click a component (for example, **reviews**) and you can see the information of traffic monitoring on the right, displaying real-time data of **Traffic**, **Success rate**, and **Duration**.
## Step 4: View Tracing Details
KubeSphere provides the distributed tracing feature based on [Jaeger](https://www.jaegertracing.io/), which is used to monitor and troubleshoot microservices-based distributed applications.
-1. On the **Tracing** tab, you can clearly see all phases and internal calls of requests, as well as the period in each phase.
-
- 
+1. On the **Tracing** tab, you can see all phases and internal calls of requests, as well as the period in each phase.
2. Click any item, and you can even drill down to see request details and where this request is being processed (which machine or container).
- 
-
## Step 5: Take Over All Traffic
If everything runs smoothly, you can bring all the traffic to the new version.
-1. In **Grayscale Release**, click the canary release job.
+1. In **Release Jobs**, click the canary release job.
2. In the displayed dialog box, click the icon on the right of **reviews v2** and select **Take Over**. This means 100% of the traffic will be sent to the new version (v2).
- 
-
{{< notice note >}}
If anything goes wrong with the new version, you can roll back to the previous version v1 anytime.
{{</ notice >}}
3. Access Bookinfo again and refresh the browser several times. You can find that it only shows the result of **reviews v2** (i.e. ratings with black stars).
- 
diff --git a/content/en/docs/project-user-guide/grayscale-release/traffic-mirroring.md b/content/en/docs/project-user-guide/grayscale-release/traffic-mirroring.md
index 61f341e94..7d7568fd4 100644
--- a/content/en/docs/project-user-guide/grayscale-release/traffic-mirroring.md
+++ b/content/en/docs/project-user-guide/grayscale-release/traffic-mirroring.md
@@ -1,7 +1,7 @@
---
title: "Traffic Mirroring"
-keywords: 'KubeSphere, Kubernetes, traffic mirroring, istio'
-description: 'Learn how to conduct a traffic mirroring job in KubeSphere.'
+keywords: 'KubeSphere, Kubernetes, Traffic Mirroring, Istio'
+description: 'Learn how to conduct a traffic mirroring job on KubeSphere.'
linkTitle: "Traffic Mirroring"
weight: 10540
---
@@ -11,41 +11,27 @@ Traffic mirroring, also called shadowing, is a powerful, risk-free method of tes
## Prerequisites
- You need to enable [KubeSphere Service Mesh](../../../pluggable-components/service-mesh/).
-- You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
- You need to enable **Application Governance** and have an available app so that you can mirror the traffic of it. The sample app used in this tutorial is Bookinfo. For more information, see [Deploy Bookinfo and Manage Traffic](../../../quick-start/deploy-bookinfo-to-k8s/).
## Create a Traffic Mirroring Job
-1. Log in to KubeSphere as `project-regular` and go to **Grayscale Release**. Under **Categories**, click **Create Job** on the right of **Traffic Mirroring**.
+1. Log in to KubeSphere as `project-regular` and go to **Grayscale Release**. Under **Release Modes**, click **Create** on the right of **Traffic Mirroring**.
2. Set a name for it and click **Next**.
-3. On the **Grayscale Release Components** tab, select your app from the drop-down list and the Service of which you want to mirror the traffic. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
+3. On the **Service Settings** tab, select your app from the drop-down list and the Service of which you want to mirror the traffic. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
-4. On the **Grayscale Release Version** tab, add another version of it (for example, `v2`) as shown in the image below and click **Next**:
+4. On the **New Version Settings** tab, add another version of it (for example, `kubesphere/examples-bookinfo-reviews-v2:1.16.2`; change `v1` to `v2`) and click **Next**.
- 
+5. On the **Strategy Settings** tab, click **Create**.
- {{< notice note >}}
-
- The image version is `v2` in the screenshot.
-
- {{</ notice >}}
-
-5. On the **Policy Config** tab, click **Create**.
-
-6. The traffic mirroring job created is displayed under the tab **Job Status**. Click it to view details.
-
- 
+6. The traffic mirroring job created is displayed under the **Release Jobs** tab. Click it to view details.
7. You can see the traffic is being mirrored to `v2` with real-time traffic displayed in the line chart.
- 
-
8. The new **Deployment** is created as well.
- 
-
9. You can get the virtual service to view `mirror` and `weight` by running the following command:
```bash
@@ -55,7 +41,7 @@ Traffic mirroring, also called shadowing, is a powerful, risk-free method of tes
{{< notice note >}}
- When you run the command above, replace `demo-project` with your own project (namely, namespace) name.
- - If you want to run the command from the web kubectl on the KubeSphere console, you need to use the account `admin`.
+ - If you want to run the command from the web kubectl on the KubeSphere console, you need to use the user `admin`.
{{</ notice >}}
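In the generated VirtualService, mirroring appears as a `mirror` field alongside the live route: `v1` still serves 100% of real traffic, while a copy of each request is shadowed to `v2`. A minimal sketch, with the host and subset names assumed from the Bookinfo example:

```yaml
# Sketch of the routing section with mirroring enabled (names assumed).
http:
  - route:
      - destination:
          host: reviews
          subset: v1
        weight: 100      # live traffic is still served entirely by v1
    mirror:
      host: reviews
      subset: v2         # a copy of each request is sent to v2
    mirrorPercentage:
      value: 100         # field availability depends on your Istio version
```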
@@ -92,6 +78,4 @@ These requests are mirrored as “fire and forget”, which means that the respo
## Take a Job Offline
-You can remove the traffic mirroring job by clicking **Job offline**, which does not affect the current app version.
-
-
\ No newline at end of file
+You can remove the traffic mirroring job by clicking **Delete**, which does not affect the current app version.
diff --git a/content/en/docs/project-user-guide/image-builder/binary-to-image.md b/content/en/docs/project-user-guide/image-builder/binary-to-image.md
index 31c2c39ab..5e5fd6a57 100644
--- a/content/en/docs/project-user-guide/image-builder/binary-to-image.md
+++ b/content/en/docs/project-user-guide/image-builder/binary-to-image.md
@@ -20,13 +20,13 @@ For demonstration and testing purposes, here are some example artifacts you can
| [b2i-war-java11.war](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java11.war) | [springmvc5](https://github.com/kubesphere/s2i-java-container/tree/master/tomcat/examples/springmvc5) |
| [b2i-binary](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-binary) | [devops-go-sample](https://github.com/runzexia/devops-go-sample) |
| [b2i-jar-java11.jar](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-jar-java11.jar) | [java-maven-example](https://github.com/kubesphere/s2i-java-container/tree/master/java/examples/maven) |
-| [b2i-jar-java8.jar](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-jar-java8.jar) | [devops-java-sample](https://github.com/kubesphere/devops-java-sample) |
+| [b2i-jar-java8.jar](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-jar-java8.jar) | [devops-maven-sample](https://github.com/kubesphere/devops-maven-sample) |
## Prerequisites
- You have enabled the [KubeSphere DevOps System](../../../pluggable-components/devops/).
- You need to create a [Docker Hub](http://www.dockerhub.com/) account. GitLab and Harbor are also supported.
-- You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
- Set a CI dedicated node for building images. This is not mandatory but recommended for the development and production environment as it caches dependencies and reduces build time. For more information, see [Set a CI Node for Dependency Caching](../../../devops-user-guide/how-to-use/set-ci-node/).
## Create a Service Using Binary-to-Image (B2I)
@@ -43,83 +43,52 @@ You must create a Docker Hub Secret so that the Docker image created through B2I
1. In the same project, navigate to **Services** under **Application Workloads** and click **Create**.
- 
-
-2. Scroll down to **Build a New Service through the Artifact** and select **war**. This tutorial uses the [spring-mvc-showcase](https://github.com/spring-projects/spring-mvc-showcase) project as a sample and uploads a war artifact to KubeSphere. Set a name, such as `b2i-war-java8`, and click **Next**.
+2. Scroll down to **Create Service from Artifact** and select **WAR**. This tutorial uses the [spring-mvc-showcase](https://github.com/spring-projects/spring-mvc-showcase) project as a sample and uploads a war artifact to KubeSphere. Set a name, such as `b2i-war-java8`, and click **Next**.
3. On the **Build Settings** page, provide the following information accordingly and click **Next**.
- 
-
**Service Type**: Select **Stateless Service** for this example. For more information about different Services, see [Service Type](../../../project-user-guide/application-workloads/services/#service-type).
- **Upload Artifact**: Upload the war artifact ([b2i-war-java8](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java8.war)).
+ **Artifact File**: Upload the war artifact ([b2i-war-java8](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java8.war)).
**Build Environment**: Select **kubesphere/tomcat85-java8-centos7:v2.1.0**.
- **imageName**: Enter `
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
-2. Click this image to go to its detail page. Under **Job Records**, click
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
-
- 
-
-3. Go back to the previous page, and you can see the corresponding Job, Deployment and Service of the image have all been created successfully.
-
- #### Service
-
- 
-
- #### Deployment
-
- 
-
- #### Job
-
- 
+3. Go back to the **Services**, **Deployments**, and **Jobs** pages, and you can see the corresponding Service, Deployment, and Job of the image have all been created successfully.
4. In your Docker Hub repository, you can see that KubeSphere has pushed the image to the repository with the expected tag.
- 
-
### Step 4: Access the B2I Service
-1. On the **Services** page, click the B2I Service to go to its detail page, where you can see the port number has been exposed.
-
- 
+1. On the **Services** page, click the B2I Service to go to its details page, where you can see the port number has been exposed.
2. Access the Service at `http://
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
-2. Click this image to go to its detail page. Under **Job Records**, click
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
-
- 
-
-3. Go back to the previous page, and you can see the corresponding Job of the image has been created successfully.
-
- 
+3. Go to the **Jobs** page, and you can see the corresponding Job of the image has been created successfully.
4. In your Docker Hub repository, you can see that KubeSphere has pushed the image to the repository with the expected tag.
- 
-
diff --git a/content/en/docs/project-user-guide/image-builder/s2i-and-b2i-webhooks.md b/content/en/docs/project-user-guide/image-builder/s2i-and-b2i-webhooks.md
index d161a3b94..3b89fba20 100644
--- a/content/en/docs/project-user-guide/image-builder/s2i-and-b2i-webhooks.md
+++ b/content/en/docs/project-user-guide/image-builder/s2i-and-b2i-webhooks.md
@@ -6,33 +6,27 @@ linkTitle: "Configure S2I and B2I Webhooks"
weight: 10650
---
-KubeSphere provides Source-to-Image (S2I) and Binary-to-Image (B2I) features to automate image building and pushing and application deployment. In KubeSphere v3.1, you can configure S2I and B2I webhooks so that your Image Builder can be automatically triggered when there is any relevant activity in your code repository.
+KubeSphere provides Source-to-Image (S2I) and Binary-to-Image (B2I) features to automate image building and pushing and application deployment. In KubeSphere v3.1.x and later versions, you can configure S2I and B2I webhooks so that your Image Builder can be automatically triggered when there is any relevant activity in your code repository.
This tutorial demonstrates how to configure S2I and B2I webhooks.
## Prerequisites
- You need to enable the [KubeSphere DevOps System](../../../pluggable-components/devops/).
-- You need to create a workspace, a project (`demo-project`) and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to create a workspace, a project (`demo-project`) and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
- You need to create an S2I Image Builder and a B2I Image Builder. For more information, refer to [Source to Image: Publish an App without a Dockerfile](../source-to-image/) and [Binary to Image: Publish an Artifact to Kubernetes](../binary-to-image/).
## Configure an S2I Webhook
### Step 1: Expose the S2I trigger Service
-1. Log in to the KubeSphere web console as `admin`. Click **Platform** in the top-left corner and then select **Cluster Management**.
+1. Log in to the KubeSphere web console as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**.
-2. In **Services** under **Application Workloads**, select **kubesphere-devops-system** from the drop-down list and click **s2ioperator-trigger-service** to go to its detail page.
+2. In **Services** under **Application Workloads**, select **kubesphere-devops-system** from the drop-down list and click **s2ioperator-trigger-service** to go to its details page.
- 
+3. Click **More** and select **Edit External Access**.
-3. Click **More** and select **Edit Internet Access**.
-
- 
-
-4. In the window that appears, select **NodePort** from the drop-down list for **Access Method** and then click **OK**.
-
- 
+4. In the displayed dialog box, select **NodePort** from the drop-down list for **Access Method** and then click **OK**.
{{< notice note >}}
@@ -40,29 +34,19 @@ This tutorial demonstrates how to configure S2I and B2I webhooks.
{{</ notice >}}
-5. You can view the **Node Port** on the detail page. It will be included in the S2I webhook URL.
-
- 
+5. You can view the **NodePort** on the details page. It is going to be included in the S2I webhook URL.
### Step 2: Configure an S2I webhook
1. Log out of KubeSphere and log back in as `project-regular`. Go to `demo-project`.
-2. In **Image Builder**, click the S2I Image Builder to go to its detail page.
+2. In **Image Builders**, click the S2I Image Builder to go to its details page.
- 
-
-3. You can see an auto-generated link shown in **Remote Trigger Link**. Copy `/s2itrigger/v1alpha1/general/namespaces/demo-project/s2ibuilders/felixnoo-s2i-sample-latest-zhd/` as it will be included in the S2I webhook URL.
-
- 
+3. You can see an auto-generated link shown in **Remote Trigger**. Copy `/s2itrigger/v1alpha1/general/namespaces/demo-project/s2ibuilders/felixnoo-s2i-sample-latest-zhd/` as it is going to be included in the S2I webhook URL.
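Putting the two pieces together, the webhook payload URL is your node address, the NodePort from Step 1, and the copied trigger path. For example — the IP and port below are placeholders; substitute your own node IP and NodePort:

```
http://<NodeIP>:<NodePort>/s2itrigger/v1alpha1/general/namespaces/demo-project/s2ibuilders/felixnoo-s2i-sample-latest-zhd/
```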
4. Log in to your GitHub account and go to the source code repository used for the S2I Image Builder. Go to **Webhooks** under **Settings** and then click **Add webhook**.
- 
-
-5. In **Payload URL**, enter `http://
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
-2. Click this image to go to its detail page. Under **Job Records**, click
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
-
- 
-
-3. Go back to the previous page, and you can see the corresponding Job, Deployment and Service of the image have been all created successfully.
-
- #### Service
-
- 
-
- #### Deployment
-
- 
-
- #### Job
-
- 
+3. Go back to the **Services**, **Deployments**, and **Jobs** pages, and you can see the corresponding Service, Deployment, and Job of the image have all been created successfully.
4. In your Docker Hub repository, you can see that KubeSphere has pushed the image to the repository with the expected tag.
- 
-
### Step 5: Access the S2I Service
-1. On the **Services** page, click the S2I Service to go to its detail page.
+1. On the **Services** page, click the S2I Service to go to its details page.
- 
-
-2. To access the Service, you can either use the endpoint with the `curl` command or visit `
on the right of a user, and click **OK** for the displayed message to assign the user to the department.
+2. In the user list, click the icon on the right of a user, and click **OK** for the displayed message to assign the user to the department.
{{< notice note >}}
@@ -59,9 +58,9 @@ A department in a workspace is a logical unit used for permission control. You c
## Delete and Edit a Department
-1. On the **Department Management** page, click **Set Department**.
+1. On the **Department Management** page, click **Set Departments**.
-2. In the **Set Department** dialog box, on the left, click the upper level of the department to be edited or deleted.
+2. In the **Set Departments** dialog box, on the left, click the upper level of the department to be edited or deleted.
3. Click the icon on the right of the department to edit it.
diff --git a/content/en/docs/workspace-administration/project-quotas.md b/content/en/docs/workspace-administration/project-quotas.md
index 9c4503af5..ad59de15f 100644
--- a/content/en/docs/workspace-administration/project-quotas.md
+++ b/content/en/docs/workspace-administration/project-quotas.md
@@ -6,52 +6,46 @@ linkTitle: "Project Quotas"
weight: 9600
---
-KubeSphere uses [Kubernetes requests and limits](https://kubesphere.io/blogs/understand-requests-and-limits-in-kubernetes/) to control resource (for example, CPU and memory) usage in a project, also known as [ResourceQuotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) in Kubernetes. Requests make sure a project can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that a project can never use resources above a certain value.
+KubeSphere uses [Kubernetes requests and limits](https://kubesphere.io/blogs/understand-requests-and-limits-in-kubernetes/) to control resource (for example, CPU and memory) usage in a project, also known as [resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) in Kubernetes. Requests make sure a project can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that a project can never use resources above a certain value.
-Besides CPU and memory, you can also set resource quotas for other objects separately such as Pods, [Deployments](../../project-user-guide/application-workloads/deployments/), [Jobs](../../project-user-guide/application-workloads/jobs/), [Services](../../project-user-guide/application-workloads/services/) and [ConfigMaps](../../project-user-guide/configuration/configmaps/) in a project.
+Besides CPU and memory, you can also set resource quotas for other objects separately such as Pods, [Deployments](../../project-user-guide/application-workloads/deployments/), [Jobs](../../project-user-guide/application-workloads/jobs/), [Services](../../project-user-guide/application-workloads/services/), and [ConfigMaps](../../project-user-guide/configuration/configmaps/) in a project.
This tutorial demonstrates how to configure quotas for a project.
## Prerequisites
-You have an available workspace, a project and an account (`ws-admin`). The account must have the `admin` role at the workspace level. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/).
+You have an available workspace, a project and a user (`ws-admin`). The user must have the `admin` role at the workspace level. For more information, see [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
{{< notice note >}}
-If you use the account `project-admin` (an account of the `admin` role at the project level), you can set project quotas as well for a new project (i.e. its quotas remain unset). However, `project-admin` cannot change project quotas once they are set. Generally, it is the responsibility of `ws-admin` to set limits and requests for a project. `project-admin` is responsible for [setting limit ranges](../../project-administration/container-limit-ranges/) for containers in a project.
+If you use the user `project-admin` (a user of the `admin` role at the project level), you can set project quotas as well for a new project (i.e. its quotas remain unset). However, `project-admin` cannot change project quotas once they are set. Generally, it is the responsibility of `ws-admin` to set limits and requests for a project. `project-admin` is responsible for [setting limit ranges](../../project-administration/container-limit-ranges/) for containers in a project.
{{</ notice >}}
## Set Project Quotas
-1. Log in to the console as `ws-admin` and go to a project. On the **Overview** page, you can see project quotas remain unset if the project is newly created. Click **Set** to configure quotas.
+1. Log in to the console as `ws-admin` and go to a project. On the **Overview** page, you can see project quotas remain unset if the project is newly created. Click **Edit Quotas** to configure quotas.
- 
-
-2. In the dialog that appears, you can see that KubeSphere does not set any requests or limits for a project by default. To set
+2. In the displayed dialog box, you can see that KubeSphere does not set any requests or limits for a project by default. To set
limits to control CPU and memory resources, use the slider to move to a desired value or enter numbers directly. Leaving a field blank means you do not set any requests or limits.
- 
-
{{< notice note >}}
The limit can never be lower than the request.
{{</ notice >}}
-3. To set quotas for other resources, click **Add Quota Item** and select an object from the list.
-
- 
+3. To set quotas for other resources, click **Add** under **Project Resource Quotas**, and then select a resource or enter a resource name and set a quota.
4. Click **OK** to finish setting quotas.
5. Go to **Basic Information** in **Project Settings**, and you can see all resource quotas for the project.
-6. To change project quotas, click **Manage Project** on the **Basic Information** page and select **Edit Quota**.
+6. To change project quotas, click **Edit Project** on the **Basic Information** page and select **Edit Project Quotas**.
{{< notice note >}}
- For [a multi-cluster project](../../project-administration/project-and-multicluster-project/#multi-cluster-projects), the option **Edit Quota** does not display in the **Manage Project** drop-down menu. To set quotas for a multi-cluster project, go to **Quota Management** under **Project Settings** and click **Edit Quota**. Note that as a multi-cluster project runs across clusters, you can set resource quotas on different clusters separately.
+ For [a multi-cluster project](../../project-administration/project-and-multicluster-project/#multi-cluster-projects), the option **Edit Project Quotas** does not display in the **Manage Project** drop-down menu. To set quotas for a multi-cluster project, go to **Project Quotas** under **Project Settings** and click **Edit Quotas**. Note that as a multi-cluster project runs across clusters, you can set resource quotas on different clusters separately.
{{</ notice >}}
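Behind the dialog, the quotas you set are stored as a standard Kubernetes ResourceQuota object in the project's namespace. A minimal sketch — the object name and values below are assumptions for illustration only:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-project-quota       # actual name is generated by KubeSphere
  namespace: demo-project
spec:
  hard:
    requests.cpu: "1"            # CPU guaranteed and reserved for the project
    requests.memory: 1Gi
    limits.cpu: "2"              # hard ceiling; never lower than the request
    limits.memory: 2Gi
    count/deployments.apps: "10" # quota on a non-compute resource type
```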
diff --git a/content/en/docs/workspace-administration/role-and-member-management.md b/content/en/docs/workspace-administration/role-and-member-management.md
index 084de5a32..b6c2bab04 100644
--- a/content/en/docs/workspace-administration/role-and-member-management.md
+++ b/content/en/docs/workspace-administration/role-and-member-management.md
@@ -6,17 +6,11 @@ linkTitle: "Workspace Role and Member Management"
weight: 9400
---
-This tutorial demonstrates how to manage roles and members in a workspace. At the workspace level, you can grant permissions in the following modules to a role:
-
-- **Project Management**
-- **DevOps Project Management**
-- **App Management**
-- **Access Control**
-- **Workspace Settings**
+This tutorial demonstrates how to manage roles and members in a workspace.
## Prerequisites
-At least one workspace has been created, such as `demo-workspace`. Besides, you need an account of the `workspace-admin` role (for example, `ws-admin`) at the workspace level. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/).
+At least one workspace has been created, such as `demo-workspace`. Besides, you need a user of the `workspace-admin` role (for example, `ws-admin`) at the workspace level. For more information, see [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
{{< notice note >}}
@@ -26,20 +20,18 @@ The actual role name follows a naming convention: `workspace name-role name`. Fo
## Built-in Roles
-In **Workspace Roles**, there are four available built-in roles as shown below. Built-in roles are created automatically by KubeSphere when a workspace is created and they cannot be edited or deleted. You can only view permissions included in a built-in role or assign it to a user.
+In **Workspace Roles**, there are four available built-in roles. Built-in roles are created automatically by KubeSphere when a workspace is created and they cannot be edited or deleted. You can only view permissions included in a built-in role or assign it to a user.
| Built-in Roles | Description |
| ------------------ | ------------------------------------------------------------ |
-| `workspace-viewer` | The viewer in the workspace who can view all resources in the workspace. |
-| `workspace-self-provisioner` | The regular user in the workspace who can create projects and DevOps projects. |
-| `workspace-regular` | The regular user in the workspace who cannot create projects or DevOps projects. |
-| `workspace-admin` | The administrator in the workspace who can perform any action on any resource. It gives full control over all resources in the workspace. |
+| `workspace-viewer` | Workspace viewer who can view all resources in the workspace. |
+| `workspace-self-provisioner` | Workspace regular member who can view workspace settings, manage app templates, and create projects and DevOps projects. |
+| `workspace-regular` | Workspace regular member who can view workspace settings. |
+| `workspace-admin` | Workspace administrator who has full control over all resources in the workspace. |
To view the permissions that a role contains:
-1. Log in to the console as `ws-admin`. In **Workspace Roles**, click a role (for example, `workspace-admin`) and you can see role details as shown below.
-
- 
+1. Log in to the console as `ws-admin`. In **Workspace Roles**, click a role (for example, `workspace-admin`) and you can see role details.
2. Click the **Authorized Users** tab to see all the users that are granted the role.
@@ -57,20 +49,13 @@ To view the permissions that a role contains:
{{</ notice >}}
-4. Newly-created roles will be listed in **Workspace Roles**. To edit an existing role, click the icon on the right.
-
- 
+4. Newly-created roles will be listed in **Workspace Roles**. To edit the information or permissions, or delete an existing role, click the icon on the right.
## Invite a New Member
-1. Navigate to **Workspace Members** under **Workspace Settings**, and click **Invite Member**.
+1. Navigate to **Workspace Members** under **Workspace Settings**, and click **Invite**.
2. Invite a user to the workspace by clicking the icon on the right of the user and assigning a role to the user.
-
-
3. After you add the user to the workspace, click **OK**. In **Workspace Members**, you can see the user in the list.
-4. To edit the role of an existing user or remove the user from the workspace, click the icon on the right and select the corresponding operation.
-
- 
-
+4. To edit the role of an existing user or remove the user from the workspace, click the icon on the right and select the corresponding operation.
\ No newline at end of file
diff --git a/content/en/docs/workspace-administration/upload-helm-based-application.md b/content/en/docs/workspace-administration/upload-helm-based-application.md
index 685daf91a..1a2236f91 100644
--- a/content/en/docs/workspace-administration/upload-helm-based-application.md
+++ b/content/en/docs/workspace-administration/upload-helm-based-application.md
@@ -13,25 +13,17 @@ This tutorial demonstrates how to develop an app template by uploading a package
## Prerequisites
- You need to enable the [KubeSphere App Store (OpenPitrix)](../../pluggable-components/app-store/).
-- You need to create a workspace and a user account (`project-admin`). The account must be invited to the workspace with the role of `workspace-self-provisioner`. For more information, refer to [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/).
+- You need to create a workspace and a user (`project-admin`). The user must be invited to the workspace with the role of `workspace-self-provisioner`. For more information, refer to [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
## Hands-on Lab
-1. Log in to KubeSphere as `project-admin`. In your workspace, go to **App Templates** under **App Management**, and click **Upload Template**.
+1. Log in to KubeSphere as `project-admin`. In your workspace, go to **App Templates** under **App Management**, and click **Create**.
- 
-
-2. In the dialog that appears, click **Upload Helm Chart Package**. You can upload your own Helm chart or download the [Nginx chart](/files/application-templates/nginx-0.1.0.tgz) and use it as an example for the following steps.
-
- 
+2. In the dialog that appears, click **Upload**. You can upload your own Helm chart or download the [Nginx chart](/files/application-templates/nginx-0.1.0.tgz) and use it as an example for the following steps.
3. After the package is uploaded, click **OK** to continue.
- 
-
-4. You can view the basic information of the app under **App Information**. To upload an icon for the app, click **Upload icon**. You can also skip it and click **OK** directly.
-
- 
+4. You can view the basic information of the app under **App Information**. To upload an icon for the app, click **Upload Icon**. You can also skip it and click **OK** directly.
{{< notice note >}}
@@ -39,12 +31,8 @@ Maximum accepted resolutions of the app icon: 96 x 96 pixels.
{{</ notice >}}
-5. The app appears in the template list with the status **Draft** after successfully uploaded, which means this app is under development. The uploaded app is visible to all members in the same workspace.
+5. The app appears in the template list with the status **Developing** after it is successfully uploaded, which means this app is under development. The uploaded app is visible to all members in the same workspace.
- 
-
-6. Click the app and the page opens with the **Versions** tab selected. Click the draft version to expand the menu, where you can see options including **Delete Version**, **Test Deployment**, and **Submit for Review**.
-
- 
+6. Click the app and the page opens with the **Versions** tab selected. Click the draft version to expand the menu, where you can see options including **Delete**, **Install**, and **Submit for Release**.
7. For more information about how to release your app to the App Store, refer to [Application Lifecycle Management](../../application-store/app-lifecycle-management/#step-2-upload-and-submit-application).
diff --git a/content/en/docs/workspace-administration/what-is-workspace.md b/content/en/docs/workspace-administration/what-is-workspace.md
index 4dd4770d8..d110c0079 100644
--- a/content/en/docs/workspace-administration/what-is-workspace.md
+++ b/content/en/docs/workspace-administration/what-is-workspace.md
@@ -15,13 +15,11 @@ This tutorial demonstrates how to create and delete a workspace.
## Prerequisites
-You have an account granted the role of `workspaces-manager`, such as `ws-manager` in [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/).
+You have a user granted the role of `workspaces-manager`, such as `ws-manager` in [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
## Create a Workspace
-1. Log in to the web console of KubeSphere as `ws-manager`. On the **Workspaces** page, you can see all workspaces on the platform. Click **Create**.
-
- 
+1. Log in to the web console of KubeSphere as `ws-manager`. Click **Platform** on the upper-left corner, and then select **Access Control**. On the **Workspaces** page, click **Create**.
{{< notice note >}}
@@ -29,22 +27,18 @@ You have an account granted the role of `workspaces-manager`, such as `ws-manage
{{</ notice >}}
-2. On the **Basic Information** page, specify a name for the workspace and select an administrator from the drop-down list. Click **Create** to continue.
-
- 
+2. For a single-node cluster, on the **Basic Information** page, specify a name for the workspace and select an administrator from the drop-down list. Click **Create**.
- **Name**: Set a name for the workspace which serves as a unique identifier.
- **Alias**: An alias name for the workspace.
- - **Administrator**: Account that administers the workspace.
+ - **Administrator**: User that administers the workspace.
- **Description**: A brief introduction of the workspace.
-3. The workspace created appears in the list as shown below.
+   For a multi-node cluster, after the basic information about the workspace is set, click **Next** to continue. On the **Cluster Settings** page, select clusters to be used in the workspace, and then click **Create**.
- 
+3. The workspace is displayed in the workspace list after it is created.
-4. Click the workspace and you can see resource status in the workspace on the **Overview** page.
-
- 
+4. Click the workspace and you can see resource status of the workspace on the **Overview** page.
## Delete a Workspace
@@ -78,15 +72,13 @@ Be extremely cautious about deleting a workspace if you use kubectl to delete wo
1. In your workspace, go to **Basic Information** under **Workspace Settings**. On the **Basic Information** page, you can see the general information of the workspace, such as the number of projects and members.
- 
-
{{< notice note >}}
On this page, you can click **Edit Information** to change the basic information of the workspace (excluding the workspace name) and turn on/off [Network Isolation](../../workspace-administration/workspace-network-isolation/).
{{</ notice >}}
-2. To delete the workspace, check **Delete Workspace** and click **Delete**.
+2. To delete the workspace, click **Delete** under **Delete Workspace**. In the displayed dialog box, enter the name of the workspace, and then click **OK**.
{{< notice warning >}}
diff --git a/content/en/docs/workspace-administration/workspace-network-isolation.md b/content/en/docs/workspace-administration/workspace-network-isolation.md
index 13245136d..8bc7582da 100644
--- a/content/en/docs/workspace-administration/workspace-network-isolation.md
+++ b/content/en/docs/workspace-administration/workspace-network-isolation.md
@@ -10,7 +10,7 @@ weight: 9500
- You have already enabled [Network Policies](../../pluggable-components/network-policy/).
-- Use an account of the `workspace-admin` role. For example, use the account `ws-admin` created in [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/).
+- Use a user of the `workspace-admin` role. For example, use the `ws-admin` user created in [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
{{< notice note >}}
@@ -22,8 +22,6 @@ weight: 9500
Workspace network isolation is disabled by default. You can turn on network isolation in **Basic Information** under **Workspace Settings**.
-
-
{{< notice note >}}
When network isolation is turned on, egress traffic will be allowed by default, while ingress traffic will be denied for different workspaces. If you need to customize your network policy, you need to turn on [Project Network Isolation](../../project-administration/project-network-isolation/) and add a network policy in **Project Settings**.
diff --git a/content/en/docs/workspace-administration/workspace-quotas.md b/content/en/docs/workspace-administration/workspace-quotas.md
index 24725c3e9..3a55c4bd2 100644
--- a/content/en/docs/workspace-administration/workspace-quotas.md
+++ b/content/en/docs/workspace-administration/workspace-quotas.md
@@ -14,19 +14,17 @@ This tutorial demonstrates how to manage resource quotas for a workspace.
## Prerequisites
-You have an available workspace and an account (`ws-manager`). The account must have the `workspaces-manager` role at the platform level. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/).
+You have an available workspace and a user (`ws-manager`). The user must have the `workspaces-manager` role at the platform level. For more information, see [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
## Set Workspace Quotas
1. Log in to the KubeSphere web console as `ws-manager` and go to a workspace.
-2. Navigate to **Quota Management** under **Workspace Settings**.
+2. Navigate to **Workspace Quotas** under **Workspace Settings**.
-3. The **Quota Management** page lists all the available clusters assigned to the workspace and their respective requests and limits of CPU and memory. Click **Edit Quota** on the right of a cluster.
+3. The **Workspace Quotas** page lists all the available clusters assigned to the workspace and their respective requests and limits of CPU and memory. Click **Edit Quotas** on the right of a cluster.
-4. In the dialog that appears, you can see that KubeSphere does not set any requests or limits for the workspace by default. To set requests and limits to control CPU and memory resources, use the slider to move to a desired value or enter numbers directly. Leaving a field blank means you do not set any requests or limits.
-
- 
+4. In the displayed dialog box, you can see that KubeSphere does not set any requests or limits for the workspace by default. To set requests and limits to control CPU and memory resources, move the slider to a desired value or enter numbers directly. Leaving a field blank means you do not set any requests or limits.
-Third-party Social Media Service refers to any website or any social network website through which a User can log in or create an account to use the Service.
+Third-party Social Media Service refers to any website or any social network website through which a User can log in or create a user to use the Service.
Usage Data refers to data collected automatically, either generated by the use of the Service or from the Service infrastructure itself (for example, the duration of a page visit).
diff --git a/content/tr/conferences/admin-quick-start.md b/content/tr/conferences/admin-quick-start.md
index 09b4dd913..3bb7286c1 100644
--- a/content/tr/conferences/admin-quick-start.md
+++ b/content/tr/conferences/admin-quick-start.md
@@ -31,7 +31,7 @@ The role of cluster-admin is able to create accounts for other users and assign

#### Step 1: Create roles and accounts

-First, we will create a new role (user-manager), grants account management and role management authority to this role, then we will create an account and grant the user-manager role to this account.
+First, we will create a new role (user-manager), grants account management and role management authority to this role, then we will create a user and grant the user-manager role to this account.

| Account Name | Cluster Role | Responsibility |
| ------------ | ------------ | --------------------------------- |

@@ -47,7 +47,7 @@ First, we will create a new role (user-manager), grants account management and r



-1.4. Click **Platform**, then navigate to **Accounts** page and click **Create** to create an account.
+1.4. Click **Platform**, then navigate to **Accounts** page and click **Create** to create a user.


diff --git a/content/zh/_index.md b/content/zh/_index.md
index 3bae4d514..e6f882cde 100644
--- a/content/zh/_index.md
+++ b/content/zh/_index.md
@@ -90,7 +90,7 @@ section4:
  - name: 支持多种存储与网络方案
    icon: /images/home/multi-tenant-management.svg
-   content: 支持 GlusterFS、Ceph、NFS、LocalPV,提供多个 CSI 插件对接公有云与企业级存储;提供面向物理机 Kubernetes 环境的负载均衡器 Porter,支持网络策略可视化,支持 Calico、Flannel、Cilium、Kube-OVN 等网络插件
+   content: 支持 GlusterFS、Ceph、NFS、LocalPV,提供多个 CSI 插件对接公有云与企业级存储;提供面向物理机 Kubernetes 环境的负载均衡器 OpenELB,支持网络策略可视化,支持 Calico、Flannel、Cilium、Kube-OVN 等网络插件
  features:
  - name: Kubernetes DevOps 系统
diff --git a/content/zh/blogs/DevOps-pipeline-remove-Docker-dependencies.md b/content/zh/blogs/DevOps-pipeline-remove-Docker-dependencies.md
index e2a63893f..fd3694f1f 100644
--- a/content/zh/blogs/DevOps-pipeline-remove-Docker-dependencies.md
+++ b/content/zh/blogs/DevOps-pipeline-remove-Docker-dependencies.md
@@ -61,7 +61,7 @@ containerd github.com/containerd/containerd v1.4.3 269548fa27e0089a8b8278fc4

这里主要用于测试,因此没有将 Podman 安装到基础镜像中,而是在流水线中实时安装。生产环境,应该提前安装,以加快执行速度。

-以 [devops-java-sample](https://github.com/kubesphere/devops-java-sample) 为例,流水线中主要需要增加如下部分:
+以 [devops-maven-sample](https://github.com/kubesphere/devops-maven-sample) 为例,流水线中主要需要增加如下部分:

```groovy
stage ('install podman') {
@@ -78,11 +78,11 @@ containerd github.com/containerd/containerd v1.4.3 269548fa27e0089a8b8278fc4
}
```

-相关脚本,已经更新到 [Podman](https://github.com/kubesphere/devops-java-sample/tree/podman) 分支中。
+相关脚本,已经更新到 [Podman](https://github.com/kubesphere/devops-maven-sample/tree/podman) 分支中。

-## 测试 devops-java-sample 项目
+## 测试 devops-maven-sample 项目

-使用 devops-java-sample 创建 SCM 流水线,Jenkinsfile 路径设置为 Jenkinsfile-online,并配置好相关的秘钥值。
+使用 devops-maven-sample 创建 SCM 流水线,Jenkinsfile 路径设置为 Jenkinsfile-online,并配置好相关的秘钥值。

最后执行时,在 Podman 分支上可以看到如下日志:

diff --git a/content/zh/blogs/Kubernetes-multicluster-KubeSphere.md b/content/zh/blogs/Kubernetes-multicluster-KubeSphere.md
index 270206cf4..42e1b0d59 100644
--- a/content/zh/blogs/Kubernetes-multicluster-KubeSphere.md
+++ b/content/zh/blogs/Kubernetes-multicluster-KubeSphere.md
@@ -1,10 +1,10 @@
---
title: '混合云下的 Kubernetes 多集群管理与应用部署'
tag: 'KubeSphere, Kubernetes, 多集群管理'
-keywords: 'KKubeSphere, Kubernetes, 多集群管理, Kubefed'
+keywords: 'KubeSphere, Kubernetes, 多集群管理, Kubefed'
description: '本文介绍了 Kubernetes 社区多集群方向的发展历程以及已有的多集群解决方案,分享在混合云的场景下, KubeSphere 如何基于 Kubefed 统一应用的分发与部署,以达到跨 region 的多活/容灾等目的。同时探讨未来多集群领域可能迈向的去中心化的架构。'
createTime: '2021-05-26'
-author: ' 李宇'
+author: '李宇'
snapshot: 'https://pek3b.qingstor.com/kubesphere-community/images/Kubernetes-multicluster-KubeSphere-banner.jpg'
---

@@ -68,7 +68,7 @@ Kubernetes 内部分为 Master 和 Worker 两个角色。Master 上面有 API Se



当然 Kubefed 也不是银弹,也有其一定的局限性。从前面可以看到,其 API 定义复杂,容易出错,也只能使用 kubefedctl 加入和解绑集群,没有提供单独的 SDK。再就是它要求控制层集群到管控集群必须网络可达,单集群到多集群需要改造 API,旧版本也不支持联邦资源的状态收集。

-## KubeShere On Kubefed
+## KubeSphere On Kubefed

接下来我们看看 KubeSphere 基于 Kubefed 如何实现并简化了多集群管理。

@@ -135,4 +135,3 @@ Virtual Kubelet 可以帮助你把自己的服务伪装成一个 Kubernetes 的

在 Liqo 里面,集群之间不存在联邦关系,左图里在 Kubefed 架构下 k2、k3 两个集群是 k1 的成员集群,资源下方需要经过一次 k1 的 push,而在右边的图里面,k2、k3 只是 k1 的一个节点,因此在部署应用的时候,完全不需要引入任何的 API,k2、k3 看起来就是 k1 的节点,这样业务就可以无感知的被部署到不同的集群上去,极大减少了单集群到多集群改造的复杂性。现在 Liqo 属于刚起步阶段,目前不支持两个集群以上的拓扑,在未来 KubeSphere 也会持续关注开源领域的一些其他的多集群管理方案。
-
diff --git a/content/zh/blogs/Serverless-way-for-Kubernetes-Log-Alerting.md b/content/zh/blogs/Serverless-way-for-Kubernetes-Log-Alerting.md
deleted file mode 100644
index 64fb25e96..000000000
--- a/content/zh/blogs/Serverless-way-for-Kubernetes-Log-Alerting.md
+++ /dev/null
@@ -1,426 +0,0 @@
----
-title: 'OpenFunction 应用系列之一: 以 Serverless 的方式实现 Kubernetes 日志告警'
-tag: 'OpenFunction, KubeSphere, Kubernetes'
-keywords: 'penFunction, Serverless, KubeSphere, Kubernetes, Kafka, FaaS, 无服务器'
-description: '本文提供了一种基于 Serverless 的日志处理思路,可以在降低该任务链路成本的同时提高其灵活性。'
-createTime: '2021-08-26'
-author: '方阗'
-snapshot: 'https://pek3b.qingstor.com/kubesphere-community/images/202109031518797.png'
----
-
-## 概述
-
-当我们将容器的日志收集到消息服务器之后,我们该如何处理这些日志?部署一个专用的日志处理工作负载可能会耗费多余的成本,而当日志体量骤增、骤降时亦难以评估日志处理工作负载的待机数量。本文提供了一种基于 Serverless 的日志处理思路,可以在降低该任务链路成本的同时提高其灵活性。
-
-我们的大体设计是使用 Kafka 服务器作为日志的接收器,之后以输入 Kafka 服务器的日志作为事件,驱动 Serverless 工作负载对日志进行处理。据此的大致步骤为:
-
-1. 搭建 Kafka 服务器作为 Kubernetes 集群的日志接收器
-2. 部署 OpenFunction 为日志处理工作负载提供 Serverless 能力
-3. 编写日志处理函数,抓取特定的日志生成告警消息
-4. 配置 [Notification Manager](https://github.com/kubesphere/notification-manager/) 将告警发送至 Slack
-
-在这个场景中,我们会利用到 [OpenFunction](https://github.com/OpenFunction/OpenFunction) 带来的 Serverless 能力。
-
-> [OpenFunction](https://github.com/OpenFunction/OpenFunction) 是 KubeSphere 社区开源的一个 FaaS(Serverless)项目,旨在让用户专注于他们的业务逻辑,而不必关心底层运行环境和基础设施。该项目当前具备以下关键能力:
->
-> - 支持通过 dockerfile 或 buildpacks 方式构建 OCI 镜像
-> - 支持使用 Knative Serving 或 OpenFunctionAsync ( KEDA + Dapr ) 作为 runtime 运行 Serverless 工作负载
-> - 自带事件驱动框架
-
-## 使用 Kafka 作为日志接收器
-
-首先,我们为 KubeSphere 平台开启 **logging** 组件(可以参考 [启用可插拔组件](https://kubesphere.io/zh/docs/pluggable-components/) 获取更多信息)。然后我们使用 [strimzi-kafka-operator](https://github.com/strimzi/strimzi-kafka-operator) 搭建一个最小化的 Kafka 服务器。
-
-1. 在 default 命名空间中安装 [strimzi-kafka-operator](https://github.com/strimzi/strimzi-kafka-operator) :
-
-   ```shell
-   helm repo add strimzi https://strimzi.io/charts/
-   helm install kafka-operator -n default strimzi/strimzi-kafka-operator
-   ```
-
-2. 运行以下命令在 default 命名空间中创建 Kafka 集群和 Kafka Topic,该命令所创建的 Kafka 和 Zookeeper 集群的存储类型为 **ephemeral**,使用 emptyDir 进行演示。
-
-   > 注意,我们此时创建了一个名为 “logs” 的 topic,后续会用到它
-
-   ```shell
-   cat <
+1. 以 `admin` 身份登录 KubeSphere,将光标移动到右下角,点击 **kubectl**,然后执行以下命令来编辑 CRD `ClusterConfiguration` 中的 `ks-installer`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+2. 在 `spec.authentication.jwtSecret` 字段下添加以下字段。
+
+ *使用 [Google Identity Platform](https://developers.google.com/identity/protocols/oauth2/openid-connect) 的示例*:
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ authenticateRateLimiterMaxTries: 10
+ authenticateRateLimiterDuration: 10m0s
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: google
+ type: OIDCIdentityProvider
+ mappingMethod: auto
+ provider:
+ clientID: '********'
+ clientSecret: '********'
+ issuer: https://accounts.google.com
+ redirectURL: 'https://ks-console/oauth/redirect/google'
+ ```
+
+ 字段描述如下:
+
+ | 参数 | 描述 |
+ | -------------------- | ------------------------------------------------------------ |
+ | clientID | 客户端 ID。 |
+ | clientSecret | 客户端密码。 |
+ | redirectURL | 重定向到 ks-console 的 URL,格式为:`https://<域名>/oauth/redirect/<身份提供者名称>`。URL 中的 `<身份提供者名称>` 对应 `oauthOptions:identityProviders:name` 的值。 |
+ | issuer | 定义客户端如何动态发现有关 OpenID 提供者的信息。 |
+ | preferredUsernameKey | 包含首选用户名声明的可配置键。此参数为可选参数。 |
+ | emailKey | 包含电子邮件声明的可配置键。此参数为可选参数。 |
+ | getUserInfo | 使用 userinfo 端点获取令牌的附加声明。非常适用于上游返回 “thin” ID 令牌的场景。此参数为可选参数。 |
+ | insecureSkipVerify | 关闭 TLS 证书验证。 |
+
+
+
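作为补充,下面给出一个假设性的 Shell 小示例(其中 `DOMAIN` 与 `PROVIDER_NAME` 均为示例值,并非本文档规定的名称),演示如何按上表中 `redirectURL` 的格式拼接重定向 URL:

```shell
#!/bin/sh
# 假设性示例:按 https://<域名>/oauth/redirect/<身份提供者名称> 的格式拼接 redirectURL
# DOMAIN 与 PROVIDER_NAME 为示例值,请替换为实际环境中的域名和身份提供者名称
DOMAIN="ks-console"
PROVIDER_NAME="google"
echo "https://${DOMAIN}/oauth/redirect/${PROVIDER_NAME}"
```

该脚本的输出应与上文示例配置中的 `redirectURL` 值一致。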
diff --git a/content/zh/docs/access-control-and-account-management/external-authentication/set-up-external-authentication.md b/content/zh/docs/access-control-and-account-management/external-authentication/set-up-external-authentication.md
new file mode 100644
index 000000000..4c8a8fd53
--- /dev/null
+++ b/content/zh/docs/access-control-and-account-management/external-authentication/set-up-external-authentication.md
@@ -0,0 +1,112 @@
+---
+title: "设置外部身份验证"
+keywords: "LDAP, 外部, 第三方, 身份验证"
+description: "如何在 KubeSphere 上设置外部身份验证。"
+
+linkTitle: "设置外部身份验证"
+weight: 12210
+---
+
+本文档描述了如何在 KubeSphere 上使用外部身份提供者,例如 LDAP 服务或 Active Directory 服务。
+
+KubeSphere 提供了一个内置的 OAuth 服务。用户通过获取 OAuth 访问令牌以对 API 进行身份验证。作为 KubeSphere 管理员,您可以编辑 CRD `ClusterConfiguration` 中的 `ks-installer` 来配置 OAuth 并指定身份提供者。
+
+## 准备工作
+
+您需要部署一个 Kubernetes 集群,并在集群中安装 KubeSphere。有关详细信息,请参阅[在 Linux 上安装](../../../installing-on-linux/)和[在 Kubernetes 上安装](../../../installing-on-kubernetes/)。
+
+
+## 步骤
+
+1. 以 `admin` 身份登录 KubeSphere,将光标移动到右下角,点击 **kubectl**,然后执行以下命令来编辑 CRD `ClusterConfiguration` 中的 `ks-installer`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+2. 在 `spec.authentication.jwtSecret` 字段下添加以下字段。
+
+ 示例:
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ authenticateRateLimiterMaxTries: 10
+ authenticateRateLimiterDuration: 10m0s
+ loginHistoryRetentionPeriod: 168h
+ maximumClockSkew: 10s
+ multipleLogin: true
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: LDAP
+ type: LDAPIdentityProvider
+ mappingMethod: auto
+ provider:
+ host: 192.168.0.2:389
+ managerDN: uid=root,cn=users,dc=nas
+ managerPassword: ********
+ userSearchBase: cn=users,dc=nas
+ loginAttribute: uid
+ mailAttribute: mail
+ ```
+
+ 字段描述如下:
+
+ * `jwtSecret`:签发用户令牌的密钥。在多集群环境下,所有的集群必须[使用相同的密钥](../../../multicluster-management/enable-multicluster/direct-connection/#prepare-a-member-cluster)。
+ * `authenticateRateLimiterMaxTries`:`authenticateLimiterDuration` 指定的期间内允许的最大连续登录失败次数。如果用户连续登录失败次数达到限制,则该用户将被封禁。
+ * `authenticateRateLimiterDuration`:`authenticateRateLimiterMaxTries` 适用的时间段。
+ * `loginHistoryRetentionPeriod`:用户登录记录保留期限,过期的登录记录将被自动删除。
+ * `maximumClockSkew`:时间敏感操作(例如验证用户令牌的过期时间)的最大时钟偏差,默认值为10秒。
+ * `multipleLogin`:是否允许多个用户同时从不同位置登录,默认值为 `true`。
+ * `oauthOptions`:
+ * `accessTokenMaxAge`:访问令牌有效期。对于多集群环境中的成员集群,默认值为 `0h`,这意味着访问令牌永不过期。对于其他集群,默认值为 `2h`。
+ * `accessTokenInactivityTimeout`:令牌空闲超时时间。该值表示令牌过期后,刷新用户令牌最大的间隔时间,如果不在此时间窗口内刷新用户身份令牌,用户将需要重新登录以获得访问权。
+ * `identityProviders`:
+ * `name`:身份提供者的名称。
+ * `type`:身份提供者的类型。
+ * `mappingMethod`:帐户映射方式,值可以是 `auto` 或者 `lookup`。
+ * 如果值为 `auto`(默认),需要指定新的用户名。通过第三方帐户登录时,KubeSphere 会根据用户名自动创建关联帐户。
+ * 如果值为 `lookup`,需要执行步骤 3 以手动关联第三方帐户与 KubeSphere 帐户。
+ * `provider`:身份提供者信息。此部分中的字段根据身份提供者的类型而异。
+
+3. 如果 `mappingMethod` 设置为 `lookup`,可以运行以下命令并添加标签来进行帐户关联。如果 `mappingMethod` 是 `auto` 可以跳过这个部分。
+
+ ```bash
+ kubectl edit user