diff --git a/KubeSphere Documentation Style Guide.md b/KubeSphere Documentation Style Guide.md index 547e0d8f6..37346af97 100644 --- a/KubeSphere Documentation Style Guide.md +++ b/KubeSphere Documentation Style Guide.md @@ -2,15 +2,15 @@ This style guide provides a set of editorial guidelines for those who are writing documentation for KubeSphere. -## **Basic Rules** +## Basic Rules - Write clearly, concisely and precisely. -- English is the preferred language to use when you write documentation. If you are not sure whether you are writing correctly, you can use grammar checkers (e.g. [grammarly](https://www.grammarly.com/)). Although they are not 100% accurate, they can help you get rid of most of the wording issues. That said, Chinese is also acceptable if you really don't know how to express your meaning in English. -- It is recommended that you use more images or diagrams to show UI functions and logical relations with tools such as [draw.io](https://draw.io). +- English is the preferred language to use when you write documentation. If you are not sure whether you are writing correctly, you can use grammar checkers (for example, [grammarly](https://www.grammarly.com/)). Although they are not 100% accurate, they can help you get rid of most of the wording issues. That said, Chinese is also acceptable if you really don't know how to express your meaning in English. +- Recommended image or diagram tools: [draw.io](https://draw.io) and [Visio](https://www.microsoft.com/en-ww/microsoft-365/visio/flowchart-software/). ## Preparation Notice -Before you start writing the specific steps for a feature, state clearly what should be ready in advance, such as necessary components, accounts or roles (do not tell readers to use `admin` for all the operations, which is unreasonable in reality for different tenants), or a specific environment. You can add this part at the beginning of a tutorial or put it in a separate part (e.g. **Prerequisites**). 
+Before you start writing the specific steps for a feature, state clearly what should be ready in advance, such as necessary components, accounts or roles (do not tell readers to use `admin` for all the operations, which is unreasonable in reality for different tenants), or a specific environment. You can add this part at the beginning of a tutorial or put it in a separate part (for example, **Prerequisites**). ## Paragraphs @@ -19,7 +19,7 @@ Before you start writing the specific steps for a feature, state clearly what sh - It is recommended that you use an ordered list to organize your paragraphs for a specific operation. This is to tell your readers what step they are in and they can have a clear view of the overall process. For example: 1. Go to **Application Workloads** and click **Workloads**. -2. Click **Create** on the right to create a deployment. +2. Click **Create** on the right to create a Deployment. 3. Enter the basic information and click **Next**. ## Titles @@ -34,7 +34,7 @@ Give a title first before you write a paragraph. It can be grouped into differen ``` - Heading 1: The title of a tutorial. You do not need to add this type of title in the main body as it is already defined at the beginning in the value `title`. -- Heading 2: The title of a major part in the tutorial. Make sure you capitalize each word in Heading 2, except prepositions, articles, conjunctions and words that are commonly written with a lower case letter at the beginning (e.g. macOS). +- Heading 2: The title of a major part in the tutorial. Make sure you capitalize each word in Heading 2, except prepositions, articles, conjunctions and words that are commonly written with a lowercase letter at the beginning (for example, macOS). - Heading 3: A subtitle under Heading 2. You only need to capitalize the first word for Heading 3. - Heading 4: This is rarely used as Heading 2 and Heading 3 will do in most cases. Make sure if Heading 4 is really needed before you use it. 
- Do not add any periods after each heading.
@@ -42,7 +42,7 @@ Give a title first before you write a paragraph. It can be grouped into differen
## Images
- When you submit your md files to GitHub, make sure you add related image files that appear in md files in the pull request as well. Please save your image files in static/images/docs. You can create a folder in the directory to save your images.
-- If you want to add remarks (e.g. put a box on a UI button), use the color **green**. As some screenshot apps does not support the color picking function for a specific color code, as long as the color is **similar** to #09F709, #00FF00, #09F709 or #09F738, it is acceptable.
+- If you want to add remarks (for example, put a box on a UI button), use the color **green**. As some screenshot apps do not support picking a specific color code, any color **similar** to #09F709, #00FF00, or #09F738 is acceptable.
- Image format: PNG.
- Make sure images in your guide match the content. For example, you mention that users need to log in to KubeSphere using an account of a role; this means the account that displays in your image is expected to be the one you are talking about. It confuses your readers if the content you are describing is not consistent with the image used.
- Recommended: [Xnip](https://xnipapp.com/) for Mac and [Sniptool](https://www.reasyze.com/sniptool/) for Windows.
@@ -51,11 +51,12 @@ Give a title first before you write a paragraph. It can be grouped into differen
## Tone
- Do not use “we”. Address the reader as “you” directly. Using “we” in a sentence can be confusing, because the reader might not know whether they are part of the “we” you are describing. You can also use words like users, developers, administrators and engineers, depending on the feature you are describing.
-- Do not use words which can imply a specific gender, including he, him, his, himself, she, her, hers and herself.
-| Do | Don't | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| The component has been installed. You can now use the feature. | The component has been installed. We can now use the feature. | + | Do | Don't | + | ------------------------------------------------------------ | ------------------------------------------------------------ | + | The component has been installed. You can now use the feature. | The component has been installed. We can now use the feature. | + +- Do not use words which can imply a specific gender, including he, him, his, himself, she, her, hers and herself. ## Format @@ -69,85 +70,127 @@ Use a **period** or a **conjunction** between two **complete** sentences. | Check the status of the component. You can see it is running normally. | Check the status of the component, you can see it is running normally. | | Check the status of the component, and you can see it is running normally. | Check the status of the component, you can see it is running normally. | -### **Bold** +### Bold -- Mark any UI text (e.g. a button) in bold. +- Mark any UI text (for example, a button) in bold. - -| Do | Don't | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| In the top-right corner of this page, click **Save**. | In the top-right corner of this page, click Save. | -| In **Workspaces**, you can see all your workspaces listed. | In Workspaces, you can see all your workspaces listed. | -| On the **Create Project** Page, click **OK** in the bottom-right corner to continue. | On the Create Project Page, click OK in the bottom-right corner to continue. | + | Do | Don't | + | ------------------------------------------------------------ | ------------------------------------------------------------ | + | In the top-right corner of this page, click **Save**. 
| In the top-right corner of this page, click Save. |
+ | In **Workspaces**, you can see all your workspaces listed. | In Workspaces, you can see all your workspaces listed. |
+ | On the **Create Project** page, click **OK** in the bottom-right corner to continue. | On the Create Project page, click OK in the bottom-right corner to continue. |

- Mark the content of great importance or deserving special attention to readers in bold. For example: KubeSphere is a **distributed operating system managing cloud-native applications** with Kubernetes as its kernel.

-### **Code**
+### Prepositions
+
+When describing the UI, you can use the following prepositions.
+
+| Preposition | UI element | Recommended |
+| --- | --- | --- |
+| in | dialogs<br>fields<br>lists<br>menus<br>sidebars<br>windows | In the Delete User dialog, enter the name and click OK.<br>In the Language drop-down list, select a desired language.<br>In the More menu, click Delete.<br>Click Volumes under Storage in the sidebar.<br>In the Metering and Billing window, click View Consumption. |
+| on | pages<br>tabs | On the Volumes page, click Create.<br>On the Deployments tab, click Create. |
on the right of `ks-installer` and select **Edit YAML**.
-4. Click the three dots on the right of `ks-installer` and select **Edit YAML**.
+5. Scroll down to the bottom of the file, add `telemetry_enabled: false`, and then click **Update**.
- 
-
-5. Scroll down to the bottom of the file and add the value `telemetry_enabled: false`. When you finish, click **Update**.
-
- 
{{< notice note >}}
-If you want to enable Telemetry again, you can update `ks-installer` by deleting the value `telemetry_enabled: false` or changing it to `telemetry_enabled: true`.
+If you want to enable Telemetry again, you can update `ks-installer` by deleting `telemetry_enabled: false` or changing it to `telemetry_enabled: true`.
{{</ notice >}}
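For reference, the change described in this hunk amounts to the following fragment of the `ks-installer` resource. This is a minimal sketch assuming the default `ClusterConfiguration` layout; only the last line is added:

```yaml
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
spec:
  # ...other fields unchanged...
  telemetry_enabled: false
```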
diff --git a/content/en/docs/faq/observability/byop.md b/content/en/docs/faq/observability/byop.md
index 76a764f5b..70208867f 100644
--- a/content/en/docs/faq/observability/byop.md
+++ b/content/en/docs/faq/observability/byop.md
@@ -194,4 +194,4 @@ Now that your own Prometheus stack is up and running, you can change KubeSphere'
If you enable/disable KubeSphere pluggable components following [this guide](https://kubesphere.io/docs/pluggable-components/overview/) , the `monitoring endpoint` will be reset to the original one. In this case, you have to change it to the new one and then restart the KubeSphere APIServer again.
-{{</ notice >}}
\ No newline at end of file
+{{</ notice >}}
diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md
index 1c2ec9e95..1e5747a83 100644
--- a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md
+++ b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md
@@ -14,7 +14,7 @@ Azure can help you implement infrastructure as code by providing resource deploy
### Use Azure Cloud Shell
-You don't have to install Azure CLI on your machine as Azure provides a web-based terminal. Click the Cloud Shell button on the menu bar at the upper right corner in Azure portal.
+You don't have to install Azure CLI on your machine as Azure provides a web-based terminal. Click the Cloud Shell button on the menu bar in the upper-right corner of the Azure portal.

diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md
index 1cc749336..923f5a327 100644
--- a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md
+++ b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md
@@ -18,11 +18,11 @@ A Kubernetes cluster in DO is a prerequisite for installing KubeSphere. Go to yo
You need to select:
-1. Kubernetes version (e.g. *1.18.6-do.0*)
-2. Datacenter region (e.g. *Frankfurt*)
-3. VPC network (e.g. *default-fra1*)
-4. Cluster capacity (e.g. 2 standard nodes with 2 vCPUs and 4GB of RAM each)
-5. A name for the cluster (e.g. *kubesphere-3*)
+1. Kubernetes version (for example, *1.18.6-do.0*)
+2. Datacenter region (for example, *Frankfurt*)
+3. VPC network (for example, *default-fra1*)
+4. Cluster capacity (for example, 2 standard nodes with 2 vCPUs and 4GB of RAM each)
+5. A name for the cluster (for example, *kubesphere-3*)

diff --git a/content/en/docs/installing-on-kubernetes/introduction/overview.md b/content/en/docs/installing-on-kubernetes/introduction/overview.md
index f77bd15fc..83bbc6e73 100644
--- a/content/en/docs/installing-on-kubernetes/introduction/overview.md
+++ b/content/en/docs/installing-on-kubernetes/introduction/overview.md
@@ -8,7 +8,7 @@ weight: 4110

-As part of KubeSphere's commitment to provide a plug-and-play architecture for users, it can be easily installed on existing Kubernetes clusters. More specifically, KubeSphere can be deployed on Kubernetes either hosted on clouds (e.g. AWS EKS, QingCloud QKE and Google GKE) or on-premises. This is because KubeSphere does not hack Kubernetes itself. It only interacts with the Kubernetes API to manage Kubernetes cluster resources. In other words, KubeSphere can be installed on any native Kubernetes cluster and Kubernetes distribution.
+As part of KubeSphere's commitment to provide a plug-and-play architecture for users, it can be easily installed on existing Kubernetes clusters. More specifically, KubeSphere can be deployed on Kubernetes either hosted on clouds (for example, AWS EKS, QingCloud QKE and Google GKE) or on-premises. This is because KubeSphere does not hack Kubernetes itself. It only interacts with the Kubernetes API to manage Kubernetes cluster resources. In other words, KubeSphere can be installed on any native Kubernetes cluster and Kubernetes distribution.
This section gives you an overview of the general steps of installing KubeSphere on Kubernetes. For more information about the specific way of installation in different environments, see Installing on Hosted Kubernetes and Installing on On-premises Kubernetes.
diff --git a/content/en/docs/installing-on-linux/cluster-operation/add-edge-nodes.md b/content/en/docs/installing-on-linux/cluster-operation/add-edge-nodes.md
index dab7a3be2..b28a4bc69 100644
--- a/content/en/docs/installing-on-linux/cluster-operation/add-edge-nodes.md
+++ b/content/en/docs/installing-on-linux/cluster-operation/add-edge-nodes.md
@@ -90,7 +90,7 @@ To make sure edge nodes can successfully talk to your cluster, you must forward
## Add an Edge Node
-1. Log in to the console as `admin` and click **Platform** in the top left corner.
+1. Log in to the console as `admin` and click **Platform** in the top-left corner.
2. Select **Cluster Management** and navigate to **Edge Nodes** under **Node Management**.
diff --git a/content/en/docs/installing-on-linux/cluster-operation/add-new-nodes.md b/content/en/docs/installing-on-linux/cluster-operation/add-new-nodes.md
index c1e7b4833..866aa2f5d 100644
--- a/content/en/docs/installing-on-linux/cluster-operation/add-new-nodes.md
+++ b/content/en/docs/installing-on-linux/cluster-operation/add-new-nodes.md
@@ -76,7 +76,7 @@ You can skip this step if you already have the configuration file on your machin
## Add Master Nodes for High Availability
-The steps of adding master nodes are generally the same as adding worker nodes while you need to configure a load balancer for your cluster. You can use any cloud load balancers or hardware load balancers (e.g. F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating highly available clusters.
+The steps of adding master nodes are generally the same as adding worker nodes, except that you also need to configure a load balancer for your cluster. You can use any cloud load balancer or hardware load balancer (for example, F5). Alternatively, you can create a highly available cluster with Keepalived and [HAProxy](https://www.haproxy.com/), or with Nginx.
1. Create a configuration file using KubeKey.
diff --git a/content/en/docs/installing-on-linux/high-availability-configurations/ha-configuration.md b/content/en/docs/installing-on-linux/high-availability-configurations/ha-configuration.md
index 1f61c76d2..13f87b786 100644
--- a/content/en/docs/installing-on-linux/high-availability-configurations/ha-configuration.md
+++ b/content/en/docs/installing-on-linux/high-availability-configurations/ha-configuration.md
@@ -6,7 +6,7 @@ linkTitle: "Set up an HA Cluster Using a Load Balancer"
weight: 3210
---
-You can set up a single-master Kubernetes cluster with KubeSphere installed based on the tutorial of [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/). Single-master clusters may be sufficient for development and testing in most cases. For a production environment, however, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
+You can set up a single-master Kubernetes cluster with KubeSphere installed based on the tutorial of [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/). Single-master clusters may be sufficient for development and testing in most cases. For a production environment, however, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). Alternatively, you can create a high-availability cluster with Keepalived and [HAProxy](https://www.haproxy.com/), or with Nginx.
This tutorial demonstrates the general configurations of a high-availability cluster as you install KubeSphere on Linux.
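For reference, the HAProxy option mentioned above can be sketched as a minimal `haproxy.cfg` fragment that proxies kube-apiserver. The node addresses and the listening port are illustrative assumptions, not values from this guide:

```
frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    balance roundrobin
    server master1 192.168.0.2:6443 check
    server master2 192.168.0.3:6443 check
    server master3 192.168.0.4:6443 check
```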
@@ -163,7 +163,7 @@ For more information about different fields in this configuration file, see [Kub
### Persistent storage plugin configurations
-For a production environment, you need to prepare persistent storage and configure the storage plugin (e.g. CSI) in `config-sample.yaml` to define which storage service you want to use. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
+For a production environment, you need to prepare persistent storage and configure the storage plugin (for example, CSI) in `config-sample.yaml` to define which storage service you want to use. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
### Enable pluggable components (Optional)
diff --git a/content/en/docs/installing-on-linux/high-availability-configurations/set-up-ha-cluster-using-keepalived-haproxy.md b/content/en/docs/installing-on-linux/high-availability-configurations/set-up-ha-cluster-using-keepalived-haproxy.md
index 651240496..33eee9c18 100644
--- a/content/en/docs/installing-on-linux/high-availability-configurations/set-up-ha-cluster-using-keepalived-haproxy.md
+++ b/content/en/docs/installing-on-linux/high-availability-configurations/set-up-ha-cluster-using-keepalived-haproxy.md
@@ -168,7 +168,7 @@ Keepalived must be installed on both machines while the configuration of them is
- For the `interface` field, you must provide your own network card information. You can run `ifconfig` on your machine to get the value.
- - The IP address provided for `unicast_src_ip` is the IP address of your current machine. For other machines where HAproxy and Keepalived are also installed for load balancing, their IP address must be input for the field `unicast_peer`.
+ - The IP address provided for `unicast_src_ip` is the IP address of your current machine. For other machines where HAproxy and Keepalived are also installed for load balancing, their IP address must be provided for the field `unicast_peer`.
{{</ notice >}}
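To illustrate the two fields discussed in this hunk, here is a trimmed `keepalived.conf` sketch. The interface name and all IP addresses are illustrative assumptions:

```
vrrp_instance haproxy-vip {
    state BACKUP
    interface eth0
    virtual_router_id 60
    priority 100
    # The IP address of the current machine
    unicast_src_ip 172.16.0.2
    # The IP addresses of the peer machines where Keepalived and HAProxy are also installed
    unicast_peer {
        172.16.0.3
    }
    virtual_ipaddress {
        172.16.0.10/24
    }
}
```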
diff --git a/content/en/docs/installing-on-linux/introduction/multioverview.md b/content/en/docs/installing-on-linux/introduction/multioverview.md
index fc8ce629e..949038e4c 100644
--- a/content/en/docs/installing-on-linux/introduction/multioverview.md
+++ b/content/en/docs/installing-on-linux/introduction/multioverview.md
@@ -16,7 +16,7 @@ This section gives you an overview of a single-master multi-node installation, i
## Concept
-A multi-node cluster is composed of at least one master node and one worker node. You can use any node as the **taskbox** to carry out the installation task. You can add additional nodes based on your needs (e.g. for high availability) both before and after the installation.
+A multi-node cluster is composed of at least one master node and one worker node. You can use any node as the **taskbox** to carry out the installation task. You can add additional nodes based on your needs (for example, for high availability) both before and after the installation.
- **Master**. A master node generally hosts the control plane that controls and manages the whole system.
- **Worker**. Worker nodes run the actual applications deployed on them.
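In KubeKey's `config-sample.yaml`, these roles are assigned under `roleGroups`. A minimal sketch with illustrative hostnames:

```yaml
roleGroups:
  etcd:
  - master
  master:
  - master
  worker:
  - node1
  - node2
```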
@@ -177,7 +177,7 @@ Here are some examples for your reference:
./kk create config [-f ~/myfolder/abc.yaml]
```
-- You can specify a KubeSphere version that you want to install (e.g. `--with-kubesphere v3.1.0`).
+- You can specify a KubeSphere version that you want to install (for example, `--with-kubesphere v3.1.0`).
```bash
./kk create config --with-kubesphere [version]
@@ -219,7 +219,7 @@ List all your machines under `hosts` and add their detailed information as above
`name`: The hostname of the instance.
-`address`: The IP address you use for the connection between the taskbox and other instances through SSH. This can be either the public IP address or the private IP address depending on your environment. For example, some cloud platforms provide every instance with a public IP address which you use to access instances through SSH. In this case, you can input the public IP address for this field.
+`address`: The IP address you use for the connection between the taskbox and other instances through SSH. This can be either the public IP address or the private IP address depending on your environment. For example, some cloud platforms provide every instance with a public IP address which you use to access instances through SSH. In this case, you can provide the public IP address for this field.
`internalAddress`: The private IP address of the instance.
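The two address fields described above appear per host in `config-sample.yaml`. A sketch with illustrative values (a documentation-range public IP and a private IP):

```yaml
hosts:
- {name: master, address: 203.0.113.10, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
```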
@@ -278,7 +278,7 @@ The `controlPlaneEndpoint` is where you provide your external load balancer info
#### addons
-You can customize persistent storage plugins (e.g. NFS Client, Ceph RBD, and GlusterFS) by specifying storage under the field `addons` in `config-sample.yaml`. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
+You can customize persistent storage plugins (for example, NFS Client, Ceph RBD, and GlusterFS) by specifying storage under the field `addons` in `config-sample.yaml`. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
KubeKey will install [OpenEBS](https://openebs.io/) to provision [LocalPV](https://kubernetes.io/docs/concepts/storage/volumes/#local) for development and testing environment by default, which is convenient for new users. In this example of multi-node installation, the default storage class (local volume) is used. For production, you can use NFS/Ceph/GlusterFS/CSI or commercial products as persistent storage solutions.
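As an illustration of the `addons` field, an NFS client entry might be sketched as follows. The chart name, repo, and NFS server values are assumptions for illustration; see the linked Persistent Storage Configurations page for authoritative examples:

```yaml
addons:
- name: nfs-client
  namespace: kube-system
  sources:
    chart:
      name: nfs-client-provisioner
      repo: https://charts.kubesphere.io/main
      values:
      - nfs.server=192.168.0.100
      - nfs.path=/mnt/nfs
```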
diff --git a/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-bare-metal.md b/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-bare-metal.md
index 66307b77e..7804f72ec 100644
--- a/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-bare-metal.md
+++ b/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-bare-metal.md
@@ -244,7 +244,7 @@ chmod +x kk
With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file.
-Create a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v3.1.0`):
+Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.1.0`):
```bash
./kk create config --with-kubernetes v1.20.4 --with-kubesphere v3.1.0
diff --git a/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md b/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md
index b785d8cbf..e4e5385a8 100644
--- a/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md
+++ b/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md
@@ -9,7 +9,7 @@ weight: 3510
## Introduction
-For a production environment, we need to consider the high availability of the cluster. If the key components (e.g. kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). In addition, Keepalived and [HAProxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
+For a production environment, you need to consider the high availability of the cluster. If the key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). Alternatively, you can create a high-availability cluster with Keepalived and [HAProxy](https://www.haproxy.com/), or with Nginx.
This tutorial walks you through an example of how to create Keepalived and HAProxy, and implement high availability of master and etcd nodes using the load balancers on VMware vSphere.
@@ -77,7 +77,7 @@ You can follow the New Virtual Machine wizard to create a virtual machine to pla

-6. In **Ready to complete** page, you review the configuration selections that you have made for the virtual machine. Click **Finish** at the bottom right corner to continue.
+6. On the **Ready to complete** page, review the configuration selections that you have made for the virtual machine. Click **Finish** in the bottom-right corner to continue.

@@ -345,7 +345,7 @@ chmod +x kk
With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file.
-Create a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v3.1.0`):
+Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.1.0`):
```bash
./kk create config --with-kubernetes v1.19.8 --with-kubesphere v3.1.0
diff --git a/content/en/docs/installing-on-linux/persistent-storage-configurations/install-ceph-csi-rbd.md b/content/en/docs/installing-on-linux/persistent-storage-configurations/install-ceph-csi-rbd.md
index 34eada2fd..9656b5986 100644
--- a/content/en/docs/installing-on-linux/persistent-storage-configurations/install-ceph-csi-rbd.md
+++ b/content/en/docs/installing-on-linux/persistent-storage-configurations/install-ceph-csi-rbd.md
@@ -77,7 +77,7 @@ mountOptions:
#### Add-on configurations
-Save the above chart config and StorageClass locally (e.g. `/root/ceph-csi-rbd.yaml` and `/root/ceph-csi-rbd-sc.yaml`). The add-on configuration can be set like:
+Save the above chart config and StorageClass locally (for example, `/root/ceph-csi-rbd.yaml` and `/root/ceph-csi-rbd-sc.yaml`). The add-on configuration can be set as follows:
```yaml
addons:
@@ -115,7 +115,7 @@ If you want to configure more values, see [chart configuration for rbd-provision
#### Add-on configurations
-Save the above chart config locally (e.g. `/root/rbd-provisioner.yaml`). The add-on config for rbd provisioner cloud be like:
+Save the above chart config locally (for example, `/root/rbd-provisioner.yaml`). The add-on configuration for the rbd provisioner could be as follows:
```yaml
- name: rbd-provisioner
diff --git a/content/en/docs/installing-on-linux/persistent-storage-configurations/install-glusterfs.md b/content/en/docs/installing-on-linux/persistent-storage-configurations/install-glusterfs.md
index 25f635d08..573b3efbd 100644
--- a/content/en/docs/installing-on-linux/persistent-storage-configurations/install-glusterfs.md
+++ b/content/en/docs/installing-on-linux/persistent-storage-configurations/install-glusterfs.md
@@ -284,7 +284,7 @@ glusterfs (default) kubernetes.io/glusterfs Delete Immediate
### KubeSphere console
-1. Log in to the web console with the default account and password (`admin/P@88w0rd`) at `
}}
-Verify that you can use the **Auditing Operating** function from the **Toolbox** in the bottom right corner.
+Verify that you can use the **Auditing Operating** function from the **Toolbox** in the bottom-right corner.

diff --git a/content/en/docs/pluggable-components/devops.md b/content/en/docs/pluggable-components/devops.md
index bee829c69..283325f7f 100644
--- a/content/en/docs/pluggable-components/devops.md
+++ b/content/en/docs/pluggable-components/devops.md
@@ -8,7 +8,7 @@ weight: 6300
The KubeSphere DevOps System is designed for CI/CD workflows in Kubernetes. Based on [Jenkins](https://jenkins.io/), it provides one-stop solutions to help both development and Ops teams build, test and publish apps to Kubernetes in a straight-forward way. It also features plugin management, [Binary-to-Image (B2I)](../../project-user-guide/image-builder/binary-to-image/), [Source-to-Image (S2I)](../../project-user-guide/image-builder/source-to-image/), code dependency caching, code quality analysis, pipeline logging, etc.
-The DevOps System offers an enabling environment for users as apps can be automatically released to the same platform. It is also compatible with third-party private image registries (e.g. Harbor) and code repositories (e.g. GitLab/GitHub/SVN/BitBucket). As such, it creates excellent user experiences by providing users with comprehensive, visualized CI/CD pipelines which are extremely useful in air-gapped environments.
+The DevOps System offers an enabling environment for users as apps can be automatically released to the same platform. It is also compatible with third-party private image registries (for example, Harbor) and code repositories (for example, GitLab/GitHub/SVN/BitBucket). As such, it creates excellent user experiences by providing users with comprehensive, visualized CI/CD pipelines which are extremely useful in air-gapped environments.
For more information, see [DevOps User Guide](../../devops-user-guide/).
@@ -25,7 +25,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
```
{{< notice note >}}
-If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable DevOps in this mode (e.g. for testing purposes), refer to [the following section](#enable-devops-after-installation) to see how DevOps can be installed after installation.
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable DevOps in this mode (for example, for testing purposes), refer to [the following section](#enable-devops-after-installation) to see how DevOps can be installed after installation.
{{</ notice >}}
2. In this file, navigate to `devops` and change `false` to `true` for `enabled`. Save the file after you finish.
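For reference, the relevant portion of `config-sample.yaml` after the change is a snippet like this (surrounding fields omitted):

```yaml
devops:
  enabled: true   # set to true to install the DevOps component
```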
diff --git a/content/en/docs/pluggable-components/events.md b/content/en/docs/pluggable-components/events.md
index fb7497ccd..c5cf7977c 100644
--- a/content/en/docs/pluggable-components/events.md
+++ b/content/en/docs/pluggable-components/events.md
@@ -24,7 +24,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
{{< notice note >}}
-If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Events in this mode (e.g. for testing purposes), refer to [the following section](#enable-events-after-installation) to see how Events can be [installed after installation](#enable-events-after-installation).
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Events in this mode (for example, for testing purposes), refer to [the following section](#enable-events-after-installation) to see how Events can be [installed after installation](#enable-events-after-installation).
{{</ notice >}}
@@ -154,7 +154,7 @@ You can find the web kubectl tool by clicking on the right.
+| Built-in Roles | Description |
+| --- | --- |
+| viewer | The viewer who can view all resources in the project. |
+| operator | The maintainer of the project who can manage resources other than users and roles in the project. |
+| admin | The administrator in the project who can perform any action on any resource. It gives full control over all resources in the project. |
- 
-
-3. Select the authorization that you want this role to contain. For example, **Application Workloads View** in **Application Workloads**, and **Alerting Messages View** and **Alerting Policies View** in **Monitoring & Alerting** are selected for this role. Click **OK** to finish.
-
- 
-
- {{< notice note >}}
-
-**Depend on** means the major authorization (the one listed after **Depend on**) needs to be selected first so that the affiliated authorization can be assigned.
-
- {{</ notice >}}
-
-4. Newly-created roles will be listed in **Project Roles**. You can click the three dots on the right to edit it.
-
- 
-
- {{< notice note >}}
-
-The role of `project-monitor` is only granted limited permissions in **Monitoring & Alerting**, which may not satisfy your need. This example is only for demonstration purpose. You can create customized roles based on your needs.
-
- {{</ notice >}}
+ 
## Invite a New Member
-1. In **Project Settings**, select **Project Members** and click **Invite Member**.
-2. Invite a user to the project. Grant the role of `project-monitor` to the user.
+1. Navigate to **Project Members** under **Project Settings**, and click **Invite Member**.
- 
+2. Invite a user to the project by clicking
on the right of it and assign a role to it.
- {{< notice note >}}
+3. After you add the user to the project, click **OK**. In **Project Members**, you can see the user in the list.
-The user must be invited to the project's workspace first.
+4. To edit the role of an existing user or remove the user from the project, click
on the right and select the corresponding operation.
- {{</ notice >}}
-
-3. After you add a user to the project, click **OK**. In **Project Members**, you can see the newly invited member listed.
-
-4. You can also change the role of an existing member by editing it or remove it from the project.
-
- 
+ 
diff --git a/content/en/docs/project-user-guide/application-workloads/container-image-settings.md b/content/en/docs/project-user-guide/application-workloads/container-image-settings.md
index f861ce5b5..682217452 100644
--- a/content/en/docs/project-user-guide/application-workloads/container-image-settings.md
+++ b/content/en/docs/project-user-guide/application-workloads/container-image-settings.md
@@ -10,7 +10,7 @@ When you create Deployments, StatefulSets or DaemonSets, you need to specify a c
{{< notice tip >}}
-You can enable **Edit Mode** in the top right corner to see corresponding values in the manifest file (YAML format) of properties on the dashboard.
+You can enable **Edit Mode** in the top-right corner to see corresponding values in the manifest file (YAML format) of properties on the dashboard.
{{</ notice >}}
@@ -30,17 +30,17 @@ After you click **Add Container Image**, you will see an image as below.
#### Image Search Bar
-You can click the cube icon on the right to select an image from the list or input an image name to search it. KubeSphere provides Docker Hub images and your private image repository. If you want to use your private image repository, you need to create an Image Registry Secret first in **Secrets** under **Configurations**.
+You can click the cube icon on the right to select an image from the list or enter an image name to search it. KubeSphere provides Docker Hub images and your private image repository. If you want to use your private image repository, you need to create an Image Registry Secret first in **Secrets** under **Configurations**.
{{< notice note >}}
-Remember to press **Enter** on your keyboard after you input an image name in the search bar.
+Remember to press **Enter** on your keyboard after you enter an image name in the search bar.
{{</ notice >}}
#### Image Tag
-You can input a tag like `imagename:tag`. If you do not specify it, it will default to the latest version.
+You can enter a tag like `imagename:tag`. If you do not specify it, it will default to the latest version.
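In the generated manifest, the image name and tag form a single `image` field. A sketch with a hypothetical container name:

```yaml
containers:
  - name: container-nginx   # hypothetical name for illustration
    image: nginx:1.21       # "nginx" without a tag is pulled as nginx:latest
```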
#### Container Name
@@ -274,7 +274,7 @@ A security context defines privilege and access control settings for a Pod or Co
### Deployment Mode
-You can select different deployment modes to switch between inter-pod affinity and inter-pod anti-affinity. In Kubernetes, inter-pod affinity is specified as field `podAffinity` of field `affinity` while inter-pod anti-affinity is specified as field `podAntiAffinity` of field `affinity`. In KubeSphere, both `podAffinity` and `podAntiAffinity` are set to `preferredDuringSchedulingIgnoredDuringExecution`. You can enable **Edit Mode** in the top right corner to see field details.
+You can select different deployment modes to switch between inter-pod affinity and inter-pod anti-affinity. In Kubernetes, inter-pod affinity is specified as field `podAffinity` of field `affinity` while inter-pod anti-affinity is specified as field `podAntiAffinity` of field `affinity`. In KubeSphere, both `podAffinity` and `podAntiAffinity` are set to `preferredDuringSchedulingIgnoredDuringExecution`. You can enable **Edit Mode** in the top-right corner to see field details.
- **Pod Decentralized Deployment** represents anti-affinity.
- **Pod Aggregation Deployment** represents affinity.
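For example, **Pod Decentralized Deployment** corresponds to a soft anti-affinity rule along these lines (the label selector is a placeholder; swap `podAntiAffinity` for `podAffinity` to get the aggregation mode):

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: demo                           # placeholder label
          topologyKey: kubernetes.io/hostname     # spread Pods across nodes
```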
diff --git a/content/en/docs/project-user-guide/application-workloads/cronjobs.md b/content/en/docs/project-user-guide/application-workloads/cronjobs.md
index fb579a451..928e7f564 100644
--- a/content/en/docs/project-user-guide/application-workloads/cronjobs.md
+++ b/content/en/docs/project-user-guide/application-workloads/cronjobs.md
@@ -22,7 +22,7 @@ Log in to the console as `project-regular`. Go to **Jobs** of a project, choose

-### Step 2: Input basic information
+### Step 2: Enter basic information
Enter the basic information. You can refer to the image below for each field. When you finish, click **Next**.
@@ -30,7 +30,7 @@ Enter the basic information. You can refer to the image below for each field. Wh
- **Name**: The name of the CronJob, which is also the unique identifier.
- **Alias**: The alias name of the CronJob, making resources easier to identify.
-- **Schedule**: It runs a Job periodically on a given time-based schedule. Please see [CRON](https://en.wikipedia.org/wiki/Cron) for grammar reference. Some preset CRON statements are provided in KubeSphere to simplify the input. This field is specified by `.spec.schedule`. For this CronJob, input `*/1 * * * *`, which means it runs once per minute.
+- **Schedule**: It runs a Job periodically on a given time-based schedule. See [CRON](https://en.wikipedia.org/wiki/Cron) for syntax reference. Some preset CRON statements are provided in KubeSphere to simplify the input. This field is specified by `.spec.schedule`. For this CronJob, enter `*/1 * * * *`, which means it runs once per minute.
| Type | CRON |
| ----------- | ----------- |
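In the resulting CronJob manifest, the schedule appears under `.spec.schedule`, as in this sketch (the name is hypothetical; older clusters use `batch/v1beta1`):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: demo-cronjob          # hypothetical name
spec:
  schedule: "*/1 * * * *"     # minute hour day-of-month month day-of-week
```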
@@ -51,7 +51,7 @@ Enter the basic information. You can refer to the image below for each field. Wh
{{< notice note >}}
-You can enable **Edit Mode** in the top right corner to see the YAML manifest of this CronJob.
+You can enable **Edit Mode** in the top-right corner to see the YAML manifest of this CronJob.
{{</ notice >}}
@@ -61,7 +61,7 @@ Please refer to [Jobs](../jobs/#step-3-job-settings-optional).
### Step 4: Set an image
-1. Click **Add Container Image** in **Container Image** and input `busybox` in the search bar.
+1. Click **Add Container Image** in **Container Image** and enter `busybox` in the search bar.

diff --git a/content/en/docs/project-user-guide/application-workloads/daemonsets.md b/content/en/docs/project-user-guide/application-workloads/daemonsets.md
index cb5167e62..c18af14f0 100644
--- a/content/en/docs/project-user-guide/application-workloads/daemonsets.md
+++ b/content/en/docs/project-user-guide/application-workloads/daemonsets.md
@@ -30,9 +30,9 @@ Log in to the console as `project-regular`. Go to **Application Workloads** of a

-### Step 2: Input basic information
+### Step 2: Enter basic information
-Specify a name for the DaemonSet (e.g. `demo-daemonset`) and click **Next** to continue.
+Specify a name for the DaemonSet (for example, `demo-daemonset`) and click **Next** to continue.

@@ -42,13 +42,13 @@ Specify a name for the DaemonSet (e.g. `demo-daemonset`) and click **Next** to c

-2. Input an image name from public Docker Hub or from a [private repository](../../configuration/image-registry/) you specified. For example, input `fluentd` in the search bar and press **Enter**.
+2. Enter an image name from public Docker Hub or from a [private repository](../../configuration/image-registry/) you specified. For example, enter `fluentd` in the search bar and press **Enter**.

{{< notice note >}}
-- Remember to press **Enter** on your keyboard after you input an image name in the search bar.
+- Remember to press **Enter** on your keyboard after you enter an image name in the search bar.
- If you want to use your private image repository, you should [create an Image Registry Secret](../../configuration/image-registry/) first in **Secrets** under **Configurations**.
{{</ notice >}}
@@ -61,7 +61,7 @@ Specify a name for the DaemonSet (e.g. `demo-daemonset`) and click **Next** to c
5. Select a policy for image pulling from the drop-down menu. For more information, see [Image Pull Policy in Container Image Settings](../container-image-settings/#add-container-image).
-6. For other settings (**Health Checker**, **Start Command**, **Environment Variables**, **Container Security Context** and **Sync Host Timezone**), you can configure them on the dashboard as well. For more information, see detailed explanations of these properties in [Container Image Settings](../container-image-settings/#add-container-image). When you finish, click **√** in the bottom right corner to continue.
+6. For other settings (**Health Checker**, **Start Command**, **Environment Variables**, **Container Security Context** and **Sync Host Timezone**), you can configure them on the dashboard as well. For more information, see detailed explanations of these properties in [Container Image Settings](../container-image-settings/#add-container-image). When you finish, click **√** in the bottom-right corner to continue.
7. Select an update strategy from the drop-down menu. It is recommended you choose **RollingUpdate**. For more information, see [Update Strategy](../container-image-settings/#update-strategy).
diff --git a/content/en/docs/project-user-guide/application-workloads/deployments.md b/content/en/docs/project-user-guide/application-workloads/deployments.md
index b4671c2ec..bdb31788e 100644
--- a/content/en/docs/project-user-guide/application-workloads/deployments.md
+++ b/content/en/docs/project-user-guide/application-workloads/deployments.md
@@ -23,9 +23,9 @@ Log in to the console as `project-regular`. Go to **Application Workloads** of a

-### Step 2: Input basic information
+### Step 2: Enter basic information
-Specify a name for the Deployment (e.g. `demo-deployment`) and click **Next** to continue.
+Specify a name for the Deployment (for example, `demo-deployment`) and click **Next** to continue.

@@ -34,7 +34,7 @@ Specify a name for the Deployment (e.g. `demo-deployment`) and click **Next** to
1. Before you set an image, define the number of replicated Pods in **Pod Replicas** by clicking the **plus** or **minus** icon, which is indicated by the `.spec.replicas` field in the manifest file.
{{< notice tip >}}
-You can see the Deployment manifest file in YAML format by enabling **Edit Mode** in the top right corner. KubeSphere allows you to edit the manifest file directly to create a Deployment. Alternatively, you can follow the steps below to create a Deployment via the dashboard.
+You can see the Deployment manifest file in YAML format by enabling **Edit Mode** in the top-right corner. KubeSphere allows you to edit the manifest file directly to create a Deployment. Alternatively, you can follow the steps below to create a Deployment via the dashboard.
{{</ notice >}}

@@ -43,13 +43,13 @@ You can see the Deployment manifest file in YAML format by enabling **Edit Mode*

-3. Input an image name from public Docker Hub or from a [private repository](../../configuration/image-registry/) you specified. For example, input `nginx` in the search bar and press **Enter**.
+3. Enter an image name from public Docker Hub or from a [private repository](../../configuration/image-registry/) you specified. For example, enter `nginx` in the search bar and press **Enter**.

{{< notice note >}}
-- Remember to press **Enter** on your keyboard after you input an image name in the search bar.
+- Remember to press **Enter** on your keyboard after you enter an image name in the search bar.
- If you want to use your private image repository, you should [create an Image Registry Secret](../../configuration/image-registry/) first in **Secrets** under **Configurations**.
{{</ notice >}}
@@ -62,7 +62,7 @@ You can see the Deployment manifest file in YAML format by enabling **Edit Mode*
6. Select a policy for image pulling from the drop-down menu. For more information, see [Image Pull Policy in Container Image Settings](../container-image-settings/#add-container-image).
-7. For other settings (**Health Checker**, **Start Command**, **Environment Variables**, **Container Security Context** and **Sync Host Timezone**), you can configure them on the dashboard as well. For more information, see detailed explanations of these properties in [Container Image Settings](../container-image-settings/#add-container-image). When you finish, click **√** in the bottom right corner to continue.
+7. For other settings (**Health Checker**, **Start Command**, **Environment Variables**, **Container Security Context** and **Sync Host Timezone**), you can configure them on the dashboard as well. For more information, see detailed explanations of these properties in [Container Image Settings](../container-image-settings/#add-container-image). When you finish, click **√** in the bottom-right corner to continue.
8. Select an update strategy from the drop-down menu. It is recommended you choose **RollingUpdate**. For more information, see [Update Strategy](../container-image-settings/#update-strategy).
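As a sketch, choosing **RollingUpdate** maps to the Deployment's `.spec.strategy` field; the values below are the Kubernetes defaults:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 25%   # Pods allowed to be unavailable during the update
    maxSurge: 25%         # extra Pods allowed above the desired replica count
```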
diff --git a/content/en/docs/project-user-guide/application-workloads/horizontal-pod-autoscaling.md b/content/en/docs/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
index 3b19dad50..8d25b0b17 100755
--- a/content/en/docs/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
+++ b/content/en/docs/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
@@ -1,14 +1,14 @@
---
-title: "Horizontal Pod Autoscaling"
+title: "Kubernetes HPA (Horizontal Pod Autoscaling) on KubeSphere"
keywords: "Horizontal, Pod, Autoscaling, Autoscaler"
-description: "How to configure Horizontal Pod Autoscaling on KubeSphere."
+description: "How to configure Kubernetes Horizontal Pod Autoscaling on KubeSphere."
weight: 10290
---
This document describes how to configure Horizontal Pod Autoscaling (HPA) on KubeSphere.
-The HPA feature automatically adjusts the number of Pods to maintain average resource usage (CPU and memory) of Pods around preset values. For details about how HPA functions, see the [official Kubernetes document](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
+The Kubernetes HPA feature automatically adjusts the number of Pods to maintain average resource usage (CPU and memory) of Pods around preset values. For details about how HPA functions, see the [official Kubernetes document](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
This document uses HPA based on CPU usage as an example. Operations for HPA based on memory usage are similar.
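For reference, an HPA that keeps average CPU utilization around 50% can be sketched as follows (names and thresholds are illustrative; the `autoscaling/v2` API requires Kubernetes 1.23 or later):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-v1                  # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-v1                # the Deployment to scale
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # target average CPU usage
```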
@@ -50,7 +50,7 @@ This document uses HPA based on CPU usage as an example. Operations for HPA base
7. Click **Next** on the **Mount Volumes** tab and click **Create** on the **Advanced Settings** tab.
-## Configure HPA
+## Configure Kubernetes HPA
1. Choose **Deployments** in **Workloads** on the left navigation bar and click the HPA Deployment (for example, hpa-v1) on the right.
diff --git a/content/en/docs/project-user-guide/application-workloads/jobs.md b/content/en/docs/project-user-guide/application-workloads/jobs.md
index f5c28eaef..dd58e1e6e 100644
--- a/content/en/docs/project-user-guide/application-workloads/jobs.md
+++ b/content/en/docs/project-user-guide/application-workloads/jobs.md
@@ -25,7 +25,7 @@ Log in to the console as `project-regular`. Go to **Jobs** under **Application W

-### Step 2: Input basic information
+### Step 2: Enter basic information
Enter the basic information. Refer to the image below as an example.
@@ -62,7 +62,7 @@ You can set the values in this step as below or click **Next** to use the defaul

-3. On the same page, scroll down to **Start Command**. Input the following commands in the box which computes pi to 2000 places then prints it. Click **√** in the bottom right corner and select **Next** to continue.
+3. On the same page, scroll down to **Start Command**. Enter the following command in the box, which computes pi to 2000 places and then prints it. Click **√** in the bottom-right corner and select **Next** to continue.
```bash
perl,-Mbignum=bpi,-wle,print bpi(2000)
```
@@ -76,7 +76,7 @@ For more information about setting images, see [Container Image Settings](../con
### Step 5: Inspect the Job manifest (optional)
-1. Enable **Edit Mode** in the top right corner which displays the manifest file of the Job. You can see all the values are set based on what you have specified in the previous steps.
+1. Enable **Edit Mode** in the top-right corner, which displays the manifest file of the Job. You can see that all the values are set based on what you have specified in the previous steps.
```yaml
apiVersion: batch/v1
```
@@ -145,7 +145,7 @@ You can rerun the Job if it fails, the reason of which displays under **Messages
{{< notice tip >}}
-- In **Resource Status**, the Pod list provides the Pod's detailed information (e.g. creation time, node, Pod IP and monitoring data).
+- In **Resource Status**, the Pod list provides the Pod's detailed information (for example, creation time, node, Pod IP and monitoring data).
- You can view the container information by clicking the Pod.
- Click the container log icon to view the output logs of the container.
- You can view the Pod detail page by clicking the Pod name.
diff --git a/content/en/docs/project-user-guide/application-workloads/services.md b/content/en/docs/project-user-guide/application-workloads/services.md
index 518db927f..8d64fd906 100644
--- a/content/en/docs/project-user-guide/application-workloads/services.md
+++ b/content/en/docs/project-user-guide/application-workloads/services.md
@@ -78,7 +78,7 @@ The steps of creating a stateful Service and a stateless Service are basically t
{{</ notice >}}
-### Step 2: Input basic information
+### Step 2: Enter basic information
1. In the dialog that appears, you can see the field **Version** prepopulated with `v1`. You need to define a name for the Service, such as `demo-service`. When you finish, click **Next** to continue.
@@ -90,7 +90,7 @@ The steps of creating a stateful Service and a stateless Service are basically t
{{< notice tip >}}
-The value of **Name** is used in both configurations, one for Deployment and the other for Service. You can see the manifest file of the Deployment and the Service by enabling **Edit Mode** in the top right corner. Below is an example file for your reference.
+The value of **Name** is used in both configurations, one for Deployment and the other for Service. You can see the manifest file of the Deployment and the Service by enabling **Edit Mode** in the top-right corner. Below is an example file for your reference.
{{</ notice >}}
diff --git a/content/en/docs/project-user-guide/application-workloads/statefulsets.md b/content/en/docs/project-user-guide/application-workloads/statefulsets.md
index 0b06d8753..a55a93b66 100644
--- a/content/en/docs/project-user-guide/application-workloads/statefulsets.md
+++ b/content/en/docs/project-user-guide/application-workloads/statefulsets.md
@@ -35,9 +35,9 @@ Log in to the console as `project-regular`. Go to **Application Workloads** of a

-### Step 2: Input basic information
+### Step 2: Enter basic information
-Specify a name for the StatefulSet (e.g. `demo-stateful`) and click **Next** to continue.
+Specify a name for the StatefulSet (for example, `demo-stateful`) and click **Next** to continue.

@@ -47,7 +47,7 @@ Specify a name for the StatefulSet (e.g. `demo-stateful`) and click **Next** to
{{< notice tip >}}
-You can see the StatefulSet manifest file in YAML format by enabling **Edit Mode** in the top right corner. KubeSphere allows you to edit the manifest file directly to create a StatefulSet. Alternatively, you can follow the steps below to create a StatefulSet via the dashboard.
+You can see the StatefulSet manifest file in YAML format by enabling **Edit Mode** in the top-right corner. KubeSphere allows you to edit the manifest file directly to create a StatefulSet. Alternatively, you can follow the steps below to create a StatefulSet via the dashboard.
{{</ notice >}}
@@ -57,13 +57,13 @@ You can see the StatefulSet manifest file in YAML format by enabling **Edit Mode

-3. Input an image name from public Docker Hub or from a [private repository](../../configuration/image-registry/) you specified. For example, input `nginx` in the search bar and press **Enter**.
+3. Enter an image name from public Docker Hub or from a [private repository](../../configuration/image-registry/) you specified. For example, enter `nginx` in the search bar and press **Enter**.

{{< notice note >}}
-- Remember to press **Enter** on your keyboard after you input an image name in the search bar.
+- Remember to press **Enter** on your keyboard after you enter an image name in the search bar.
- If you want to use your private image repository, you should [create an Image Registry Secret](../../configuration/image-registry/) first in **Secrets** under **Configurations**.
{{</ notice >}}
@@ -76,7 +76,7 @@ You can see the StatefulSet manifest file in YAML format by enabling **Edit Mode
6. Select a policy for image pulling from the drop-down menu. For more information, see [Image Pull Policy in Container Image Settings](../container-image-settings/#add-container-image).
-7. For other settings (**Health Checker**, **Start Command**, **Environment Variables**, **Container Security Context** and **Sync Host Timezone**), you can configure them on the dashboard as well. For more information, see detailed explanations of these properties in [Container Image Settings](../container-image-settings/#add-container-image). When you finish, click **√** in the bottom right corner to continue.
+7. For other settings (**Health Checker**, **Start Command**, **Environment Variables**, **Container Security Context** and **Sync Host Timezone**), you can configure them on the dashboard as well. For more information, see detailed explanations of these properties in [Container Image Settings](../container-image-settings/#add-container-image). When you finish, click **√** in the bottom-right corner to continue.
8. Select an update strategy from the drop-down menu. It is recommended you choose **RollingUpdate**. For more information, see [Update Strategy](../container-image-settings/#update-strategy).
diff --git a/content/en/docs/project-user-guide/application/app-template.md b/content/en/docs/project-user-guide/application/app-template.md
index 58b924dc9..1464327e1 100644
--- a/content/en/docs/project-user-guide/application/app-template.md
+++ b/content/en/docs/project-user-guide/application/app-template.md
@@ -6,7 +6,7 @@ linkTitle: "App Templates"
weight: 10110
---
-An app template serves as a way for users to upload, deliver and manage apps. Generally, an app is composed of one or more Kubernetes workloads (e.g. [Deployments](../../../project-user-guide/application-workloads/deployments/), [StatefulSets](../../../project-user-guide/application-workloads/statefulsets/) and [DaemonSets](../../../project-user-guide/application-workloads/daemonsets/)) and [Services](../../../project-user-guide/application-workloads/services/) based on how it functions and communicates with the external environment. Apps that are uploaded as app templates are built based on a [Helm](https://helm.sh/) package.
+An app template serves as a way for users to upload, deliver and manage apps. Generally, an app is composed of one or more Kubernetes workloads (for example, [Deployments](../../../project-user-guide/application-workloads/deployments/), [StatefulSets](../../../project-user-guide/application-workloads/statefulsets/) and [DaemonSets](../../../project-user-guide/application-workloads/daemonsets/)) and [Services](../../../project-user-guide/application-workloads/services/) based on how it functions and communicates with the external environment. Apps that are uploaded as app templates are built based on a [Helm](https://helm.sh/) package.
## How App Templates Work
@@ -30,7 +30,7 @@ KubeSphere deploys app repository services based on [OpenPitrix](https://github.
## Why App Templates
-App templates enable users to deploy and manage apps in a visualized way. Internally, they play an important role as shared resources (e.g. databases, middleware and operating systems) created by enterprises for the coordination and cooperation within teams. Externally, app templates set industry standards of building and delivery. Users can take advantage of app templates in different scenarios to meet their own needs through one-click deployment.
+App templates enable users to deploy and manage apps in a visualized way. Internally, they play an important role as shared resources (for example, databases, middleware and operating systems) created by enterprises for the coordination and cooperation within teams. Externally, app templates set industry standards for building and delivering apps. Users can take advantage of app templates in different scenarios to meet their own needs through one-click deployment.
In addition, as OpenPitrix is integrated into KubeSphere to provide application management across the entire lifecycle, the platform allows ISVs, developers and regular users to all participate in the process. Backed by the multi-tenant system of KubeSphere, each tenant is only responsible for their own part, such as app uploading, app review, release, test, and version management. Ultimately, enterprises can build their own App Store and enrich their application pools with their customized standards. As such, apps can also be delivered in a standardized fashion.
diff --git a/content/en/docs/project-user-guide/application/compose-app.md b/content/en/docs/project-user-guide/application/compose-app.md
index e4bd9c27b..2037dd63c 100644
--- a/content/en/docs/project-user-guide/application/compose-app.md
+++ b/content/en/docs/project-user-guide/application/compose-app.md
@@ -19,7 +19,7 @@ This tutorial demonstrates how to create a microservices-based app Bookinfo, whi
1. Log in to the web console of KubeSphere and navigate to **Apps** in **Application Workloads** of your project. On the **Composing Apps** tab, click **Create Composing App**.
-2. Set a name for the app (e.g. `bookinfo`) and click **Next**.
+2. Set a name for the app (for example, `bookinfo`) and click **Next**.
3. On the **Components** page, you need to create microservices that compose the app. Click **Add Service** and select **Stateless Service**.
@@ -27,7 +27,7 @@ This tutorial demonstrates how to create a microservices-based app Bookinfo, whi
{{< notice note >}}
- You can create a Service on the dashboard directly or enable **Edit Mode** in the top right corner to edit the YAML file.
+ You can create a Service on the dashboard directly or enable **Edit Mode** in the top-right corner to edit the YAML file.
{{</ notice >}}
@@ -39,7 +39,7 @@ This tutorial demonstrates how to create a microservices-based app Bookinfo, whi
{{</ notice >}}
-6. Click **Use Default Ports**. For more information about image settings, see [Container Image Settings](../../../project-user-guide/application-workloads/container-image-settings/). Click **√** in the bottom right corner and **Next** to continue.
+6. Click **Use Default Ports**. For more information about image settings, see [Container Image Settings](../../../project-user-guide/application-workloads/container-image-settings/). Click **√** in the bottom-right corner and **Next** to continue.
7. On the **Mount Volumes** page, [add a volume](../../../project-user-guide/storage/volumes/) or click **Next** to continue.
@@ -55,7 +55,7 @@ This tutorial demonstrates how to create a microservices-based app Bookinfo, whi
10. When you finish adding microservices, click **Next**.
-11. On the **Internet Access** page, click **Add Route Rule**. On the **Specify Domain** tab, set a domain name for your app (e.g. `demo.bookinfo`) and select `http` in the **Protocol** field. For `Paths`, select the Service `productpage` and port `9080`. Click **OK** to continue.
+11. On the **Internet Access** page, click **Add Route Rule**. On the **Specify Domain** tab, set a domain name for your app (for example, `demo.bookinfo`) and select `http` in the **Protocol** field. For `Paths`, select the Service `productpage` and port `9080`. Click **OK** to continue.
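   The route rule above results in a Kubernetes Ingress along these lines (a sketch only; the exact resource KubeSphere generates, and the Ingress API version, depend on your cluster — the metadata name here is illustrative):

   ```yaml
   apiVersion: networking.k8s.io/v1
   kind: Ingress
   metadata:
     name: bookinfo-ingress        # illustrative name
   spec:
     rules:
     - host: demo.bookinfo         # the domain set on the Specify Domain tab
       http:
         paths:
         - path: /
           pathType: Prefix
           backend:
             service:
               name: productpage   # the Service selected for Paths
               port:
                 number: 9080
   ```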

diff --git a/content/en/docs/project-user-guide/application/deploy-app-from-appstore.md b/content/en/docs/project-user-guide/application/deploy-app-from-appstore.md
index 40116c60e..4f93a24c4 100644
--- a/content/en/docs/project-user-guide/application/deploy-app-from-appstore.md
+++ b/content/en/docs/project-user-guide/application/deploy-app-from-appstore.md
@@ -19,7 +19,7 @@ This tutorial demonstrates how to quickly deploy [NGINX](https://www.nginx.com/)
### Step 1: Deploy NGINX from the App Store
-1. On the **Overview** page of the project `demo-project`, click **App Store** in the top left corner.
+1. On the **Overview** page of the project `demo-project`, click **App Store** in the top-left corner.

diff --git a/content/en/docs/project-user-guide/application/deploy-app-from-template.md b/content/en/docs/project-user-guide/application/deploy-app-from-template.md
index 529494a34..044b8a8db 100644
--- a/content/en/docs/project-user-guide/application/deploy-app-from-template.md
+++ b/content/en/docs/project-user-guide/application/deploy-app-from-template.md
@@ -61,7 +61,7 @@ This tutorial demonstrates how to quickly deploy [Grafana](https://grafana.com/)
{{</ notice >}}
-4. Input `Grafana` in the search bar to find the app, and then click it to deploy it.
+4. Enter `Grafana` in the search bar to find the app, and then click it to deploy it.

diff --git a/content/en/docs/project-user-guide/configuration/configmaps.md b/content/en/docs/project-user-guide/configuration/configmaps.md
index 7293b32b5..76ad5b37d 100644
--- a/content/en/docs/project-user-guide/configuration/configmaps.md
+++ b/content/en/docs/project-user-guide/configuration/configmaps.md
@@ -20,75 +20,55 @@ You need to create a workspace, a project and an account (`project-regular`). Th
## Create a ConfigMap
-### Step 1: Open the dashboard
+1. Log in to the console as `project-regular`. Go to **Configurations** of a project, choose **ConfigMaps** and click **Create**.
-Log in to the console as `project-regular`. Go to **Configurations** of a project, choose **ConfigMaps** and click **Create**.
+2. In the dialog that appears, specify a name for the ConfigMap (for example, `demo-configmap`) and click **Next** to continue.
-
+ {{< notice tip >}}
-### Step 2: Input basic information
-
-Specify a name for the ConfigMap (e.g. `demo-configmap`) and click **Next** to continue.
-
-{{< notice tip >}}
-
-You can see the ConfigMap manifest file in YAML format by enabling **Edit Mode** in the top right corner. KubeSphere allows you to edit the manifest file directly to create a ConfigMap. Alternatively, you can follow the steps below to create a ConfigMap via the dashboard.
+You can see the ConfigMap manifest file in YAML format by enabling **Edit Mode** in the top-right corner. KubeSphere allows you to edit the manifest file directly to create a ConfigMap. Alternatively, you can follow the steps below to create a ConfigMap via the dashboard.
{{</ notice >}}
-
+3. On the **ConfigMap Settings** tab, configure values by clicking **Add Data**.
-### Step 3: Input configuration values
+4. Enter a key-value pair. For example:
-1. Under the tab **ConfigMap Settings**, configure values by clicking **Add Data**.
-
- 
-
-2. Input a key-value pair. For example:
-
- 
+ 
{{< notice note >}}
- - key-value pairs displays under the field `data` in the manifest.
+- Key-value pairs display under the field `data` in the manifest.
- - On the KubeSphere dashboard, you can only add key-value pairs for a ConfigMap currently. In future releases, you will be able to add a path to a directory containing configuration files to create ConfigMaps directly on the dashboard.
+- On the KubeSphere dashboard, you can only add key-value pairs for a ConfigMap currently. In future releases, you will be able to add a path to a directory containing configuration files to create ConfigMaps directly on the dashboard.
- {{</ notice >}}
+{{</ notice >}}
-3. Click **√** in the bottom right corner to save it and click **Add Data** again if you want to add more key-value pairs.
+5. Click **√** in the bottom-right corner to save it and click **Add Data** again if you want to add more key-value pairs.
- 
+6. Click **Create** to generate the ConfigMap.
-4. When you finish, click **Create** to generate the ConfigMap.
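   The ConfigMap produced by the steps above would look roughly like the following manifest (the namespace and the key-value pair under `data` are illustrative, not from this guide):

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: demo-configmap
     namespace: demo-project     # the project you created it in
   data:
     # Each key-value pair added on the dashboard lands under `data`
     sample.key: sample-value
   ```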
+## View ConfigMap Details
-## Check ConfigMap Details
-
-1. After a ConfigMap is created, it displays in the list as below. You can click the three dots on the right and select the operation from the menu to modify it.
-
- 
+1. After a ConfigMap is created, it displays on the **ConfigMaps** page. You can click the three dots on the right and select an operation from the drop-down list.
- **Edit**: View and edit the basic information.
- **Edit YAML**: View, upload, download, or update the YAML file.
- **Modify Config**: Modify the key-value pair of the ConfigMap.
- **Delete**: Delete the ConfigMap.
+
+2. Click the name of the ConfigMap to go to its detail page. Under the tab **Detail**, you can see all the key-value pairs you have added for the ConfigMap.
-2. Click the name of the ConfigMap and you can go to its detail page. Under the tab **Detail**, you can see all the key-value pairs you have added for the ConfigMap.
-
- 
+ 
3. Click **More** to display the operations you can perform on this ConfigMap.
- 
-
- **Edit YAML**: View, upload, download, or update the YAML file.
- **Modify Config**: Modify the key-value pair of the ConfigMap.
- **Delete**: Delete the ConfigMap, and return to the list page.
-4. Click the **Edit Info** to view and edit the basic information.
+4. Click **Edit Information** to view and edit the basic information.
- 
-
## Use a ConfigMap
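
A workload can consume the ConfigMap created above, for example as container environment variables. A minimal Pod spec fragment (the ConfigMap name matches the earlier example, but the key and variable name are assumptions for illustration):

```yaml
# Pod spec fragment: inject a ConfigMap key as an environment variable
containers:
- name: app
  image: nginx:1.21
  env:
  - name: SAMPLE_KEY
    valueFrom:
      configMapKeyRef:
        name: demo-configmap   # the ConfigMap created earlier
        key: sample.key        # hypothetical key
```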
diff --git a/content/en/docs/project-user-guide/configuration/image-registry.md b/content/en/docs/project-user-guide/configuration/image-registry.md
index a9bfd820c..de11817c0 100644
--- a/content/en/docs/project-user-guide/configuration/image-registry.md
+++ b/content/en/docs/project-user-guide/configuration/image-registry.md
@@ -24,13 +24,13 @@ Log in to the web console of KubeSphere as `project-regular`. Go to **Configurat

-### Step 2: Input basic information
+### Step 2: Enter basic information
-Specify a name for the Secret (e.g. `demo-registry-secret`) and click **Next** to continue.
+Specify a name for the Secret (for example, `demo-registry-secret`) and click **Next** to continue.
{{< notice tip >}}
-You can see the Secret's manifest file in YAML format by enabling **Edit Mode** in the top right corner. KubeSphere allows you to edit the manifest file directly to create a Secret. Alternatively, you can follow the steps below to create a Secret via the dashboard.
+You can see the Secret's manifest file in YAML format by enabling **Edit Mode** in the top-right corner. KubeSphere allows you to edit the manifest file directly to create a Secret. Alternatively, you can follow the steps below to create a Secret via the dashboard.
{{</ notice >}}
@@ -49,7 +49,7 @@ Select **Image Registry Secret** for **Type**. To use images from your private r
#### Add the Docker Hub registry
-1. Before you add your image registry in [Docker Hub](https://hub.docker.com/), make sure you have an available Docker Hub account. On the **Secret Settings** page, input `docker.io` for **Registry Address** and enter your Docker ID and password for **User Name** and **Password**. Click **Validate** to check whether the address is available.
+1. Before you add your image registry in [Docker Hub](https://hub.docker.com/), make sure you have an available Docker Hub account. On the **Secret Settings** page, enter `docker.io` for **Registry Address** and enter your Docker ID and password for **User Name** and **Password**. Click **Validate** to check whether the address is available.

@@ -89,7 +89,7 @@ Select **Image Registry Secret** for **Type**. To use images from your private r
sudo systemctl restart docker
```
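   The restart above picks up a change to the Docker daemon configuration. For a Harbor instance served over plain HTTP, that change is typically along these lines in `/etc/docker/daemon.json` (the address is a placeholder, not from this guide):

   ```json
   {
     "insecure-registries": ["203.0.113.10:30002"]
   }
   ```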
-3. Go back to the **Secret Settings** page and select **Image Registry Secret** for **Type**. Input your Harbor IP address for **Registry Address** and enter the username and password.
+3. Go back to the **Secret Settings** page and select **Image Registry Secret** for **Type**. Enter your Harbor IP address for **Registry Address** and enter the username and password.

diff --git a/content/en/docs/project-user-guide/configuration/secrets.md b/content/en/docs/project-user-guide/configuration/secrets.md
index ba3d75477..2dcaad852 100644
--- a/content/en/docs/project-user-guide/configuration/secrets.md
+++ b/content/en/docs/project-user-guide/configuration/secrets.md
@@ -26,13 +26,13 @@ Log in to the console as `project-regular`. Go to **Configurations** of a projec

-### Step 2: Input basic information
+### Step 2: Enter basic information
-Specify a name for the Secret (e.g. `demo-secret`) and click **Next** to continue.
+Specify a name for the Secret (for example, `demo-secret`) and click **Next** to continue.
{{< notice tip >}}
-You can see the Secret's manifest file in YAML format by enabling **Edit Mode** in the top right corner. KubeSphere allows you to edit the manifest file directly to create a Secret. Alternatively, you can follow the steps below to create a Secret via the dashboard.
+You can see the Secret's manifest file in YAML format by enabling **Edit Mode** in the top-right corner. KubeSphere allows you to edit the manifest file directly to create a Secret. Alternatively, you can follow the steps below to create a Secret via the dashboard.
{{</ notice >}}
@@ -46,7 +46,7 @@ You can see the Secret's manifest file in YAML format by enabling **Edit Mode**
{{< notice note >}}
- For all Secret types, values for all keys under the field `data` in the manifest must be base64-encoded strings. After you specify values on the KubeSphere dashboard, KubeSphere converts them into corresponding base64 character values in the YAML file. For example, if you input `password` and `hello123` for **Key** and **Value** respectively on the **Edit Data** page when you create the default type of Secret, the actual value displaying in the YAML file is `aGVsbG8xMjM=` (i.e. `hello123` in base64 format), automatically created by KubeSphere.
+ For all Secret types, values for all keys under the field `data` in the manifest must be base64-encoded strings. After you specify values on the KubeSphere dashboard, KubeSphere converts them into corresponding base64 character values in the YAML file. For example, if you enter `password` and `hello123` for **Key** and **Value** respectively on the **Edit Data** page when you create the default type of Secret, the actual value displayed in the YAML file is `aGVsbG8xMjM=` (that is, `hello123` in base64 format), automatically created by KubeSphere.
{{</ notice >}}
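You can reproduce the encoding KubeSphere performs with the `base64` utility (GNU coreutils assumed; `printf '%s'` avoids encoding a trailing newline):

```shell
# Encode the Secret value the same way KubeSphere does
# before writing it into the manifest's `data` field:
printf '%s' 'hello123' | base64
# aGVsbG8xMjM=

# Decode it to verify the round trip:
printf '%s' 'aGVsbG8xMjM=' | base64 -d
# hello123
```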
@@ -66,17 +66,17 @@ You can see the Secret's manifest file in YAML format by enabling **Edit Mode**

- - **Custom**. You can input [any type of Secrets supported by Kubernetes](https://kubernetes.io/docs/concepts/configuration/secret/#secret-types) in the box. Click **Add Data** to add key-value pairs for it.
+ - **Custom**. You can enter [any type of Secrets supported by Kubernetes](https://kubernetes.io/docs/concepts/configuration/secret/#secret-types) in the box. Click **Add Data** to add key-value pairs for it.

-2. For this tutorial, select the default type of Secret. Click **Add Data** and input the **Key** (`MYSQL_ROOT_PASSWORD`) and **Value** (`123456`) as below to specify a Secret for MySQL.
+2. For this tutorial, select the default type of Secret. Click **Add Data** and enter the **Key** (`MYSQL_ROOT_PASSWORD`) and **Value** (`123456`) as below to specify a Secret for MySQL.


-3. Click **√** in the bottom right corner to confirm. You can continue to add key-value pairs to the Secret or click **Create** to finish the creation. For more information about how to use the Secret, see [Compose and Deploy WordPress](../../../quick-start/wordpress-deployment/#task-3-create-an-application).
+3. Click **√** in the bottom-right corner to confirm. You can continue to add key-value pairs to the Secret or click **Create** to finish the creation. For more information about how to use the Secret, see [Compose and Deploy WordPress](../../../quick-start/wordpress-deployment/#task-3-create-an-application).
## Check Secret Details
diff --git a/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-mysql.md b/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-mysql.md
index 50bb4afce..d397576fc 100644
--- a/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-mysql.md
+++ b/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-mysql.md
@@ -20,7 +20,7 @@ This tutorial walks you through an example of how to monitor MySQL metrics and v
To begin with, you [deploy MySQL from the App Store](../../../../application-store/built-in-apps/mysql-app/) and set the root password to `testing`.
-1. Go to the project `demo` and click **App Store** in the top left corner.
+1. Go to the project `demo` and click **App Store** in the top-left corner.

@@ -61,7 +61,7 @@ You need to deploy MySQL exporter in `demo` on the same cluster. MySQL exporter

{{< notice warning >}}
-Don't forget to enable the ServiceMonitor CRD if you are using external exporter Helm charts. Those charts usually disable ServiceMonitor by default and require manual modification.
+Don't forget to enable the ServiceMonitor CRD if you are using external exporter Helm charts. Those charts usually disable ServiceMonitors by default and require manual modification.
{{</ notice >}}
4. Modify MySQL connection parameters. MySQL exporter needs to connect to the target MySQL. In this tutorial, MySQL is installed with the service name `mysql-a8xgvx`. Set `mysql.host` to `mysql-a8xgvx`, `mysql.pass` to `testing`, and `user` to `root` as below. Note that your MySQL service may be created with **a different name**.
@@ -86,7 +86,7 @@ After about two minutes, you can create a monitoring dashboard for MySQL and vis

-3. Save the template by clicking **Save Template** in the top right corner. A newly-created dashboard displays in the dashboard list as below.
+3. Save the template by clicking **Save Template** in the top-right corner. The newly created dashboard displays in the dashboard list as below.

diff --git a/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-sample-web.md b/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-sample-web.md
index 1a49949d0..aac17352b 100644
--- a/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-sample-web.md
+++ b/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-sample-web.md
@@ -11,7 +11,7 @@ This section walks you through monitoring a sample web application. The applicat
## Prerequisites
- Please make sure you [enable the OpenPitrix system](../../../../pluggable-components/app-store/).
-- You need to create a workspace, a project, and a user account for this tutorial. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../../quick-start/create-workspace-and-project/). The account needs to be a platform regular user and to be invited to the workspace with the `self-provisioner` role. Namely, create an account `workspace-self-provisioner` of the `self-provisioner` role, and use this account to create a project (e.g. `test`). In this tutorial, you log in as `workspace-self-provisioner` and work in the project `test` in the workspace `demo-workspace`.
+- You need to create a workspace, a project, and a user account for this tutorial. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../../quick-start/create-workspace-and-project/). The account needs to be a platform regular user and to be invited to the workspace with the `self-provisioner` role. Namely, create an account `workspace-self-provisioner` of the `self-provisioner` role, and use this account to create a project (for example, `test`). In this tutorial, you log in as `workspace-self-provisioner` and work in the project `test` in the workspace `demo-workspace`.
- Knowledge of Helm charts and [PromQL](https://prometheus.io/docs/prometheus/latest/querying/examples/).
@@ -27,7 +27,7 @@ In this tutorial, you use the made-ready image `kubespheredev/promethues-example
### Step 2: Pack the application into a Helm chart
-Pack the Deployment, Service, and ServiceMonitor YAML template into a Helm chart for reuse. In the Deployment and Service template, you define the sample web container and the port for the metrics endpoint. ServiceMonitor is a custom resource defined and used by Prometheus Operator. It connects your application and KubeSphere monitoring engine (Prometheus) so that the engine knows where and how to scrape metrics. In future releases, KubeSphere will provide a graphical user interface for easy operation.
+Pack the Deployment, Service, and ServiceMonitor YAML template into a Helm chart for reuse. In the Deployment and Service template, you define the sample web container and the port for the metrics endpoint. A ServiceMonitor is a custom resource defined and used by Prometheus Operator. It connects your application and KubeSphere monitoring engine (Prometheus) so that the engine knows where and how to scrape metrics. In future releases, KubeSphere will provide a graphical user interface for easy operation.
Find the source code in the folder `helm` in [kubesphere/prometheus-example-app](https://github.com/kubesphere/prometheus-example-app). The Helm chart package is made ready and is named `prometheus-example-app-0.1.0.tgz`. Please download the .tgz file and you will use it in the next step.
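A ServiceMonitor in such a chart would look roughly like the following (names, labels, and the scrape interval are illustrative, not copied from the repository; `port` must match the port name defined in the Service):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-app
  labels:
    app: prometheus-example-app
spec:
  endpoints:
  - port: metrics          # name of the Service port exposing /metrics
    interval: 30s
    path: /metrics
  selector:
    matchLabels:
      app: prometheus-example-app   # must match the Service's labels
```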
@@ -87,11 +87,11 @@ This section guides you on how to create a dashboard from scratch. You will crea

-2. Set a name (e.g. `sample-web`) and click **Create**.
+2. Set a name (for example, `sample-web`) and click **Create**.

-3. Enter a title in the top left corner (e.g. `Sample Web Overview`).
+3. Enter a title in the top-left corner (for example, `Sample Web Overview`).

@@ -99,7 +99,7 @@ This section guides you on how to create a dashboard from scratch. You will crea

-5. Type the PromQL expression `myapp_processed_ops_total` in the field **Monitoring Metrics** and give a chart name (e.g. `Operation Count`). Click **√** in the bottom right corner to continue.
+5. Type the PromQL expression `myapp_processed_ops_total` in the field **Monitoring Metrics** and give a chart name (for example, `Operation Count`). Click **√** in the bottom-right corner to continue.

diff --git a/content/en/docs/project-user-guide/custom-application-monitoring/introduction.md b/content/en/docs/project-user-guide/custom-application-monitoring/introduction.md
index 8dd20f9ac..1d305c8b0 100644
--- a/content/en/docs/project-user-guide/custom-application-monitoring/introduction.md
+++ b/content/en/docs/project-user-guide/custom-application-monitoring/introduction.md
@@ -40,11 +40,11 @@ Writing an exporter is nothing short of instrumenting an application with Promet
In the previous step, you expose metric endpoints in a Kubernetes Service object. Next, you need to inform the KubeSphere monitoring engine of your new changes.
-The ServiceMonitor CRD is defined by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator). ServiceMonitor contains information about the metrics endpoints. With ServiceMonitor objects, the KubeSphere monitoring engine knows where and how to scape metrics. For each monitoring target, you apply a ServiceMonitor object to hook your application (or exporters) up to KubeSphere.
+The ServiceMonitor CRD is defined by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator). A ServiceMonitor contains information about the metrics endpoints. With ServiceMonitor objects, the KubeSphere monitoring engine knows where and how to scrape metrics. For each monitoring target, you apply a ServiceMonitor object to hook your application (or exporters) up to KubeSphere.
-In KubeSphere v3.0.0, you need to pack ServiceMonitor with your applications (or exporters) into a Helm chart for reuse. In future releases, KubeSphere will provide graphical interfaces for easy operation.
+In KubeSphere v3.0.0, you need to pack a ServiceMonitor with your applications (or exporters) into a Helm chart for reuse. In future releases, KubeSphere will provide graphical interfaces for easy operation.
-Please read [Monitor a Sample Web Application](../examples/monitor-sample-web/) to learn how to pack ServiceMonitor with your application.
+Please read [Monitor a Sample Web Application](../examples/monitor-sample-web/) to learn how to pack a ServiceMonitor with your application.
### Step 3: Visualize Metrics
diff --git a/content/en/docs/project-user-guide/custom-application-monitoring/visualization/overview.md b/content/en/docs/project-user-guide/custom-application-monitoring/visualization/overview.md
index cbcb7780f..1103641ee 100644
--- a/content/en/docs/project-user-guide/custom-application-monitoring/visualization/overview.md
+++ b/content/en/docs/project-user-guide/custom-application-monitoring/visualization/overview.md
@@ -28,7 +28,7 @@ To start with a blank template, click **Create**.
### From a YAML file
-Toggle to **Edit Mode** in the top right corner then paste your dashboard YAML file.
+Toggle to **Edit Mode** in the top-right corner, and then paste your dashboard YAML file.

@@ -62,11 +62,11 @@ You can view chart details in the right-most column. It shows the **max**, **min
## Edit the monitoring dashboard
-You can modify an existing template by clicking **Edit Template** in the top right corner.
+You can modify an existing template by clicking **Edit Template** in the top-right corner.
### Add a chart
-To add text charts, click the **add icon** in the left column. To add charts in the middle column, click **Add Monitoring Item** in the bottom right corner.
+To add text charts, click the **add icon** in the left column. To add charts in the middle column, click **Add Monitoring Item** in the bottom-right corner.

diff --git a/content/en/docs/project-user-guide/custom-application-monitoring/visualization/panel.md b/content/en/docs/project-user-guide/custom-application-monitoring/visualization/panel.md
index bf8293104..fb874dfed 100644
--- a/content/en/docs/project-user-guide/custom-application-monitoring/visualization/panel.md
+++ b/content/en/docs/project-user-guide/custom-application-monitoring/visualization/panel.md
@@ -10,7 +10,7 @@ KubeSphere currently supports two kinds of charts: text charts and graphs.
## Text Chart
-A text chart is preferable for displaying a single metric value. The editing window for the text chart is composed of two parts. The upper part displays the real-time metric value, and the lower part is for editing. You can input a PromQL expression to fetch a single metric value.
+A text chart is preferable for displaying a single metric value. The editing window for the text chart is composed of two parts. The upper part displays the real-time metric value, and the lower part is for editing. You can enter a PromQL expression to fetch a single metric value.
- **Chart Name**: The name of the text chart.
- **Unit**: The metric data unit.
diff --git a/content/en/docs/project-user-guide/custom-application-monitoring/visualization/querying.md b/content/en/docs/project-user-guide/custom-application-monitoring/visualization/querying.md
index 3d40058be..05c3f153d 100644
--- a/content/en/docs/project-user-guide/custom-application-monitoring/visualization/querying.md
+++ b/content/en/docs/project-user-guide/custom-application-monitoring/visualization/querying.md
@@ -6,7 +6,7 @@ linkTitle: "Querying"
weight: 10817
---
-In the query editor, you can input PromQL expressions to process and fetch metrics. To learn how to write PromQL, read [Query Examples](https://prometheus.io/docs/prometheus/latest/querying/examples/).
+In the query editor, you can enter PromQL expressions to process and fetch metrics. To learn how to write PromQL, read [Query Examples](https://prometheus.io/docs/prometheus/latest/querying/examples/).
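
For instance, a counter metric can be turned into a rate or aggregated across pods (the metric name `myapp_processed_ops_total` comes from the sample web application used earlier in this guide; the 5-minute window is an arbitrary choice):

```
# Per-second rate of operations over the last 5 minutes
rate(myapp_processed_ops_total[5m])

# Total operation count summed across all pods
sum(myapp_processed_ops_total)
```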

diff --git a/content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md b/content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md
index 695d26871..611529fe0 100644
--- a/content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md
+++ b/content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md
@@ -20,19 +20,13 @@ The blue-green release provides a zero downtime deployment, which means the new
## Create a Blue-green Deployment Job
-1. Log in to KubeSphere as `project-regular`. Under **Categories**, click **Create Job** on the right of **Blue-green Deployment**.
-
- 
+1. Log in to KubeSphere as `project-regular` and navigate to **Grayscale Release**. Under **Categories**, click **Create Job** on the right of **Blue-green Deployment**.
2. Set a name for it and click **Next**.
- 
+3. On the **Grayscale Release Components** tab, select your app from the drop-down list and the Service for which you want to implement the blue-green deployment. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
-3. Select your app from the drop-down list and the service for which you want to implement the blue-green deployment. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
-
- 
-
-4. On the **Grayscale Release Version** page, add another version of it (e.g `v2`) as shown in the image below and click **Next**:
+4. On the **Grayscale Release Version** tab, add another version (for example, `v2`) as shown in the image below and click **Next**:

@@ -42,9 +36,7 @@ The blue-green release provides a zero downtime deployment, which means the new
{{</ notice >}}
-5. To allow the app version `v2` to take over all the traffic, select **Take over all traffic** and click **Create**.
-
- 
+5. On the **Policy Config** tab, to allow the app version `v2` to take over all the traffic, select **Take over all traffic** and click **Create**.
6. The blue-green deployment job created displays under the tab **Job Status**. Click it to view details.
diff --git a/content/en/docs/project-user-guide/grayscale-release/canary-release.md b/content/en/docs/project-user-guide/grayscale-release/canary-release.md
index 46957a0cb..b2dce01d4 100644
--- a/content/en/docs/project-user-guide/grayscale-release/canary-release.md
+++ b/content/en/docs/project-user-guide/grayscale-release/canary-release.md
@@ -21,17 +21,11 @@ This method serves as an efficient way to test performance and reliability of a
## Step 1: Create a Canary Release Job
-1. Log in to KubeSphere as `project-regular`. Under **Categories**, click **Create Job** on the right of **Canary Release**.
-
- 
+1. Log in to KubeSphere as `project-regular` and navigate to **Grayscale Release**. Under **Categories**, click **Create Job** on the right of **Canary Release**.
2. Set a name for it and click **Next**.
- 
-
-3. Select your app from the drop-down list and the Service for which you want to implement the canary release. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
-
- 
+3. On the **Grayscale Release Components** tab, select your app from the drop-down list and the Service for which you want to implement the canary release. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
4. On the **Grayscale Release Version** tab, add another version of it (for example, `kubesphere/examples-bookinfo-reviews-v2:1.13.0`; change `v1` to `v2`) as shown in the image below and click **Next**:
@@ -43,7 +37,7 @@ This method serves as an efficient way to test performance and reliability of a
{{</ notice >}}
-5. You send traffic to these two versions (`v1` and `v2`) either by a specific percentage or by the request content such as `Http Header`, `Cookie` and `URI`. Select **Forward by traffic ratio** and drag the icon in the middle to change the percentage of traffic sent to these two versions respectively (e.g. set 50% for either one). When you finish, click **Create**.
+5. You send traffic to these two versions (`v1` and `v2`) either by a specific percentage or by the request content such as `Http Header`, `Cookie` and `URI`. Select **Forward by traffic ratio** and drag the icon in the middle to change the percentage of traffic sent to these two versions respectively (for example, set 50% for either one). When you finish, click **Create**.
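
   Under the hood, a traffic-ratio rule like this maps to Istio routing weights. A rough sketch of the generated VirtualService (field values are illustrative; KubeSphere creates and manages the actual resource):

   ```yaml
   apiVersion: networking.istio.io/v1alpha3
   kind: VirtualService
   metadata:
     name: reviews
   spec:
     hosts:
     - reviews
     http:
     - route:
       - destination:
           host: reviews
           subset: v1
         weight: 50      # 50% of traffic to v1
       - destination:
           host: reviews
           subset: v2
         weight: 50      # 50% of traffic to v2
   ```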

@@ -120,7 +114,7 @@ Now that you have two available app versions, access the app to verify the canar

-3. Click a component (e.g. **reviews**) and you can see the information of traffic monitoring on the right, displaying real-time data of **Traffic**, **Success rate** and **Duration**.
+3. Click a component (for example, **reviews**) and you can see the information of traffic monitoring on the right, displaying real-time data of **Traffic**, **Success rate** and **Duration**.

diff --git a/content/en/docs/project-user-guide/grayscale-release/traffic-mirroring.md b/content/en/docs/project-user-guide/grayscale-release/traffic-mirroring.md
index 85e86462e..9da3c73f5 100644
--- a/content/en/docs/project-user-guide/grayscale-release/traffic-mirroring.md
+++ b/content/en/docs/project-user-guide/grayscale-release/traffic-mirroring.md
@@ -16,19 +16,13 @@ Traffic mirroring, also called shadowing, is a powerful, risk-free method of tes
## Create a Traffic Mirroring Job
-1. Log in to KubeSphere as `project-regular`. Under **Categories**, click **Create Job** on the right of **Traffic Mirroring**.
-
- 
+1. Log in to KubeSphere as `project-regular` and navigate to **Grayscale Release**. Under **Categories**, click **Create Job** on the right of **Traffic Mirroring**.
2. Set a name for it and click **Next**.
- 
+3. On the **Grayscale Release Components** tab, select your app from the drop-down list and the Service of which you want to mirror the traffic. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
-3. Select your app from the drop-down list and the service of which you want to mirror the traffic. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
-
- 
-
-4. On the **Grayscale Release Version** page, add another version of it (e.g. `v2`) as shown in the image below and click **Next**:
+4. On the **Grayscale Release Version** tab, add another version of it (for example, `v2`) as shown in the image below and click **Next**:

@@ -38,9 +32,7 @@ Traffic mirroring, also called shadowing, is a powerful, risk-free method of tes
{{</ notice >}}
-5. Click **Create** in the final step.
-
- 
+5. On the **Policy Config** tab, click **Create**.
6. The traffic mirroring job created displays under the tab **Job Status**. Click it to view details.
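
   In Istio terms, a mirroring job like this roughly corresponds to a VirtualService that routes live traffic to `v1` while shadowing a copy of each request to `v2` (a sketch only; KubeSphere generates the actual resource):

   ```yaml
   apiVersion: networking.istio.io/v1alpha3
   kind: VirtualService
   metadata:
     name: reviews
   spec:
     hosts:
     - reviews
     http:
     - route:
       - destination:
           host: reviews
           subset: v1    # live traffic still served by v1
       mirror:
         host: reviews
         subset: v2      # mirrored requests; responses are discarded
   ```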
diff --git a/content/en/docs/project-user-guide/image-builder/binary-to-image.md b/content/en/docs/project-user-guide/image-builder/binary-to-image.md
index 3942e9c5a..05e8fddcc 100644
--- a/content/en/docs/project-user-guide/image-builder/binary-to-image.md
+++ b/content/en/docs/project-user-guide/image-builder/binary-to-image.md
@@ -63,7 +63,7 @@ You must create a Docker Hub Secret so that the Docker image created through B2I
**Target image repository**: Select the Docker Hub Secret as the image is pushed to Docker Hub.
-4. On the **Container Settings** page, scroll down to **Service Settings** to set the access policy for the container. Select **HTTP** for **Protocol**, customize the name (for example, `http-port`), and input `8080` for both **Container Port** and **Service Port**. Click **Next** to continue.
+4. On the **Container Settings** page, scroll down to **Service Settings** to set the access policy for the container. Select **HTTP** for **Protocol**, customize the name (for example, `http-port`), and enter `8080` for both **Container Port** and **Service Port**. Click **Next** to continue.
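The port mapping configured in this step corresponds to a standard Kubernetes Service definition. A minimal sketch of what such settings produce (the Service name and label selector here are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: b2i-demo            # hypothetical Service name
spec:
  selector:
    app: b2i-demo           # hypothetical label selector
  ports:
    - name: http-port       # the customized port name from the step above
      protocol: TCP
      port: 8080            # Service Port
      targetPort: 8080      # Container Port
```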

diff --git a/content/en/docs/project-user-guide/image-builder/s2i-and-b2i-webhooks.md b/content/en/docs/project-user-guide/image-builder/s2i-and-b2i-webhooks.md
index c1bcce012..d161a3b94 100644
--- a/content/en/docs/project-user-guide/image-builder/s2i-and-b2i-webhooks.md
+++ b/content/en/docs/project-user-guide/image-builder/s2i-and-b2i-webhooks.md
@@ -20,7 +20,7 @@ This tutorial demonstrates how to configure S2I and B2I webhooks.
### Step 1: Expose the S2I trigger Service
-1. Log in to the KubeSphere web console as `admin`. Click **Platform** in the top left corner and then select **Cluster Management**.
+1. Log in to the KubeSphere web console as `admin`. Click **Platform** in the top-left corner and then select **Cluster Management**.
2. In **Services** under **Application Workloads**, select **kubesphere-devops-system** from the drop-down list and click **s2ioperator-trigger-service** to go to its detail page.
diff --git a/content/en/docs/project-user-guide/image-builder/source-to-image.md b/content/en/docs/project-user-guide/image-builder/source-to-image.md
index 7400885b7..20fd9f861 100644
--- a/content/en/docs/project-user-guide/image-builder/source-to-image.md
+++ b/content/en/docs/project-user-guide/image-builder/source-to-image.md
@@ -77,7 +77,7 @@ You do not need to create the GitHub Secret if your forked repository is open to
**Advanced Settings**: You can define the code relative path. Use the default `/` for this field.
-4. On the **Container Settings** page, scroll down to **Service Settings** to set the access policy for the container. Select **HTTP** for **Protocol**, customize the name (for example, `http-1`), and input `8080` for both **Container Port** and **Service Port**.
+4. On the **Container Settings** page, scroll down to **Service Settings** to set the access policy for the container. Select **HTTP** for **Protocol**, customize the name (for example, `http-1`), and enter `8080` for both **Container Port** and **Service Port**.

@@ -85,7 +85,7 @@ You do not need to create the GitHub Secret if your forked repository is open to

- **HTTP Request**: Select **HTTP** as the protocol, enter `/` as the path (root path in this tutorial), and input `8080` as the port exposed.
+ **HTTP Request**: Select **HTTP** as the protocol, enter `/` as the path (root path in this tutorial), and enter `8080` as the port exposed.
**Initial Delays**: The number of seconds after the container has started before the liveness probe is initiated. Enter `30` for this field.
diff --git a/content/en/docs/project-user-guide/storage/volumes.md b/content/en/docs/project-user-guide/storage/volumes.md
index 9c77782f6..ce1b280c9 100644
--- a/content/en/docs/project-user-guide/storage/volumes.md
+++ b/content/en/docs/project-user-guide/storage/volumes.md
@@ -26,11 +26,11 @@ All the volumes that are created on the **Volumes** page are PersistentVolumeCla
2. To create a volume, click **Create** on the **Volumes** page.
-3. In the dialog that appears, set a name (e.g. `demo-volume`) for the volume and click **Next**.
+3. In the dialog that appears, set a name (for example, `demo-volume`) for the volume and click **Next**.
{{< notice note >}}
- You can see the volume's manifest file in YAML format by enabling **Edit Mode** in the top right corner. KubeSphere allows you to edit the manifest file directly to create a volume. Alternatively, you can follow the steps below to create a volume via the dashboard.
+ You can see the volume's manifest file in YAML format by enabling **Edit Mode** in the top-right corner. KubeSphere allows you to edit the manifest file directly to create a volume. Alternatively, you can follow the steps below to create a volume via the dashboard.
{{</ notice >}}
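The manifest shown in **Edit Mode** for a volume like `demo-volume` is a standard PersistentVolumeClaim. A minimal sketch (the namespace, access mode and capacity are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-volume
  namespace: demo-project    # assumed project namespace
spec:
  accessModes:
    - ReadWriteOnce          # assumed access mode
  resources:
    requests:
      storage: 10Gi          # assumed capacity
```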
diff --git a/content/en/docs/quick-start/all-in-one-on-linux.md b/content/en/docs/quick-start/all-in-one-on-linux.md
index 3daa7b77b..009fbddf2 100644
--- a/content/en/docs/quick-start/all-in-one-on-linux.md
+++ b/content/en/docs/quick-start/all-in-one-on-linux.md
@@ -1,5 +1,5 @@
---
-title: "All-in-One Installation on Linux"
+title: "All-in-One Installation of Kubernetes and KubeSphere on Linux"
keywords: 'KubeSphere, Kubernetes, All-in-one, Installation'
description: 'Install KubeSphere on Linux with a minimal installation package. The tutorial serves as a basic kick-starter for you to understand the container platform, paving the way for learning the following guides.'
linkTitle: "All-in-One Installation on Linux"
@@ -159,15 +159,15 @@ To create a Kubernetes cluster with KubeSphere installed, refer to the following
{{</ notice >}}
-After you execute the command, you will see a table for environment check. For details, read [Node requirements](#node-requirements) and [Dependency requirements](#dependency-requirements) above. Input `yes` to continue.
+After you execute the command, you will see a table for environment check. For details, read [Node requirements](#node-requirements) and [Dependency requirements](#dependency-requirements) above. Type `yes` to continue.
## Step 4: Verify the Installation
-When you see the output as below, it means the installation finishes.
+When you see the output as below, it means the installation of Kubernetes and KubeSphere is complete.

-Input the following command to check the result.
+Run the following command to check the result.
```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
diff --git a/content/en/docs/quick-start/create-workspace-and-project.md b/content/en/docs/quick-start/create-workspace-and-project.md
index dd718d57c..0aa289b96 100644
--- a/content/en/docs/quick-start/create-workspace-and-project.md
+++ b/content/en/docs/quick-start/create-workspace-and-project.md
@@ -33,7 +33,7 @@ After KubeSphere is installed, you need to add different users with varied roles
1. Log in to the web console as `admin` with the default account and password (`admin/P@88w0rd`).
{{< notice tip >}}
- For account security, it is highly recommended that you change your password the first time you log in to the console. To change your password, select **User Settings** in the drop-down menu in the top right corner. In **Password Setting**, set a new password. You also can change the console language in **User Settings**.
+ For account security, it is highly recommended that you change your password the first time you log in to the console. To change your password, select **User Settings** in the drop-down menu in the top-right corner. In **Password Setting**, set a new password. You also can change the console language in **User Settings**.
{{</ notice >}}

@@ -126,7 +126,7 @@ In this step, you create a project using the account `project-admin` created in

-2. Enter the project name (e.g. `demo-project`) and click **OK** to finish. You can also add an alias and description for the project.
+2. Enter the project name (for example, `demo-project`) and click **OK** to finish. You can also add an alias and description for the project.

@@ -134,7 +134,7 @@ In this step, you create a project using the account `project-admin` created in

-4. On the **Overview** page of the project, the project quota remains unset by default. You can click **Set** and specify [resource requests and limits](../../workspace-administration/project-quotas/) as needed (e.g. 1 core for CPU and 1000Gi for memory).
+4. On the **Overview** page of the project, the project quota remains unset by default. You can click **Set** and specify [resource requests and limits](../../workspace-administration/project-quotas/) as needed (for example, 1 core for CPU and 1000Gi for memory).

@@ -214,7 +214,7 @@ To create a DevOps project, you must install the KubeSphere DevOps system in adv

-2. Enter the DevOps project name (e.g. `demo-devops`) and click **OK**. You can also add an alias and description for the project.
+2. Enter the DevOps project name (for example, `demo-devops`) and click **OK**. You can also add an alias and description for the project.

diff --git a/content/en/docs/quick-start/deploy-bookinfo-to-k8s.md b/content/en/docs/quick-start/deploy-bookinfo-to-k8s.md
index 0810ace98..eaba71e99 100644
--- a/content/en/docs/quick-start/deploy-bookinfo-to-k8s.md
+++ b/content/en/docs/quick-start/deploy-bookinfo-to-k8s.md
@@ -23,7 +23,7 @@ To provide consistent user experiences of managing microservices, KubeSphere int
Log in to the console as `project-admin` and go to your project. Navigate to **Advanced Settings** under **Project Settings**, click **Edit**, and select **Edit Gateway**. In the dialog that appears, flip on the toggle switch next to **Application Governance**.
{{< notice note >}}
-You need to enable **Application Governance** so that you can use the Tracing feature. Once it is enabled, check whether an annotation (e.g. `nginx.ingress.kubernetes.io/service-upstream: true`) is added for your Route (Ingress) if the Route is inaccessible.
+You need to enable **Application Governance** so that you can use the Tracing feature. Once it is enabled, check whether an annotation (for example, `nginx.ingress.kubernetes.io/service-upstream: true`) is added for your Route (Ingress) if the Route is inaccessible.
{{</ notice >}}
## What is Bookinfo
diff --git a/content/en/docs/quick-start/enable-pluggable-components.md b/content/en/docs/quick-start/enable-pluggable-components.md
index 9eaa2cfc6..ca773b6a8 100644
--- a/content/en/docs/quick-start/enable-pluggable-components.md
+++ b/content/en/docs/quick-start/enable-pluggable-components.md
@@ -47,7 +47,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
```
{{< notice note >}}
-If you adopt [All-in-one Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable pluggable components in this mode (e.g. for testing purpose), refer to the [following section](#enable-pluggable-components-after-installation) to see how pluggable components can be installed after installation.
+If you adopt [All-in-one Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and want to get familiar with the system. If you want to enable pluggable components in this mode (for example, for testing purposes), refer to the [following section](#enable-pluggable-components-after-installation) to see how pluggable components can be installed after installation.
{{</ notice >}}
2. In this file, enable the pluggable components you want to install by changing `false` to `true` for `enabled`. Here is [the complete file](https://github.com/kubesphere/kubekey/blob/release-1.1/docs/config-example.md) for your reference. Save the file after you finish.
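For example, enabling the DevOps component means flipping its `enabled` field in `config-sample.yaml`. A sketch of the relevant part, based on the ks-installer `ClusterConfiguration` (other fields omitted):

```yaml
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
spec:
  devops:
    enabled: true   # changed from false to install the DevOps component
```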
diff --git a/content/en/docs/quick-start/wordpress-deployment.md b/content/en/docs/quick-start/wordpress-deployment.md
index e900a8fad..ed8fa94e0 100644
--- a/content/en/docs/quick-start/wordpress-deployment.md
+++ b/content/en/docs/quick-start/wordpress-deployment.md
@@ -36,7 +36,7 @@ The environment variable `WORDPRESS_DB_PASSWORD` is the password to connect to t

-2. Enter the basic information (e.g. name it `mysql-secret`) and click **Next**. On the next page, select **Opaque (Default)** for **Type** and click **Add Data** to add a key-value pair. Input the Key (`MYSQL_ROOT_PASSWORD`) and Value (`123456`) as below and click **√** in the bottom-right corner to confirm. When you finish, click **Create** to continue.
+2. Enter the basic information (for example, name it `mysql-secret`) and click **Next**. On the next page, select **Opaque (Default)** for **Type** and click **Add Data** to add a key-value pair. Enter the Key (`MYSQL_ROOT_PASSWORD`) and Value (`123456`) as below and click **√** in the bottom-right corner to confirm. When you finish, click **Create** to continue.
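The Secret created in this step is equivalent to the following manifest (a sketch; the namespace is an assumption, and the value is the Base64 encoding of `123456`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  namespace: demo-project         # assumed project namespace
type: Opaque
data:
  MYSQL_ROOT_PASSWORD: MTIzNDU2   # base64-encoded "123456"
```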

@@ -52,7 +52,7 @@ Follow the same steps above to create a WordPress Secret `wordpress-secret` with

-2. Enter the basic information of the volume (e.g. name it `wordpress-pvc`) and click **Next**.
+2. Enter the basic information of the volume (for example, name it `wordpress-pvc`) and click **Next**.
3. In **Volume Settings**, you need to choose an available **Storage Class**, and set **Access Mode** and **Volume Capacity**. You can use the default value directly as shown below. Click **Next** to continue.
@@ -68,7 +68,7 @@ Follow the same steps above to create a WordPress Secret `wordpress-secret` with

-2. Enter the basic information (e.g. `wordpress` for **App Name**) and click **Next**.
+2. Enter the basic information (for example, `wordpress` for **App Name**) and click **Next**.

@@ -78,7 +78,7 @@ Follow the same steps above to create a WordPress Secret `wordpress-secret` with
4. Define a service type for the component. Select **Stateful Service** here.
-5. Enter the name for the stateful service (e.g. **mysql**) and click **Next**.
+5. Enter the name for the stateful service (for example, **mysql**) and click **Next**.

@@ -96,11 +96,11 @@ In **Advanced Settings**, make sure the memory limit is no less than 1000 Mi or
{{</ notice >}}
-8. Scroll down to **Environment Variables** and click **Use ConfigMap or Secret**. Input the name `MYSQL_ROOT_PASSWORD` and choose the resource `mysql-secret` and the key `MYSQL_ROOT_PASSWORD` created in the previous step. Click **√** after you finish and **Next** to continue.
+8. Scroll down to **Environment Variables** and click **Use ConfigMap or Secret**. Enter the name `MYSQL_ROOT_PASSWORD` and choose the resource `mysql-secret` and the key `MYSQL_ROOT_PASSWORD` created in the previous step. Click **√** after you finish and **Next** to continue.
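Referencing a Secret key as an environment variable this way corresponds to a container spec fragment roughly like the following (a sketch of what the dashboard generates under these settings):

```yaml
env:
  - name: MYSQL_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysql-secret          # the Secret created earlier
        key: MYSQL_ROOT_PASSWORD    # the key inside that Secret
```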

-9. Select **Add Volume Template** in **Mount Volumes**. Input the value of **Volume Name** (`mysql`) and **Mount Path** (mode: `ReadAndWrite`, path: `/var/lib/mysql`) as below:
+9. Select **Add Volume Template** in **Mount Volumes**. Enter the value of **Volume Name** (`mysql`) and **Mount Path** (mode: `ReadAndWrite`, path: `/var/lib/mysql`) as below:

@@ -146,7 +146,7 @@ For the second environment variable added here, the value must be exactly the sa

-16. Select `wordpress-pvc` created in the previous step, set the mode as `ReadAndWrite`, and input `/var/www/html` as its mount path. Click **√** to save it and **Next** to continue.
+16. Select `wordpress-pvc` created in the previous step, set the mode as `ReadAndWrite`, and enter `/var/www/html` as its mount path. Click **√** to save it and **Next** to continue.

diff --git a/content/en/docs/reference/api-docs.md b/content/en/docs/reference/api-docs.md
index 701d85711..d91f7ce89 100644
--- a/content/en/docs/reference/api-docs.md
+++ b/content/en/docs/reference/api-docs.md
@@ -112,9 +112,9 @@ Replace `[node ip]` with your actual IP address.
## API Reference
-The KubeSphere API swagger JSON file can be found in the repository https://github.com/kubesphere/kubesphere/tree/release-3.0/api.
+The KubeSphere API swagger JSON file can be found in the repository https://github.com/kubesphere/kubesphere/tree/release-3.1/api.
-- KubeSphere specified the API [swagger json](https://github.com/kubesphere/kubesphere/blob/release-3.0/api/ks-openapi-spec/swagger.json) file. It contains all the APIs that are only applied to KubeSphere.
-- KubeSphere specified the CRD [swagger json](https://github.com/kubesphere/kubesphere/blob/release-3.0/api/openapi-spec/swagger.json) file. It contains all the generated CRDs API documentation. It is same as Kubernetes API objects.
+- The KubeSphere-specific API [swagger json](https://github.com/kubesphere/kubesphere/blob/release-3.1/api/ks-openapi-spec/swagger.json) file. It contains all the APIs that only apply to KubeSphere.
+- The KubeSphere CRD [swagger json](https://github.com/kubesphere/kubesphere/blob/release-3.1/api/openapi-spec/swagger.json) file. It contains the generated API documentation of all CRDs, which is the same as that of Kubernetes API objects.
You can explore the KubeSphere API document from [here](https://kubesphere.io/api/kubesphere) as well.
diff --git a/content/en/docs/release/release-v310.md b/content/en/docs/release/release-v310.md
index 18585703b..870a23b86 100644
--- a/content/en/docs/release/release-v310.md
+++ b/content/en/docs/release/release-v310.md
@@ -15,7 +15,7 @@ weight: 18100
### Multi-cluster management
-- Simplified the steps to import Member Clusters with configuration validation (e.g. `jwtSecret`) added. ([#3232](https://github.com/kubesphere/kubesphere/issues/3232))
+- Simplified the steps to import Member Clusters with configuration validation (for example, `jwtSecret`) added. ([#3232](https://github.com/kubesphere/kubesphere/issues/3232))
- Refactored the cluster controller and optimized the logic. ([#3234](https://github.com/kubesphere/kubesphere/issues/3234))
- Upgraded the built-in web Kubectl, the version of which is now consistent with your Kubernetes cluster version. ([#3103](https://github.com/kubesphere/kubesphere/issues/3103))
- Support customized resynchronization period of cluster controller. ([#3213](https://github.com/kubesphere/kubesphere/issues/3213))
@@ -72,7 +72,7 @@ You can now enable KubeEdge in your cluster and manage edge nodes on the KubeSph
#### Monitoring
-- Support configurations of ServiceMonitor on the KubeSphere console. ([#1031](https://github.com/kubesphere/console/pull/1301))
+- Support configurations of ServiceMonitors on the KubeSphere console. ([#1031](https://github.com/kubesphere/console/pull/1301))
- Support PromQL auto-completion and syntax highlighting. ([#1307](https://github.com/kubesphere/console/pull/1307))
- Support customized monitoring at the cluster level. ([#3193](https://github.com/kubesphere/kubesphere/pull/3193))
- Changed the HTTP ports of kube-scheduler and kube-controller-manager from `10251` and `10252` to the HTTPS ports of `10259` and `10257` respectively for data scraping. ([#1367](https://github.com/kubesphere/ks-installer/pull/1367))
diff --git a/content/en/docs/toolbox/auditing/auditing-query.md b/content/en/docs/toolbox/auditing/auditing-query.md
index e2972e273..55827e81a 100644
--- a/content/en/docs/toolbox/auditing/auditing-query.md
+++ b/content/en/docs/toolbox/auditing/auditing-query.md
@@ -14,7 +14,7 @@ You need to enable [KubeSphere Auditing Logs](../../../pluggable-components/audi
## Enter the Query Interface
-1. The query function is available for all users. Log in to the console with any account, hover over the **Toolbox** in the lower right corner and select **Auditing Operating**.
+1. The query function is available for all users. Log in to the console with any account, hover over the **Toolbox** in the lower-right corner and select **Auditing Operating**.
{{< notice note >}}
@@ -58,7 +58,7 @@ Any account has the authorization to query auditing logs, while the logs each ac
## Enter Query Parameters
-1. Select a filter and input the keyword you want to search. For example, query auditing logs containing the information of `user` changed as shown in the following screenshot:
+1. Select a filter and enter the keyword you want to search for. For example, query auditing logs that record changes to `user` as shown in the following screenshot:

diff --git a/content/en/docs/toolbox/auditing/auditing-rule.md b/content/en/docs/toolbox/auditing/auditing-rule.md
index 80e36be23..d548d0214 100644
--- a/content/en/docs/toolbox/auditing/auditing-rule.md
+++ b/content/en/docs/toolbox/auditing/auditing-rule.md
@@ -8,7 +8,7 @@ weight: 15320
An auditing rule defines the policy for processing auditing logs. KubeSphere Auditing Logs provide users with two CRD rules (`archiving-rule` and `alerting-rule`) for customization.
-After you enable [KubeSphere Auditing Logs](../../../pluggable-components/auditing-logs/), log in to the console with an account of `platform-admin` role. In **CRDs** on the **Cluster Management** page, input `rules.auditing.kubesphere.io` in the search bar. Click the result **Rule** as below and you can see the two CRD rules.
+After you enable [KubeSphere Auditing Logs](../../../pluggable-components/auditing-logs/), log in to the console with an account granted the `platform-admin` role. In **CRDs** on the **Cluster Management** page, enter `rules.auditing.kubesphere.io` in the search bar. Click the result **Rule** as shown below and you can see the two CRD rules.

diff --git a/content/en/docs/toolbox/events-query.md b/content/en/docs/toolbox/events-query.md
index adbb840a9..270fb241a 100644
--- a/content/en/docs/toolbox/events-query.md
+++ b/content/en/docs/toolbox/events-query.md
@@ -16,7 +16,7 @@ This guide demonstrates how you can do multi-level, fine-grained event queries t
## Query Events
-1. The event query function is available for all users. Log in to the console with any account, hover over the **Toolbox** in the lower right corner and select **Event Search**.
+1. The event query function is available for all users. Log in to the console with any account, hover over the **Toolbox** in the lower-right corner and select **Event Search**.

diff --git a/content/en/docs/toolbox/log-query.md b/content/en/docs/toolbox/log-query.md
index 046e4ba94..aeb6fb286 100644
--- a/content/en/docs/toolbox/log-query.md
+++ b/content/en/docs/toolbox/log-query.md
@@ -16,7 +16,7 @@ You need to enable the [KubeSphere Logging System](../../pluggable-components/lo
## Enter the Log Query Interface
-1. The log query function is available for all users. Log in to the console with any account, hover over the **Toolbox** in the lower right corner and select **Log Search**.
+1. The log query function is available for all users. Log in to the console with any account, hover over the **Toolbox** in the lower-right corner and select **Log Search**.

diff --git a/content/en/docs/toolbox/metering-and-billing/view-resource-consumption.md b/content/en/docs/toolbox/metering-and-billing/view-resource-consumption.md
index fd57fa6c2..2040be685 100644
--- a/content/en/docs/toolbox/metering-and-billing/view-resource-consumption.md
+++ b/content/en/docs/toolbox/metering-and-billing/view-resource-consumption.md
@@ -17,7 +17,7 @@ KubeSphere metering helps you track resource consumption within a given cluster
**Cluster Resource Consumption** contains resource usage information of clusters (and nodes included), such as CPU, memory and storage.
-1. Log in to the KubeSphere console as `admin`, click the hammer icon in the bottom right corner and select **Metering and Billing**.
+1. Log in to the KubeSphere console as `admin`, click the hammer icon in the bottom-right corner and select **Metering and Billing**.
2. Click **View Consumption** in the **Cluster Resource Consumption** section.
@@ -55,7 +55,7 @@ KubeSphere metering helps you track resource consumption within a given cluster
**Workspace (Project) Resource Consumption** contains resource usage information of workspaces (and projects included), such as CPU, memory and storage.
-1. Log in to the KubeSphere console as `admin`, click the hammer icon in the bottom right corner and select **Metering and Billing**.
+1. Log in to the KubeSphere console as `admin`, click the hammer icon in the bottom-right corner and select **Metering and Billing**.
2. Click **View Consumption** in the **Workspace (Project) Resource Consumption** section.
diff --git a/content/en/docs/toolbox/web-kubectl.md b/content/en/docs/toolbox/web-kubectl.md
index d4f0c7170..b0bb27781 100644
--- a/content/en/docs/toolbox/web-kubectl.md
+++ b/content/en/docs/toolbox/web-kubectl.md
@@ -14,11 +14,11 @@ This tutorial demonstrates how to use web kubectl to operate on and manage clust
## Use Web Kubectl
-1. Log in to KubeSphere with an account granted the `platform-admin` role, hover over the **Toolbox** in the lower right corner and select **Kubectl**.
+1. Log in to KubeSphere with an account granted the `platform-admin` role, hover over the **Toolbox** in the lower-right corner and select **Kubectl**.

-2. You can see the kubectl interface as shown in the pop-up window. If you have enabled the multi-cluster feature, you need to select the target cluster first from the drop-down list in the upper right corner. This drop-down list is not visible if the multi-cluster feature is not enabled.
+2. You can see the kubectl interface as shown in the pop-up window. If you have enabled the multi-cluster feature, you need to select the target cluster first from the drop-down list in the upper-right corner. This drop-down list is not visible if the multi-cluster feature is not enabled.

diff --git a/content/en/docs/upgrade/_index.md b/content/en/docs/upgrade/_index.md
index fa27cf86e..23b40f8dd 100644
--- a/content/en/docs/upgrade/_index.md
+++ b/content/en/docs/upgrade/_index.md
@@ -11,4 +11,4 @@ icon: "/images/docs/docs.svg"
---
-This chapter demonstrates how cluster operators can upgrade KubeSphere to v3.0.0.
\ No newline at end of file
+This chapter demonstrates how cluster operators can upgrade KubeSphere to v3.1.0.
\ No newline at end of file
diff --git a/content/en/docs/workspace-administration/department-management.md b/content/en/docs/workspace-administration/department-management.md
index aaf6d8485..5981fc796 100644
--- a/content/en/docs/workspace-administration/department-management.md
+++ b/content/en/docs/workspace-administration/department-management.md
@@ -3,7 +3,7 @@ title: "Department Management"
keywords: 'KubeSphere, Kubernetes, Department, Role, Permission, Group'
description: 'Create departments in a workspace and assign users to different departments to implement permission control.'
linkTitle: "Department Management"
-weight: 9700
+weight: 9800
---
This document describes how to manage workspace departments.
diff --git a/content/en/docs/workspace-administration/project-quotas.md b/content/en/docs/workspace-administration/project-quotas.md
index 1cb82f5cc..ee3d200b1 100644
--- a/content/en/docs/workspace-administration/project-quotas.md
+++ b/content/en/docs/workspace-administration/project-quotas.md
@@ -6,7 +6,7 @@ linkTitle: "Project Quotas"
weight: 9600
---
-KubeSphere uses requests and limits to control resource (e.g. CPU and memory) usage in a project, also known as [ResourceQuotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) in Kubernetes. Requests make sure a project can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that a project can never use resources above a certain value.
+KubeSphere uses requests and limits to control resource (for example, CPU and memory) usage in a project, also known as [ResourceQuotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) in Kubernetes. Requests make sure a project can get the resources it needs, as they are specifically guaranteed and reserved. In contrast, limits ensure that a project can never use resources above a certain value.
Besides CPU and memory, you can also set resource quotas for other objects separately such as Pods, [Deployments](../../project-user-guide/application-workloads/deployments/), [Jobs](../../project-user-guide/application-workloads/jobs/), [Services](../../project-user-guide/application-workloads/services/) and [ConfigMaps](../../project-user-guide/configuration/configmaps/) in a project.
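Under the hood, these settings map to a Kubernetes ResourceQuota object. A minimal sketch (the name, namespace and values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-project-quota
  namespace: demo-project
spec:
  hard:
    requests.cpu: "1"        # guaranteed CPU for the project
    requests.memory: 1Gi
    limits.cpu: "2"          # ceiling the project can never exceed
    limits.memory: 2Gi
    pods: "10"               # object count quota
```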
@@ -48,7 +48,13 @@ If you use the account `project-admin` (an account of the `admin` role at the pr
6. To change project quotas, click **Manage Project** on the **Basic Information** page and select **Edit Quota**.
-7. Change project quotas directly in the dialog that appears and click **OK**.
+ {{< notice note >}}
+
+ For [a multi-cluster project](../../project-administration/project-and-multicluster-project/#multi-cluster-projects), the option **Edit Quota** is not displayed in the **Manage Project** drop-down menu. To set quotas for a multi-cluster project, go to **Quota Management** under **Project Settings** and click **Edit Quota**. Note that as a multi-cluster project runs across clusters, you can set resource quotas on different clusters separately.
+
+ {{</ notice >}}
+
+7. Change project quotas in the dialog that appears and click **OK**.
## See Also
diff --git a/content/en/docs/workspace-administration/role-and-member-management.md b/content/en/docs/workspace-administration/role-and-member-management.md
index d88b90b0d..084de5a32 100644
--- a/content/en/docs/workspace-administration/role-and-member-management.md
+++ b/content/en/docs/workspace-administration/role-and-member-management.md
@@ -16,7 +16,7 @@ This tutorial demonstrates how to manage roles and members in a workspace. At th
## Prerequisites
-At least one workspace has been created, such as `demo-workspace`. Besides, you need an account of the `workspace-admin` role (e.g. `ws-admin`) at the workspace level. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/).
+At least one workspace has been created, such as `demo-workspace`. In addition, you need an account granted the `workspace-admin` role (for example, `ws-admin`) at the workspace level. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/).
{{< notice note >}}
diff --git a/content/en/docs/workspace-administration/workspace-quotas.md b/content/en/docs/workspace-administration/workspace-quotas.md
new file mode 100644
index 000000000..24725c3e9
--- /dev/null
+++ b/content/en/docs/workspace-administration/workspace-quotas.md
@@ -0,0 +1,43 @@
+---
+title: "Workspace Quotas"
+keywords: 'KubeSphere, Kubernetes, workspace, quotas'
+description: 'Set workspace quotas to control the total resource usage of projects and DevOps projects in a workspace.'
+linkTitle: "Workspace Quotas"
+weight: 9700
+---
+
+Workspace quotas are used to control the total resource usage of all projects and DevOps projects in a workspace. Similar to [project quotas](../project-quotas/), workspace quotas contain requests and limits of CPU and memory. Requests make sure projects in the workspace can get the resources they need, as they are specifically guaranteed and reserved. In contrast, limits ensure that the total resource usage of all projects in the workspace can never go above a certain value.
+
+In [a multi-cluster architecture](../../multicluster-management/), as you need to [assign one or multiple clusters to a workspace](../../cluster-administration/cluster-settings/cluster-visibility-and-authorization/), you can decide the amount of resources that can be used by the workspace on different clusters.
+
+This tutorial demonstrates how to manage resource quotas for a workspace.
+
+## Prerequisites
+
+You have an available workspace and an account (`ws-manager`). The account must have the `workspaces-manager` role at the platform level. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/).
+
+## Set Workspace Quotas
+
+1. Log in to the KubeSphere web console as `ws-manager` and go to a workspace.
+
+2. Navigate to **Quota Management** under **Workspace Settings**.
+
+3. The **Quota Management** page lists all the available clusters assigned to the workspace and their respective requests and limits of CPU and memory. Click **Edit Quota** on the right of a cluster.
+
+4. In the dialog that appears, you can see that KubeSphere does not set any requests or limits for the workspace by default. To set requests and limits to control CPU and memory resources, drag the slider to a desired value or enter numbers directly. Leaving a field blank means you do not set any requests or limits.
+
+ 
+
+ {{< notice note >}}
+
+ The limit can never be lower than the request.
+
+ {{</ notice >}}
+
+5. Click **OK** to finish setting quotas.
+
+## See Also
+
+[Project Quotas](../project-quotas/)
+
+[Container Limit Ranges](../../project-administration/container-limit-ranges/)
\ No newline at end of file
diff --git a/content/zh/devops/_index.md b/content/zh/devops/_index.md
index 69d0aa3ee..0756abc7e 100644
--- a/content/zh/devops/_index.md
+++ b/content/zh/devops/_index.md
@@ -7,7 +7,10 @@ css: "scss/scenario.scss"
section1:
title: KubeSphere DevOps 提供端到端的工作流,集成主流 CI/CD 工具,提升交付能力
content: KubeSphere DevOps 提供基于 Jenkins 的 CI/CD 流水线,支持自动化工作流,包括 Binary-to-Image (B2I) 和 Source-to-Image (S2I) 等,帮助不同的组织加快产品上市时间。
- image: /images/devops/banner.jpg
+ content2:
+ image: /images/devops/banner.png
+ showDownload: true
+ inCenter: true
image: /images/devops/dev-ops.png
@@ -44,6 +47,7 @@ section3:
title: 观看 KubeSphere 一站式 DevOps 工作流操作演示
videoLink: https://www.youtube.com/embed/c3V-2RX9yGY
image: /images/service-mesh/15.jpg
+ showDownload: true
content: 想自己动手体验实际操作?
btnContent: 开始动手实验
link: docs/pluggable-components/devops/
diff --git a/content/zh/docs/cluster-administration/cluster-settings/log-collections/add-es-as-receiver.md b/content/zh/docs/cluster-administration/cluster-settings/log-collections/add-es-as-receiver.md
index bacb3d8dc..6e4544d72 100644
--- a/content/zh/docs/cluster-administration/cluster-settings/log-collections/add-es-as-receiver.md
+++ b/content/zh/docs/cluster-administration/cluster-settings/log-collections/add-es-as-receiver.md
@@ -16,21 +16,21 @@ weight: 8622
1. 以 `admin` 身份登录 KubeSphere 的 Web 控制台。点击左上角的**平台管理**,然后选择**集群管理**。
-2. 如果您启用了[多集群功能](../../../../multicluster-management/),您可以选择一个集群。如果尚未启用该功能,请直接进行下一步。
+ {{< notice note >}}
-3. 在**集群管理**页面,选择**集群设置**下的**日志收集**。
+如果您启用了[多集群功能](../../../../multicluster-management/),您可以选择一个集群。
-4. 点击**添加日志接收器**并选择 **Elasticsearch**。
+{{</ notice >}}
- 
+2. 在**集群管理**页面,选择**集群设置**下的**日志收集**。
-5. 提供 Elasticsearch 服务地址和端口信息,如下所示:
+3. 点击**添加日志接收器**并选择 **Elasticsearch**。
- 
+4. 提供 Elasticsearch 服务地址和端口信息,如下所示:
-6. Elasticsearch 会显示在**日志收集**页面的接收器列表中,状态为**收集中**。
+ 
- 
+5. Elasticsearch 会显示在**日志收集**页面的接收器列表中,状态为**收集中**。
-7. 若要验证 Elasticsearch 是否从 Fluent Bit 接收日志,从右下角的**工具箱**中点击**日志查询**,在控制台中搜索日志。有关更多信息,请参阅[日志查询](../../../../toolbox/log-query/)。
+6. 若要验证 Elasticsearch 是否从 Fluent Bit 接收日志,从右下角的**工具箱**中点击**日志查询**,在控制台中搜索日志。有关更多信息,请参阅[日志查询](../../../../toolbox/log-query/)。
diff --git a/content/zh/docs/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver.md b/content/zh/docs/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver.md
index 06569c39a..5ecbf9fdf 100644
--- a/content/zh/docs/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver.md
+++ b/content/zh/docs/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver.md
@@ -123,20 +123,23 @@ EOF
## 步骤 2:添加 Fluentd 作为日志接收器
1. 以 `admin` 身份登录 KubeSphere 的 Web 控制台。点击左上角的**平台管理**,然后选择**集群管理**。
-2. 如果您启用了[多集群功能](../../../../multicluster-management/),您可以选择一个集群。如果尚未启用该功能,请直接进行下一步。
-3. 在**集群管理**页面,选择**集群设置**下的**日志收集**。
-4. 点击**添加日志接收器**并选择 **Fluentd**。
+ {{< notice note >}}
- 
+ 如果您启用了[多集群功能](../../../../multicluster-management/),您可以选择一个集群。
-5. 输入 **Fluentd** 服务地址和端口信息,如下所示:
+ {{</ notice >}}
- 
+2. 在**集群管理**页面,选择**集群设置**下的**日志收集**。
-6. Fluentd 会显示在**日志收集**页面的接收器列表中,状态为**收集中**。
+3. 点击**添加日志接收器**并选择 **Fluentd**。
+
+4. 输入 **Fluentd** 服务地址和端口信息,如下所示:
+
+ 
+
+5. Fluentd 会显示在**日志收集**页面的接收器列表中,状态为**收集中**。
- 
## 步骤 3:验证 Fluentd 能否从 Fluent Bit 接收日志
@@ -152,4 +155,4 @@ EOF
6. 您可以看到日志持续滚动输出。
- 
\ No newline at end of file
+ 
\ No newline at end of file
diff --git a/content/zh/docs/cluster-administration/cluster-settings/log-collections/add-kafka-as-receiver.md b/content/zh/docs/cluster-administration/cluster-settings/log-collections/add-kafka-as-receiver.md
index 7f4ec27dc..4eedab994 100644
--- a/content/zh/docs/cluster-administration/cluster-settings/log-collections/add-kafka-as-receiver.md
+++ b/content/zh/docs/cluster-administration/cluster-settings/log-collections/add-kafka-as-receiver.md
@@ -103,21 +103,25 @@ weight: 8623
1. 以 `admin` 身份登录 KubeSphere 的 Web 控制台。点击左上角的**平台管理**,然后选择**集群管理**。
-2. 如果您启用了[多集群功能](../../../../multicluster-management/),您可以选择一个集群。如果尚未启用该功能,请直接进行下一步。
+ {{< notice note >}}
-3. 在**集群管理**页面,选择**集群设置**下的**日志收集**。
+ 如果您启用了[多集群功能](../../../../multicluster-management/),您可以选择一个集群。
-4. 点击**添加日志接收器**并选择 **Kafka**。输入 Kafka 代理地址和端口信息,然后点击**确定**继续。
+ {{</ notice >}}
- ```bash
- my-cluster-kafka-0.my-cluster-kafka-brokers.default.svc 9092
- my-cluster-kafka-1.my-cluster-kafka-brokers.default.svc 9092
- my-cluster-kafka-2.my-cluster-kafka-brokers.default.svc 9092
- ```
+2. 在**集群管理**页面,选择**集群设置**下的**日志收集**。
- 
+3. 点击**添加日志接收器**并选择 **Kafka**。输入 Kafka 代理地址和端口信息,然后点击**确定**继续。
-5. 运行以下命令验证 Kafka 集群是否能从 Fluent Bit 接收日志:
+ | 地址 | 端口 |
+ | ------------------------------------------------------- | ---- |
+ | my-cluster-kafka-0.my-cluster-kafka-brokers.default.svc | 9092 |
+ | my-cluster-kafka-1.my-cluster-kafka-brokers.default.svc | 9092 |
+ | my-cluster-kafka-2.my-cluster-kafka-brokers.default.svc | 9092 |
+
+ 
+
+4. 运行以下命令验证 Kafka 集群是否能从 Fluent Bit 接收日志:
```bash
# Start a util container
diff --git a/content/zh/docs/cluster-administration/cluster-settings/log-collections/introduction.md b/content/zh/docs/cluster-administration/cluster-settings/log-collections/introduction.md
index e146ad246..e068ff3c7 100644
--- a/content/zh/docs/cluster-administration/cluster-settings/log-collections/introduction.md
+++ b/content/zh/docs/cluster-administration/cluster-settings/log-collections/introduction.md
@@ -24,13 +24,15 @@ KubeSphere 提供灵活的日志收集配置方式。基于 [FluentBit Operator]
2. 点击左上角的**平台管理**,然后选择**集群管理**。
-3. 如果您启用了[多集群功能](../../../../multicluster-management/),您可以选择一个集群。如果尚未启用该功能,请直接进行下一步。
+ {{< notice note >}}
-4. 选择**集群设置**下的**日志收集**。
+ 如果您启用了[多集群功能](../../../../multicluster-management/),您可以选择一个集群。
-5. 在**日志**选项卡下点击**添加日志接收器**。
+ {{</ notice >}}
- 
+3. 选择**集群设置**下的**日志收集**。
+
+4. 在**日志**选项卡下点击**添加日志接收器**。
{{< notice note >}}
@@ -61,8 +63,6 @@ Kafka 往往用于接收日志,并作为 Spark 等处理系统的代理 (Broke
自 KubeSphere v3.0.0 起,Kubernetes 事件和 Kubernetes 以及 KubeSphere 审计日志可以通过和容器日志相同的方式进行存档。如果在 [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-1.1/docs/config-example.md) 中启用了 `events` 或 `auditing`,**日志收集**页面会对应显示**事件**或**审计**选项卡。您可以前往对应选项卡为 Kubernetes 事件或 Kubernetes 以及 KubeSphere 审计日志配置日志接收器。
-
-
容器日志、Kubernetes 事件和 Kubernetes 以及 KubeSphere 审计日志应存储在不同的 Elasticsearch 索引中以便在 KubeSphere 中进行搜索,索引前缀如下:
- 容器日志:`ks-logstash-log`
@@ -76,15 +76,10 @@ Kafka 往往用于接收日志,并作为 Spark 等处理系统的代理 (Broke
1. 在**日志收集**页面,点击一个日志接收器并进入其详情页面。
2. 点击**更多操作**并选择**更改状态**。
- 
-
3. 选择**激活**或**关闭**以启用或停用该日志接收器。
- 
-
4. 停用后,日志接收器的状态会变为**关闭**,激活时状态为**收集中**。
- 
## 修改或删除日志接收器
@@ -93,6 +88,4 @@ Kafka 往往用于接收日志,并作为 Spark 等处理系统的代理 (Broke
1. 在**日志收集**页面,点击一个日志接收器并进入其详情页面。
2. 点击**编辑**或从下拉菜单中选择**编辑配置文件**以编辑日志接收器。
- 
-
3. 点击**删除日志接收器**进行删除。
diff --git a/content/zh/docs/cluster-administration/platform-settings/notification-management/_index.md b/content/zh/docs/cluster-administration/platform-settings/notification-management/_index.md
index 4d4e25b09..97532a77f 100644
--- a/content/zh/docs/cluster-administration/platform-settings/notification-management/_index.md
+++ b/content/zh/docs/cluster-administration/platform-settings/notification-management/_index.md
@@ -1,5 +1,5 @@
---
-linkTitle: "Notification Management"
+linkTitle: "通知管理"
weight: 8720
_build:
diff --git a/content/zh/docs/devops-user-guide/examples/create-multi-cluster-pipeline.md b/content/zh/docs/devops-user-guide/examples/create-multi-cluster-pipeline.md
index 0afe4d772..875977cbe 100644
--- a/content/zh/docs/devops-user-guide/examples/create-multi-cluster-pipeline.md
+++ b/content/zh/docs/devops-user-guide/examples/create-multi-cluster-pipeline.md
@@ -243,7 +243,7 @@ You must create the projects as shown in the table below in advance. Make sure y

-3. Check the pipeline running logs by clicking **Show Logs** in the upper right corner. For each stage, you click it to inspect logs, which can be downloaded to your local machine for further analysis.
+3. Check the pipeline running logs by clicking **Show Logs** in the upper-right corner. For each stage, you can click it to inspect its logs, which can be downloaded to your local machine for further analysis.

diff --git a/content/zh/docs/faq/installation/configure-booster.md b/content/zh/docs/faq/installation/configure-booster.md
index 1b1a88a74..9a1270979 100644
--- a/content/zh/docs/faq/installation/configure-booster.md
+++ b/content/zh/docs/faq/installation/configure-booster.md
@@ -76,15 +76,21 @@ weight: 16200
```yaml
registry:
- registryMirrors: [] # For users who need to speed up downloads
- insecureRegistries: [] # Set an address of insecure image registry. See https://docs.docker.com/registry/insecure/
- privateRegistry: "" # Configure a private image registry for air-gapped installation (e.g. docker local registry or Harbor)
+ registryMirrors: []
+ insecureRegistries: []
+ privateRegistry: ""
```
-2. 在 `registryMirrors` 处填入仓库的镜像地址并保存文件。关于安装过程的更多信息,请参见[多节点安装](../../../installing-on-linux/introduction/multioverview/)。
+ {{< notice note >}}
+
+ 有关 `registry` 部分各个参数的更多信息,请参见 [Kubernetes 集群配置](../../../installing-on-linux/introduction/vars/)。
+
+ {{</ notice >}}
+
+2. 在 `registryMirrors` 处填入仓库的镜像地址并保存文件。有关安装的更多信息,请参见[多节点安装](../../../installing-on-linux/introduction/multioverview/)。
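   配置后的 `registry` 部分大致如下(镜像地址仅为示意,请替换为实际可用的加速地址):

   ```yaml
   registry:
     registryMirrors: ["https://<your-mirror-address>"]   # 示例:您的镜像加速地址(假设值)
     insecureRegistries: []
     privateRegistry: ""
   ```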
{{< notice note >}}
-[在 Linux 上通过 All-in-one 模式安装 KubeSphere](../../../quick-start/all-in-one-on-linux/) 不需要 `config-sample.yaml` 文件。该模式下请采用第一种方法进行配置。
+[在 Linux 上通过 All-in-One 模式安装 KubeSphere](../../../quick-start/all-in-one-on-linux/) 不需要 `config-sample.yaml` 文件。该模式下请采用第一种方法进行配置。
{{</ notice >}}
\ No newline at end of file
diff --git a/content/zh/docs/faq/installation/telemetry.md b/content/zh/docs/faq/installation/telemetry.md
index 827d1f27f..7c9a65a6a 100644
--- a/content/zh/docs/faq/installation/telemetry.md
+++ b/content/zh/docs/faq/installation/telemetry.md
@@ -25,7 +25,7 @@ Telemetry 收集已安装 KubeSphere 集群的大小、KubeSphere 和 Kubernetes
## 禁用 Telemetry
-Telemetry 在安装 KubeSphere 时默认启用。同时,您也可以在安装前或安装后禁用 Telemetry。
+在安装 KubeSphere 时 Telemetry 默认启用。同时,您也可以在安装前或安装后禁用 Telemetry。
### 安装前禁用 Telemetry
@@ -37,7 +37,7 @@ Telemetry 在安装 KubeSphere 时默认启用。同时,您也可以在安装
{{</ notice >}}
-1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/cluster-configuration.yaml) 文件并打开编辑。
+1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/cluster-configuration.yaml) 文件并编辑。
```bash
vi cluster-configuration.yaml
@@ -47,13 +47,14 @@ Telemetry 在安装 KubeSphere 时默认启用。同时,您也可以在安装
```yaml
openpitrix:
- enabled: false
+ store:
+ enabled: false
servicemesh:
enabled: false
- telemetry_enabled: false # Add this line here to disable Telemetry.
+ telemetry_enabled: false # 请手动添加此行以禁用 Telemetry。
```
-3. 保存文件并执行如下命令开始安装:
+3. 保存文件并执行以下命令开始安装:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/kubesphere-installer.yaml
@@ -73,15 +74,10 @@ Telemetry 在安装 KubeSphere 时默认启用。同时,您也可以在安装
3. 在搜索框中输入 `clusterconfiguration`,点击搜索结果打开详情页。
- 
-
-4. 点击 `ks-installer` 右边的三个点,并选择**编辑配置文件**。
-
- 
+4. 点击 `ks-installer` 右侧的 ,并选择**编辑配置文件**。
5. 在文件末尾添加 `telemetry_enabled: false` 字段,点击**更新**。
- 
{{< notice note >}}
diff --git a/content/zh/docs/project-administration/container-limit-ranges.md b/content/zh/docs/project-administration/container-limit-ranges.md
index b193b2710..3799ce809 100644
--- a/content/zh/docs/project-administration/container-limit-ranges.md
+++ b/content/zh/docs/project-administration/container-limit-ranges.md
@@ -18,11 +18,9 @@ weight: 13400
## 设置默认限制范围
-1. 以 `project-admin` 身份登录控制台,进入一个项目。如果该项目是新创建的项目,您在**概览**页面上会看到默认限制范围尚未设置。点击**设置**来配置限制范围。
+1. 以 `project-admin` 身份登录控制台,进入一个项目。如果该项目是新创建的项目,您在**概览**页面上会看到默认限制范围尚未设置。点击**容器资源默认请求未设置**旁的**设置**来配置限制范围。
- 
-
-2. 在弹出对话框中,您可以看到 KubeSphere 默认不设置任何请求或限制。要设置请求和限制来控制 CPU 和内存资源,请移动滑块至期望的值或者直接输入数值。字段留空意味着不设置任何请求或限制。
+2. 在弹出的对话框中,您可以看到 KubeSphere 默认不设置任何请求或限制。要设置请求和限制来控制 CPU 和内存资源,请移动滑块至期望的值或者直接输入数值。字段留空意味着不设置任何请求或限制。

@@ -40,8 +38,6 @@ weight: 13400
5. 要更改默认限制范围,请在**基本信息**页面点击**项目管理**,然后选择**编辑资源默认请求**。
- 
-
6. 在弹出的对话框中直接更改限制范围,然后点击**确定**。
7. 当您创建工作负载时,容器的请求和限制将预先填充对应的值。
diff --git a/content/zh/docs/project-administration/project-and-multicluster-project.md b/content/zh/docs/project-administration/project-and-multicluster-project.md
index 0afe6cb42..ac1c9b98e 100644
--- a/content/zh/docs/project-administration/project-and-multicluster-project.md
+++ b/content/zh/docs/project-administration/project-and-multicluster-project.md
@@ -7,67 +7,54 @@ linkTitle: "项目和多集群项目"
weight: 13100
---
-KubeSphere 中的一个项目是一个 Kubernetes [命名空间](https://kubernetes.io/zh/docs/concepts/overview/working-with-objects/namespaces/),用于将资源划分成互不重叠的分组。这一功能可在多个租户之间分配集群资源,从而是一种逻辑分区功能。
+KubeSphere 中的项目即 Kubernetes [命名空间](https://kubernetes.io/zh/docs/concepts/overview/working-with-objects/namespaces/),用于将资源划分成互不重叠的分组。这一功能可在多个租户之间分配集群资源,是一种逻辑分区功能。
多集群项目跨集群运行,能为用户提供高可用性,并在问题发生时将问题隔离在某个集群内,避免影响业务。有关更多信息,请参见[多集群管理](../../multicluster-management/)。
-本章介绍项目管理的基本操作,如创建项目和删除项目。
+本教程演示如何管理项目和多集群项目。
## 准备工作
-- 您需要准备一个可用的企业空间。
-- 您需要获取**项目管理**权限。该权限包含在内置角色 `workspace-self-provisioner` 中。
+- 您需要有一个可用的企业空间和一个帐户 (`project-admin`)。该帐户必须在该企业空间拥有 `workspace-self-provisioner` 角色。有关更多信息,请参见[创建企业空间、项目、帐户和角色](../../quick-start/create-workspace-and-project/)。
- 在创建多集群项目前,您需要通过[直接连接](../../multicluster-management/enable-multicluster/direct-connection/)或[代理连接](../../multicluster-management/enable-multicluster/agent-connection/)启用多集群功能。
## 项目
### 创建项目
-1. 打开企业空间的**项目管理**页面,点击**创建**。
-
- 
+1. 前往企业空间的**项目管理**页面,点击**项目**选项卡下的**创建**。
{{< notice note >}}
- 您可以在**集群**下拉列表中更改创建项目的集群。该下拉列表只有在启用多集群功能后才可见。
-- 如果页面上没有**创建**按钮,则表示您的企业空间没有可用的集群。您需要联系平台管理员或集群管理员,以便在集群中创建企业空间资源。平台管理员或集群管理员需要在**集群管理**页面设置[**集群可见性**](../../cluster-administration/cluster-settings/cluster-visibility-and-authorization/),才能将集群分配给企业空间。
+- 如果页面上没有**创建**按钮,则表示您的企业空间没有可用的集群。您需要联系平台管理员或集群管理员,以便在集群中创建企业空间资源。平台管理员或集群管理员需要在**集群管理**页面设置**集群可见性**,才能[将集群分配给企业空间](../../cluster-administration/cluster-settings/cluster-visibility-and-authorization/)。
{{</ notice >}}
-2. 在弹出的**创建项目**对话框中输入项目名称,根据需要添加别名或说明,选择要创建项目的集群(如果没有启用多集群功能,则不会出现此选项),然后点击**确定**完成操作。
-
- 
+2. 在弹出的**创建项目**窗口中输入项目名称,根据需要添加别名或说明。在**集群设置**下,选择要创建项目的集群(如果没有启用多集群功能,则不会出现此选项),然后点击**确定**。
3. 创建的项目会显示在下图所示的列表中。您可以点击项目名称打开**概览**页面。
- 
+ 
-### 编辑项目信息
+### 编辑项目
-1. 在左侧导航栏中选择**项目设置**下的**基本信息**,在页面右侧点击**项目管理**。
+1. 前往您的项目,选择**项目设置**下的**基本信息**,在页面右侧点击**项目管理**。
- 
-
-2. 在下拉列表中选择**编辑信息**。
+2. 从下拉菜单中选择**编辑信息**。
+ 
+
{{< notice note >}}
-项目名称无法编辑。如需修改其他信息,请参考相应的文档章节。
+项目名称无法编辑。如需修改其他信息,请参考相应的文档教程。
{{</ notice >}}
-### 删除项目
+3. 若要删除项目,选择该下拉菜单中的**删除项目**,在弹出的对话框中输入项目名称,点击**确定**。
-1. 在左侧导航栏中选择**项目设置**下的**基本信息**,在页面右侧点击**项目管理**。
-
- 
-
-2. 在下拉列表中选择**删除项目**。
-
-3. 在弹出的对话框中输入项目名称,点击**确定**。
-
- {{< notice warning >}}
+ {{< notice warning >}}
项目被删除后无法恢复,项目中的资源也会从项目中移除。
@@ -77,50 +64,38 @@ KubeSphere 中的一个项目是一个 Kubernetes [命名空间](https://kuberne
### 创建多集群项目
-1. 打开企业空间的**项目管理**页面,点击**多集群项目**,再点击**创建**。
-
- 
+1. 前往企业空间的**项目管理**页面,点击**多集群项目**选项卡,再点击**创建**。
{{< notice note >}}
-- 如果页面上没有**创建**按钮,则表示您的企业空间没有可用的集群。您需要联系平台管理员或集群管理员,以便在集群中创建企业空间资源。平台管理员或集群管理员需要在**集群管理**页面设置[**集群可见性**](../../cluster-administration/cluster-settings/cluster-visibility-and-authorization/),才能将集群分配给企业空间。
+- 如果页面上没有**创建**按钮,则表示您的企业空间没有可用的集群。您需要联系平台管理员或集群管理员,以便在集群中创建企业空间资源。平台管理员或集群管理员需要在**集群管理**页面设置**集群可见性**,才能[将集群分配给企业空间](../../cluster-administration/cluster-settings/cluster-visibility-and-authorization/)。
- 请确保至少有两个集群已分配给您的企业空间。
{{</ notice >}}
-2. 在弹出的**创建多集群项目**对话框中输入项目名称,并根据需要添加别名或说明,点击**添加集群**为项目选择多个集群,然后点击**确定**完成操作。
-
- 
+2. 在弹出的**创建多集群项目**窗口中输入项目名称,并根据需要添加别名或说明。在**集群设置**下,点击**添加集群**为项目选择多个集群,然后点击**确定**。
3. 创建的多集群项目会显示在下图所示的列表中。您可以点击项目名称打开**概览**页面。
- 
+ 
-### 编辑多集群项目信息
+### 编辑多集群项目
-1. 在左侧导航栏中选择**项目设置**下的**基本信息**,在页面右侧点击**项目管理**。
+1. 前往您的多集群项目,选择**项目设置**下的**基本信息**,在页面右侧点击**项目管理**。
- 
-
-2. 在下拉列表中选择**编辑信息**。
+2. 从下拉菜单中选择**编辑信息**。
+ 
+
{{< notice note >}}
-项目名称无法编辑。如需修改其他信息,请参考相应的文档章节。
+项目名称无法编辑。如需修改其他信息,请参考相应的文档教程。
{{</ notice >}}
-### 删除多集群项目
+3. 若要删除多集群项目,选择该下拉菜单中的**删除项目**,在弹出的对话框中输入项目名称,点击**确定**。
-1. 在左侧导航栏中选择**项目设置**下的**基本信息**,在页面右侧点击**项目管理**。
-
- 
-
-2. 在下拉列表中选择**删除项目**。
-
-3. 在弹出的对话框中输入项目名称,点击**确定**。
-
- {{< notice warning >}}
+ {{< notice warning >}}
多集群项目被删除后无法恢复,项目中的资源也会从项目中移除。
diff --git a/content/zh/docs/project-administration/role-and-member-management.md b/content/zh/docs/project-administration/role-and-member-management.md
index d5fd06ba8..2765a4aae 100644
--- a/content/zh/docs/project-administration/role-and-member-management.md
+++ b/content/zh/docs/project-administration/role-and-member-management.md
@@ -1,92 +1,84 @@
---
-title: "角色和成员管理"
+title: "项目角色和成员管理"
keywords: 'KubeSphere, Kubernetes, 角色, 成员, 管理, 项目'
description: '了解如何进行项目访问管理。'
-
-linkTitle: "角色和成员管理"
+linkTitle: "项目角色和成员管理"
weight: 13200
---
-本教程演示如何管理项目中的角色和成员。
+本教程演示如何在项目中管理角色和成员。在项目级别,您可以向角色授予以下模块中的权限:
-您可以在项目范围内向角色授予以下资源的权限:
-
-- 应用负载
-- 存储
-- 配置
-- 监控告警
-- 项目设置
-- 访问控制
+- **应用负载**
+- **存储管理**
+- **配置中心**
+- **监控告警**
+- **访问控制**
+- **项目设置**
## 准备工作
-您需要至少创建一个项目(例如 `demo-project`)。 此外,您还需要准备一个在项目层角色为 `admin` 的帐户(例如 `project-admin`)。有关详情请参见[创建企业空间、项目、帐户和角色](../../quick-start/create-workspace-and-project/)。
+您需要至少创建一个项目(例如 `demo-project`)。此外,您还需要准备一个在项目级别具有 `admin` 角色的帐户(例如 `project-admin`)。有关更多信息,请参见[创建企业空间、项目、帐户和角色](../../quick-start/create-workspace-and-project/)。
## 内置角色
-在**项目角色**页面有三个内置角色。内置角色由 KubeSphere 在项目创建时自动创建,不能编辑或删除。您只能查看其权限列表和授权用户列表。
+**项目角色**页面列出了以下三个可用的内置角色。创建项目时,KubeSphere 会自动创建内置角色,并且内置角色无法进行编辑或删除。您只能查看内置角色的权限或将其分配给用户。
-| 内置角色 | 描述 |
-| ------------------ | ------------------------------------------------------------ |
-| viewer | 项目观察者,可以查看项目下所有的资源。 |
-| operator | 项目维护者,可以管理项目下除用户和角色之外的资源。 |
-| admin | 项目管理员,可以对项目下的所有资源执行所有操作。此角色可以完全控制项目下的所有资源。 |
+| 内置角色 | 描述 |
+| --- | --- |
+| viewer | 项目观察者,可以查看项目下所有的资源。 |
+| operator | 项目维护者,可以管理项目下除用户和角色之外的资源。 |
+| admin | 项目管理员,可以对项目下的所有资源执行所有操作。此角色可以完全控制项目下的所有资源。 |
以编辑该角色。
- 
+ 
-3. 选择授予此角色的帐户的权限(例如**应用负载**中的**应用负载查看**,以及**监控告警**中的**告警消息查看**和**告警策略查看**),点击**确定**完成操作。
+## 邀请新成员
- 
+1. 转到**项目设置**下的**项目成员**,点击**邀请成员**。
- {{< notice note >}}
+2. 点击右侧的 以邀请一名成员加入项目,并为其分配一个角色。
-某些权限**依赖于**其他权限。要选择从属的权限,必须选择其依赖的权限。
+3. 将成员加入项目后,点击**确定**。您可以在**项目成员**列表中查看新邀请的成员。
- {{</ notice >}}
+4. 若要编辑现有成员的角色或将其从项目中移除,点击右侧的 并选择对应的操作。
-4. 角色创建后会显示在**项目角色**页面。您可以点击角色右边的三个点对其进行编辑。
+ 
- 
+
- {{< notice note >}}
-
-`project-monitor` 角色在**监控告警**中仅被授予有限的权限,可能无法满足您的需求。此处仅为示例,您可以根据需求创建自定义角色。
-
-{{</ notice >}}
-
-## 邀请成员
-
-1. 选择**项目设置**下的**项目成员**,点击**邀请成员**。
-2. 邀请一个用户加入当前项目,对其授予 `project-monitor` 角色。
-
- 
-
- {{< notice note >}}
-
-要进行此操作,该用户必须先被邀请至当前项目的企业空间。
-
- {{</ notice >}}
-
-3. 点击**确定**。用户被邀请至当前项目后会显示在**项目成员**页面。
-
-4. 您可以修改现有成员的角色或将其从项目中移除。
-
- 
diff --git a/content/zh/docs/project-user-guide/configuration/configmaps.md b/content/zh/docs/project-user-guide/configuration/configmaps.md
index a54757f08..92a57524c 100644
--- a/content/zh/docs/project-user-guide/configuration/configmaps.md
+++ b/content/zh/docs/project-user-guide/configuration/configmaps.md
@@ -20,75 +20,54 @@ Kubernetes [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configm
## 创建 ConfigMap
-### 步骤 1:进入 ConfigMap 页面
+1. 以 `project-regular` 用户登录控制台并进入项目,在左侧导航栏中选择**配置中心**下的**配置**,然后点击**创建**。
-以 `project-regular` 用户登录控制台并进入项目,在左侧导航栏中选择**配置中心**下的**配置**,然后点击**创建**。
+2. 在出现的对话框中,设置 ConfigMap 的名称(例如 `demo-configmap`),然后点击**下一步**。
-
-
-### 步骤 2:配置基本信息
-
-设置 ConfigMap 的名称(例如 `demo-configmap`),然后点击**下一步**。
-
-{{< notice tip >}}
+ {{< notice tip >}}
您可以在对话框右上角启用**编辑模式**来查看 ConfigMap 的 YAML 清单文件,并通过直接编辑清单文件来创建 ConfigMap。您也可以继续执行后续步骤在控制台上创建 ConfigMap。
{{</ notice >}}
-
+3. 在**配置设置**选项卡,点击**添加数据**以配置键值对。
-### 步骤 3:配置键值对
-
-1. 在**配置设置**选项卡,点击**添加数据**以配置键值对。
-
- 
-
-2. 配置一个键值对。下图为示例:
+4. 输入一个键值对。下图为示例:

{{< notice note >}}
- - 配置的键值对会显示在清单文件中的 `data` 字段下。
+- 配置的键值对会显示在清单文件中的 `data` 字段下。
- - 目前 KubeSphere 控制台只支持在 ConfigMap 中配置键值对。未来版本将会支持添加配置文件的路径来创建 ConfigMap。
+- 目前 KubeSphere 控制台只支持在 ConfigMap 中配置键值对。未来版本将会支持添加配置文件的路径来创建 ConfigMap。
- {{</ notice >}}
+{{</ notice >}}
-3. 点击对话框右下角的 **√** 以保存配置。您可以再次点击**添加数据**继续配置更多键值对。
-
- 
-
-4. 配置完成后点击**创建**来生成 ConfigMap。
+5. 点击对话框右下角的 **√** 以保存配置。您可以再次点击**添加数据**继续配置更多键值对。
+6. 点击**创建**以生成 ConfigMap。
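+
+   创建完成后,ConfigMap 的清单文件大致如下(键值对仅为示意,请以实际配置为准):
+
+   ```yaml
+   apiVersion: v1
+   kind: ConfigMap
+   metadata:
+     name: demo-configmap   # 本教程中的示例名称
+   data:
+     DEMO_KEY: demo-value   # 配置的键值对会显示在 data 字段下
+   ```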
## 查看 ConfigMap 详情
-1. ConfigMap 创建后会显示在如图所示的列表中。您可以点击右边的三个点,并从下拉菜单中选择操作来修改 ConfigMap。
-
- 
+1. ConfigMap 创建后会显示在**配置**页面。您可以点击右侧的 ,并从下拉菜单中选择操作来修改 ConfigMap。
- **编辑**:查看和编辑基本信息。
- **编辑配置文件**:查看、上传、下载或更新 YAML 文件。
- **修改配置**:修改 ConfigMap 键值对。
- **删除**:删除 ConfigMap。
-
+
2. 点击 ConfigMap 名称打开 ConfigMap 详情页面。在**详情**选项卡,您可以查看 ConfigMap 的所有键值对。
- 
+ 
3. 点击**更多操作**对 ConfigMap 进行其他操作。
- 
-
- **编辑配置文件**:查看、上传、下载或更新 YAML 文件。
- **修改配置**:修改 ConfigMap 键值对。
- **删除**:删除 ConfigMap 并返回 ConfigMap 列表页面。
-
+
4. 点击**编辑信息**来查看和编辑 ConfigMap 的基本信息。
- 
-
## 使用 ConfigMap
diff --git a/content/zh/docs/project-user-guide/grayscale-release/blue-green-deployment.md b/content/zh/docs/project-user-guide/grayscale-release/blue-green-deployment.md
index 4d3a23e09..5372445df 100644
--- a/content/zh/docs/project-user-guide/grayscale-release/blue-green-deployment.md
+++ b/content/zh/docs/project-user-guide/grayscale-release/blue-green-deployment.md
@@ -21,19 +21,13 @@ weight: 10520
## 创建蓝绿部署任务
-1. 以 `project-regular` 身份登录 KubeSphere,在**灰度策略**选项卡下,点击**蓝绿部署**右侧的**发布任务**。
-
- 
+1. 以 `project-regular` 身份登录 KubeSphere,转到**灰度发布**页面,在**灰度策略**选项卡下,点击**蓝绿部署**右侧的**发布任务**。
2. 输入名称然后点击**下一步**。
- 
+3. 在**灰度组件**选项卡,从下拉列表选择您的应用以及想实现蓝绿部署的服务。如果您也使用示例应用 Bookinfo,请选择 **reviews** 并点击**下一步**。
-3. 从下拉列表选择您的应用以及想实现蓝绿部署的服务。如果您也使用示例应用 Bookinfo,请选择 **reviews** 并点击**下一步**。
-
- 
-
-4. 如下图所示,在**灰度版本**页面,为其添加另一个版本(例如 `v2`),然后点击**下一步**:
+4. 如下图所示,在**灰度版本**选项卡,添加另一个版本(例如 `v2`),然后点击**下一步**:

@@ -43,9 +37,7 @@ weight: 10520
{{ notice >}}
-5. 要让应用版本 `v2` 接管所有流量,请选择**接管所有流量**,然后点击**创建**。
-
- 
+5. 在**策略配置**选项卡,要让应用版本 `v2` 接管所有流量,请选择**接管所有流量**,然后点击**创建**。
6. 蓝绿部署任务创建后,会显示在**任务状态**选项卡下。点击可查看详情。
diff --git a/content/zh/docs/project-user-guide/grayscale-release/canary-release.md b/content/zh/docs/project-user-guide/grayscale-release/canary-release.md
index 7cc280cf8..8de79b0c3 100644
--- a/content/zh/docs/project-user-guide/grayscale-release/canary-release.md
+++ b/content/zh/docs/project-user-guide/grayscale-release/canary-release.md
@@ -21,19 +21,13 @@ KubeSphere 基于 [Istio](https://istio.io/) 向用户提供部署金丝雀服
## 步骤 1:创建金丝雀发布任务
-1. 以 `project-regular` 身份登录 KubeSphere 控制台,在**灰度策略**选项卡下,点击**金丝雀发布**右侧的**发布任务**。
-
- 
+1. 以 `project-regular` 身份登录 KubeSphere 控制台,转到**灰度发布**页面,在**灰度策略**选项卡下,点击**金丝雀发布**右侧的**发布任务**。
2. 设置任务名称,点击**下一步**。
- 
+3. 在**灰度组件**选项卡,从下拉列表中选择您的应用和要实现金丝雀发布的服务。如果您同样使用示例应用 Bookinfo,请选择 **reviews** 并点击**下一步**。
-3. 从下拉列表中选择您的应用和要实现金丝雀发布的服务。如果您同样使用示例应用 Bookinfo,请选择 **reviews** 并点击**下一步**。
-
- 
-
-4. 在**灰度版本**页面,添加另一个版本(例如 `kubesphere/examples-bookinfo-reviews-v2:1.13.0`;将 `v1` 改为 `v2`)并点击**下一步**,如下图所示:
+4. 在**灰度版本**选项卡,添加另一个版本(例如 `kubesphere/examples-bookinfo-reviews-v2:1.13.0`;将 `v1` 改为 `v2`)并点击**下一步**,如下图所示:

diff --git a/content/zh/docs/project-user-guide/grayscale-release/traffic-mirroring.md b/content/zh/docs/project-user-guide/grayscale-release/traffic-mirroring.md
index 5328abfd6..47b006c8a 100644
--- a/content/zh/docs/project-user-guide/grayscale-release/traffic-mirroring.md
+++ b/content/zh/docs/project-user-guide/grayscale-release/traffic-mirroring.md
@@ -16,25 +16,17 @@ weight: 10540
## 创建流量镜像任务
-1. 以 `project-regular` 用户登录 KubeSphere 并进入项目。在左侧导航栏选择**灰度发布**,在页面右侧点击**流量镜像**右边的**发布任务**。
-
- 
+1. 以 `project-regular` 用户登录 KubeSphere 并进入项目。转到**灰度发布**页面,在页面右侧点击**流量镜像**右侧的**发布任务**。
2. 设置发布任务的名称并点击**下一步**。
- 
+3. 在**灰度组件**选项卡,从下拉列表中选择需要进行流量镜像的应用和对应的服务(本教程以 Bookinfo 应用的 reviews 服务为例),然后点击**下一步**。
-3. 从下拉列表中选择需要进行流量镜像的应用,选择所需的服务(本教程以 Bookinfo 应用的 reviews 服务为例),然后点击**下一步**。
-
- 
-
-4. 在**灰度版本**页面,为应用添加另一个版本(例如 `v2`),然后点击**下一步**。
+4. 在**灰度版本**选项卡,为应用添加另一个版本(例如 `v2`),然后点击**下一步**。

-5. 在最后一步点击**创建**。
-
- 
+5. 在**策略配置**选项卡,点击**创建**。
6. 新建的流量镜像任务显示在**任务状态**页面。点击该任务查看详情。
diff --git a/content/zh/docs/project-user-guide/image-builder/s2i-and-b2i-webhooks.md b/content/zh/docs/project-user-guide/image-builder/s2i-and-b2i-webhooks.md
index c1bcce012..d161a3b94 100644
--- a/content/zh/docs/project-user-guide/image-builder/s2i-and-b2i-webhooks.md
+++ b/content/zh/docs/project-user-guide/image-builder/s2i-and-b2i-webhooks.md
@@ -20,7 +20,7 @@ This tutorial demonstrates how to configure S2I and B2I webhooks.
### Step 1: Expose the S2I trigger Service
-1. Log in to the KubeSphere web console as `admin`. Click **Platform** in the top left corner and then select **Cluster Management**.
+1. Log in to the KubeSphere web console as `admin`. Click **Platform** in the top-left corner and then select **Cluster Management**.
2. In **Services** under **Application Workloads**, select **kubesphere-devops-system** from the drop-down list and click **s2ioperator-trigger-service** to go to its detail page.
diff --git a/content/zh/docs/reference/api-docs.md b/content/zh/docs/reference/api-docs.md
index 10687e33d..423d4c6b8 100644
--- a/content/zh/docs/reference/api-docs.md
+++ b/content/zh/docs/reference/api-docs.md
@@ -112,9 +112,9 @@ $ curl -X GET -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ
## API 参考
-KubeSphere API Swagger JSON 文件可以在 https://github.com/kubesphere/kubesphere/tree/release-3.0/api 仓库中找到。
+KubeSphere API Swagger JSON 文件可以在 https://github.com/kubesphere/kubesphere/tree/release-3.1/api 仓库中找到。
-- KubeSphere 已指定 API [Swagger Json](https://github.com/kubesphere/kubesphere/blob/release-3.0/api/ks-openapi-spec/swagger.json) 文件,它包含所有只适用于 KubeSphere 的 API。
-- KubeSphere 已指定 CRD [Swagger Json](https://github.com/kubesphere/kubesphere/blob/release-3.0/api/openapi-spec/swagger.json) 文件,它包含所有已生成的 CRD API 文档,与 Kubernetes API 对象相同。
+- KubeSphere 已指定 API [Swagger JSON](https://github.com/kubesphere/kubesphere/blob/release-3.1/api/ks-openapi-spec/swagger.json) 文件,它包含所有只适用于 KubeSphere 的 API。
+- KubeSphere 已指定 CRD [Swagger JSON](https://github.com/kubesphere/kubesphere/blob/release-3.1/api/openapi-spec/swagger.json) 文件,它包含所有已生成的 CRD API 文档,与 Kubernetes API 对象相同。
您也可以[点击这里](https://kubesphere.io/api/kubesphere)查看 KubeSphere API 文档。
diff --git a/content/zh/docs/upgrade/_index.md b/content/zh/docs/upgrade/_index.md
index 82c267c9b..780db17df 100644
--- a/content/zh/docs/upgrade/_index.md
+++ b/content/zh/docs/upgrade/_index.md
@@ -11,4 +11,4 @@ icon: "/images/docs/docs.svg"
---
-本章演示集群管理员如何将 KubeSphere 升级到 v3.0.0。
\ No newline at end of file
+本章演示集群管理员如何将 KubeSphere 升级到 v3.1.0。
\ No newline at end of file
diff --git a/content/zh/docs/workspace-administration/department-management.md b/content/zh/docs/workspace-administration/department-management.md
index b0db5c28b..178f6d1f7 100644
--- a/content/zh/docs/workspace-administration/department-management.md
+++ b/content/zh/docs/workspace-administration/department-management.md
@@ -3,7 +3,7 @@ title: "企业组织"
keywords: 'KubeSphere, Kubernetes, 部门, 角色, 权限, 用户组'
description: '在企业空间中创建部门,将用户分配到不同部门中并授予权限。'
linkTitle: "企业组织"
-weight: 9700
+weight: 9800
---
本文档介绍如何管理企业空间中的部门。
diff --git a/content/zh/docs/workspace-administration/project-quotas.md b/content/zh/docs/workspace-administration/project-quotas.md
index 3a3b23d66..353de54df 100644
--- a/content/zh/docs/workspace-administration/project-quotas.md
+++ b/content/zh/docs/workspace-administration/project-quotas.md
@@ -48,7 +48,13 @@ KubeSphere 使用请求 (Request) 和限制 (Limit) 来控制项目中的资源
6. 要更改项目配额,请在**基本信息**页面点击**项目管理**,然后选择**编辑配额**。
-7. 在**项目配额**页面直接更改项目配额,然后点击**确定**。
+ {{< notice note >}}
+
+ 对于[多集群项目](../../project-administration/project-and-multicluster-project/#多集群项目),**项目管理**下拉菜单中不会显示**编辑配额**选项。若要为多集群项目设置配额,前往**项目设置**下的**配额管理**,并点击**编辑配额**。请注意,由于多集群项目跨集群运行,您可以为多集群项目针对不同集群分别设置资源配额。
+
+ {{</ notice >}}
+
+7. 在**项目配额**页面更改项目配额,然后点击**确定**。
## 另请参见
diff --git a/content/zh/docs/workspace-administration/role-and-member-management.md b/content/zh/docs/workspace-administration/role-and-member-management.md
index 87607c6a9..6a039eace 100644
--- a/content/zh/docs/workspace-administration/role-and-member-management.md
+++ b/content/zh/docs/workspace-administration/role-and-member-management.md
@@ -41,7 +41,7 @@ weight: 9400

-2. 点击**授权用户**选项卡,查看被授予该角色的所有用户。
+2. 点击**授权用户**选项卡,查看所有被授予该角色的用户。
## 创建企业角色
diff --git a/content/zh/docs/workspace-administration/workspace-quotas.md b/content/zh/docs/workspace-administration/workspace-quotas.md
new file mode 100644
index 000000000..68cb695e3
--- /dev/null
+++ b/content/zh/docs/workspace-administration/workspace-quotas.md
@@ -0,0 +1,43 @@
+---
+title: "企业空间配额"
+keywords: 'KubeSphere, Kubernetes, 企业空间, 配额'
+description: '设置企业空间配额以管理企业空间中所有项目和 DevOps 工程的总资源用量。'
+linkTitle: "企业空间配额"
+weight: 9700
+---
+
+Workspace quotas are used to control the total resource usage of all projects and DevOps projects in a workspace. Similar to [project quotas](../project-quotas/), workspace quotas contain requests and limits of CPU and memory. Requests ensure that projects in the workspace can get the resources they need, as the resources are specifically guaranteed and reserved. In contrast, limits ensure that the resource usage of all projects in the workspace can never go above a certain value.
+
+In [a multi-cluster architecture](../../multicluster-management/), as you need to [assign one or multiple clusters to a workspace](../../cluster-administration/cluster-settings/cluster-visibility-and-authorization/), you can decide the amount of resources that can be used by the workspace on different clusters.
+
+This tutorial demonstrates how to manage resource quotas for a workspace.
+
+## Prerequisites
+
+You have an available workspace and an account (`ws-manager`). The account must have the `workspaces-manager` role at the platform level. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/).
+
+## Set Workspace Quotas
+
+1. Log in to the KubeSphere web console as `ws-manager` and go to a workspace.
+
+2. Navigate to **Quota Management** under **Workspace Settings**.
+
+3. The **Quota Management** page lists all the available clusters assigned to the workspace and their respective requests and limits of CPU and memory. Click **Edit Quota** on the right of a cluster.
+
+4. In the dialog that appears, you can see that KubeSphere does not set any requests or limits for the workspace by default. To set requests and limits to control CPU and memory resources, drag the slider to a desired value or enter numbers directly. Leaving a field blank means you do not set any requests or limits.
+
+ 
+
+ {{< notice note >}}
+
+ The limit can never be lower than the request.
+
+ {{</ notice >}}
+
+5. Click **OK** to finish setting quotas.
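+
+   Conceptually, the workspace requests and limits you set here behave like a Kubernetes ResourceQuota applied across all projects in the workspace. A minimal sketch of an equivalent quota object follows; the name and values are illustrative only, not what KubeSphere generates verbatim:
+
+   ```yaml
+   apiVersion: v1
+   kind: ResourceQuota
+   metadata:
+     name: workspace-quota-example   # hypothetical name for illustration
+   spec:
+     hard:
+       requests.cpu: "4"      # total CPU guaranteed to projects in the workspace
+       requests.memory: 8Gi
+       limits.cpu: "8"        # total CPU usage can never exceed this value
+       limits.memory: 16Gi
+   ```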
+
+## See Also
+
+[Project Quotas](../project-quotas/)
+
+[Container Limit Ranges](../../project-administration/container-limit-ranges/)
\ No newline at end of file
diff --git a/content/zh/live/3.1-live.md b/content/zh/live/3.1-live.md
index 8673e14ad..a7482ca79 100644
--- a/content/zh/live/3.1-live.md
+++ b/content/zh/live/3.1-live.md
@@ -19,3 +19,5 @@ section1:
KubeSphere 3.1 全新发布!主打 “延伸至边缘侧的容器混合云”,新增了对 “边缘计算” 场景的支持。v3.1.0 支持 “计量计费”,让基础设施的运营成本更清晰,进一步优化了在 “多云、多集群、多团队、多租户” 等应用场景下的使用体验,增强了 “多集群管理、多租户管理、可观测性、DevOps、应用商店、微服务治理” 等特性。
此次交流会特面向社区开放交流,将为大家演示 KubeSphere 3.1 新特性与后续规划。
+
+
\ No newline at end of file
diff --git a/content/zh/live/_index.md b/content/zh/live/_index.md
index 2d3578d64..97509fef8 100644
--- a/content/zh/live/_index.md
+++ b/content/zh/live/_index.md
@@ -15,9 +15,9 @@ section2:
notice:
title: Kubernetes and Cloud Native Meetup ——成都站
timeIcon: /images/live/clock.svg
- time: 2021/06/19 14:00 – 18:00
+ time: 2021/06/19 13:00 – 18:00
baseIcon: /images/live/base.svg
- base: 线上 + 线下
+ base: 四川省成都市高新区天府大道中段 500 号天祥广场 B 座 45A + 线上
tag: 预告
url: ./meetup-chengdu/
@@ -28,71 +28,120 @@ section2:
section3:
videos:
+ - title: 初识云原生 FaaS 平台及 Serverless 生态
+ link: ./faas-hangzhou/
+ snapshot: https://pek3b.qingstor.com/kubesphere-community/images/Faas-serverless.png
+ type: iframe
+ createTime: 2021.05.29
+ group: Meetup
+
+ - title: 基于 KubeSphere 的 Nebula Graph 多云架构管理实践
+ link: ./nebulagraph-hangzhou/
+ snapshot: https://pek3b.qingstor.com/kubesphere-community/images/nebulagraph.png
+ type: iframe
+ createTime: 2021.05.29
+ group: Meetup
+
+ - title: KubeSphere + KubeEdge——打造云原生边缘计算服务
+ link: ./kubeedge-hangzhou/
+ snapshot: https://pek3b.qingstor.com/kubesphere-community/images/KubeSphere-KubeEdge.png
+ type: iframe
+ createTime: 2021.05.29
+ group: Meetup
+
+ - title: SegmentFault 基于 K8s 的容器化与持续交付实践
+ link: ./segmentfault/
+ snapshot: https://pek3b.qingstor.com/kubesphere-community/images/SegmentFault-hangzhou.png
+ type: iframe
+ createTime: 2021.05.29
+ group: Meetup
+
+ - title: 如何利用云原生架构控制系统复杂度-从构建云原生向量搜索 Milvus 讲起
+ link: ./milvus-hangzhou/
+ snapshot: https://pek3b.qingstor.com/kubesphere-community/images/Milvus-hangzhou.png
+ type: iframe
+ createTime: 2021.05.29
+ group: Meetup
+
+ - title: 基于 Kubernetes 的新一代 MySQL 高可用架构实现方案
+ link: ./mysql-hangzhou/
+ snapshot: https://pek3b.qingstor.com/kubesphere-community/images/MySQL-hangzhou.png
+ type: iframe
+ createTime: 2021.05.29
+ group: Meetup
+
+ - title: “开源社区运营与治理”圆桌交流
+ link: ./roundtable-hangzhou/
+ snapshot: https://pek3b.qingstor.com/kubesphere-community/images/round-table.png
+ type: iframe
+ createTime: 2021.05.29
+ group: Meetup
+
- title: 跳离云原生深水区,KubeSphere 带你远航
- link: //player.bilibili.com/player.html?aid=375675566&bvid=BV1Fo4y117xt&cid=340529916&page=1&high_quality=1
+ link: ./ray-shanghai/
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/yuanhang-kubesphere.jpeg
type: iframe
createTime: 2021.05.15
group: Meetup
- title: 混合云下的 K8s 多集群管理及应用部署
- link: //player.bilibili.com/player.html?aid=248246237&bvid=BV17v411L7tG&cid=340534276&page=1&high_quality=1
+ link: ./multicluster-shanghai/
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/multicluster-kubesphere.jpeg
type: iframe
createTime: 2021.05.15
group: Meetup
- title: Kubernetes 在媒体直播行业的落地实践
- link: //player.bilibili.com/player.html?aid=205640169&bvid=BV1Jh411v7kG&cid=340538245&page=1&high_quality=1
+ link: ./medialive-shanghai/
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/kubesphere-live.jpeg
type: iframe
createTime: 2021.05.15
group: Meetup
- title: 在云原生场景下构建企业级存储方案
- link: //player.bilibili.com/player.html?aid=503177493&bvid=BV1uN411Z7J1&cid=340539595&page=1&high_quality=1
+ link: ./neonio-shanghai/
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/qingstor-meetup.jpeg
type: iframe
createTime: 2021.05.15
group: Meetup
- title: MySQL on K8s:开源开放的高可用容器编排方案
- link: //player.bilibili.com/player.html?aid=205670397&bvid=BV1bh411v7Ph&cid=340545938&page=1&high_quality=1
+ link: ./mysql-shanghai/
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/MySQLonkubernetes.jpeg
type: iframe
createTime: 2021.05.15
group: Meetup
- title: 中通快递关键业务和复杂架构挑战下的 K8S 集群服务暴露实践
- link: //player.bilibili.com/player.html?aid=760635980&bvid=BV1Z64y1C75y&cid=340544087&page=1&high_quality=1
+ link: ./zhongtong-shanghai/
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/cluster-zhongtong.jpeg
type: iframe
createTime: 2021.05.15
group: Meetup
- title: 基于云原生架构下的 DevOps 实践
- link: //player.bilibili.com/player.html?aid=205642662&bvid=BV1Jh411v7jc&cid=340549646&page=1&high_quality=1
+ link: ./devops-shanghai/
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/DevOps-cloudnative.jpeg
type: iframe
createTime: 2021.05.15
group: Meetup
- title: KubeSphere v3.1 开源社区交流会直播回放
- link: //player.bilibili.com/player.html?aid=247784540&bvid=BV1Bv411L7Hx&cid=331253914&page=1&high_quality=1
+ link: ./3.1-live/
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/v3.1-live.png
type: iframe
createTime: 2021.04.30
group: 直播回放
- title: 基于 KubeSphere 与 BotKube 搭建 K8s 多集群监控告警体系
- link: //player.bilibili.com/player.html?aid=501141287&bvid=BV13K411u7w9&cid=282696732&page=1&high_quality=1
+ link: ./botkube-live/
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/botkube-kubesphere.jpeg
type: iframe
createTime: 2021.01.15
group: 直播回放
- title: 企业级云原生多租户通知系统 Notification Manager
- link: //player.bilibili.com/player.html?aid=373555176&bvid=BV1Eo4y1f7Mi&cid=277936370&page=1&high_quality=1
+ link: ./nm-live/
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/notification-kubesphere.jpeg
type: iframe
createTime: 2021.01.06
@@ -113,7 +162,7 @@ section3:
group: Meetup
- title: 使用(KubeSphere)QKE管理多个ACK集群
- link: //player.bilibili.com/player.html?aid=801598359&bvid=BV1Xy4y1n764&cid=294877842&page=1&high_quality=1
+ link: ./qke-ack/
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/qke-akc.jpeg
type: iframe
createTime: 2020.12.19
@@ -127,7 +176,7 @@ section3:
group: Meetup
- title: 云原生的 WebAssembly 能取代 Docker 吗?
- link: //player.bilibili.com/player.html?aid=374255852&bvid=BV1wo4y1R7x2&cid=302625819&page=1&high_quality=1
+ link: ./webassembly/
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/webassembly-docker.jpeg
type: iframe
createTime: 2020.12.19
@@ -148,7 +197,7 @@ section3:
group: 直播回放
- title: CNCF 网研会:使用 PorterLB 和 KubeSphere 在物理机 Kubernetes 轻松暴露服务
- link: //player.bilibili.com/player.html?aid=885471683&bvid=BV17K4y177YG&cid=261965895&page=1&high_quality=1
+ link: ./poterlb-live/
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/duan-kubesphere.jpeg
type: iframe
createTime: 2020.12.02
@@ -162,7 +211,7 @@ section3:
group: 直播回放
- title: Kubernetes 混合云在教育服务行业的最佳实践
- link: //player.bilibili.com/player.html?aid=500396313&bvid=BV14K411V7Zw&cid=259917913&page=1&high_quality=1
+ link: ./qingjiao-live/
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/luxingmin-zhibo.jpeg
type: iframe
createTime: 2020.11.26
diff --git a/content/zh/live/botkube-live.md b/content/zh/live/botkube-live.md
index e2ac36677..a572908e8 100644
--- a/content/zh/live/botkube-live.md
+++ b/content/zh/live/botkube-live.md
@@ -24,6 +24,8 @@ section1:
并进一步演示如何使用 KubeSphere 纳管多个 Kubernetes 集群,结合开源的 BotKube 工具快速搭建多集群监控告警体系,以实现无人驾驶场景云脑服务的监控告警。
+
+
## 下载 PPT
-关注 「KubeSphere 云原生」公众号,后台回复 0114 即可下载 PPT。
+可扫描官网底部二维码,关注 「KubeSphere 云原生」公众号,后台回复 0114 即可下载 PPT。
diff --git a/content/zh/live/devops-shanghai.md b/content/zh/live/devops-shanghai.md
new file mode 100644
index 000000000..a21db5cfd
--- /dev/null
+++ b/content/zh/live/devops-shanghai.md
@@ -0,0 +1,38 @@
+---
+title: 基于云原生架构下的 DevOps 实践
+description: 在 DevOps 能力建设过程中,对于种类繁多的系统工具选型既要适合自身状况也需适应新技术发展趋势。因此云原生和 DevOps 的技术融合才会发挥最大价值。
+keywords: KubeSphere,Kubernetes,DevOps,bank
+css: scss/live-detail.scss
+
+section1:
+ snapshot: https://pek3b.qingstor.com/kubesphere-community/images/jianglijie-1.webp
+ videoUrl: //player.bilibili.com/player.html?aid=205642662&bvid=BV1Jh411v7jc&cid=340549646&page=1&high_quality=1
+ type: iframe
+ time: 2021-05-15 13:00-18:00
+ timeIcon: /images/live/clock.svg
+ base: 线下 + 线上
+ baseIcon: /images/live/base.svg
+---
+
+## 分享人简介
+
+蒋立杰
+
+苏宁银行,云计算负责人
+
+国内云计算业首批技术从业者,阿里云 MVP(云原生领域最有价值专家),前阿里云金融云首家战略生态系公司云计算架构/DevOps 负责人、前中兴通讯云计算架构专家、中国 DevOps 社区技术专家、DockOne 社区技术专家、KubeSphere 开源社区技术专家、K8sMeetup 社区技术成员。
+
+## 分享主题介绍
+
+“传统” DevOps 必然会演进为“云原生” DevOps。在 DevOps 能力建设过程中,对于种类繁多的系统工具选型,既要适合自身状况,也需适应新技术发展趋势,因此云原生和 DevOps 的技术融合才会发挥最大价值。
+
+
+
+## 下载 PPT
+
+可扫描官网底部二维码,关注 「KubeSphere 云原生」公众号,后台回复 “2021 上海” 即可下载 PPT。
+
+
+
+
+
diff --git a/content/zh/live/faas-hangzhou.md b/content/zh/live/faas-hangzhou.md
new file mode 100644
index 000000000..d2e433fe3
--- /dev/null
+++ b/content/zh/live/faas-hangzhou.md
@@ -0,0 +1,35 @@
+---
+title: 初识云原生 FaaS 平台及 Serverless 生态
+description: 以 Kubernetes 为代表的云原生技术极大地推动了 Serverless 的发展与落地,但目前现有的开源 FaaS 平台都没有充分利用这些云原生 Serverless 技术,OpenFunction 的出现则弥补了这方面的空白。
+keywords: KubeSphere,Kubernetes,FaaS,Serverless,OpenFunction
+css: scss/live-detail.scss
+
+section1:
+ snapshot: https://pek3b.qingstor.com/kubesphere-community/images/ben-hangzhou.jpeg
+ videoUrl: //player.bilibili.com/player.html?aid=248447658&bvid=BV1Dv411V7Ku&cid=347150253&page=1&high_quality=1
+ type: iframe
+ time: 2021-05-29 13:00-18:00
+ timeIcon: /images/live/clock.svg
+ base: 线下 + 线上
+ baseIcon: /images/live/base.svg
+---
+
+## 分享人简介
+
+霍秉杰
+
+青云科技,KubeSphere 架构师
+
+OpenFunction 项目发起人,KubeSphere 可观测性、边缘计算相关产品负责人。专注云原生 Serverless、可观测性、边缘计算等领域,是多个云原生项目如 prometheus-operator, Thanos, Loki, kube-state-metrics 等的 Contributor。
+
+## 分享主题介绍
+
+以 Kubernetes 为代表的云原生技术极大地推动了 Serverless 的发展与落地。Knative, Tekton, Cloud Native Buildpacks, Dapr 和 KEDA 等众多 Serverless 相关领域的云原生技术相继涌现,但目前现有的开源 FaaS 平台都没有充分利用这些云原生 Serverless 技术,OpenFunction 的出现弥补了这方面的空白。
+
+本次演讲将介绍云原生 Serverless 领域的最新进展,以及如何利用这些技术打造开源的云原生 FaaS 平台。
+
+
+
+## 下载 PPT
+
+可扫描官网底部二维码,关注 「KubeSphere 云原生」公众号,后台回复 “2021 杭州” 即可下载 PPT。
diff --git a/content/zh/live/kubeedge-hangzhou.md b/content/zh/live/kubeedge-hangzhou.md
new file mode 100644
index 000000000..7e9493d24
--- /dev/null
+++ b/content/zh/live/kubeedge-hangzhou.md
@@ -0,0 +1,45 @@
+---
+title: KubeSphere + KubeEdge——打造云原生边缘计算服务
+description: KubeEdge 是非常流行的边缘计算平台,但是缺少开源容器管理平台在云端控制层面的支持,此外需要经过较为复杂繁琐的配置才能实现边缘节点纳管和可观测。KubeSphere 在与 KubeEdge 集成的过程中着重解决了上述问题,使得 KubeEdge 纳管边缘节点更加方便,并自动实现边缘节点及工作负载的可观测。
+keywords: KubeSphere,Kubernetes,KubeEdge,边缘计算
+css: scss/live-detail.scss
+
+section1:
+ snapshot: https://pek3b.qingstor.com/kubesphere-community/images/xufei-hangzhou.jpeg
+ videoUrl: //player.bilibili.com/player.html?aid=845966924&bvid=BV1654y137iR&cid=347155149&page=1&high_quality=1
+ type: iframe
+ time: 2021-05-29 13:00-18:00
+ timeIcon: /images/live/clock.svg
+ base: 线下 + 线上
+ baseIcon: /images/live/base.svg
+---
+
+## 分享人简介
+
+### 分享人一:
+
+徐飞
+
+KubeEdge 社区 Maintainer,华为云高级工程师
+
+专注于云原生边缘容器领域,曾在 Kubernetes、Istio 等社区及云原生领域工作多年,协作出版《云原生服务网格 Istio》一书,在云原生和边缘容器等领域拥有丰富的开源社区与商业落地实践经验。
+
+### 分享人二:
+
+霍秉杰
+
+KubeSphere 架构师
+
+OpenFunction 项目发起人,KubeSphere 可观测性、边缘计算相关产品负责人。专注云原生 Serverless、可观测性、边缘计算等领域,是多个云原生项目如 prometheus-operator, Thanos, Loki, kube-state-metrics 等的 Contributor。
+
+## 分享主题介绍
+
+本次演讲将由 KubeEdge 和 KubeSphere 社区的 Maintainer 介绍 CNCF 孵化项目 KubeEdge 的架构及最新进展,以及 KubeEdge 如何与 KubeSphere 深度集成,共同打造通用的云原生边缘计算平台。
+
+
+
+
+
+## 下载 PPT
+
+可扫描官网底部二维码,关注 「KubeSphere 云原生」公众号,后台回复 “2021 杭州” 即可下载 PPT。
diff --git a/content/zh/live/medialive-shanghai.md b/content/zh/live/medialive-shanghai.md
new file mode 100644
index 000000000..25b149ec3
--- /dev/null
+++ b/content/zh/live/medialive-shanghai.md
@@ -0,0 +1,36 @@
+---
+title: Kubernetes 在媒体直播行业的落地实践
+description: 苏州广播电视总台通过 KubeSphere 满足了媒体处理流程中的海量计算资源需求,KubeSphere 提供的容器编排能力,帮助实现了视频直播节目的高效制作和灵活调度,并且将容器平台集成进了 CI/CD 流程,极大提升了系统的可维护性和安全性。
+keywords: KubeSphere,Kubernetes,CI/CD,媒体
+css: scss/live-detail.scss
+
+section1:
+ snapshot: https://pek3b.qingstor.com/kubesphere-community/images/tangming-1.webp
+ videoUrl: //player.bilibili.com/player.html?aid=205640169&bvid=BV1Jh411v7kG&cid=340538245&page=1&high_quality=1
+ type: iframe
+ time: 2021-05-15 13:00-18:00
+ timeIcon: /images/live/clock.svg
+ base: 线下 + 线上
+ baseIcon: /images/live/base.svg
+---
+
+## 分享人简介
+
+唐明
+
+苏州市广播电视台,企业 IT 负责人
+
+主要工作方向为满足业务需求进行 IT 架构的规划和实施。目前关注包括云原生技术、分布式存储及信息安全方向。
+
+## 分享主题介绍
+
+苏州广播电视总台通过自建 KubeSphere 容器平台,满足了媒体处理流程中的海量计算资源需求。使用 KubeSphere 容器平台提供的容器编排能力,实现了视频直播节目的高效制作和灵活调度,并且将容器平台集成进了 CI/CD 流程,极大提升了系统的可维护性和安全性,达到了良好的效果。
+
+
+
+## 下载 PPT
+
+可扫描官网底部二维码,关注 「KubeSphere 云原生」公众号,后台回复 “2021 上海” 即可下载 PPT。
+
+
+
diff --git a/content/zh/live/meetup-chengdu.md b/content/zh/live/meetup-chengdu.md
index 5d9b799ae..21c03ac72 100644
--- a/content/zh/live/meetup-chengdu.md
+++ b/content/zh/live/meetup-chengdu.md
@@ -10,7 +10,7 @@ section1:
type: iframe
time: 2021-06-19 13:00-18:00
timeIcon: /images/live/clock.svg
- base: 线下 + 线上
+ base: 四川省成都市高新区天府大道中段 500 号天祥广场 B 座 45A + 线上同步直播
baseIcon: /images/live/base.svg
---
@@ -21,10 +21,20 @@ section1:
KubeSphere 之所以能够如此快速发展,得益于开源社区带来的天然优势,以及社区里长期活跃的用户、贡献者积极参与社区,帮助推动产品和社区快速成长,我们坚持认为 KubeSphere 开源社区的每一位用户和贡献者朋友都是 KubeSphere 生态中的重要组成部分。
-为了跟社区新老朋友们零距离交流,我们将联合 CNCF 和其他合作伙伴,从五月到七月,在上海、杭州、深圳、成都这四个城市分别为大家带来技术的交流与碰撞。2021 年继上海站首次 Meetup 火爆全场之后,我们将依旧延续 KubeSphere and Friends 的主题,于 6 月 19 日在成都为大家带来 Kubernetes and Cloud Native Meetup。
+为了跟社区新老朋友们零距离交流,我们将联合 CNCF、APISIX 以及其他合作伙伴,从五月到七月,在上海、杭州、成都、深圳这四个城市分别为大家带来技术的交流与碰撞。上海站和杭州站圆满落幕之后,我们将延续 KubeSphere and Friends 的主题,于 6 月 19 日在成都为大家带来 Kubernetes and Cloud Native Meetup。
## 活动议程
待定
-敬请期待!
\ No newline at end of file
+敬请期待!
+
+## 活动时间和地点
+
+活动时间:6 月 19 日 13:00-18:00
+
+活动地点:四川省成都市高新区天府大道中段 500 号天祥广场 B 座 45A
+
+## 报名已经开启
+
+
\ No newline at end of file
diff --git a/content/zh/live/meetup-hangzhou.md b/content/zh/live/meetup-hangzhou.md
index 055f65de0..9c7a23f62 100644
--- a/content/zh/live/meetup-hangzhou.md
+++ b/content/zh/live/meetup-hangzhou.md
@@ -1,39 +1,118 @@
---
title: KubeSphere and Friends | Kubernetes and Cloud Native Meetup ——杭州站
-description: 为了跟社区新老朋友们零距离交流,我们将联合 CNCF 和其他合作伙伴,从五月到七月,在上海、杭州、深圳、成都这四个城市分别为大家带来技术的交流与碰撞。2021 年继上海站首次 Meetup 火爆全场之后,我们将依旧延续 KubeSphere and Friends 的主题,于 5 月 29 日杭州为大家带来 Kubernetes and Cloud Native Meetup。
-keywords: KubeSphere,Meetup,Hangzhou
+description: KubeSphere and Friends 2021,Kubernetes and Cloud Native Meetup 第二站杭州站顺利举办,围绕“云原生、边缘云、Serverless、DevOps”等火热话题,来自 IT、KubeEdge 社区、SegmentFault(思否)社区等行业技术大牛、嘉宾以及社区伙伴带来最新的思考与实践。
+keywords: KubeSphere,Meetup,Hangzhou,Serverless,FaaS,OpenFunction,KubeEdge
css: scss/live-detail.scss
section1:
- snapshot: https://pek3b.qingstor.com/kubesphere-community/images/meetup-hangzhou-kv.png
- liveUrl: http://live.bilibili.com/22580654
+ snapshot:
+ videoUrl:
type: iframe
- time: 2021-05-29 14:00-18:00
+ time: 2021-05-29 13:00-18:00
timeIcon: /images/live/clock.svg
- base: 浙江省杭州市拱墅区丰潭路430号丰元国际大厦A座硬趣空间地下一层 + 线上直播
+ base: 浙江省杭州市拱墅区丰潭路 430 号丰元国际大厦 A 座硬趣空间地下一层 + 线上同步直播
baseIcon: /images/live/base.svg
---
+{{ .content }}
+{{ if .content2 }}
+{{ .content2 }}
+{{ end }}
+{{ if .showDownload }}
-{{ .content }}
-{{ .content }}
{{ end }}